The one-line summary: OpenSolaris and ZFS are very, very cool.
Having created a ZFS pool "datapool" consisting of two 500GB disks in a RAID1 mirror, I then created a few filesystems, including datapool/filestore which was to be the new fileserver. The whole process took two commands:
# zpool create datapool mirror c3d1 c4d1
# zfs create datapool/filestore
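A quick sanity check at this point - just the standard status commands - confirms the mirror is healthy and shows the new filesystem:
# zpool status datapool
# zfs list -r datapool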
To make the new filesystem available over NFS to my Linux machine took another command:
# zfs set sharenfs=rw,anon=0 datapool/filestore
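On the Linux side it's then just a normal NFS mount (the server name and mount point below are only examples - substitute your own):
# mount -t nfs opensolaris:/datapool/filestore /mnt/filestore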
Okay, but what about the Windows box that doesn't support NFS natively? For this, I had to install the SMB server package (along with the kernel extension) and reboot the server. But after that it was simply a case of one more command:
# zfs set sharesmb=on datapool/filestore
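By default the CIFS service derives the share name from the dataset, with the slashes turned into underscores, so from the Windows box the share should appear as something like (again, the server name is whatever yours is called):
\\opensolaris\datapool_filestore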
The Mac has a very nifty backup tool called Time Machine. To use it you must directly attach a second hard disk, or tweak the configuration to allow "unsupported" devices to work. I wanted to add another hard disk so that I could run Time Machine. Again, OpenSolaris/ZFS to the rescue.
I installed the iSCSI target server software (a couple of ticks in the package manager) and then ran the following commands:
# zfs create -s -V 100GB datapool/mac_backup
# zfs set shareiscsi=on datapool/mac_backup
# iscsitadm list target
The first command creates a "zvol" - a block device that is carved out of the zpool but does not have ZFS on it. The -s flag creates a sparse volume so that the space is not preallocated. The second command makes the volume available over iSCSI, while the third lists the available iSCSI targets so you can easily pick up the target name.
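Not from my notes, but two handy read-only checks - listing the zvol and confirming the iSCSI property took:
# zfs list -t volume
# zfs get shareiscsi datapool/mac_backup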
I then downloaded and installed the free globalSAN software for the Mac which provides an iSCSI initiator. Five minutes later (including a reboot of the Mac because it's another kernel extension), I had a new block device ready for partitioning in the Disk Utility. I created a Mac HFS journaled filesystem which Time Machine is able to use.
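If you prefer the Mac command line over Disk Utility, the equivalent is roughly the following - the disk identifier (disk2 here) is just a placeholder for whatever the iSCSI LUN shows up as in diskutil list:
# diskutil list
# diskutil eraseDisk JHFS+ TimeMachine disk2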
So one server can now provide simultaneous NFS, CIFS/SMB and iSCSI to all my machines. It really is a small SAN at home!
Tuesday, 30 December 2008
Wednesday, 24 December 2008
OpenSolaris, Courier IMAP and FAM
I started the final part of migrating to the new OpenSolaris server today: Moving the internal IMAP server.
This server exists to collect email that is sent to my old address, T's old Hotmail emails (slurped down using a Thunderbird webmail extension and imported into IMAP), as well as act as an archive of all my old email dating back a number of years.
The new mail server will be a Solaris zone called "mailserver" on the OpenSolaris host. I built the zone and installed Fetchmail (to collect mail from my ISP using POP3), Procmail (to filter the email into the correct mailboxes) and Courier IMAP (to serve the email internally over IMAP).
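For reference, creating the zone itself is only a few commands (the zonepath is just my choice):
# zonecfg -z mailserver
zonecfg:mailserver> create
zonecfg:mailserver> set zonepath=/zones/mailserver
zonecfg:mailserver> set autoboot=true
zonecfg:mailserver> commit
zonecfg:mailserver> exit
# zoneadm -z mailserver install
# zoneadm -z mailserver boot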
Once I got Courier IMAP installed, and the config file copied across, I hit a problem:
Dec 24 19:19:32 mailserver imapd: [ID 702911 mail.error] Error: I/O error
Dec 24 19:19:32 mailserver imapd: [ID 702911 mail.error] Check for proper operation and configuration
Dec 24 19:19:32 mailserver imapd: [ID 702911 mail.error] of the File Access Monitor daemon (famd).
Hmm, so I need to install FAM. I did the install, but it still didn't work. It turns out that the FAM package doesn't make the necessary additions to /etc/rpc (FAM is controlled by rpcbind) or to /etc/inetd.conf.
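(The path to the fam binary below gives it away as a CSW package, so the install itself was along the lines of pkgutil -i fam, or pkg-get -i fam with the older tool - I won't swear to the exact command.)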
For reference, this is what needs to go in /etc/rpc:
sgi_fam 391002
And this is what needs to go in /etc/inetd.conf:
sgi_fam/1-2 stream rpc/tcp wait root /opt/csw/bin/fam fam
To register the inetd config with SMF, the inetconv command needs to be run; at that point Courier IMAP starts up and the error above is gone.
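In other words:
# inetconv
# svcs -a | grep fam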
Job done!
Tuesday, 23 December 2008
New kit, new project, new stuff to learn
Although I bought the ML110 G5 a few months ago, I have only been using it as a "play" machine. Earlier this month, Ebuyer got the ML115 G5 back in stock, so I took the opportunity to snap one up, along with 2 x 500GB disks, a 4GB RAM upgrade, an 8-port Gigabit switch and a 4GB USB key drive.
You see, this is the plan:
The ML110 G5 will be the "production" server, running OpenSolaris 2008.11 and playing the role of a storage server along with some core infrastructure services. I took the 250GB disk out of the ML115 and put it in the ML110 and mirrored the existing 250GB disk. I also added the two 500GB disks as a mirrored pool. All of this uses ZFS for resilience and only takes a handful of commands to set up.
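For example, attaching the second 250GB disk to the existing root pool is a one-liner, plus putting GRUB on the new disk so it is bootable (the device names here are placeholders - use whatever format reports):
# zpool attach rpool c3d0s0 c4d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4d0s0
The 500GB pair is just a single zpool create mirror command.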
The OpenSolaris server is now running my print server software (CUPS) and works with the Linux machines, the Mac and even T's Vista PC. I'm running a small BIND DNS server for keeping track of the internal machines, and have a number of NFS shares set up for file serving, an ISO store and (in the near future) a VM datastore. The beauty of ZFS is that in addition to serving filesystems over NFS (and CIFS using the new kernel-based service), other consumers can use the ZFS technology thanks to zvols (essentially block devices in a zpool that can be shared using iSCSI and FC).
The 4GB USB key drive has VMware ESXi installed on it and the ML115 boots from it. The USB port is internal, on the motherboard, so everything is nicely integrated. The actual VMs will live on the storage server - initially over NFS, but potentially over iSCSI in the next OpenSolaris release as well. Sounds like a SAN at home? Yep! :-) (this is why I've upgraded the switch to gigabit).
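The datastore side is just another ZFS filesystem shared over NFS, something like this (the dataset name is my choice and the network in the root= option is a placeholder - ESXi needs root access to the share):
# zfs create datapool/vmstore
# zfs set sharenfs=rw,root=@192.168.1.0/24 datapool/vmstore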
One of the things I'm finding with OpenSolaris is that although I've been using Solaris for years, there are a lot of changes (that presumably will make it into Solaris 11). The biggest one I've hit so far is the Image Packaging System (IPS). This appears to use a network repository for installing software (so no local .pkg files) and creates some new differences with Zones (they are now branded as "ipkg" and I think I've lost the ability to create sparse root zones in this release).
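Day-to-day it looks like this - refresh the catalogue, search the repository, install (the CUPS package name is from memory, so check it with pkg search first):
# pkg refresh
# pkg search -r cups
# pkg install SUNWcups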
Still a lot to do and play with, but OpenSolaris certainly seems very feature-rich, and nothing else currently provides stuff like ZFS.
I might even have a look at getting xVM installed as well, but that might take a bit longer...
Tuesday, 2 December 2008
Vista PDF printing problem
T has a problem printing PDF files from her Vista PC. The print server is a Solaris VM running CUPS and all other documents seem to print without any issues. It appears that selecting Print from the Adobe Acrobat Reader application doesn't send anything to the print queue.
The reason I'm blogging about this is the result of T noticing that she never gets a mention and therefore "Why don't you blog about it and see if anyone knows the answer".
What is this? A Helpdesk? :-)