Saturday, 31 January 2009

Getting NFS4 permissions working correctly

Following the installation of the OpenSolaris server, I've had some problems with the NFS mounts. I created an export for a fileserver (datapool/filestore) and, although I have set up identical UID/GID maps between my clients and servers, when I mounted the filesystem on my Linux server I found that newly created files were owned by nobody:nobody.
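
For reference, this is roughly how the export and mount were done; the hostname "opensolaris" below is just a placeholder for this example.

On the OpenSolaris server:

# zfs set sharenfs=on datapool/filestore

On the Linux client:

# mount -t nfs4 opensolaris:/datapool/filestore /mnt/filestore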

At first I thought this was due to some configuration problem in the OpenSolaris installation, but after trying it with the Mac as well, I realised that the mapping was fine. This then pointed to the OpenSUSE 11.1 installation.

The problem turned out to be the Domain setting in /etc/idmapd.conf. The value in this file was different from the OpenSolaris NFS domain. Changing that and restarting the idmapd process (which I did by rebooting the server, as I had a kernel update to do anyway) fixed the problem, and the UIDs/GIDs now map correctly.
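
For anyone hitting the same problem, the setting lives in the [General] section of /etc/idmapd.conf on the Linux client. The value below is only an example; it needs to match the NFSv4 domain the OpenSolaris server is using (which, if I remember rightly, sharectl get nfs will show on the server):

[General]
Domain = local.zone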

The next step is getting a Samba and Windows client to authenticate me correctly with the OpenSolaris CIFS server. That might be more difficult, but I'll update here when it's done...

Thursday, 15 January 2009

Building a test lab

Thanks to the TechNet subscription that work has provided for me, I'm now in a position to build my own test Windows network. The purpose of this is to help me get a grip on Windows Server 2008, some Active Directory, Terminal Services etc., and potentially some other non-MS tech such as Citrix XenDesktop [Express].

I've been thinking through the planning of this test lab and recognise that I need to create a network. I can either create a new, completely virtual network and put my VMs on it, routing this to my physical network using a dual-homed VM appliance, or I can run the test network in the same subnet as my "live production" network, in a different address range, and assign the IP addresses so that they don't overlap.

The latter seems the easiest way of doing it (although I may be proven wrong when it's built!).

So, assume my local network is 192.168.0.0/24 (it's not, but I'm not stupid enough to put my real subnet on the 'net!). I'm going to slice up the subnet as follows:

192.168.0.1 - 192.168.0.99 = static range for production network
192.168.0.100 - 192.168.0.150 = DHCP range for production network
192.168.0.151 - 192.168.0.200 = static range for test lab network
192.168.0.201 - 192.168.0.254 = DHCP range for test lab network

How do I determine whether a plugged-in device gets a production or test DHCP address? Ultimately it will depend on which DHCP server responds first, but in reality it shouldn't matter much: both servers will allocate an address that is routable to the Internet and will resolve DNS names. Anything that will be permanent gets a static IP anyway.
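
As an illustration of how the non-overlapping scopes might look, here's a rough sketch of the production side as an ISC dhcpd subnet declaration (assuming ISC dhcpd and a gateway/DNS server on 192.168.0.1, which are just assumptions for this example); the test lab DHCP server would declare the 201-254 range in the same way:

subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.100 192.168.0.150;       # production dynamic range only
    option routers 192.168.0.1;              # example gateway
    option domain-name "local.zone";
    option domain-name-servers 192.168.0.1;  # example DNS server
}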

My production network has the DNS suffix of local.zone, and I contemplated creating the Active Directory as a sub-domain (windows.local.zone). I think it will be easier, though, if I simply create a new domain (e.g., windows.zone) and manually create a DNS forwarder to local.zone when appropriate. This keeps the production network (primarily non-Windows based Solaris, Linux and Mac OS X with a non-domained Vista) from interfering with, or depending on, the test lab.
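
When the time comes, the conditional forwarder on the Windows DNS server could probably be created from the command line along these lines (an untested sketch; 192.168.0.10 is just a placeholder for the production DNS server's address):

C:\> dnscmd /ZoneAdd local.zone /Forwarder 192.168.0.10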

If either of my readers(!) spots anything obviously wrong here, please let me know!

Friday, 9 January 2009

VMware Certified Professional

After a week of revision, plus a couple of years' worth of hands-on experience and the VMware Fast Track course, I took the VCP exam this morning. The pass mark is 70 and I managed to get 86, which was fine, especially as I remember the struggle that was the SCSA upgrade.

But the IT world does not stay still, and in these days of economic uncertainty, the market is only going to get more competitive, so with the VCP now under the belt, it's time to turn to the next cert... CCNA refresh? Something Citrix? Red Hat? Microsoft? Hmm.

Thursday, 8 January 2009

An OpenSUSE quickie

The command line tool for patch and package management on OpenSUSE is "zypper". I've used zypper to list patch updates using:

# zypper lu

The patches can then be applied using:

# zypper up

Because I never got around to reading the man page, I didn't realise that both of the above commands have an implicit "-t patch". Nor did I realise that adding "-t package" to the same commands lists and updates installed packages to their latest available versions:

# zypper lu -t package
# zypper up -t package

Currently installing 83 package updates...

Friday, 2 January 2009

Backing up ZFS to an external USB drive

Having a resilient, snapshot-managed storage server is all very well, but what happens if your server catches fire? While ZFS is very good at preventing data loss, and its RAID capabilities compensate for a physical disk failure, the lack of a ufsdump/ufsrestore equivalent was a bit troubling.

I'm not claiming to have found the perfect solution, but a bit of playing around today with an external USB disk looks promising. I plugged the USB drive in and OpenSolaris automatically detected it. Running the format command showed it was mapped to c6t0d0.

I partitioned the disk to create a single 500GB(ish) slice 0 and created a traditional UFS filesystem on it. I'm sure I could have used ZFS, but I wanted the simplicity of a single filesystem without worrying about pools or other volume manager artifacts.

After creating the filesystem with newfs, I mounted it to /mnt.

# newfs -m5 /dev/rdsk/c6t0d0s0
# mount /dev/dsk/c6t0d0s0 /mnt

I had already created a test filesystem (datapool/testfs) and copied a file into it. I then took a snapshot of the filesystem:

# zfs snapshot datapool/testfs@mytest

I backed up the snapshot using the ZFS send syntax:

# zfs send datapool/testfs@mytest > /mnt/testfs.backup

This created a single file (/mnt/testfs.backup) containing the filesystem.

With that completed, I deleted the file I copied across. Now for the restore. This was very easy:

# zfs recv datapool/testfs.recover < /mnt/testfs.backup

A new filesystem was created and mounted in /datapool/testfs.recover, containing the file I wanted to recover which I could then copy back. To test a bit further, I destroyed the original datapool/testfs filesystem and all snapshots. I then did another zfs recv and specified the original filesystem name:

# zfs recv datapool/testfs < /mnt/testfs.backup

And it all came back perfectly!

Obviously this is a simple test and doesn't deal with incrementals etc., but it should be sufficient for keeping a copy of the data on an external disk that can be stored off site. Although I haven't tried it, adding encryption to the zfs send pipeline should be very simple to do.
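
For example (both untested sketches; the second snapshot name and the openssl passphrase-based encryption are just illustrative choices), an incremental send between two snapshots, and an encrypted full send, would look something like:

# zfs snapshot datapool/testfs@mytest2
# zfs send -i datapool/testfs@mytest datapool/testfs@mytest2 > /mnt/testfs.incremental
# zfs send datapool/testfs@mytest | openssl enc -aes-256-cbc -salt -out /mnt/testfs.backup.enc

The encrypted copy would then be restored by reversing the pipe:

# openssl enc -d -aes-256-cbc -in /mnt/testfs.backup.enc | zfs recv datapool/testfs.recover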

ZFS just gets better and better!