Wednesday, 22 December 2010

Home network update

My home network has been faithfully served by an installation of OpenSolaris on my HP ML110 G5. Unfortunately, Oracle's actions towards the open source community have killed what was an excellent project. While there is a small hope that the Illumos folks might get a fork organised, remaining on OpenSolaris was not practical.

To replace OpenSolaris, I would need an environment that provided all the features I was currently running. This meant I needed a CIFS and NFS server, iSCSI target server, internal DNS server, CUPS print server, private IMAP server (for my old pre-Gmail mail archive) and Windows 7 virtual machine courtesy of VirtualBox. Yeah, OpenSolaris was a *very* capable platform.

The solution I opted for was to turn the ML110 G5 into another VMware ESXi server, running a number of virtual machines that provide the above services. I would also take this opportunity to fix a couple of niggling problems with the way it was set up.

This change coincided with a number of new purchases for the home network:
  • Acer Aspire Revo
  • Netgear ReadyNAS Duo
  • OCZ Vertex 2 SSD
The Acer Revo was bought because I was fed up with using Windows 7 over RDP on the Mac. Little issues like the backslash key not working with a UK keyboard (sounds trivial, but you try using Windows seriously without entering a backslash) meant I wanted something I could connect to directly via my KVM. The Revo won't win any awards for high performance, but it is a capable enough machine (with 2GB RAM) and can run a number of apps (including the vSphere client and Office 2010) without any problems.

I bought the Netgear ReadyNAS Duo after looking at some of the alternatives. I was originally tempted by the Iomega ix2-200d, but was put off by the fact the filesystem is proprietary and requires you to send the unit back if there is a problem. In contrast, the ReadyNAS Duo uses Ext3, but the real dealmaker was an offer to get a second 1TB disk *for free* (via mail-in coupon). My initial playing with this unit has been positive and it's nice and quiet, but I've not spent a huge amount of time with it yet.

The OCZ Vertex 2 SSD (60GB) was purchased because I wanted to experiment with the NexentaStor [Community Edition] virtual storage appliance. Building on the open-sourced Solaris codebase, Nexenta have created a storage solution around ZFS. Although the SSD is pretty small for a disk, it can be added as an "L2ARC" (Level 2 Adaptive Read Cache) to boost performance. This will require a blog post of its own to detail.
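For anyone curious what that looks like at the ZFS level, adding an SSD as a read cache is a one-liner. This is just a sketch: the pool name (datapool) and device name (c2t1d0) are placeholders and will differ on any real system.

# add the SSD to an existing pool as a Level 2 ARC device
zpool add datapool cache c2t1d0
# the device then shows up under a "cache" heading
zpool status datapool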

Finally, although I am not pleased with the way that Oracle have gutted Sun, the Solaris operating system remains excellent and I will be using it in my work for the foreseeable future. The preview release of Solaris 11 Express demanded a look.

I'll be blogging about some of these developments in future posts, coming soon...

Saturday, 20 November 2010

Building a vLab Part 6: Configuring Lab Networking

In the last post in this series, we configured our two vESXi servers to connect to an OpenFiler storage appliance. This was done by creating a dedicated vSwitch connecting to a Storage LAN. We now need to finish the network configuration to add resilience to our VM network, and add another vSwitch for vMotion.

At the moment, our vESXi network configuration looks like this:
Although we could separate our Management Network from our VM Network (and in the real world, there are some good arguments for doing this), in this vLab, we will use vSwitch0 for both of these functions. vSwitch0 also has a single vmnic which would be unacceptable in a real-world environment.

To set up vSwitch0 for VM traffic, edit the Properties and click Add. Select the Connection Type of "Virtual Machine" and complete the wizard with the default options. Back at the vSwitch0 Properties window, select the Network Adapters tab and add the unclaimed adapter that is attached to the same LAN subnet. When this is set up, close the Properties window.
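For reference, the same change can be made from the ESXi Tech Support Mode console with the esxcfg commands. Treat this as a rough sketch only: the port group label and vmnic number are assumptions and need to match your own environment.

esxcfg-vswitch -A "VM Network" vSwitch0   # add a Virtual Machine port group
esxcfg-vswitch -L vmnic1 vSwitch0         # link the second LAN-facing vmnic
esxcfg-vswitch -l                         # check the result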

Next, to set up a private vMotion network, create a new vSwitch by clicking "Add Networking" and specifying the Connection Type of VMkernel. Assign both remaining unassigned vmnic adapters to the new vSwitch. Label the network (e.g., "vMotion LAN"), noting that this is case-sensitive! Tick the box for "Use this port group for vMotion". Enter an IP address on a new subnet (e.g., 192.168.30.0/24). Don't worry about the VMkernel Default Gateway as the vMotion network does not need to be routable. The resulting configuration should look like this:



Repeat the network configuration on the second vESXi host.
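If you prefer the command line, a rough equivalent of the vMotion vSwitch work on each vESXi host is shown below. It is a sketch only: the vSwitch name, vmnic numbers and IP address are assumptions, and ticking "Use this port group for vMotion" is still done in the vSphere Client as described above.

esxcfg-vswitch -a vSwitch2                 # new vSwitch for vMotion
esxcfg-vswitch -L vmnic4 vSwitch2          # link the two remaining vmnics
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "vMotion LAN" vSwitch2   # port group label (case sensitive)
esxcfg-vmknic -a -i 192.168.30.101 -n 255.255.255.0 "vMotion LAN"   # VMkernel port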

The networking is now set up and the vLab should be ready for some VMs!

Sunday, 14 November 2010

Building a vLab Part 5: Configuring The Lab Storage

At this point in the vLab build, we have done the following:
  • Installed ESXi on a physical host
  • Created VMs for Active Directory and vCenter Server
  • Created a Vyatta VM to act as a router
  • Created an OpenFiler VM to act as a SAN/NAS storage array
  • Created two virtual (nested) ESXi servers

The next task is to connect everything together.

In order to allow the vESXi hosts to access the "SAN", they need to be connected to the correct LAN. To do this, first note down the MAC addresses of the interfaces you have assigned to the Storage LAN. These details can be obtained by using the vSphere Client connected to the pESXi server and editing the settings of the vESXi VM:



With the MAC addresses noted, switch to the vSphere Client connected to the vCenter Server, select one of the vESXi hosts, Configuration, Network Adapters. Identify the vmnics that correspond to the Storage LAN adapters.


Staying on the Configuration screen, select Networking and Add Networking.


Specify the Connection Type as VMkernel, choose "Create a virtual switch" and select the two vmnics that are connected to the Storage LAN. Give the network a suitable label (e.g., "Storage LAN"), then assign an IP address on the storage LAN subnet (e.g., 192.168.20.101).
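The same steps can be carried out from the vESXi host's Tech Support Mode console if you prefer. This is only a sketch, assuming the Storage LAN adapters are vmnic2 and vmnic3 and that this is the first host (.101):

esxcfg-vswitch -a vSwitch1                  # new vSwitch for storage traffic
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "Storage LAN" vSwitch1    # port group label
esxcfg-vmknic -a -i 192.168.20.101 -n 255.255.255.0 "Storage LAN"   # VMkernel port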



The end result should appear similar to the following:


Repeat this for the second vESXi host.

At this point, the hosts are ready to connect to the storage. The next step is to configure our OpenFiler "SAN/NAS appliance" and share out some storage.



Setup SAN storage

Log into the OpenFiler web interface (https://192.168.20.1:446) as the openfiler user (default password is "password"). In the original build, I added two 100GB hard disks in addition to the 8GB install disk. We will create one as an NFS share and the second as an iSCSI target.

OpenFiler is built on Linux, so an understanding of Linux LVM is useful. A very basic summary of Linux LVM (there is a short command-line sketch after the list):
  • Physical disks are encapsulated and referred to as "Physical Volumes" (PVs)
  • One or more PVs are combined together into a Volume Group (VG)
  • A VG is carved up into Logical Volumes (LVs)
  • A filesystem is created on an LV
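If you are more comfortable at a shell prompt, the web UI steps below map roughly onto these standard LVM commands. They are shown purely to illustrate the concepts; the device and volume names are examples, and on an OpenFiler box you should let the web UI do the work.

pvcreate /dev/sdb1              # turn the partition into a Physical Volume
vgcreate vmware /dev/sdb1       # combine PV(s) into the "vmware" Volume Group
lvcreate -L 50G -n ds01 vmware  # carve a 50GB Logical Volume out of the VG
mkfs.ext3 /dev/vmware/ds01      # create a filesystem on the LV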

Click the Volumes button, then click the link to "Create new physical volumes". Select the first non-OS disk (/dev/sdb in my configuration) and create a partition with the following properties:

  • Mode: Primary
  • Partition Type: Physical Volume

Accept the start and end cylinders and click Create. Next, click Volume Groups. Under the "Create a new volume group" section, enter a name (e.g., "vmware"), put a tick next to the newly created physical volume and click "Add Volume Group".

First, let's create an NFS datastore. To do this, click Add Volume.
Under "Create a volume in ", enter a name (e.g., "ds01"), a description (e.g., "NFS datastore"), select the size (e.g., 50GB) and choose a filesystem type (I used ext3). Then click Create. This creates a new Logical Volume in the "vmware" volume group that is 50GB in size and formats an ext3 filesystem onto it. The create operation may take a couple of minutes.

When this is complete, create an iSCSI datastore by clicking Add Volume again.
Enter a new name (e.g., "ds02"), a description (e.g., "iSCSI datastore"), assign all remaining space in the volume group and select the partition type as iSCSI. Then click Create. The result should look similar to this:



The OpenFiler appliance will now have the two datastores configured, but they are not published yet. Click the Services button and enable the "NFSv3 server" and the "iSCSI target server".

Click the System button and scroll down to the section titled "Network Access Control". In order for a host to see the OpenFiler storage, it needs to match an ACL entry. The most secure way to do this is to enter the IP address of each VM host. The easiest way is to specify the entire storage LAN subnet (192.168.20.0/24):



Map the iSCSI volume to the ACL by clicking the Volumes button, then select "iSCSI Targets". The system will present a new iSCSI target name. Click Add. To assign the iSCSI volume to the target, click the LUN Mapping button and click Map.



Click the Network ACL button, change the host access configuration to Allow and click Update.



Share the NFS volume by clicking the Shares button. Click the NFS Datastore and create a sub-folder (e.g., "VMs"):


Click on the new sub-folder and select "Make Share". Scroll to the bottom of the new window and change the NFS setting to RW. Click the Edit button and set the UID/GID mapping to no_root_squash:



With these options set, click Update.

Finally, change Share Access Control Mode to "Public guest access" and click Update.

Switch back to the vSphere Client connected to the vCenter Server, select the first VM host, then select Configuration, Storage Adapters. Select the iSCSI Software Adapter (probably vmhba33) and select Properties. Click the Configure button and put a tick next to Enabled to turn on iSCSI. Click OK and then select the Dynamic Discovery tab. Click Add and enter the IP address of the OpenFiler server. With iSCSI enabled and configured, click Rescan All, scanning for new storage devices and new VMFS volumes. If all is successful, you should see the new iSCSI volume appear.
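A rough command-line equivalent from the host's Tech Support Mode console is below (a sketch only; adding the Dynamic Discovery address is still easiest in the client):

esxcfg-swiscsi -e         # enable the software iSCSI initiator
esxcfg-swiscsi -q         # confirm that it is enabled
esxcfg-rescan vmhba33     # rescan the software iSCSI adapter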

Click Storage and Add Storage. Select Disk/LUN and the iSCSI disk should appear. Select it, click Next and create a new partition. Enter a datastore name (e.g., OpenFiler iSCSI) and choose a maximum file size (doesn't matter which since the disk is only small). Finish clicking through the wizard and the new datastore should appear in the list.

Click Add Storage again. Select Network File System and click Next. Enter the IP address of the OpenFiler server (192.168.20.1) with a folder path in the format of /mnt/volumegroup/logicalvolume/sharename (e.g., /mnt/vmware/ds01/VMs). Give the datastore a suitable name (e.g., OpenFiler NFS).
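Mounting the NFS datastore can also be done from the command line with esxcfg-nas. A sketch, assuming a datastore label of OpenFilerNFS:

esxcfg-nas -a -o 192.168.20.1 -s /mnt/vmware/ds01/VMs OpenFilerNFS   # add the NFS datastore
esxcfg-nas -l                                                        # list NFS mounts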

The datastore view in vSphere client should now look similar to the following:


Repeat the storage configuration on the second vESXi host. After scanning for the iSCSI storage, the datastore should appear automatically; the NFS datastore will still need to be added manually.

The next step will be to finish configuring our networks and setting up vMotion...

Friday, 5 November 2010

Building a vLab Part 4: Virtual ESXi install

In the last post in this mini-series, we created a VM for vCenter Server and set it up in such a way that the install could be rapidly rebuilt in the future. The next step is to create our virtual ESXi instances (called vESXi in this blog).

When sizing the VMs, it's worth thinking of the vESXi hosts as if they were physical. For example, most physical ESXi servers will have multiple network cards, so we'll put several in our vESXi servers.

Create the virtual machine as you normally would, specifying the following parameters:
  • Guest OS: Other (64-bit)
  • CPU: 2 vCPU
  • Memory: 4096MB
Use the datastore available to the pESXi server and not the OpenFiler VM. Accept the default virtual disk size of 8GB and tick "Allocate and commit space on demand (Thin Provisioning)".

Edit the settings of the VM and add more network adapters (remember, we need this VM to appear similar to a physical host). For the vLab, use a total of 6 network adapters in pairs for the LAN, Storage LAN and vMotion LAN respectively.



The other networking step required is to edit each vSwitch, select the port group and edit the security settings:

  • Promiscuous Mode: Accept
  • MAC Address Changes: Accept
  • Forged Transmits: Accept


Do this for all the vSwitches.

While this is enough to get ESXi installed, you won't be able to power on any nested VMs without making the following change. With the vESXi VM created (but not powered on), right-click on the VM and select "Edit Settings". On the Options tab, select Advanced, General, Configuration Parameters. Add a new row and give it the name monitor_control.restrict_backdoor with a value of true:
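For reference, the same setting ends up in the VM's .vmx file as the line below, so you could also add it there directly while the VM is powered off:

monitor_control.restrict_backdoor = "true"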



Now insert/connect the ESXi install CD/ISO and power on the vESXi VM. To be honest, this next bit should be simple (especially as you've already installed the physical ESXi server).

Once ESXi is installed in the VM, assign it a static IP address.

Now open a second vSphere client instance and connect to the vCenter Server. Within the vCenter Server, create a new Datacenter and add a new host, pointing to the IP address of the newly created vESXi VM. When added, the new server should look similar to this:



There's not much point in having a lab with only a single server, so repeat the process detailed in this post and create a second VM. In the next post, we'll configure it all up...

Friday, 29 October 2010

Building a vLab Part 3: vCenter Server

Previously on "Building a vLab": Part 1: The Design and Part 2: Infrastructure Build.

For a production environment, many people run vCenter on a server that connects to a SQL Server database on another server (possibly as part of a cluster).  However, as part of this vLab, we're going for the default install of a single VM using a local SQL Server Express database.

The vCenter VM has 1 vCPU, 4GB RAM, 40GB hard disk, 1 vNIC connecting to the vLab LAN with an IP address of 192.168.10.2/24. This specification is smaller than that recommended by VMware, but it's enough to get started with. The vCenter server is running Windows Server 2008 R2 as vCenter 4.1 requires a 64bit version of Windows.

Once built, the vCenter Server is assigned the default gateway of the Vyatta VM (192.168.10.33) and the DNS server of the domain controller. The vCenter Server is named "vcenter" and then joined to the vLab domain.

As I do not have permanent VMware vSphere licences in my home lab, I wanted to create an environment where rebuilding from scratch would be a fairly painless experience.

I first created a new 64bit Windows Server 2008 R2 virtual machine. After it was assigned a static IP address and given the correct hostname, I created a small command script called install-vcenter.cmd based on the VMware "Performing a Command-Line Installation of vCenter Server" and copied it to the administrator's desktop.

Having got the base Windows install done, I then exported the VM as an OVF template for future use.

The next step was to mount the vCenter ISO and run the install-vcenter.cmd script. This performed a silent default installation of vCenter including the installation of the .NET runtime and SQL Server Express install. There are many customisable options that can be passed to the setup but these work well enough for my needs:

set EXE=D:\vpx\VMware-vcserver.exe
start /wait %EXE% /q /s /w /L1033 /v" /qr DB_SERVER_TYPE=Bundled FORMAT_DB=1 /L*v \"%TEMP%\vmvcsvr.txt\""

This means that when the vCenter licence expires, I can wipe it out, re-import the template VM, rejoin the domain and run the install-vcenter.cmd script to rebuild a new vCenter installation. It won't keep all my previous settings, and won't configure all the VMs, but it's a start.

UPDATE (22 Jan 2011): If the lab isn't used for a couple of months, the Active Directory trust relationship will break and the vCenter installation will fail. To fix this, export the VM when built, but before it is joined to the domain. I then wrote a short cmd file to automatically join the domain with the following command:

netdom join %computername% /Domain:VSPHERE /UserD:Administrator /PasswordD:mypassword /REBoot:20


In the next part of this series, we'll build our virtual ESXi servers.

Sunday, 24 October 2010

Building a vLab Part 2: Infrastructure Build

The journey begins! In order to build the vLab as detailed in part one, I'll be using my HP ML115 G5. This is a quad core, single CPU tower server in which I've installed 8GB RAM. It's also got 2 x 500GB SATA drives of which I'll be using one for the vLab environment (the other will be used for other projects). The ML115 G5 has an internal USB socket and ESXi can easily be installed on it, reserving the disk space for the VMs.

There is little point in recreating the same installation instructions over and over again when there is a perfectly good reference point. In this case, I'm using the excellent "Installing VMware ESXi 4.0 on a USB Memory Stick The Official Way" post from TechHead (the install for 4.1 is pretty much the same).

As the only physical VMware server, I'll be referring to it as the pESXi box.

At the end of the install process, I have an empty VM host connected to my physical network on which to build the VMs that represent the "physical" items in my virtual infrastructure: the router, the SAN/NAS and the Active Directory domain controller that we'll need when vCenter Server is installed.


Building the LAN


VMware best practice is to use multiple networks for different types of traffic. The vLab will require four different networks (virtual machine traffic, vMotion traffic, IP storage and access to the physical network). In order to enable this, the pESXi server needs four vSwitches. All four of these vSwitches contain "Virtual Machine" port groups. The vSwitch containing the management interface to the pESXi box also has a VMkernel port.

  • vSwitch0: Connects to the physical (non-vLab) network
  • vSwitch1: The vLab LAN for management access and connecting VMs
  • vSwitch2: The vLab storage network for iSCSI and/or NFS traffic
  • vSwitch3: The vLab vMotion network
On the pESXi box, the networks look as follows:





You will notice that vSwitch1 to vSwitch3 are not connected to any physical adapter yet. We will use the Vyatta router to provide connectivity between these isolated networks and the physical LAN.
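For reference, the internal vSwitches can also be created from the pESXi Tech Support Mode console. A sketch only, with illustrative port group labels (use whatever names you prefer, but be consistent):

esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -A "vLab LAN" vSwitch1
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -A "vLab Storage LAN" vSwitch2
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -A "vLab vMotion LAN" vSwitch3
esxcfg-vswitch -l          # list the vSwitches and port groups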


    For reference, I'll be using the following subnets:

    • 192.168.192.0/24 - main network connected to the physical network
    • 192.168.10.0/24 - vLab Virtual Machine network
    • 192.168.20.0/24 - vLab storage network
    • 192.168.30.0/24 - vLab vMotion network


    Add routes to these subnets on your management PC. Alternatively, it might be more useful to add them on your main router so that traffic to these networks routes correctly.
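    As an example (a sketch; 192.168.192.33 is the Vyatta eth0 address from the configuration below, and the exact syntax depends on your operating system):

    rem On a Windows management PC (run as Administrator; -p makes the route persistent)
    route -p add 192.168.10.0 mask 255.255.255.0 192.168.192.33
    route -p add 192.168.20.0 mask 255.255.255.0 192.168.192.33
    route -p add 192.168.30.0 mask 255.255.255.0 192.168.192.33

    # On the Mac
    sudo route -n add 192.168.10.0/24 192.168.192.33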

    The router is necessary because I want to give my vLab a completely separate IP range to the rest of my kit. Therefore, in order for my non-lab kit to communicate with the vLab, I need a layer 3 router. Vyatta have a free "Core" edition that can be installed. For this, I created a new VM with the following sizings:
    • 1 vCPU
    • 256MB RAM
    • 8GB Hard Disk (thin provisioned)
    • 3 x Network Interfaces
      • 1 connecting to the default VM network (i.e., the physical LAN)
      • 1 connecting to the "vSphere Lab in a Box LAN"
      • 1 connecting to the "vSphere Lab in a Box Storage LAN"
    For information on installing Vyatta, see this guide. My Vyatta configuration (as displayed using the show -all command) is:

     interfaces {
         ethernet eth0 {
             address 192.168.192.33/24
             duplex auto
             hw-id 00:0c:29:50:0a:5b
             speed auto
         }
         ethernet eth1 {
             address 192.168.10.33/24
             duplex auto
             hw-id 00:0c:29:50:0a:65
             speed auto
         }
         ethernet eth2 {
             address 192.168.20.33/24
             duplex auto
             hw-id 00:0c:29:50:0a:6f
             speed auto
         }
         loopback lo {
         }
     }
     system {

         gateway-address 192.168.192.1
         host-name vyatta
         login {
             user root {
                 authentication {
                     encrypted-password $1$VBYqK71jAsu3bsoAznh22mx0pqp31nU/
                 }
                 level admin
             }
             user vyatta {
                 authentication {
                     encrypted-password $1$FdjsdebjGneXOIVw9exHrXRAcaN.
                 }
                 level admin
             }
         }
         ntp-server 69.59.150.135
         package {
             auto-sync 1
             repository community {
                 components main
                 distribution stable
                 password ""
                 url http://packages.vyatta.com/vyatta
                 username ""
             }
         }
         time-zone GMT
     }


    I mapped the Vyatta ethernet adapters (eth0, eth1 and eth2) to the correct network by comparing the MAC address with those listed against each adapter in the vSphere client. The default route (pointing to my Internet router) will make things like downloading software from within the vLab easier.
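    If you are building this configuration from scratch rather than reading it back with show, the addresses and gateway above are entered in configuration mode along these lines (a sketch; adjust the interface-to-MAC mapping for your own VM):

     configure
     set interfaces ethernet eth0 address 192.168.192.33/24
     set interfaces ethernet eth1 address 192.168.10.33/24
     set interfaces ethernet eth2 address 192.168.20.33/24
     set system gateway-address 192.168.192.1
     set system host-name vyatta
     commit
     save
     exit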

    Building the SAN

    VMware vSphere shines when the hosts have access to shared storage. The vLab ESXi servers will connect to an IP based (iSCSI) SAN. There are multiple ways to achieve this and one of the most common ways for home lab users to get shared storage is to use the Linux-based OpenFiler distribution.

    Again, in an attempt to avoid reinventing the wheel, I'll point to the excellent TechHead post on configuring OpenFiler.

    The specifics for the vLab are that the OpenFiler VM is connected to the vLab storage LAN and not the vLab VM LAN. The IP address for the OpenFiler VM is 192.168.20.1. In addition to the install disk of 8GB, I've also created a 100GB thin provisioned disk on which to install VMs. The OpenFiler storage will be used for the VMs that I'll install on the virtual ESXi servers. The Active Directory Domain Controller, Vyatta VM and vCenter Server will be installed directly onto the 500GB SATA datastore.


    Installing the Active Directory Domain Controller

    VMware vCenter Server requires Active Directory, so we'll need a domain controller for the vLab. Best practice requires at least two domain controllers for resilience, but we'll make do with just the one (this is a VM lab, not a Windows lab). I sized the DC VM to be very small: 1 vCPU with 256MB RAM, 8GB hard disk and 1 vNIC connecting to the vLab LAN with an IP address of 192.168.10.1/24. Although I prefer Windows Server 2008, the DC will run Windows Server 2003 because of its lower footprint.

    The setup was a standard Windows Server 2003 install, followed by running dcpromo. I called the host "labdc", the domain "vsphere.lab" and we're off.

    Okay, so we have everything ready apart from our virtual ESXi hosts and the vCenter server. We'll continue the journey in part 3.

    Friday, 22 October 2010

    Building a vLab Part 1: The Design

    Like many in the VMware community, having a home lab on which to try things out is something I've been working on for some time. As I prepared to update my VCP3 to the VCP4, I thought it would be good to build myself a new "vLab" to test - and break - things without worrying about production systems and complaining users.

    This mini series is partly inspired by a posting on TechHead's blog. I think there was originally going to be a series there, but for whatever reason (I'm guessing time was the issue), it didn't really develop.

    In order to be able to run through the VCP4 syllabus, I needed the following:

    • 2 x ESX servers
    • 1 x VMware vCenter server
    • 1 x Active Directory Domain Controller
    • 1 x SAN/NAS shared storage array
    • 1 x router/firewall that isolates the vLab from the rest of my home network

    This is what I'm aiming for:




    Obviously this would take up a fair amount of space and would be noisy and hot. So the approach I'm going to aim for is to run the entire vLab on a single HP ML115 G5 with 1 x quad core CPU and 8GB RAM. I'll run VMware ESXi as the base hypervisor and then install the following VMs:

    • 1 x Windows Server 2008 R2 VM to run vCenter Server (4.1 requires a 64bit OS)
    • 1 x Windows Server 2003 VM to run as an Active Directory Domain Controller
    • 1 x OpenFiler VM to run as an iSCSI and NFS server
    • 1 x Vyatta VM to run as a router/firewall
    • 2 x VMware vSphere Hypervisor (ESXi) VMs to run the lab VMs

    For the Windows Server 2003/2008 licences I'll use my Technet subscription and the OpenFiler and Vyatta installs are free. For the vCenter Server and enterprise features of vSphere, I'll have to use the evaluation licences.

    With the software components downloaded and ready to go, it's time to do the build: http://livingonthecloud.blogspot.com/2010/10/building-vlab-part-2-infrastructure.html

    Friday, 8 October 2010

    Sun X4100 M2 firmware upgrade

    This is a very short note that others might run into...

    I was trying to upgrade the firmware on a Sun X4100 M2 server to the latest release and the System BIOS upgrade was failing. I was picking the firmware image up from a network drive which may have been the problem, as copying the image to my C: drive and then installing the upgrade worked fine.

    Not sure why this should be the case, but the upgrade has now worked.

    Thursday, 7 October 2010

    Configure Solaris 10 for mail relaying

    We have a number of devices on our network that can send email alerts. It makes sense to have a central server that can act as a mail relay. We have a Solaris 10 server "sol10" that comes bundled with sendmail, but this is not configured to act as a mail relay.

    To make the Solaris server relay messages to another host, edit the /etc/mail/sendmail.cf file and set the value:

    # "Smart" relay host (may be null)
    DSmailserver.my.domain

    Obviously replace "mailserver.my.domain" with the FQDN of your real mail server that you want to send email through. Restart sendmail by running:

    svcadm restart /network/smtp

    This setting will allow mail that originates on "sol10" to be sent out, but it does not help when you want other devices on your network to use sol10 as their relay. The answer was surprisingly easy:

    Create a new file /etc/mail/relay-domains. In this file, put the networks you want sol10 to accept email from. For example, if you have devices on the 10.0.0.0/8, 172.16.0.0/16 and 192.168.20.0/24 networks and want to use sol10 as the relay, enter the following lines in /etc/mail/relay-domains:

    10
    172.16
    192.168.20

    Once done, restart sendmail again (same command as above), configure your clients to use the Solaris server as their SMTP server and check the output in /var/log/syslog while you send a test message.
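    A quick way to test the relay from another machine is to talk SMTP to sol10 by hand while watching the log (the hostnames and addresses below are placeholders):

    # on sol10, watch the mail log
    tail -f /var/log/syslog

    # from a client on an allowed network
    telnet sol10 25
    HELO client.my.domain
    MAIL FROM:<test@my.domain>
    RCPT TO:<you@example.com>
    DATA
    Subject: relay test

    Testing relay through sol10.
    .
    QUIT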

    Monday, 4 October 2010

    Upgrading to ARCserve r15

    We've been running ARCserve 11.5 on UNIX for a number of years, but CA have effectively stopped development on it. The Windows version was not suitable for our environment because it was (at the time) unable to perform incremental and differential backups of UNIX/Linux filesystems.

    The latest ARCserve release (r15) now supports incremental/differential backups of UNIX/Linux filesystems, so we made the jump.

    As most of our Windows and Linux servers are now VMs running in our VMware cluster, we have opted to perform block level VM backups. To do this, we installed the ARCserve Backup Agent for Virtual Machines on our vCenter server.

    ARCserve uses the VMware Virtual Disk Development Toolkit (VDDK) to provide integration with the VMware Data Protection APIs. I installed this on our vCenter server and configured the ARCserve server to backup all the VMs using the vCenter server as a proxy (note: this is not a VCB proxy as it does not copy the files, rather the VDDK provides a direct way of accessing the underlying VMDKs).

    The problem we had was that the VM backups failed with the following error:

    VMDKInit() : Initialization of VMDKIoLib failed

    To cut a long story short, the problem was that the vCenter server is 64bit (required by vSphere 4.1). The fix, provided by CA support, was to extract the vddk64.zip file in C:\Program Files (x86)\VMware\VMware Virtual Disk Development Kit\bin.

    The problem here appears to be that the VMware installer for the VDDK does not create the 64bit files when installing on a 64bit version of Windows. By adding these and restarting the ARCserve processes, the backup worked successfully.

    Now to test the overnight backup of all our VMs...

    Edit: The backup appears to be working fine!

    Edit 2: Forgot to mention that the CA support representative also modified the system PATH variable to include the 64bit VDDK driver: C:\Program Files (x86)\VMware\VMware Virtual Disk Development Kit\bin\vddk64

    Saturday, 4 September 2010

    Running a Windows 7 client

    Although I never made a conscious decision to switch to the Mac, I've noticed that more of my home computing time is spent in Mac OS X. I've actually given up on using Linux as my main desktop OS, primarily because my Shuttle PC is so loud in comparison with the practically silent Mac Mini (and the Mac provides a decent UNIX environment in a few clicks if necessary).

    But when I'm doing home lab stuff, I need access to a Windows PC. I tried to run Windows 7 (thanks to Technet) in a virtual machine on the Mac, but the memory overhead was too great as I've only got 2GB of RAM.

    The solution I've decided on is to run Windows 7 under VirtualBox on my OpenSolaris server. As the server is always on, it means I've got quick and ready access to Windows 7 whenever I need it. The server has 8GB RAM, so running a 2GB VM is perfectly usable.

    For connectivity to the Windows 7 VM, I'm using the Microsoft RDP client for Mac OS X. This has a couple of nice features: A full screen mode activated by Command-2 (Command being the Windows key on my keyboard) and the ability to switch the windowed view between full size pixels and a scaled fit-to-window option with Command-1.

    The Windows 7 VM is used for running the vSphere client, PowerCLI etc., but also has the typical essential apps (Firefox, Evernote, Office 2010) installed as well. Performance isn't as great as a physical Windows 7 machine (no Aero Glass, for example) and I'm not using it for multimedia apps, but it doesn't require additional space or electricity and it doesn't generate extra heat in the office.

    Monday, 30 August 2010

    iPad first impressions

    When Apple announced the iPad, my reaction was one of "Looks nice, but it's just a big iPod Touch". I couldn't see where an iPad would fit into my workflow and thought I'd give it a miss.

    Two days ago I was out shopping and wandered into Currys. There were some iPads on display and I spent a few minutes playing around with one. As with most things created by Apple, the user interface was beautifully designed and this really impressed me. I had been considering an upgrade from my EeePC to a device with a better display, but after using the iPad, the netbooks on display looked decidedly underwhelming. Although the iPad was more expensive, the difference in capabilities was significant.

    The next day, T and I returned to Currys to have another look at the iPad. Despite my demoing the browsing and email capabilities, it was the version of Labyrinth for the iPad that won T over(!). We walked out of the shop with a 16GB Wi-Fi model.


    I opted not to get the 3G version as a) it was £100 more expensive and b) the data plans were not cheap. The vast majority of places I'll be using it will have Wi-Fi, but if I really need 3G, I'll probably look to get a Mi-Fi which will enable me to create a local wireless gateway for up to 5 devices.

    Here are the first impressions:

    • The iPad is a large screen iPod Touch. True, but it's amazing how much more you can do with a large screen. Apps look fantastic on the big screen.
    • The built in apps are beautifully designed. One of my complaints about the iPhone (coming from a Palm PDA background) is that the calendar application is very lacking. The iPad version is much improved, finally adding a week view.
    • Evernote for the iPad is a killer app. It really is brilliant.
    • The on-screen keyboard is very usable and I can type pretty quickly on it.
    • Battery life so far is very good (although I've not done much audio/video playback yet).
    • It's quite heavy. Holding it in one hand for a long time will be uncomfortable.
    • The Apple foldback protective case is nice and you can create a decent angled stand for typing.


    I have synced my Google mail, calendar and contacts to the iPad, installed the Evernote client, installed the Toodledo client, MobileRSS, Twitteriffic, Box.net and many more applications. Within an hour or so, I had access to all my cloud data.

    I've also added Wikipanion (a really nice Wikipedia application), BBC News, YouVersion's Bible application, updated versions of RDP, telnet and VNC clients, Connect (for Google Docs reading and soon, editing), Whiteboard HD (for diagrams and doodles) and GoodReader (for PDF reading). Not all of these are free, but pay-for apps come in under £5.

    So first impressions are extremely positive. I spend a lot of time at home sat in front of my computer, but with the iPad I can get the same experience for many of these tasks from anywhere with wi-fi. The iPad interface is typical Apple: extremely well designed and very consistent. As a device, the iPad is occupying that vague space between the smartphone and the laptop, but despite being a first generation product, it is polished and is a very welcome addition to my kitbag.

    Wednesday, 14 July 2010

    Customising gVim

    While other text editors may be available, I prefer to use vi for my editing needs when running on a Unix or Linux box. I get my vi fix on Windows by running the excellent gVim. When I upgraded to a new work laptop running 64bit Windows 7, installing gVim was one of my first tasks.

    There are a couple of things that need to be done to make gVim work correctly. The first is a registry change to add an "Edit with VIM" context menu item in Explorer. The following is the contents of a gvim.reg file I ran to add this functionality:

    Windows Registry Editor Version 5.00

    [HKEY_CLASSES_ROOT\*\shell\Edit with Vim]

    [HKEY_CLASSES_ROOT\*\shell\Edit with Vim\command]
    @="C:\\Program Files (x86)\\Vim\\vim72\\gvim.exe \"%1\""


    The second thing to do is add some customisation. gVim can use a _vimrc file (the underscore is necessary at the start) and uses the HOME variable to locate it. I setup a HOME variable that was pointing to "%USERPROFILE%" (c:\users\jr) and created a text file in %HOME% with my desired settings:

    colorscheme slate
    set guifont=Lucida\ Console
    set columns=132
    set lines=50
    set nobackup
    set number


    This doesn't do much apart from setting a nice colour scheme, font and window size, preventing backup files from being created whenever a file is edited, and turning on line numbers. There are hundreds of options that power users can add to customise gVim, but it's a good start.

    With the final touch of adding the gVim icon to my task bar, I now have a comfortable working environment.

    Wednesday, 23 June 2010

    Passing the VCP for vSphere 4

    Tonight I took and passed the VCP410 exam, upgrading my VCP for VI3 to the latest release. The scoring is between 100 and 500, with 300 being the pass mark. I got 338 which wasn't great; I actually found the exam pretty tough (being at 6pm on a very warm day probably didn't help either!).

    As with my CCNA post, I thought it might be useful to share some of the resources I used to study.

    I used both Scott Lowe's Mastering VMware vSphere 4 and Mike Laverick's VMware vSphere 4 Implementation.  Both were very good at explaining the underlying technologies, but both had sections that were out of date. Always compare with the official VMware docs!

    I also used the following sites:


    The essential VMware exam blueprint and online documentation is a must read.

    I also set up a home lab using virtualised ESX servers on my ML115 using the 60 day trial licences. Getting hands on is essential and having an environment where breaking things is not a problem makes revision much easier.

    In addition to this, I work with VMware nearly every day (not every component and feature, but I get regular, hands on experience).

    The NDA prevents me from talking about questions in the exam, but I will say this: I thought the mock exams were much easier than the real thing. I was getting > 90% in the mocks, so was slightly disappointed to get such a low pass score. Having said that, I'm extremely relieved that I don't have to go back to more revision!

    Onto the next thing now...

    Friday, 21 May 2010

    Exporting and importing SharePoint sites

    A number of our users have SharePoint (WSS 3.0) sites hosted in another office and wanted to move the contents down to our local WSS 3.0 install. This was not as straightforward as you might imagine. We hit a number of gotchas and had to provide workarounds that are documented here so that others can benefit from our experiences.

    Running out of disk space on the C: drive

    When running an export using stsadm, we kept filling up the C: drive despite exporting to a separate drive. The reason for this is that SharePoint writes temporary files to the location defined by the %TMP% variable. This defaults to the C: drive!

    To fix, open a command prompt and type:

    set TMP=E:\Tmp

    (replace E:\Tmp with the drive and folder you want to use for your temporary storage). Then run the stsadm export and it should work!


    Commands to export and import a named site

    The command we used to export the site was:

    stsadm -o export -url http://old-sharepointserver/hostedsites/development -filename e:\development.cab -includeusersecurity -versions 4 -overwrite

    The above command will export the site called "development" referenced at http://old-sharepointserver/hostedsites/development to a file called development.cab. The security information will be included in the export as will all versions of documents.

    To import, the following command was used on the new server:

    stsadm -o import -url http://new-sharepointserver/development -filename development.cab -includeusersecurity

    Note that we are importing the site "development" into the top level and not as a subsite beneath hostedsites. If the name of the site is omitted, the top level site is overwritten!

    The gotchas

    When running the import, we received the following message:

    "The file cannot be imported because its parent web <site path> does not exist"

    This error is not helpful; for us, the problem was permission related. The accounts we had used to export and import the data (albeit domain admin accounts) were different from the site collection administrators. To fix this, we had to do the following:

    Make sure the site collection administrator is the same on both the source and destination servers.

    When running the export and import, make sure you are running the stsadm commands as the site collection administrator. This ensures the permissions are aligned and the import should work.

    stsadm Import error: The 'ASPXPageIndexMode' attribute is not declared

    Not sure what the cause of this error is, but we found a fix online:

    To get round it I edited C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\XML\DeploymentManifest.xsd on the destination server:

    under section

       <!-- SPWeb definition -->

    I added the following.


    <xs:attribute name="ASPXPageIndexMode" type="xs:string" use="optional"></xs:attribute>
    <xs:attribute name="NoCrawl" type="xs:boolean" use="optional"></xs:attribute>
    <xs:attribute name="CacheAllSchema" type="xs:boolean" use="optional"> </xs:attribute>
    <xs:attribute name="AllowAutomaticASPXPageIndexing" type="xs:boolean" use="optional"></xs:attribute>


    With these gotchas overcome, we were able to successfully import the new site.

    Wednesday, 5 May 2010

    IBM pSeries (AIX) to Sun StorageTek 2540 - Part 2

    The saga continues...

    At the end of my previous port, the SAN LUN was being successfully seen by AIX as a single device using the Cambex driver.

    With the multipathing fixed, it was time to build some WPARs. Everything went smoothly until we rebooted, at which point hdisk10 was visible but I could no longer see the logical volumes on the disk. Furthermore, I couldn't activate the volume group I'd created, "wparvg", getting the message:

    bash-3.2# varyonvg wparvg
    0516-013 varyonvg: The volume group cannot be varied on because there are no good copies of the descriptor area.

    To cut a long story short (that primarily consists of me rebooting, removing the device in smit and running cfgmgr in various combinations), the Cambex install (/usr/lpp/cbxdpf) includes some useful commands. Running the dpfutil listall command showed that hdisk10 was configured in the following way:

    === /usr/lpp/cbxdpf/dpfutil listall ===
    # Device Active Standby
    hdisk10 cbx1 (fscsi0 0x040200,1) cbx0 (fscsi0 0x030200,1)

    This means that it's using path cbx1 as its active path, with cbx0 as the failover path. Some exploring with the dpfutil command showed it supports the following options:

    dpfutil [command]
    Commands may be abbreviated:
    HELP - Display this message
    LISTALL - List devices and path configuration
    ACTIVATE [cbxN] - Manually switch virtual disk to path [cbxN]
    VARYOFFLINE [cbxN] - Mark path [cbxN] unavailable
    VARYONLINE [cbxN] - Mark path [cbxN] available
    MARKFORDELETE [cbxN] - Force path off even if open (may crash)
    LIST_HBAS - List HBAs with DPF paths
    HBA_SET_WWN [cbxN] [no|yes] - Set WWN preferred path
    TARGET_SET_WWN [cbxN] [yes|no] - Set target preferred path


    I tried to manually switch over the paths:

    bash-3.2# ./dpfutil activate cbx0
    bash-3.2# ./dpfutil listall
    # Device Active Standby
    hdisk10 cbx0 (fscsi0 0x030200,1) cbx1 (fscsi0 0x040200,1)

    With this done, I then tried the varyonvg again:

    bash-3.2# varyonvg wparvg
    bash-3.2# lsvg wparvg
    VOLUME GROUP: wparvg VG IDENTIFIER: 00048ada0000d3000000012865034072
    VG STATE: active PP SIZE: 256 megabyte(s)
    VG PERMISSION: read/write TOTAL PPs: 999 (255744 megabytes)
    MAX LVs: 256 FREE PPs: 979 (250624 megabytes)
    LVs: 2 USED PPs: 20 (5120 megabytes)
    OPEN LVs: 0 QUORUM: 2 (Enabled)
    TOTAL PVs: 1 VG DESCRIPTORS: 2
    STALE PVs: 0 STALE PPs: 0
    ACTIVE PVs: 1 AUTO ON: yes
    MAX PPs per VG: 32512
    MAX PPs per PV: 1016 MAX PVs: 32
    LTG size (Dynamic): 1024 kilobyte(s) AUTO SYNC: no
    HOT SPARE: no BB POLICY: relocatable
    bash-3.2#

    Result!

    Not sure what this says about the failover capabilities of the driver... It appears that when the VG is active, manually failing over the paths works okay and the VG remains active.

    Fortunately this isn't a mission critical production box (it's a development compile box for porting our code from Solaris to AIX).

    Wednesday, 28 April 2010

    IBM pSeries (AIX) to Sun StorageTek 2540

    We have an IBM pSeries 505 running AIX 6.1 that we use for product compilation and testing. The 505 is a 1U, entry-level POWER server with the capacity for two internal disks. In order to provision additional disk space so we can run Workload Partitions (WPARs), we've added a single-port Fibre Channel HBA.

    The Common Array Manager (CAM) software that Sun provides to manage the 25x0 series of arrays (that form the heart of our SAN) allows the administrator to define an initiator which has a "host type" (i.e., what OS the host is running). Among the list of supported host types are the following for AIX:
    • AIX
    • AIX (with Veritas DMP)
    • AIX (Discretionary Access Control)

    The "right" option depends on the software running on the server. As I don't have Veritas DMP, and don't know what Discretionary Access Control is, I opted for AIX. Note, this doesn't appear to be documented in any of Sun's manuals...!

    So with a volume setup on the array and mapped to the AIX server, it was time to see what the AIX server discovered.

    In theory, booting the AIX server should find the new disks, but if you don't want to reboot, run the cfgmgr command. This appears to scan for new devices and the output can be checked by running:

    bash-3.2# lsdev -Cc disk
    hdisk0 Available 06-08-01-5,0 16 Bit LVD SCSI Disk Drive
    hdisk1 Available 06-08-01-8,0 16 Bit LVD SCSI Disk Drive
    hdisk2 Available 01-08-01 Other FC SCSI Disk Drive
    hdisk3 Available 01-08-01 Other FC SCSI Disk Drive
    hdisk4 Available 01-08-01 Other FC SCSI Disk Drive
    hdisk5 Available 01-08-01 Other FC SCSI Disk Drive
    hdisk6 Available 01-08-01 Other FC SCSI Disk Drive
    hdisk7 Available 01-08-01 Other FC SCSI Disk Drive
    hdisk8 Available 01-08-01 Other FC SCSI Disk Drive
    hdisk9 Available 01-08-01 Other FC SCSI Disk Drive
    hdisk10 Available 01-08-01 Other FC SCSI Disk Drive
    hdisk11 Available 01-08-01 Other FC SCSI Disk Drive


    The first two disks are internal SCSI. The disks labelled hdisk2 to hdisk9 are the management LUNs for the two 2540 arrays we have and can be ignored. The final two disks, hdisk10 and hdisk11, are two views of the LUN published from the array.

    The reason the LUN appears twice is that the 2540 has two controllers. Although the array is asymmetric active/passive, the server can see the LUN through both controllers at the same time.

    To work around this, we need some multipath I/O software. Hunting around Sun's website found the "Dynamic Path Failover (DPF) Drivers for AIX Operating System 63 General Availability" of which there was a download for AIX 6.1. The download requires a software licence (more on that below...).

    With the file downloaded to /tmp, the package could be installed by running smit and selecting the download directory (/tmp) and installing the driver. The software installs into /usr/lpp/cbxfc.

    Now, a little diversion into the licensing of this driver. You need to register for a licence with Sun, providing your serial number, contract number and site ID, which proves you own a 2540 array. After that, the licence is free. Quite why you'd want the driver if you didn't own the array is beyond my powers of comprehension! A 30 day licence is provided in /usr/lpp/cbxfc and can be activated by copying the file "license.30day" to "license". It might be interesting to see what happens if you registered the licence a long way in the future and then changed the date back to today... (purely academic interest only, but might be useful if Sun take their time sending the licence through!).

    Having installed the driver, it's another trip into smit, devices, and unconfiguring the previously discovered SAN disks, hdisk2 through to hdisk11. When removing each disk, specify the option to remove it from the device database as well. After doing this, lsdev should not show the SAN disks.
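    The command-line equivalent of that smit step is rmdev with the -d flag (a sketch; make sure nothing is using the disks first):

    # remove each SAN disk and delete it from the device database
    for d in hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9 hdisk10 hdisk11
    do
        rmdev -dl $d
    done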

    With the SAN disks no longer visible to the AIX server, re-run cfgmgr and then run lsdev again. If everything is working correctly, the disks should reappear, but be labelled differently:


    bash-3.2# lsdev -Cc disk
    hdisk0 Available 06-08-01-5,0 16 Bit LVD SCSI Disk Drive
    hdisk1 Available 06-08-01-8,0 16 Bit LVD SCSI Disk Drive
    hdisk2 Defined 01-08-01 Sun StorageTek Universal Xport
    hdisk3 Defined 01-08-01 Sun StorageTek Universal Xport
    hdisk4 Defined 01-08-01 Sun StorageTek Universal Xport
    hdisk5 Defined 01-08-01 Sun StorageTek Universal Xport
    hdisk6 Defined 01-08-01 Sun StorageTek Universal Xport
    hdisk7 Defined 01-08-01 Sun StorageTek Universal Xport
    hdisk8 Defined 01-08-01 Sun StorageTek Universal Xport
    hdisk9 Defined 01-08-01 Sun StorageTek Universal Xport
    hdisk10 Available 01-08-01-01 StorageTek FlexLine with DPF V4.31P


    Ignoring the management LUNs (hdisk2 to hdisk9), note how there is now only one SAN LUN visible, hdisk10. Running the /usr/lpp/cbxdpf/dpfutil utility (with the listall option) shows the multipathed disk:

    # Device Active Standby
    hdisk10 cbx0 (fscsi0 0x010200,1) cbx1 (fscsi0 0x030200,1)


    The new disk also appears as a physical volume:

    bash-3.2# lspv
    hdisk0 00048ada53748080 rootvg active
    hdisk1 none None
    hdisk10 00048ada40cd9d27 None


    The disk can now be added to a volume group and used by the AIX system.
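    For completeness, that last step looks something like this (a sketch; the logical volume and mount point names are just examples):

    mkvg -y wparvg hdisk10                    # create a volume group on the SAN disk
    mklv -y wparlv -t jfs2 wparvg 10          # create a logical volume of 10 physical partitions
    crfs -v jfs2 -d wparlv -m /wpars -A yes   # create a JFS2 filesystem on it
    mount /wpars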

    Special thanks to @cgibbo on Twitter who spotted my cry for help when I was struggling to get the multipathing working and got in touch. Thanks for the pointers Chris!

    Saturday, 16 January 2010

    OpenSolaris: Very slow boot times

    Today I lost power to my servers due to a power outage. The UPS wasn't up to coping and almost instantly died (I was running five computers on the one, small UPS...).

    Booting the OpenSolaris server back up reminded me of the painfully slow boot times that can occur. We're talking *hours* to get the server up.

    The reason for this is the number of ZFS snapshots on the system. Here's the experiment:

    I booted off the OpenSolaris 2009.06 CD and ran the format command. This displayed the disks on the system. I then imported the zpools into the running installation:

    # zpool import rpool -f
    # zpool import datapool -f

    The -f is required because the system thinks the zpools have been assigned to another server (useful if the zpool is on a SAN LUN). The first command was relatively quick, the second was much, much slower.

    Running prstat revealed that devfsadm was consuming an entire CPU. The purpose of devfsadm is to dynamically add and remove devices on the system. It was working through each of the snapshots in the datapool and creating entries in /dev. After running for a few hours, it had created over 4000(!) devices in /dev/zvol/dsk/datapool and /dev/zvol/rdsk/datapool.

    The number of snapshots is thanks to the automatic snapshot service, which takes frequent snapshots of the filesystems in the pool. This list is not automatically cleared down, so it can grow huge. That isn't usually a problem because the uptime of OpenSolaris is fantastic, but it is a real pain when you need the server to boot.
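    A little housekeeping helps here (a sketch; the service name is as I remember it on 2009.06, so check with svcs first, and the snapshot name in the destroy example is purely illustrative):

    # how many snapshots will devfsadm have to walk through at boot?
    zfs list -H -t snapshot | wc -l

    # list and then destroy snapshots you no longer need
    zfs list -H -o name -t snapshot | grep datapool
    zfs destroy datapool/somefs@some-old-snapshot

    # or dial back the automatic snapshot services
    svcs | grep auto-snapshot
    svcadm disable svc:/system/filesystem/zfs/auto-snapshot:frequent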

    So, in order to keep your OpenSolaris boot times down, keep an eye on the number of snapshots on your system.

    Sunday, 10 January 2010

    Book Review: OpenSolaris Bible

    The relationship between OpenSolaris and Solaris is similar to that between Fedora and Red Hat Enterprise Linux. OpenSolaris is Sun's "in development" operating system that introduces many new features that will eventually become available in a Solaris 10 update or in a future Solaris 11 release.

    So it would seem sensible for Solaris system administrators to have some familiarity with OpenSolaris and while it's possible to transfer a lot of existing Solaris knowledge across, having a comprehensive book alongside can be very useful.

    Enter the OpenSolaris Bible by Solter, Jelinek and Miner; a book I received just before Christmas and have been reading through recently.

    The book covers the release of OpenSolaris as of 2008 which suggests the book was based around release 2008.05 or 2008.11 (OpenSolaris releases have a YYYY.MM version number). Since that date, there has been a 2009.06 release and 2010.02 is anticipated next month. However, do not let this put you off considering this book. OpenSolaris development is fast paced, but there is an awful lot of stuff in this book to absorb that still remains relevant in newer releases.

    The book is broken into six parts:

    • Introduction to OpenSolaris
    • Using OpenSolaris
    • OpenSolaris File Systems, Networking and Security
    • OpenSolaris Reliability, Availability and Serviceability
    • OpenSolaris Virtualization
    • Deploying and Developing on OpenSolaris
    The first part is a typical introduction and covers the history of Solaris, Open Source, as well as instructions on installing OpenSolaris and a basic "crash course" on using the GNOME desktop and the Unix shell. Experienced administrators will be able to skim this section.

    Part two covers using the desktop in more detail, printing and software management using the Image Packaging System (IPS). This is an essential read as IPS is a new feature in OpenSolaris and printing can sometimes be a bit tricky.

    Part three provides a very comprehensive introduction to Solaris disks, pseudo filesystems such as devfs, tmpfs, lofs and swap, UFS, Solaris Volume Manager, iSCSI, quotas, backups and restores, mounting and unmounting as well as a full chapter on ZFS, before moving onto network configuration including IPMP, link aggregation, virtual LAN interfaces, network services (DNS, DHCP, FTP, NTP, Mail, HTTP etc.), routing and the IP Filter firewall. Part three of the book then finishes with a chapter on network file systems and directory services (NFS, CIFS, NIS and LDAP) and security (PAM, RBAC, SSH, auditing and Kerberos). There is a lot of good content here.

    Part four details the Fault Management architecture in OpenSolaris, the Service Management Framework (SMF) introduced in Solaris 10 as well as monitoring with conventional tools and Dtrace, ending with a chapter on high-availability clustering.

    Part five covers resource management (projects, tasks, caps and pools) along with a number of Sun virtualisation technologies (Zones, xVM, LDOMs and VirtualBox). The xVM section is only relevant to x64 installs and the LDOM section requires Sun UltraSPARC T-series processors, while Zones can be used on either architecture and are certainly worth a read.

    The final part consists of a chapter on deploying a web stack (Apache, PHP, MySQL, Tomcat and Glassfish) and a chapter on software development (Java, C/C++, etc.). I have no strong interest in these subjects at the moment, so haven't read this section.

    While I have not read the whole book yet (having ignored most of the coverage of GNOME desktop applications since, if you are familiar with Linux, there's not a lot of new stuff to learn), there are plenty of sections that have made the book worthwhile. Whether this book is suitable for you or not depends on where you're starting from:

    If you are a Windows administrator looking to develop some Solaris experience, the OpenSolaris Bible is well worth a read. The first two parts provide a gentle introduction to the Unix operating system to get you started, and subsequent chapters dive pretty deep into the capabilities of OpenSolaris.

    If you are experienced with Linux but have minimal Solaris experience, the OpenSolaris Bible is highly recommended! FMA, SMF, Zones, ZFS, UFS/SVM, Clustering, Dtrace and IPS are not found in Linux, so the OpenSolaris Bible provides a single point of reference for a whole lot of new learning.

    Even experienced Solaris administrators will find things to like in this book. The IPS is certainly a new feature that I assume will impact us when Solaris 11 is released, and while ZFS, Zones, FMA, SMF etc are already present in Solaris 10, the book provides a very good overview of these technologies that can otherwise only be found by attending a course or reading the online documentation.

    It's probably fair to say that if you read through the whole book, put into practice the features described, and you understand them, you'll have a wider understanding than many existing Solaris system administrators.

    The OpenSolaris Bible can be bought at Amazon.

    Highly recommended. 9/10.