Saturday, 20 November 2010

Building a vLab Part 6: Configuring Lab Networking

In the last post in this series, we configured our two vESXi servers to connect to an OpenFiler storage appliance. This was done by creating a dedicated vSwitch connected to a Storage LAN. We now need to finish the network configuration to add resilience to our VM network and to add another vSwitch for vMotion.

At the moment, our vESXi network configuration looks like this:
Although we could separate our Management Network from our VM Network (and in the real world there are some good arguments for doing so), in this vLab we will use vSwitch0 for both functions. vSwitch0 also currently has only a single vmnic, which would be unacceptable in a real-world environment.

To set up vSwitch0 for VM traffic, edit its Properties and click Add. Select the Connection Type of "Virtual Machine" and complete the wizard with the default options. Back at the vSwitch0 Properties window, select the Network Adapters tab and add the unclaimed adapter that is attached to the same LAN subnet. When this is done, close the Properties window.
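
For those who prefer the command line, the same change can be made from the host's console; a rough sketch, where the vmnic number is an assumption and will depend on your host:

    # Add a Virtual Machine port group to the existing vSwitch0
    esxcfg-vswitch -A "VM Network" vSwitch0
    # Link the second LAN-facing uplink to vSwitch0 (vmnic1 is a guess - check esxcfg-nics -l)
    esxcfg-vswitch -L vmnic1 vSwitch0
    # Confirm the result
    esxcfg-vswitch -l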

Next, to set up a private vMotion network, create a new vSwitch by clicking "Add Networking" and specifying the Connection Type of VMkernel. Assign both remaining unassigned vmnic adapters to the new vSwitch. Label the network (e.g., "vMotion LAN"), noting that port group names are case-sensitive! Tick the box for "Use this port group for vMotion". Enter an IP address on a new, dedicated subnet. Don't worry about the VMkernel Default Gateway, as the vMotion network does not need to be routable. The resulting configuration should look like this:

Repeat the network configuration on the second vESXi host.
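
Incidentally, the whole vMotion network can also be built from each host's console. A rough sketch follows; the vSwitch, vmnic and vmk numbers are assumptions and will differ on your hosts, and the IP/netmask placeholders should be replaced with your own values:

    # Create a new vSwitch and attach the two remaining uplinks
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic4 vSwitch2
    esxcfg-vswitch -L vmnic5 vSwitch2
    # Add the port group and a VMkernel interface on the vMotion subnet
    esxcfg-vswitch -A "vMotion LAN" vSwitch2
    esxcfg-vmknic -a -i <vMotion IP> -n <netmask> "vMotion LAN"
    # Flag the new VMkernel interface for vMotion (check the vmk number with esxcfg-vmknic -l)
    vim-cmd hostsvc/vmotion/vnic_set vmk2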

The networking is now set up and the vLab should be ready for some VMs!

Sunday, 14 November 2010

Building a vLab Part 5: Configuring The Lab Storage

At this point in the vLab build, we have done the following:
  • Installed ESXi on a physical host
  • Created VMs for Active Directory and vCenter Server
  • Created a Vyatta VM to act as a router
  • Created an OpenFiler VM to act as a SAN/NAS storage array
  • Created two virtual (nested) ESXi servers

The next task is to connect everything together.

In order to allow the vESXi hosts to access the "SAN", they need to be connected to the correct LAN. To do this, first note down the MAC addresses of the interfaces you have assigned to the Storage LAN. These details can be obtained by using the vSphere Client connected to the pESXi server and editing the settings of the vESXi VM:

With the MAC addresses noted, switch to the vSphere Client connected to the vCenter Server, select one of the vESXi hosts, Configuration, Network Adapters. Identify the vmnics that correspond to the Storage LAN adapters.
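
If you have console access to the vESXi host, the same MAC-to-vmnic mapping can also be listed directly; for example:

    # Lists each vmnic with its driver, link state and MAC address
    esxcfg-nics -l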

Staying on the Configuration screen, select Networking and Add Networking.

Specify the Connection Type as VMkernel, choose to create a new virtual switch, and select the two vmnics that are connected to the Storage LAN. Give the network a suitable label (e.g., "Storage LAN"), then assign an IP address on the storage LAN subnet.
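
Once the wizard completes, it's worth a quick sanity check from the host console; a sketch, where the OpenFiler address is a placeholder for your own storage LAN IP:

    # Confirm the new Storage LAN VMkernel interface and its IP address
    esxcfg-vmknic -l
    # Test VMkernel connectivity to the OpenFiler appliance
    vmkping <OpenFiler storage IP>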

The end result should appear similar to the following:

Repeat this for the second vESXi host.

At this point, the hosts are ready to connect to the storage. The next step is to configure our OpenFiler "SAN/NAS appliance" and share out some storage.

Setup SAN storage

Log into the OpenFiler web interface as the openfiler user (the default password is "password"). In the original build, I added two 100GB hard disks in addition to the 8GB install disk. We will present one volume as an NFS share and a second as an iSCSI target.

OpenFiler is built on Linux, so an understanding of Linux LVM is useful. A very basic summary of Linux LVM (with the equivalent shell commands sketched after this list):
  • Physical disks are encapsulated and referred to as "Physical Volumes" (PVs)
  • One or more PVs are combined into a Volume Group (VG)
  • A VG is carved up into Logical Volumes (LVs)
  • A filesystem is created on an LV
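
For reference, the same layering expressed as plain Linux LVM commands (a sketch only; OpenFiler drives all of this through its web UI, and the device and volume names here are assumptions):

    pvcreate /dev/sdb1              # turn a partition into a Physical Volume
    vgcreate vmware /dev/sdb1       # combine one or more PVs into a Volume Group
    lvcreate -L 50G -n ds01 vmware  # carve a 50GB Logical Volume out of the VG
    mkfs.ext3 /dev/vmware/ds01      # create a filesystem on the LV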

  • Click the Volumes button
  • Click the link to "Create new physical volumes"
  • Select the first non-OS disk (/dev/sdb on my configuration)
  • Create a partition with the following properties: Mode "Primary", Partition Type "Physical Volume"
  • Accept the start and end cylinders and click Create
  • Click Volume Groups
  • Under the "Create a new volume group" section, enter a name (e.g., "vmware"), put a tick next to the newly created physical volume and click "Add Volume Group"

First, let's create an NFS datastore. To do this, click Add Volume.
Under "Create a volume in ", enter a name (e.g., "ds01"), a description (e.g., "NFS datastore"), select the size (e.g., 50GB) and choose a filesystem type (I used ext3). Then click Create. This creates a new Logical Volume in the "vmware" volume group that is 50GB in size and formats an ext3 filesystem onto it. The create operation may take a couple of minutes.

When this is complete, create an iSCSI datastore by clicking Add Volume again.
Enter a new name (e.g., "ds02"), a description (e.g., "iSCSI datastore"), assign all of the remaining space in the volume group and select the volume type of iSCSI. Then click Create. The result should look similar to this:

The OpenFiler appliance will now have the two datastores configured, but they are not published yet. Click the Services button and enable the "NFSv3 server" and the "iSCSI target server".

Click the System button and scroll down to the section titled "Network Access Control". In order for a host to see the OpenFiler storage, it needs to match an ACL entry. The most secure way to do this is to enter the IP address of each vESXi host; the easiest way is to specify the entire storage LAN subnet and its netmask.

Next, create an iSCSI target by clicking the Volumes button and selecting "iSCSI Targets". The system will suggest a new iSCSI target name; click Add. To assign the iSCSI volume to the target, click LUN Mapping and then click Map.

Click Network ACL, change the host access setting to Allow, and click Update.

Share the NFS volume by clicking the Shares button. Click the NFS Datastore and create a sub-folder (e.g., "VMs"):

Click on the new sub-folder and select "Make Share". Scroll to the bottom of the new window and change the NFS setting to RW. Click the Edit button and set the UID/GID mapping to no_root_squash:

With these options set, click Update.

Finally, change Share Access Control Mode to "Public guest access" and click Update.
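
Under the covers, OpenFiler is simply writing a standard Linux NFS export for the share. The end result is roughly equivalent to an /etc/exports entry like the following (the subnet and mask are placeholders for your storage LAN):

    /mnt/vmware/ds01/VMs <storage LAN subnet>/<mask>(rw,no_root_squash)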

Switch back to the vSphere Client connected to the vCenter Server, select the first vESXi host, then Configuration, Storage Adapters. Select the iSCSI Software Adapter (probably vmhba33) and select Properties. Click the Configure button and put a tick next to Enabled to turn on iSCSI. Click OK and then select the Dynamic Discovery tab. Click Add and enter the IP address of the OpenFiler server. With iSCSI enabled and configured, click Rescan All, scanning for new storage devices and new VMFS volumes. If all is successful, you should see the new iSCSI volume appear.
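
The software iSCSI initiator can also be enabled and rescanned from the host console if you prefer; a sketch (the vmhba number may differ on your host, and the Dynamic Discovery address is still easiest to add in the vSphere Client):

    # Enable the software iSCSI initiator
    esxcfg-swiscsi -e
    # After the OpenFiler IP has been added under Dynamic Discovery, rescan the adapter
    esxcfg-rescan vmhba33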

Click Storage and Add Storage. Select Disk/LUN and the iSCSI disk should appear. Select it, click Next and create a new partition. Enter a datastore name (e.g., OpenFiler iSCSI) and choose a maximum file size (it doesn't matter which, since the disk is small). Finish clicking through the wizard and the new datastore should appear in the list.

Click Add Storage again. Select Network File System and click Next. Enter the IP address of the OpenFiler server, with a folder path in the format /mnt/volumegroup/logicalvolume/sharename (e.g., /mnt/vmware/ds01/VMs). Give the datastore a suitable name (e.g., OpenFiler NFS).
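
The console equivalent for mounting the NFS export is a one-liner; a sketch (substitute your OpenFiler address for the placeholder):

    # Mount the OpenFiler NFS export as a datastore called "OpenFiler NFS"
    esxcfg-nas -a -o <OpenFiler IP> -s /mnt/vmware/ds01/VMs "OpenFiler NFS"
    # List NFS datastores to confirm the mount
    esxcfg-nas -l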

The datastore view in vSphere client should now look similar to the following:

Repeat the storage configuration on the second vESXi host. After rescanning for iSCSI storage, the iSCSI datastore should appear automatically; the NFS datastore will still need to be added manually.

The next step will be to finish configuring our networks and setting up vMotion...

Friday, 5 November 2010

Building a vLab Part 4: Virtual ESXi install

In the last post in this mini-series, we created a VM for vCenter Server and set it up in such a way that the install could be rapidly rebuilt in the future. The next step is to create our virtual ESXi instances (called vESXi in this blog).

When sizing the VMs, it's worth thinking of the vESXi hosts as if they were physical. For example, most physical ESXi servers will have multiple network cards, so we'll put several in our vESXi servers.

Create the virtual machine as you normally would, specifying the following parameters:
  • Guest OS: Other (64-bit)
  • CPU: 2 vCPU
  • Memory: 4096MB
Use the datastore available to the pESXi server and not the OpenFiler VM. Accept the default virtual disk size of 8GB and tick "Allocate and commit space on demand (Thin Provisioning)".

Edit the settings of the VM and add more network adapters (remember, we need this VM to appear similar to a physical host). For the vLab, use a total of 6 network adapters in pairs for the LAN, Storage LAN and vMotion LAN respectively.
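
In the vESXi VM's .vmx file, the pairing ends up looking something like this (the port group names are whatever you created on the pESXi host, so treat this as illustrative rather than exact):

    ethernet0.networkName = "LAN"
    ethernet1.networkName = "LAN"
    ethernet2.networkName = "Storage LAN"
    ethernet3.networkName = "Storage LAN"
    ethernet4.networkName = "vMotion LAN"
    ethernet5.networkName = "vMotion LAN"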

The other networking step required is to edit each vSwitch on the pESXi host, select the port group, and edit the security settings as follows:

  • Promiscuous Mode: Accept
  • MAC Address Changes: Accept
  • Forged Transmits: Accept

Do this for all the vSwitches.

While this is enough to get ESXi installed, you won't be able to power on any nested VMs without making the following change. When the vESXi VM is created (but not powered on), right click on the VM and select "Edit Settings". On the Options tab, select Advanced, General, Configuration Parameters. Add a new row and give it the name monitor_control.restrict_backdoor with a value of true:
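
Equivalently, the parameter can be added straight into the vESXi VM's .vmx file while the VM is powered off:

    monitor_control.restrict_backdoor = "true"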

Now insert/connect the ESXi install CD/ISO and power on the vESXi VM. To be honest, this next bit should be simple (especially as you've already installed the physical ESXi server).

Once ESXi is installed in the VM, assign it a static IP address.

Now open a second vSphere client instance and connect to the vCenter Server. Within the vCenter Server, create a new Datacenter and add a new host, pointing to the IP address of the newly created vESXi VM. When added, the new server should look similar to this:

There's not much point in having a lab with only a single server, so repeat the process detailed in this post and create a second VM. In the next post, we'll configure it all up...