Sunday, 24 October 2010

Building a vLab Part 2: Infrastructure Build

The journey begins! In order to build the vLab as detailed in part one, I'll be using my HP ML115 G5. This is a quad core, single CPU tower server in which I've installed 8GB RAM. It's also got 2 x 500GB SATA drives of which I'll be using one for the vLab environment (the other will be used for other projects). The ML115 G5 has an internal USB socket and ESXi can easily be installed on it, reserving the disk space for the VMs.

There is little point in recreating the same installation instructions over and over again when there is a perfectly good reference point. In this case, I'm using the excellent "Installing VMware ESXi 4.0 on a USB Memory Stick The Official Way" post from TechHead (the install for 4.1 is pretty much the same).

As the only physical VMware server, I'll be referring to it as the pESXi box.

At the end of the install process, I have an empty VM host connected to my physical network on which to build the VMs that represent the "physical" items in my virtual infrastructure: the router, the SAN/NAS, and the Active Directory domain controller that we'll need when vCenter Server is installed.


Building the LAN


VMware best practice is to separate different types of traffic onto different networks. The vLab needs four: virtual machine traffic, vMotion traffic, IP storage, and access to the physical network. To enable this, the pESXi server needs four virtual switches. All four contain "Virtual Machine" port groups, and the switch carrying the management interface to the pESXi box also has a VMkernel port.

  • vSwitch0: Connects to the physical (non-vLab) network
  • vSwitch1: The vLab LAN for management access and connecting VMs
  • vSwitch2: The vLab storage network for iSCSI and/or NFS traffic
  • vSwitch3: The vLab vMotion network
On the pESXi box, the networks look as follows:

*(screenshot: the four vSwitches as shown in the vSphere client)*
You will notice that vSwitches 1-3 are not connected to any physical adapter, and they never will be: the Vyatta router VM will provide layer 3 connectivity between these isolated networks and the physical LAN.
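If you prefer the command line to the vSphere client, the switches can be created from the pESXi host's Tech Support Mode shell with `esxcfg-vswitch`. This is only a sketch: the port group names here are my own labels, so substitute whatever you used.

```shell
# Create the three internal vSwitches and give each a Virtual Machine
# port group. Note there is no -L (uplink) step: the lack of a
# physical adapter on these switches is deliberate.
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -A "vLab LAN" vSwitch1
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -A "vLab Storage LAN" vSwitch2
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -A "vLab vMotion" vSwitch3
esxcfg-vswitch -l    # list all vSwitches to confirm
```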


    For reference, I'll be using the following subnets:

    • 192.168.192.0/24 - main network connected to the physical network
    • 192.168.10.0/24 - vLab Virtual Machine network
    • 192.168.20.0/24 - vLab storage network
    • 192.168.30.0/24 - vLab vMotion network


    Add static routes for these networks on your management PC. Alternatively, it may be more useful to add them on your main router so that traffic to the vLab subnets routes correctly from anywhere on your network.
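On a Windows management PC, for example, the routes might look like this, assuming the Vyatta eth0 address of 192.168.192.33 (from the router configuration shown later) as the next hop:

```shell
# Persistent (-p) static routes to the three vLab subnets via the Vyatta VM
route -p add 192.168.10.0 mask 255.255.255.0 192.168.192.33
route -p add 192.168.20.0 mask 255.255.255.0 192.168.192.33
route -p add 192.168.30.0 mask 255.255.255.0 192.168.192.33

# Linux equivalent:
# ip route add 192.168.10.0/24 via 192.168.192.33
```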

    The router is necessary because I want to give my vLab a completely separate IP range from the rest of my kit. For my non-lab kit to communicate with the vLab, I therefore need a layer 3 router. Vyatta offer a free "Core" edition that fits the bill. For this, I created a new VM with the following specification:
    • 1 vCPU
    • 256MB RAM
    • 8GB Hard Disk (thin provisioned)
    • 3 x Network Interfaces
      • 1 connecting to the default VM network (i.e., the physical LAN)
      • 1 connecting to the "vSphere Lab in a Box LAN"
      • 1 connecting to the "vSphere Lab in a Box Storage LAN"
    For information on installing Vyatta, see this guide. My Vyatta configuration (as displayed using the show -all command) is:

     interfaces {
         ethernet eth0 {
             address 192.168.192.33/24
             duplex auto
             hw-id 00:0c:29:50:0a:5b
             speed auto
         }
         ethernet eth1 {
             address 192.168.10.33/24
             duplex auto
             hw-id 00:0c:29:50:0a:65
             speed auto
         }
         ethernet eth2 {
             address 192.168.20.33/24
             duplex auto
             hw-id 00:0c:29:50:0a:6f
             speed auto
         }
         loopback lo {
         }
     }
     system {

         gateway-address 192.168.192.1
         host-name vyatta
         login {
             user root {
                 authentication {
                     encrypted-password $1$VBYqK71jAsu3bsoAznh22mx0pqp31nU/
                 }
                 level admin
             }
             user vyatta {
                 authentication {
                     encrypted-password $1$FdjsdebjGneXOIVw9exHrXRAcaN.
                 }
                 level admin
             }
         }
         ntp-server 69.59.150.135
         package {
             auto-sync 1
             repository community {
                 components main
                 distribution stable
                 password ""
                 url http://packages.vyatta.com/vyatta
                 username ""
             }
         }
         time-zone GMT
     }


    I mapped the Vyatta Ethernet adapters (eth0, eth1 and eth2) to the correct networks by comparing their MAC addresses (the hw-id values above) with those listed against each adapter in the vSphere client. The default route (pointing at my Internet router) makes things like downloading software from within the vLab easier.
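For reference, the configuration above boils down to a handful of commands in Vyatta's configuration mode; a sketch of the key ones is below (the hw-id values are learned automatically, so they don't need setting by hand):

```shell
configure
set interfaces ethernet eth0 address 192.168.192.33/24
set interfaces ethernet eth1 address 192.168.10.33/24
set interfaces ethernet eth2 address 192.168.20.33/24
set system gateway-address 192.168.192.1
set system host-name vyatta
commit     # apply the candidate configuration
save       # persist it across reboots
exit
```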

    Building the SAN

    VMware vSphere shines when the hosts have access to shared storage. The vLab ESXi servers will connect to an IP-based (iSCSI) SAN. There are multiple ways to achieve this, and one of the most common for home lab users is the Linux-based OpenFiler distribution.

    Again, to avoid reinventing the wheel, I'll point to the excellent TechHead post on configuring OpenFiler.

    The vLab-specific detail is that the OpenFiler VM connects to the vLab storage LAN, not the vLab VM LAN. The IP address for the OpenFiler VM is 192.168.20.1. In addition to the 8GB install disk, I've created a 100GB thin provisioned disk on which to store VMs. The OpenFiler storage will be used for the VMs that I'll install on the virtual ESXi servers. The Active Directory Domain Controller, Vyatta VM and vCenter Server will instead be installed directly onto the 500GB SATA datastore.
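Once the virtual ESXi hosts are built (part 3), a quick way to confirm they can reach the OpenFiler box across the storage LAN, before configuring the iSCSI initiator, is `vmkping`, which sends pings from a VMkernel interface rather than the management network:

```shell
# From an ESXi host's Tech Support Mode shell: test the VMkernel
# path to the OpenFiler VM on the storage LAN
vmkping 192.168.20.1
```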


    Installing the Active Directory Domain Controller

    VMware vCenter Server requires Active Directory, so we'll need a domain controller for the vLab. Best practice calls for at least two domain controllers for resilience, but we'll make do with just the one (this is a VMware lab, not a Windows lab). I sized the DC VM to be very small: 1 vCPU, 256MB RAM, an 8GB hard disk and 1 vNIC connecting to the vLab LAN with an IP address of 192.168.10.1/24. Although I prefer Windows Server 2008, the DC will run Windows Server 2003 because of its lower footprint.

    The setup was a standard Windows Server 2003 install, followed by running dcpromo. I called the host "labdc", the domain "vsphere.lab" and we're off.

    Okay, so we have everything ready apart from our virtual ESXi hosts and the vCenter server. We'll continue the journey in part 3.

    2 comments:

    Sarinyo said...

    Hi Julian Regel,

    seems like I have a lot of work to do now :-D. I'll receive my home server tomorrow (it's a dual core CPU and ships with only 1GB, but I've ordered 2x4GB to increase its RAM)

    I wonder how many physical NICs your server has, I guess only one, isn't it?

    Thanks, and keep going, this rocks!

    JR said...

    Hi Sarinyo

    Thanks for your comment.

    I've only got one physical NIC in the server. The way the lab is configured, you won't need more as the physical NIC is only used to log into the lab via vSphere, RDP, SSH etc.

    The next post will be out in a few days.

    Thanks

    JR