Tuesday, 30 September 2008

We've had VMware Infrastructure 3 for over a year, but our usage of it has been "organic" (read: unplanned and unstructured). Yesterday I started making moves to rectify that.
One of the things we didn't get right at the start was setting up the SAN to store all the VMs. This resulted in a number of VMs crammed onto internal storage. Some VMs were loaded on SAN LUNs, but we didn't have the LUN mappings set up to allow multiple ESX servers to see them.
As I said, it was an organic install.
So yesterday I created a new 250GB LUN on the SAN and mapped it to our "infrastructure" ESX servers (three HP DL360 G4 servers - not super fast, but sufficient for domain controllers etc.). For the first time, multiple ESX servers could see the same storage.
My first action was to create a small Linux server on the shared storage and test VMotion. This worked fine, so I started copying the VMs from internal storage to the shared LUN. To do this, I shut down the VM, browsed to the VM folder in the datastore browser, highlighted it and selected "Move to..." from the context-sensitive menu. I then selected the new SAN LUN and started the move.
Once the move completed, the VMX file needs its execute bit set (using chmod from the ESX service console). I then removed the VM from the VirtualCenter inventory and re-added it by right-clicking the VMX file in the datastore browser and selecting "Add to inventory".
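For what it's worth, setting the execute bit is just a chmod +x on the .vmx file from the service console. If you wanted to script that step, a minimal Python equivalent would look something like the sketch below (the datastore path is made up):

    import os, stat

    # Hypothetical path to the moved VM's .vmx file on the new SAN datastore
    vmx_path = "/vmfs/volumes/san-lun-250gb/testvm/testvm.vmx"

    # Add the execute bits for owner, group and others -- the same effect as
    # running "chmod +x testvm.vmx" from the service console
    mode = os.stat(vmx_path).st_mode
    os.chmod(vmx_path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)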
Upon starting the VM, a question needs to be answered (the icon for the VM changes to a speech bubble; right-click and select Answer Question). Because the VM has been moved rather than copied, elect to "Keep" the VM's identifier (UUID). The VM will then fire up properly.
Or it did in most cases. One VM gave an "Unable to lock file" error. A quick Google suggested a leftover .LCK file was to blame, but I didn't have one of those. To resolve the problem, I edited the VM settings and removed the hard disk (from the VM, NOT the disk file!). I then re-added the same disk, and the VM started without problems.
Saturday, 27 September 2008
Migration to Google Mail
Although I've had a Google Mail account for a few years, I've really only used it as a test account, or when I've not had access to my personal email. Instead, my main (church) email address has been accessed by Thunderbird on my local machine. Obviously this is not ideal from a cloud computing perspective.
So tonight I took the step of migrating all my email from Thunderbird to Google. I did this using the IMAP interface to Gmail and dragging and dropping messages up to the cloud.
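If you'd rather script that sort of migration than drag folders around in Thunderbird, a rough Python sketch using imaplib and the mailbox module would do the same job. The mbox path and account details below are placeholders, and IMAP access has to be enabled in your Gmail settings first:

    import imaplib
    import mailbox
    import time
    from email.utils import parsedate_to_datetime

    # Placeholders -- substitute your own Thunderbird mbox file and account details
    MBOX_PATH = "/home/me/.thunderbird/profile/Mail/Local Folders/Inbox"
    GMAIL_USER = "someone@gmail.com"
    GMAIL_PASS = "password"

    imap = imaplib.IMAP4_SSL("imap.gmail.com")
    imap.login(GMAIL_USER, GMAIL_PASS)

    for msg in mailbox.mbox(MBOX_PATH):
        # Try to keep each message's original date; fall back to "now" if the
        # Date: header is missing or mangled
        try:
            internal_date = imaplib.Time2Internaldate(parsedate_to_datetime(msg["Date"]))
        except (TypeError, ValueError):
            internal_date = imaplib.Time2Internaldate(time.time())
        # APPEND the message into the Gmail inbox over IMAP
        imap.append("INBOX", None, internal_date, msg.as_bytes())

    imap.logout()

Uploading over IMAP this way leaves the local copies in Thunderbird untouched, so nothing is lost if the transfer stops part way through.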
In order to ensure that all new messages go to Google, I configured Gmail to download my email using POP. I've also got the webmaster account on a Google Apps domain for a local charity, which I've now changed to forward to Gmail.
The net result was an inbox with more than 1700 messages, taking up 325MB of my 7178MB quota.
The next step was to use Gmail's filters to apply labels to my email based on the destination address. Once this was done, I spent a couple of minutes archiving everything older than a month.
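The archiving only took a couple of clicks in the web interface, but the same clear-out could be scripted over IMAP. A sketch of that idea is below (placeholder credentials again); it relies on Gmail treating a delete-and-expunge from INBOX as "archive" (remove the Inbox label), which as far as I can tell is the default behaviour:

    import imaplib
    from datetime import date, timedelta

    GMAIL_USER = "someone@gmail.com"   # placeholder credentials
    GMAIL_PASS = "password"

    imap = imaplib.IMAP4_SSL("imap.gmail.com")
    imap.login(GMAIL_USER, GMAIL_PASS)
    imap.select("INBOX")

    # Find everything that arrived more than a month ago (IMAP wants DD-Mon-YYYY)
    cutoff = (date.today() - timedelta(days=31)).strftime("%d-%b-%Y")
    typ, data = imap.search(None, "BEFORE", cutoff)

    # Flagging a message as deleted in INBOX and expunging archives it in Gmail;
    # it keeps its labels and remains in All Mail
    for num in data[0].split():
        imap.store(num, "+FLAGS", r"\Deleted")

    imap.expunge()
    imap.logout()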
Now I have a tidy inbox, accessible from anywhere I can use a browser, collecting email from three different addresses and automatically labelling incoming messages.
Saturday, 6 September 2008
VMware ESXi on an HP ML110 G5
I recently bought an HP ML110 G5 from Ebuyer because it was going very cheap (£220 for a dual-core Xeon, 1GB RAM and a 250GB hard disk). This box will be my new server, possibly just for trying things out with, but maybe as a replacement for my Shuttle, which currently runs VMware Server on openSUSE.
Unfortunately, the first attempt at installing ESXi failed with the message that no storage devices could be found. After some searching, I found that changing the disk setting in the BIOS from "Auto" to "SATA" fixed the problem and the disk was found (full details here).
With the disk now visible, the installer failed again with the following error: "Unable to write image to the selected disk. This may be caused by bad sectors on the device or another hardware problem."
One comment I read suggested that the amount of memory could be the problem. As this box is to be a VM server anyway, I ordered another 4GB of RAM (CT810056 4GB kit (2GBx2), 240-pin DIMM Upgrade for a HP - Compaq ProLiant ML110 G5 System) from Crucial and fitted it this morning.
ESXi installed without any problems. Now to install some VMs...