Sunday 18 September 2011

HP Microserver: Building a SAN in a box

[Update 05/2013: I've since migrated my Nexenta install to a dedicated server. See here for the details.]

Following a recent upgrade to my home lab, my storage now looks like this:
  • A unified SAN capable of providing both block (iSCSI) and file (NFS/CIFS) data.
  • Eight disks:
    • 2 x SSD
    • 6 x SATA
  • Two discrete RAID groups:
    • a 400GB (usable capacity) two-disk mirror
    • a 1.2TB (usable capacity) four-disk parity stripe
  • Both RAID groups have dedicated 20GB flash read caches.
  • LUNs can be configured to support compression and/or deduplication.
  • Copy-on-write (COW) snapshots for all filesystems and LUNs.
  • Support for replicating filesystems and LUNs to a second SAN.
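Under the hood, the two RAID groups are just standard ZFS pools. As a rough sketch, the equivalent zpool commands would look something like the following (the pool and device names are purely illustrative — in practice the Nexenta web GUI builds the pools for you):

```shell
# Two-disk mirror with a dedicated 20GB flash read cache (L2ARC)
# (device names c1t*d0 are hypothetical examples)
zpool create tank1 mirror c1t1d0 c1t2d0 cache c1t7d0

# Four-disk RAIDZ parity stripe with its own 20GB L2ARC device
zpool create tank2 raidz c1t3d0 c1t4d0 c1t5d0 c1t6d0 cache c1t8d0
```

The "cache" vdev is what gives each pool its dedicated flash read cache: ZFS uses those devices as a second-level read cache (L2ARC) behind the RAM-based ARC.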

All sounds pretty funky. Must be expensive, right?

Actually, the above is all achieved using a very cheap HP Microserver running VMware ESXi and the Nexenta virtual storage appliance. I've assigned the Nexenta VM 4GB RAM, but it would happily use more for its L1 read cache.

The HP Microserver has 4 x SATA disks (2 x 1TB and 2 x 500GB) plus a single 60GB SSD.

The Nexenta virtual machine is then assigned VMDK files. The first RAID group is a mirror built from one VMDK file on each of SATA disks 1 and 2. The second RAID group is a RAIDZ parity stripe built from one VMDK file on each of SATA disks 1, 2, 3 and 4. The flash read caches are 20GB VMDK files on the SSD.
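On the ESXi side, VMDKs of the right sizes can be carved out of each datastore with vmkfstools before being attached to the Nexenta VM. The datastore names and paths below are illustrative examples, not taken from my actual build:

```shell
# One VMDK per physical SATA disk for the mirror
# (datastore names "sata1"/"sata2" are hypothetical)
vmkfstools -c 400G -d zeroedthick /vmfs/volumes/sata1/nexenta/mirror-a.vmdk
vmkfstools -c 400G -d zeroedthick /vmfs/volumes/sata2/nexenta/mirror-b.vmdk

# 20GB read-cache VMDK on the SSD datastore
vmkfstools -c 20G -d zeroedthick /vmfs/volumes/ssd/nexenta/cache-a.vmdk
```

Thick (zeroedthick) provisioning avoids the allocate-on-first-write penalty you'd get with thin VMDKs on a storage appliance.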

The compression, deduplication, snapshot and replication features are provided by the ZFS filesystem.
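All of these features are exposed as ordinary ZFS operations. Assuming a pool called tank1 with a filesystem vols (names are illustrative), they look roughly like this:

```shell
# Enable compression and deduplication per filesystem or zvol
zfs set compression=on tank1/vols
zfs set dedup=on tank1/vols

# Take a copy-on-write snapshot (near-instant, space-efficient)
zfs snapshot tank1/vols@nightly

# Replicate to a second SAN by piping a snapshot stream over SSH
zfs send tank1/vols@nightly | ssh san2 zfs receive tank2/vols
```

Incremental replication (zfs send -i) only ships the blocks changed since the previous snapshot, which is what makes replicating to a second box practical over a home network.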

This is a pictorial representation of the configuration:


And this is what it looks like physically:




Oops. No, that's the NetApp at work. But functionality-wise they are quite similar (obviously the vastly more expensive NetApp is much faster!).

This is the real physical hardware (on the far right, next to the ML110 and ML115):



Pretty small for such a setup. I've currently only got the built-in NIC in the Microserver, but will look at adding another to create a dedicated storage network.

17 comments:

KB said...

Very interesting post. I was thinking of doing something similar but was worried about the read/write performance of using Nexenta via a VM (versus installing it directly on the MicroServer).

Can you give an idea of the RW performance you are getting?

Thanks in advance!

JR said...

Hi KB. Thanks for the comment. I've just posted a new article detailing some initial benchmarking. You can find the article here.

mel said...

What are you using for the hypervisor?

Also, you say that you are using 6 SATA and 2 SSDs, and I am wondering how you squeezed all that in there? I just got the N40L and it has 4 3.5 inch bays + 1 5.25 on top. If you put SATA disks in all of those, that gives you 5 disks, then another on the eSATA port on the back? I'd like to know how you squeezed in 6 SATA + 2 SSDs =)

Your diagram shows 4 SATA and 1 SSD; what's going on with the other 2 SATA and the other SSD?

I'll definitely appreciate your answer
Kashif

JR said...

Hi Kashif. Thanks for the comment.

The 6 SATA and 2 SSDs are a bit of "smoke and mirrors". It's what the Nexenta operating system sees and thinks it's got, but this is abstracted by the virtualisation layer (the hypervisor is ESXi 5.0 btw).

If you look at the diagram in the post, the bottom layer shows the physical devices (albeit with 1 mistake: It should read 500GB not 500TB for one of the disks). That's what's actually in the server.

Those devices are then sliced and diced at the VMware level to create multiple VMDK files, and this is why it appears to Nexenta as 6 SATA disks and 2 SSDs.

Hope that makes sense.

Cheers

JR

mel said...

It does, thanks!

Mike said...

So I assume that you are only using 1 vCPU for the nexenta VM? I am installing nexenta manually (cannot see the vmware image for download) and plan to mirror pairs of drives rather than use raid-z. I'm hoping that the performance will be OK without an SSD for the cache ...

AJM said...

Hi, I was wondering about where you installed NexentaStor? Seems to me you'd have to install it on a single physical drive, which creates a pretty significant single point of failure. Just curious how, or if, you overcame that. Regardless, thank you for sharing this; it's a great article and has inspired me to do something similar, but protecting the NexentaStor install has me stymied.

Thanks!
AJM

AJM said...

Ah, sorry, I commented before having actually run through the install. I hadn't realized it lets you do a mirrored installation across multiple drives. That's really nice. It's up and running happily now, thanks again!

Best,
AJM

Unknown said...

Well, I'm still confused :)
Can you please explain on which vmdk(s) you installed the Nexenta OS?
(It has to be 1 or 2 of the 400GB vmdks, but which one? And does Nexenta then let you use the rest of the remaining disk space as storage in a pool?)
Also curious how many VMs you are serving this storage to.

Looking at building a similar setup but still undecided whether to serve VMDKs, use RDMs, or even pass through the whole disks using VT-d to the Nexenta VM (not possible with a Microserver).
Any tips you may have are welcome!
Thanks for your great articles about Nexenta.. you have inspired me!

JR said...

Hi Aleks

Thanks for the comment. I understand your confusion!

In an attempt to keep the diagram clear and illustrate the datastores and filesystems that are exported from the Nexenta appliance, I did not include the VMDK used for the boot disk.

The boot disk is an 8GB VMDK (although if I did it again, I'd make it twice the size to accommodate updates more easily).

This 8GB VMDK belongs to a system defined zpool called "syspool" and is independent from the other disks and pools.

I'm not currently running my lab, so can't tell you how many VMs I've run from it, but it's probably around 10-12 at a time. This isn't a Nexenta limit, only that I didn't need to be running more at one time.

As for RDMs vs VMDKs, I've read that the overhead of a VMDK is pretty small, so I decided that the advantages of using VMDKs (such as snapshotting) outweighed the minimal performance gain of using RDMs.

Thanks for the kind comments about the blog.

JR

Unknown said...
This comment has been removed by the author.
Unknown said...

Thanks JR. Just wondering if you had a build guide for your setup handy... I am sure lots of us leeches would love to follow your hard work and replicate this nice-looking system. I guess my obvious question is: are there any complexities introduced by going to ESXi 5.1 with 3TB drives? I am having a nice time trying to understand how to present the 3TB drives to VMs with ESXi 5.1, only because I am a noob.. Thanks!

Unknown said...

Hi Julian,

surprisingly I've got, with minor changes, the exact same setup at home: N40L and two ML110G6 attached to two dedicated ix4-200d iSCSI boxes.

You've stated that the N40L has an SSD installed. Where did you attach it alongside the 4 3.5" hard disks? Did you use the DVD bay or the eSATA? Did you use any kind of 2.5" to 3.5" conversion kits?

Yesterday I removed one of the 3.5" drive bays and attached an SSD directly to the connector without any kind of foothold :((.

Thanks in advance!

Greetings from Germany,

Marc

JR said...

Hello Marc

Thanks for the comment.

I had the SSD installed in the optical bay. I used a 2.5" to 3.5" adapter, and then a 3.5" to 5.25" adapter to fit it in place.

JR

JR said...

Hello Sam

I'm afraid I don't have a build guide. Actually, I've recently moved the Nexenta install from the Microserver to a physical ML110G5. This had been freed up when I bought my new ML110G7 servers.

The Microservers are great little boxes, but I was starting to push Nexenta a bit too far on them. Nexenta will happily eat all the RAM you give it and when you do compression or deduplication, you need a fair amount of CPU as well.

So this blog post is now out of date. I'll leave it up because it does work fine, just not when you want to push Nexenta further and try out bigger labs (such as vCloud Director).

joeblogs said...

You say you've got 6 SATA drives in there and 2 SSDs. I have one of these microservers as well, and there are only 4 SATA 3.5" drive bays.

Where did you put the extra (second) SSD? I know you fit one into the optical bay via an adapter. And what about the two extra SATA drives?

Thanks!

AJM said...

I'm pretty sure those six drives are four 3.5" spinning SATA drives and two 2.5" SSDs. That's how mine has been running. I used a converter I found on Amazon to mount the two SSDs in the optical bay. One connects up to the SATA header which is meant for the optical drive, and the other goes out the back and plugs into the eSATA connector, so you'll need a SATA-to-eSATA cable to make that work.

More importantly, to get the two extra drives running optimally, you have to flash the BIOS with a slightly modified version. This isn't all that scary; it just un-masks a hidden feature already there which allows you to run SATA ports 4 and 5 in AHCI mode instead of legacy IDE mode. Kinda dumb that they made that hidden in the first place.

Here's a link to get you there, but I can vouch for the fact that mine has been running nearly a year like this and it's rock solid dependable.

Good luck,
AJM

http://homeservershow.com/hp-proliant-n40l-microserver-build-and-bios-modification-revisited.html