Tuesday 20 October 2009

Building a ZFS filesystem using RAID60

We are starting to use ZFS as a production filesystem at $WORK. Our disk array of choice is the Sun StorageTek 2540 which provides hardware RAID capabilities. When building a ZFS environment, the decision has to be made on whether to use the hardware RAID and/or the software RAID capabilities of ZFS.

Having watched Ben Rockwood's excellent ZFS tutorial, my understanding of ZFS is much better than before. For our new fileserver, I've created the following:

On the StorageTek 2540, I've created two virtual disks in a RAID6 configuration. Each virtual disk comprises 5 physical disks (3 for data, 2 for parity), and each is assigned to a different controller. On top of each virtual disk I've created a 100GB volume. These are published as LUNs to the Solaris server and appear as c0d3 and c0d4.
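As an aside, if a newly published LUN doesn't show up on the Solaris host straight away, rescanning the device tree usually sorts it out. This is just a rough sketch assuming a standard Solaris 10 host:

# devfsadm -c disk
# echo | format

devfsadm rebuilds the /dev links for disk devices, and piping echo into format simply lists the disks the host can now see and then exits.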

Both LUNs are then used to create a zpool called "fileserver":

# zpool create fileserver c0d3 c0d4
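From there the pool can be carved up into ZFS filesystems as required; the dataset names below are purely hypothetical examples rather than our actual layout:

# zfs create fileserver/export
# zfs create fileserver/export/home
# zfs set quota=20g fileserver/export/home

Each dataset gets its own properties (quotas, compression and so on) while drawing on the shared pool of space.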

By default, ZFS stripes data dynamically (in variable-width stripes) across the top-level devices of a pool, so the hardware and software RAID combined give a "RAID60" configuration: data is striped across the two RAID6 virtual disks for a total width of 10 spindles.
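To see the stripe in action, per-device I/O statistics show reads and writes being spread across both virtual disks:

# zpool iostat -v fileserver 5

The trailing 5 just prints a fresh sample every five seconds.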

Why RAID6 and not RAID10? Apart from the cost implications, this is a fileserver, so the bulk of the workload will be reads, and RAID6 handles reads very well (while being weaker at writes).
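To put rough numbers on the cost side: across the same ten spindles, RAID10 (striped mirrors) leaves 10 / 2 = 5 spindles' worth of usable capacity, whereas the two 3+2 RAID6 groups leave 2 x 3 = 6 spindles' worth, so RAID6 also buys an extra spindle of usable space.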

Now, when I start running out of space, I can create a new volume, publish it as a LUN, and add it to the zpool:

# zpool add fileserver c0d5
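For the cautious, zpool add also supports a dry run, which prints the layout it would create without actually modifying the pool:

# zpool add -n fileserver c0d5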

A quick check of zpool status shows the pool's health and its top-level devices (a LUN added this way simply appears here as another device alongside c0d3 and c0d4):

# zpool status
  pool: fileserver
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        fileserver    ONLINE       0     0     0
          c0d3        ONLINE       0     0     0
          c0d4        ONLINE       0     0     0

errors: No known data errors

Running zpool list reports the total, used, and available space in the pool:

# zpool list
NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
fileserver   199G  69.4G   130G  34%  ONLINE  -


All told, it was very simple to do and the result is a pretty fast filesystem.
