Monday, 12 August 2013

VNXe: Using Unisphere remotely

I'm not sure if this has always been in Unisphere or was added in a recent release, but I've only just discovered this feature...

The VNXe is managed by a web-based Flash application called Unisphere. The application makes heavy use of smooth fades for menus and on-screen information. All very impressive, until you're working over an RDP session, at which point it becomes mildly annoying.

Fortunately, there is an option to turn the fading off. To enable it, go to Settings and then Preferences, tick the box labelled "Optimize for remote management access" and click "Apply Changes".


The application will now feel a lot snappier.

Tuesday, 6 August 2013

EMC VNXe volume layout

Because of the way the VNXe "simplifies" the allocation and management of storage, it can be difficult to work out just what's happening under the hood. This can result in some unexpected behaviour...

When we originally purchased the VNXe, we only had six NL-SAS disks, which were automatically assigned to the "Capacity Pool". The VNXe configured the six disks as a single RAID Group in a RAID 6 (4+2) layout (i.e. four data disks and two parity disks). From this pool, we carved out a volume, which is striped across all six disks:

Figure 1 - single RAID Group


Additional volumes can be created, assuming there is capacity left in the pool, and they would also be striped across the disks in the same RAID Group.
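
As a rough illustration of what that layout means for usable space, here is a quick sketch. The 3 TB drive size is my assumption (substitute your actual drive size), and pool and filesystem overheads are ignored, so real figures will be lower:

# Rough usable capacity of one RAID 6 (4+2) group.
# 3 TB per NL-SAS drive is an assumption; overheads are ignored.
DRIVE_TB=3
DATA_DISKS=4   # 4 data + 2 parity
echo "Usable per RAID Group: $((DRIVE_TB * DATA_DISKS)) TB"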

Since the original installation, we have added a couple of expansion trays and now have a total of 24 NL-SAS disks. The VNXe has configured three extra RAID 6 (4+2) Groups, but it does not automatically rebalance existing volumes across all the RAID Groups in the pool.

When creating a new volume, the VNXe will now stripe across all four RAID Groups (in fact, four RAID Groups is the maximum the VNXe will stripe over, regardless of the number of RAID Groups in a pool). This means that although two volumes may be created from the same pool and be of equal size, they can have very different performance characteristics. Volume B should deliver roughly four times the IOPS of Volume A:

Figure 2 - multiple RAID Groups
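
To put rough numbers on that claim, here is a back-of-envelope sketch. The figure of roughly 80 IOPS per 7.2k NL-SAS spindle is my own assumption, not an EMC number, and it ignores the RAID 6 write penalty and any caching:

# Back-of-envelope read IOPS for the two volumes in Figure 2.
# ~80 IOPS per 7.2k NL-SAS spindle is an assumption.
IOPS_PER_DISK=80
echo "Volume A (1 RAID Group,  6 disks):  $((6 * IOPS_PER_DISK)) IOPS"
echo "Volume B (4 RAID Groups, 24 disks): $((24 * IOPS_PER_DISK)) IOPS"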



A further complication can arise if the first volume completely filled its RAID Group. In that case, the VNXe cannot use the first six disks at all, and the second volume will be striped across only the remaining 18 disks:

Figure 3 - Surprise!


Unfortunately, because the VNXe is "simplified", it's not possible to see this from the Unisphere web interface. EMC seem to assume that the only metric non-storage specialists care about is capacity. Not true!

Fortunately, there is a way to identify how many disks are actually assigned to a volume, even if it's not something that can be tweaked. To do this, open an SSH session (see my previous blog post on how to do this) and run the svc_neo_map command:


svc_neo_map --fs=volume_name

The list of filesystems configured on the VNXe can be viewed in Unisphere under System > Storage Resource Health. The svc_neo_map command runs a number of internal commands and produces a lot of output, but it's the last part, the output of "c4admintool -c enum_disks", that interests us here. This shows the number and identity of the disks that have been assigned to the filesystem. In the following example, the volume is spread over six disks:

# Show disk info, based on wwns from RG.
root> c4admintool -c enum_disks

Disk #    0  Enc #    1  Type 10(DRIVE_CLASS_SAS_NL, RPM 7202) Flag ()  Phys Cap: 3846921026(0xe54b5b42) Disk WWID      wwn = 06:00:00:00:05:00:00:00:0f:00:00:00:00:01:00:03
Disk #    1  Enc #    1  Type 10(DRIVE_CLASS_SAS_NL, RPM 7202) Flag ()  Phys Cap: 3846921026(0xe54b5b42) Disk WWID      wwn = 06:00:00:00:05:00:00:00:10:00:00:00:01:01:00:03
Disk #    2  Enc #    1  Type 10(DRIVE_CLASS_SAS_NL, RPM 7202) Flag ()  Phys Cap: 3846921026(0xe54b5b42) Disk WWID      wwn = 06:00:00:00:05:00:00:00:11:00:00:00:02:01:00:03
Disk #    3  Enc #    1  Type 10(DRIVE_CLASS_SAS_NL, RPM 7202) Flag ()  Phys Cap: 3846921026(0xe54b5b42) Disk WWID      wwn = 06:00:00:00:05:00:00:00:12:00:00:00:03:01:00:03
Disk #    4  Enc #    1  Type 10(DRIVE_CLASS_SAS_NL, RPM 7202) Flag ()  Phys Cap: 3846921026(0xe54b5b42) Disk WWID      wwn = 06:00:00:00:05:00:00:00:13:00:00:00:04:01:00:03
Disk #    5  Enc #    1  Type 10(DRIVE_CLASS_SAS_NL, RPM 7202) Flag ()  Phys Cap: 3846921026(0xe54b5b42) Disk WWID      wwn = 06:00:00:00:05:00:00:00:14:00:00:00:05:01:00:03
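
If all you want is a quick count of how many disks back a given filesystem, the enum_disks lines can be counted directly. This is just a convenience built on the same command shown above, and it assumes the "Disk #" lines only appear in that final section of the output:

# Count the "Disk #" lines in the svc_neo_map output for a filesystem.
svc_neo_map --fs=volume_name | grep -c "^Disk #"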


What do you do if you find yourself unable to take advantage of all your RAID Groups? As mentioned above, the VNXe does not rebalance existing volumes when new disks are added. The only way to take advantage of the new disks is to create a new volume, copy everything across and delete the original. Not ideal. Even the VNXe's bigger sibling, the VNX, had this limitation until the recent "Inyo" release. Hopefully, the new auto-rebalance feature will trickle down and appear in a future VNXe update.

As it stands, I think the current VNXe Unisphere is too simple. While I'm glad that the VNXe hides the FLARE LUNs and DART dvols, slices, stripes, metas, etc. from the user, it needs to go further in showing where a volume will be placed and what performance can be expected. That is something even non-storage admins care about!

Any comments/corrections welcome.