The best way to get under the hood and see how the VNXe works is to enable SSH through the Unisphere web interface (it's under Settings > Service System), then open an SSH session to the VNXe as the "service" user. First piece of information: it's running SUSE Linux Enterprise Server 11 (64-bit).
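Connecting is a one-liner from any machine that can reach the management interface (substitute your VNXe's management IP for the placeholder):

# ssh service@<management-ip>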
The most interesting command for finding out information about the VNXe storage system is "svc_storagecheck", which takes a number of parameters.
If we start at the bottom of the stack with "svc_storagecheck -b" (for backend information), we get a lot of information about the array. Note that the word "Neo" crops up a lot; it was the codename for the VNXe.
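The backend output is long, so it's worth paging through it (assuming less is available in the service account's shell, which it should be on SLES):

# svc_storagecheck -b | less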
Some of the useful details revealed include the supported RAID types:
RAID Type | Min Disks | Max Disks |
---|---|---|
NEO_RG_TYPE_RAID_5 | 3 | 16 |
NEO_RG_TYPE_DISK | 1 | 1 |
NEO_RG_TYPE_RAID_1 | 2 | 16 |
NEO_RG_TYPE_RAID_0 | 3 | 16 |
NEO_RG_TYPE_RAID_3 | 5 | 9 |
NEO_RG_TYPE_HOTSPARE | 1 | 1 |
NEO_RG_TYPE_RAID_1_0 | 2 | 16 |
NEO_RG_TYPE_RAID_6 | 4 | 16 |
It's worth noting that not all of the above are directly accessible by the user (there is no way to manually create a RAID3 group, to my knowledge). The 16-disk limit per RAID Group is also found on the CLARiiON.
The svc_storagecheck command also outputs details on "Flare's Memory Info", which shows two objects (one per SP?), each having a total capacity of 900 (MB?), with 128 (MB?) Read Cache and 768 (MB?) Write Cache. This might be a surprise if you were expecting the entire 8GB of memory to be available; a lot of that 8GB is used by the FLARE and DART subsystems, along with the Linux operating system itself.
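As a quick sanity check on those figures, the two caches don't quite sum to the stated total:

# echo $(( 128 + 768 ))
896

That leaves 4 (MB?) per object unaccounted for, presumably metadata or overhead of some kind.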
There is also information on "2 Internal Disks", which presumably refers to the internal SSD in each SP that is used to store the operating environment.
Eight Ethernet ports are listed, as are the two Link Aggregation ports that I have set up within Unisphere.
Each of the 12 disks in the array is also detailed, including manufacturer (Seagate), capacity, speed, slot number etc. Each disk is assigned a type of "NEO_DISK_TYPE_SAS" and also contains a reference to the RAID Group that it belongs to.
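If you just want the disk entries rather than the full backend dump, filtering on that type string should do the trick (this assumes each disk's type appears on its own line of the output):

# svc_storagecheck -b | grep NEO_DISK_TYPE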
There are 2 Private RAID Groups, but no LUNs are presented from them and I cannot determine what they are for; I assume they're used by the operating system.
On my VNXe, there are three additional RAID Groups:
Number | RAID Type | Number of Disks | Number of LUNs |
---|---|---|---|
0 | NEO_RG_TYPE_RAID_5 | 5 | 2 |
1 | NEO_RG_TYPE_HOTSPARE | 1 | 1 |
2 | NEO_RG_TYPE_RAID_6 | 6 | 16 |
The first of these RAID Groups is the RAID5 (4+1) set of 300GB SAS disks; the second isn't really a RAID Group at all but the SAS hot spare disk. The final RAID Group comprises the six 2TB NL-SAS disks in a RAID6 (4+2) configuration.
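As a rough sanity check on usable capacity (data disks × disk size, ignoring formatting overhead and the decimal/binary GB distinction):

# echo $(( (5 - 1) * 300 ))     # RAID5 4+1 of 300GB SAS = 1200GB usable
# echo $(( (6 - 2) * 2000 ))    # RAID6 4+2 of 2TB NL-SAS = 8000GB usable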
The 2 LUNs are presented from the SAS disks in a way that looks similar to how it's done on the CLARiiON (except that on the CLARiiON, each LUN would typically be assigned to a different SP, whereas on the VNXe both LUNs are on the same SP).
I'm not sure why the NL-SAS RAID Group presents 16 LUNs; it's possibly due to the size of each disk. Each of these LUNs is striped across the whole RAID Group.
The next part of the "svc_storagecheck -b" output details the 19 LUNs that have been defined above.
Each LUN has a name prefixed with "flare_lun_", which gives a big clue to its history. The default owning controller and current controller are also defined.
The final part of the "svc_storagecheck -b" output details the Storage Groups used by the array. A Storage Group is an intersection of LUNs and servers. For example, LUN0, LUN1 and LUN2 could be added to a Storage Group with ServerA and ServerB; both ServerA and ServerB can then see LUNs 0, 1 and 2.
There are some in-built Storage Groups (~management, ~physical, ~filestorage and control_lun4) as well as dart0, dart1, dart2 and dart3. The 2 LUNs from the 300GB SAS RAID Group belong to Storage Group "dart0" along with the IDs of the Storage Processors. Similarly, the 16 LUNs from the 2TB NL-SAS RAID Group are mapped to "dart1" along with the two Storage Processors. Storage Groups dart2 and dart3 are unused, presumably reserved for future use.
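Assuming the dart* names appear literally in the Storage Group section of the output, a case-insensitive grep is a quick way to see these mappings without wading through the full dump:

# svc_storagecheck -b | grep -i dart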
We can also get some more disk information by using the "svc_neo_map" command, specifying a LUN number:
# svc_neo_map --lun=0
This command can be used to help map the FLARE side to the DART side of the VNXe.
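To dump the mapping for all 19 LUNs in one pass, a simple loop over the LUN numbers should work (assuming the IDs run contiguously from 0 to 18, consistent with the RAID Group details above):

# for i in $(seq 0 18); do svc_neo_map --lun=$i; done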
And this pretty much concludes the FLARE side of the VNXe. The resulting LUNs have been carved from the RAID Groups and presented to the Storage Processors in a configuration that the DART side of the VNXe will be able to understand. We'll look in more detail at the DART side in the next post.