Each test was run four times and the results averaged. All testing was performed against a four-disk RAID-Z pool with an SSD read cache.
For the first test, the ZFS filesystem was configured sync=standard and compression=off. This resulted in the following averages:
- Block Writes: 65MB/sec
- Rewrites: 39MB/sec
- Block Reads: 173MB/sec
- Random Seeks: 451.55/sec
For the second test, sync=standard and compression=on:
- Block Writes: 148MB/sec
- Rewrites: 107MB/sec
- Block Reads: 218MB/sec
- Random Seeks: 2036.1/sec
As the results show, enabling compression delivers a large performance boost. This is likely because the benchmark data compresses well, so far less data has to be physically written to and read from the disks.
Although not recommended where data integrity is important, ZFS supports disabling synchronous writes entirely. The third test was run with sync=disabled and compression=on:
- Block Writes: 164MB/sec
- Rewrites: 113MB/sec
- Block Reads: 216MB/sec
- Random Seeks: 2522.76/sec
As expected, disabling synchronous writes improved the server's write performance, with a corresponding knock-on benefit for rewrites. Block reads, although marginally slower than in the second test, were close enough to suggest the difference was environmental.
Although running with synchronous writes disabled gave the highest performance, I opted for sync=standard and compression=on to get the best possible data integrity.
While running the benchmark with compression=on, I noted in the vSphere client that both CPU cores in the Microserver ran at nearly 100% (the Nexenta VM has 2 vCPUs assigned). This suggests that performance was limited not by the disks but by the rather weak CPU in the Microserver.
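This kind of CPU bottleneck can also be confirmed from inside the VM itself. A minimal sketch, assuming the standard Solaris mpstat utility is present on the Nexenta appliance:

```shell
# Report per-CPU utilisation every 5 seconds while the benchmark runs.
# Sustained near-100% usr+sys time across all vCPUs points to a CPU
# bottleneck rather than a disk one.
mpstat 5
```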
11 comments:
Is your Microserver the older N36L or the newer N40L?
Timo, I have the N36L.
How does deduplication affect your transfer speeds?
Hi "Unknown".
I'm not using de-duplication. ZFS stores its de-duplication table in memory, so it is very RAM-intensive. Because of this, I've opted not to use it in this configuration.
Hi, how did you check
Block Writes: 148MB/sec
Rewrites: 107MB/sec
Block Reads: 218MB/sec
Random Seeks: 2036.1
What is the command for that?
Hi Sam
I logged into the Nexenta VM as the admin user and then ran the "su" command (not "su -", in case you're familiar with Unix).
This got me to a proper Unix prompt rather than the restricted one Nexenta uses.
Changing a filesystem property requires the "zfs" command. For example, to enable compression on the filesystem called "test" in the zpool called "tank":
# zfs set compression=on tank/test
Ditto for setting sync=standard:
# zfs set sync=standard tank/test
You can view your zpools by running:
# zpool list
And you can view your ZFS filesystems by running:
# zfs list
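If you want to confirm that the properties took effect, the "zfs get" command will show them. A quick sketch, using the same illustrative "tank/test" filesystem as above:

```shell
# Show the current compression and sync settings, plus how much space
# compression is actually saving (compressratio is a read-only property).
zfs get compression,sync,compressratio tank/test
```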
Hope this helps!
How big was the file being tested?
Judging by the benchmark results, I suspect they weren't sufficiently larger than the 4GB of RAM you gave to Nexenta.
Hi David
I don't have the specific details anymore, but from memory, Bonnie++ creates files that are twice the size of the RAM assigned to the VM on which it is running.
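If you'd rather control that yourself than rely on the default, Bonnie++ lets you set the file size explicitly. A sketch, where the directory and size are illustrative; the -s value should be at least twice the VM's RAM so the cache can't absorb the test:

```shell
# -d: directory on the pool under test; -s: total file size to write;
# -u: user to run as when started as root. Path and size are examples.
bonnie++ -d /volumes/data/bench -s 8g -u admin
```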
Hi, thanks for the helpful blog entry.
I have also tried almost the exact same setup, but I'm getting some strange write speed issues. Not sure if you can help at all.
I've gone for:
HP N40L running ESXi 5, with NexentaStor as a VM (1 vCPU and 4GB RAM allocated to the VM)
The plan was to have a 3 x 2TB ZFS RAID-Z1 pool, built from 3 separate VMFS drives on separate disks allocated by ESXi.
Dedup off and compression off.
However, that didn't work, so I've simplified things for testing.
Currently i have:
1 x 2TB ZFS pool (no RAID) on a VMFS drive allocated by ESXi.
Dedup off and compression off.
However, in all setups, when I write to any share over CIFS or FTP, the maximum throughput and copy speed I can get is 2-3MB/s write (copying a 400MB AVI from a Windows 7 box).
Copying the same file back the other way gives approximately 70-80MB/s read, as expected.
I can't get to the bottom of why the write speed is so slow.
I've installed NexentaStor from the ISO as I couldn't find a VM image for the Community version.
I also ran this test on the NexentaStor VM:
admin@nas1:/volumes/data/Users$ time dd if=/dev/zero of=/volumes/data/Users/temp bs=1M count=1K
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 104.268 seconds, 10.3 MB/s
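One caveat worth noting with dd benchmarks like this: without a final flush, some of the written data may still be sitting in the cache when dd reports its figure, and with compression enabled a stream of zeros from /dev/zero compresses to almost nothing, which makes results look unrealistically fast. A hedged variant that includes the flush in the timing (the output path is illustrative):

```shell
# Time the 1 GiB write including the final sync, so any cached data
# is flushed to disk before the clock stops.
time sh -c 'dd if=/dev/zero of=/volumes/data/Users/temp bs=1M count=1024 && sync'
```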
I've also opened a thread at: http://www.nexentastor.org/boards/2/topics/6744 if you are able to help at all and have any ideas.
I'm sorry to have to ask for your help like this, but I found your posts while googling similar setups.
Many Thanks
Jetto
Hi Jetto
Thanks for the comment. I checked the Nexentastor forum link you posted and it appears you have the answer?
JR