Each test was run four times and the results averaged. All testing was performed against a four-disk RAIDZ stripe with an SSD read cache.
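The original setup steps aren't shown, but a pool of this shape can be created along the following lines. This is an illustrative sketch only: the pool name "tank" and all device names are placeholders, not the actual hardware used here.

```shell
# Create a four-disk RAIDZ pool with an SSD as a read cache (L2ARC) device.
# "tank" and the c*t*d* device names are placeholders for illustration.
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 cache c2t0d0

# Confirm the vdev layout and cache device.
zpool status tank
```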
For the first test, the ZFS filesystem was configured with sync=standard and compression=off. This resulted in the following averages:
- Block Writes: 65MB/sec
- Rewrites: 39MB/sec
- Block Reads: 173MB/sec
- Random Seeks: 451.55/sec
For the second test, sync=standard and compression=on:
- Block Writes: 148MB/sec
- Rewrites: 107MB/sec
- Block Reads: 218MB/sec
- Random Seeks: 2036.1/sec
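Between runs, the properties would presumably have been toggled with `zfs set`. A sketch, assuming the dataset is named "tank" (a placeholder); note that the compression property only affects blocks written after it is set, so each test run needs to write fresh data:

```shell
# Enable compression for subsequent writes ("tank" is a placeholder name).
# Existing blocks stay uncompressed until they are rewritten.
zfs set compression=on tank

# Confirm the current property values before benchmarking.
zfs get compression,sync tank
```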
As the results show, enabling compression delivers a substantial performance boost: block writes more than doubled and random seeks improved more than fourfold.
ZFS also supports disabling synchronous writes entirely, although this is not recommended where data integrity is important. The third test was run with sync=disabled and compression=on:
- Block Writes: 164MB/sec
- Rewrites: 113MB/sec
- Block Reads: 216MB/sec
- Random Seeks: 2522.76/sec
As expected, disabling synchronous writes improved the server's write performance and had a corresponding knock-on effect on rewrites. Block reads, although marginally slower than in the second test, were close enough to suggest the difference was environmental.
Although disabling synchronous writes gave the highest performance, I opted to run with sync=standard and compression=on for the best possible data integrity.
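The chosen configuration can be applied with the standard property commands. Again, "tank" is an assumed dataset name for illustration:

```shell
# Apply the chosen settings: default synchronous-write behaviour plus compression.
zfs set sync=standard tank
zfs set compression=on tank

# Verify both properties took effect.
zfs get sync,compression tank
```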
While running the benchmark with compression=on, I noted in the vSphere client that both CPU cores in the Microserver ran at nearly 100% (the Nexenta VM has 2 vCPUs assigned). This suggests that performance was limited not by the disks but by the Microserver's rather weak CPU.