Despite this, I'm starting to suspect that the future of SAN connectivity will be iSCSI over copper Ethernet.
Ethernet and IP technologies have effectively beaten every rival and now dominate both computer networking and telephony. Why maintain a separate standard for storage networks? Consolidating the fabric for LAN, SAN and VoIP seems logical to me: share the components and reduce the total cost.
So why do we continue to specify Fibre Channel? We already have a large investment in it; it's a known quantity and well supported. Sun Solaris has a very mature FC implementation, and as of VI3, VMware works best over FC (I'm not yet sure whether vSphere changes this).
It's also faster (for us). We currently have 2Gbit switches and will be adding 4Gbit switches later this year and early next. Sure, 10Gbit Ethernet is available, but it's still too expensive for us to deploy once you add up the cost of switches and NICs.
But fast forward three years and I would expect the following:
- 10Gbit Ethernet switches at a reasonable price with 40Gbit or 100Gbit inter-switch links
- 10Gbit NICs with TCP Offload Engine (TOE) as standard and cheap
- iSCSI boot as standard on these 10Gbit NICs (some do already, but it's not guaranteed)
- Better support in the hypervisor / operating system for iSCSI
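On the last point, the initiator-side tooling is already reasonably mature on Linux. Bringing up an iSCSI session with open-iscsi looks roughly like this (the portal address and IQN below are made-up examples, not real targets):

```shell
# Discover the targets advertised by the array's portal
# (192.168.10.50 and the IQN below are hypothetical examples)
iscsiadm -m discovery -t sendtargets -p 192.168.10.50

# Log in to one of the discovered targets
iscsiadm -m node -T iqn.2009-06.com.example:storage.lun1 \
         -p 192.168.10.50 --login

# The new LUN then shows up as an ordinary block device
lsblk
```

What's still missing is this level of simplicity at boot time and inside the hypervisor, rather than only after the OS is up.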
At the end of the day, managing a single fabric is easier than juggling a bundle of different cable types, protocols, HBAs and drivers.
It's always risky in this business to speculate about how things might look in three years. If you disagree, please let me know why; it's always good to get alternative views...