Networking gear is trending away from custom ASICs toward merchant silicon, and the newest generation of these switching chips has crossed the terabit-per-second threshold. A single chip can now switch 64 full-duplex 10 Gbps ports at wire speed without blocking, for a total of 1.28 Tbps, or just under one billion packets per second. Switch latency is around one microsecond for both Layer-2 and Layer-3 forwarding, and it is consistent between any pair of ports because they are all driven off the same chip.
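
Those headline figures are easy to sanity-check. Here is a minimal back-of-the-envelope sketch (Python, assuming 64 x 10 GbE ports counted full duplex, and minimum-size 64-byte frames plus the standard 20 bytes of preamble and inter-frame gap per frame):

    # Rough check of the single-chip throughput and packet-rate numbers above.
    PORTS = 64
    LINE_RATE_BPS = 10e9          # 10 Gbps per port, per direction
    FRAME_BYTES = 64 + 20         # minimum frame plus preamble/inter-frame gap

    aggregate_bps = PORTS * LINE_RATE_BPS * 2                   # full duplex
    packets_per_sec = PORTS * LINE_RATE_BPS / (FRAME_BYTES * 8)

    print(f"Aggregate throughput: {aggregate_bps / 1e12:.2f} Tbps")   # 1.28 Tbps
    print(f"Packet rate: {packets_per_sec / 1e6:.0f} Mpps")           # ~952 Mpps

That works out to 1.28 Tbps aggregate and roughly 952 million packets per second, which is where the "just under one billion" figure comes from.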

Vendors are now delivering this technology in top-of-rack (ToR) switches positioned for high-performance computing (HPC) clusters. One example is the new Force10 S4810 ToR switch, which supports 48 dual-speed 1/10 GbE SFP+ ports and four 40 GbE QSFP+ ports in a 1 RU “pizza box” footprint. IBM and Cisco have similar offerings based on the same Broadcom Trident chip, but you will have to wait a while to get your hands on Cisco’s Nexus 3064 (unless you already have a substantial order booked).

Compare this to a legacy-architecture Cisco 6509-V-E chassis, which delivers similar throughput in 21 RU (half a rack) with an order of magnitude greater power and cooling load. The single-chip solutions draw only a few hundred watts, so special power outlets are not needed. Redundant hot-swappable power supplies and fans come standard, with front-to-back airflow compatible with hot/cold-aisle data centers.

The SFP+ and QSFP+ ports support Direct Attach cables without media conversion, for ultra-low latency on short-reach connections. They also accept a range of pluggable optics suitable for metro optical networks or for directly driving wavelength-division multiplexing systems. Dual-speed SFP+ slots support any mix of 1/10 GbE on copper or fiber, with a simple plug-and-play upgrade path.

Expect the economies of scale of ubiquitous Ethernet and PCI Express to squeeze InfiniBand (IB) out of its niche in HPC, the same way switched Ethernet crowded out ATM. Direct Attach provides switched connections between multiple devices, and PCIe handles point-to-point connections. Nor do we see sustained interest in IB for high-frequency trading; it should wash out of that market relatively quickly, since refresh cycles there are measured in months, not years.

Chassis-based Ethernet switches with pluggable cards will continue to be displaced by these fixed-port, modular-interface boxes based on reference designs from the silicon merchants. This transition, limited only by Moore’s Law and the vendors’ ability to productize quickly, is analogous to HPC’s earlier move off custom supercomputer chassis to arrays of commodity PCs. Dropping switch-fabric backplanes in favor of single-board designs lowers both the initial capital cost and the ongoing power and space expense.

Once basic switch functionality becomes commoditized by merchant silicon, vendors will have to differentiate their offerings with features, services, and relationships. That should be a positive development for everyone in the networking space.

Doug Haluza, C.T.O. Metro|NS
