100G DWDM

Last post, I reviewed how coherent optics allowed 40 Gbps waves to be dropped into existing 10 Gbps DWDM systems without major modifications. That was good news for network operators, who had faced a much more difficult upgrade path from 2.5 to 10 Gbps. There's more good news: the optical magicians have pulled another rabbit out of their hat. The new generation of 100 Gbps transponders will also play nicely with 10 and 40 Gbps waves in existing 50 GHz DWDM windows. The bad news is that it looks like there are no more rabbits in the hat.

At 100 Gbps, the optical-to-electrical conversion is problematic, because processing a native 100 Gbps stream would require very specialized electronics today. One way to mitigate this is to divide the 100 Gbps stream, using wavelength division multiplexing, into 10 x 10 Gbps or 4 x 25 Gbps optical channels. These lower-speed streams can be transmitted by separate lasers and processed using less specialized optoelectronics. This works well for short-range links where fiber capacity is relatively inexpensive. For longer reaches, where system capacity is valuable and suitable lasers are expensive, a native 100 Gbps optical channel using a single laser is desirable.
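
To make the trade-off concrete, here is a trivial sketch of the two splits mentioned above (just the arithmetic, not any particular product's lane structure):

    # Per-lane rates for the two common ways of splitting a 100 Gbps
    # stream across parallel optical channels (illustration only).
    total_gbps = 100
    for lanes in (10, 4):
        per_lane = total_gbps / lanes
        print(f"{lanes} lanes x {per_lane:.0f} Gbps each: "
              f"{lanes} lasers, {per_lane:.0f} Gbps electronics per lane")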

But simply increasing the symbol rate using the same modulation is not a viable option for upgrading existing networks: a tenfold-faster signal would spread well beyond the 50 GHz grid and be far more sensitive to dispersion. More sophisticated modulation schemes are necessary. Encoding two bits per symbol doubles the data rate without increasing the optical bandwidth or the sensitivity to dispersion. Encoding two of these signals, one in each polarization mode of the fiber, doubles the data rate again, still with the same bandwidth and dispersion tolerance. This scheme, known as dual-polarization quadrature phase-shift keying (DP-QPSK), is now the standard for commercial development of long-haul 100 Gbps on a single wavelength.
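
A quick back-of-the-envelope calculation (a sketch that ignores FEC and framing overhead, which push the real line rate somewhat higher) shows why this combination works within the existing grid:

    # Rough symbol-rate arithmetic for 100 Gbps DP-QPSK.
    line_rate_gbps = 100
    bits_per_symbol = 2      # QPSK carries 2 bits per symbol
    polarizations = 2        # two independent polarization tributaries

    symbol_rate_gbaud = line_rate_gbps / (bits_per_symbol * polarizations)
    print(f"Symbol rate: {symbol_rate_gbaud:.0f} Gbaud per polarization")
    # -> 25 Gbaud, so the optical spectrum stays narrow enough to fit
    #    the existing 50 GHz DWDM grid.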

Encoding four bits per symbol interval not only enables transmission using a single channel, it also facilitates signal processing without expensive ultra high-speed electronics. The four bits can be processed as four parallel and uncorrelated 25 Gbps payloads on the line side, and then multiplexed into a single 100 Gbps serial handoff on the drop side.
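
A minimal sketch of that mapping, assuming a Gray-coded QPSK constellation and an arbitrary assignment of the four tributaries to the two polarizations (real transponders define both in their own framing):

    import numpy as np

    # 2 bits -> one of four QPSK phases (Gray-coded; illustrative mapping).
    QPSK = {(0, 0): 1 + 1j, (0, 1): -1 + 1j,
            (1, 1): -1 - 1j, (1, 0): 1 - 1j}

    def dp_qpsk_symbols(lane_a, lane_b, lane_c, lane_d):
        """Combine four parallel bit streams (nominally 25 Gbps each)
        into one stream of dual-polarization QPSK symbols."""
        x_pol = np.array([QPSK[(a, b)] for a, b in zip(lane_a, lane_b)])
        y_pol = np.array([QPSK[(c, d)] for c, d in zip(lane_c, lane_d)])
        return x_pol, y_pol   # each symbol interval carries 4 payload bits

    # Example: 8 bits per lane -> 8 symbol intervals -> 32 bits total.
    rng = np.random.default_rng(0)
    lanes = rng.integers(0, 2, size=(4, 8))
    x, y = dp_qpsk_symbols(*lanes)
    print(x)
    print(y)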

Decoding a polarization multiplexed signal presents a problem, though. Ordinary single mode fiber does not maintain polarization state along its length. So, complex and expensive dynamic polarization controllers were needed in the past to align the receiver with the transmitted polarization state in the optical domain. A coherent detector moves the polarization state into the electrical domain, allowing it to be estimated by the DSP algorithm. The problem of receiving the two scrambled polarization modes is analogous to transmitting data in free space using two antennas and two receivers, known in wireless communications as multiple-input, multiple-output (MIMO). Algorithms developed for MIMO have been adapted to decode the scrambled polarization state in a coherent receiver, making polarization multiplexing feasible.
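
As a rough sketch of the idea (a textbook constant modulus algorithm, not any vendor's actual implementation), the 2x2 "butterfly" equalizer below is how a coherent receiver can unscramble the two polarizations in the electrical domain:

    import numpy as np

    def cma_butterfly(rx_x, rx_y, ntaps=7, mu=1e-3, radius=1.0):
        """2x2 MIMO ('butterfly') equalizer driven by the constant
        modulus algorithm. rx_x, rx_y are complex baseband samples of
        the two received polarization tributaries (one sample per
        symbol, for simplicity)."""
        # Four FIR filters: h_xx, h_xy recover X; h_yx, h_yy recover Y.
        h = np.zeros((2, 2, ntaps), dtype=complex)
        h[0, 0, ntaps // 2] = 1.0   # start near an identity response
        h[1, 1, ntaps // 2] = 1.0

        out_x, out_y = [], []
        for k in range(ntaps, len(rx_x)):
            xin = np.asarray(rx_x[k - ntaps:k])[::-1]   # newest ntaps samples
            yin = np.asarray(rx_y[k - ntaps:k])[::-1]
            ex = h[0, 0] @ xin + h[0, 1] @ yin   # equalized X output
            ey = h[1, 0] @ xin + h[1, 1] @ yin   # equalized Y output

            # CMA error: push |output|^2 toward a constant radius.
            err_x = radius - abs(ex) ** 2
            err_y = radius - abs(ey) ** 2
            h[0, 0] += mu * err_x * ex * np.conj(xin)
            h[0, 1] += mu * err_x * ex * np.conj(yin)
            h[1, 0] += mu * err_y * ey * np.conj(xin)
            h[1, 1] += mu * err_y * ey * np.conj(yin)

            out_x.append(ex)
            out_y.append(ey)
        return np.array(out_x), np.array(out_y)

In a real receiver these taps would feed a carrier-phase tracking stage, but the butterfly structure above is the MIMO piece borrowed from wireless.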

With these advancements, 100 Gbps DP-QPSK waves can be added to an existing DWDM system engineered for 10 Gbps. In fact, 100 Gbps transponders using all-digital dispersion compensation could be used on links that would require optical dispersion compensation just to pass 10 Gbps. This can bring new life to older fiber routes that are capacity limited and not easily upgraded, or add value to old fiber obtained on a long-term IRU (indefeasible right of use).

Of course, there has to be a downside, and naturally it's cost. Dual polarization adds optical elements and doubles the number of transmitter and receiver elements. The coherent detector doubles the number of receiver elements again. Each of the four receiver elements must employ a high-speed ADC and sophisticated real-time DSP. So the cost of 100 Gbps DP-QPSK transponders, when they become available, will probably not be much less than ten times the cost of 10 Gbps. Right now the standard is just a multi-source agreement to develop common components that each optical equipment vendor can use in its own proprietary implementation. These components are just entering production now.

That does not mean that you can't deploy 100 Gbps over a single DWDM wave now. Ciena has 100 Gbps line cards for the OME 6500 platform that have been deployed for more than a year. The former Nortel engineers who developed these had to use an additional trick to split the payload into 12.5 Gbps slices so that readily available integrated circuit technology could be used to decode the data. In addition to splitting the signal in phase and polarization, they also split the optical carrier into two sub-carriers using frequency division multiplexing in the optical domain. Each sub-carrier carries half the data, à la WDM, but the two carriers are separated by only 20 GHz, so they fit in a single 50 GHz DWDM window.
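
The arithmetic behind those slices works out neatly; a rough accounting, assuming the split is across the two sub-carriers, the two polarizations, and the two bits of each QPSK symbol (and ignoring FEC and framing overhead):

    # 100 Gbps decomposed into 12.5 Gbps slices.
    subcarriers = 2
    polarizations = 2
    bits_per_symbol = 2          # QPSK
    slices = subcarriers * polarizations * bits_per_symbol   # = 8
    print(f"{slices} slices x {100 / slices:.1f} Gbps = 100 Gbps")
    # Two sub-carriers spaced 20 GHz apart still fit inside one
    # 50 GHz DWDM window.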

Technology adapted from wireless to optical communication has allowed an order-of-magnitude growth in the capacity of existing DWDM networks, without costly and disruptive upgrades to the installed plant. But this has taken us pretty close to the theoretical throughput limit under the Shannon–Hartley theorem, given the typical parameters of existing large-scale networks. It is possible to get higher data rates with better OSNR, or with more bandwidth; but it's doubtful that we will see a 400 Gbps transponder suitable for general deployment in existing 50 GHz DWDM amplified networks originally engineered to carry only 10 Gbps.
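
To put that limit in terms of the theorem itself: capacity per polarization is C = B·log2(1 + SNR), so with the bandwidth pinned by the 50 GHz grid, the required SNR grows exponentially with spectral efficiency. The sketch below uses purely illustrative numbers (an assumed 50 GHz of usable bandwidth and 10 dB of received SNR), not measurements from any particular network:

    import math

    # Shannon-Hartley ceiling for one DWDM slot (illustrative values only;
    # real link budgets depend on reach, amplifier spacing, and how much
    # of the 50 GHz slot is actually usable).
    bandwidth_hz = 50e9
    snr_db = 10.0
    snr = 10 ** (snr_db / 10)

    capacity_per_pol = bandwidth_hz * math.log2(1 + snr)
    total_bps = 2 * capacity_per_pol    # two polarizations
    print(f"~{total_bps / 1e9:.0f} Gbps theoretical ceiling at {snr_db} dB SNR")
    # The ceiling lands in the few-hundred-Gbps range under these
    # assumptions, which is why 400 Gbps in a 50 GHz system engineered
    # for 10 Gbps is a stretch.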

Doug Haluza, CTO, Metro|NS

Ed. Note: this is the second post in a series. Click here for the first post. The next post is here.

Terabit Switch on a Chip

Networking gear is trending away from custom ASICs to merchant silicon, and the newest generation of these switching chips has crossed the terabit per second threshold. A single chip can now switch 64 full-duplex 10 Gbps wire-speed flows without blocking, for a total of 1.28 Tbps, or just under one billion packets per second.  Switch latency is around one microsecond for both Layer-2 and Layer-3 forwarding, and the latency is consistent between any pair of ports because they are all driven off the same chip.
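
Those headline numbers follow from standard Ethernet arithmetic (minimum 64-byte frames plus 8 bytes of preamble and a 12-byte inter-frame gap on the wire):

    # Wire-speed arithmetic behind the 1.28 Tbps / ~1 Gpps figures.
    ports = 64
    port_rate_bps = 10e9
    total_bps = ports * port_rate_bps * 2          # full duplex
    print(f"Aggregate: {total_bps / 1e12:.2f} Tbps")

    # Minimum-size frame on the wire: 64 B + 8 B preamble + 12 B gap
    # = 84 B = 672 bits per packet.
    bits_per_min_packet = (64 + 8 + 12) * 8
    pps_per_port = port_rate_bps / bits_per_min_packet   # ~14.88 Mpps
    print(f"Per port: {pps_per_port / 1e6:.2f} Mpps; "
          f"chip total: {ports * pps_per_port / 1e6:.0f} Mpps")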

Vendors are now delivering this technology in top-of-rack (ToR) switches positioned for high-performance computing (HPC) clusters. One example is the new Force10 S4810 ToR switch, which supports 48 dual-speed 1/10 GbE SFP+ ports and four 40 GbE QSFP+ ports in a 1 RU “pizza box” footprint. IBM and Cisco have similar offerings based on the same Broadcom Trident chip, but you will have to wait a while to get your hands on Cisco's Nexus 3064 (unless you already have a substantial order booked).

Compare this to a legacy-architecture Cisco 6509-V-E chassis that delivers similar throughput using 21 RU: that's half a rack, with an order of magnitude greater power and cooling load. The single-chip solutions draw only a few hundred watts, so special power outlets are not needed. Standard equipment includes redundant hot-swappable power supplies and fans, with front-to-back airflow compatible with hot/cold-aisle data centers.

The SFP+ and QSFP+ ports support Direct Attach cables without media conversion for ultra-low latency on short-reach connections. They also accept a range of pluggable optics suitable for metro optical networks, or for directly driving wavelength division multiplexing systems. The dual-speed SFP+ slots support any mix of 1/10 GbE on copper or fiber, with a simple plug-and-play upgrade path.

Expect the economies of scale of ubiquitous Ethernet and the PCI bus to squeeze InfiniBand (IB) out of its niche in HPC, the same way switched Ethernet crowded out ATM. Direct Attach provides switched connections between multiple devices, and PCIe handles point-to-point connections. We don't expect interest in IB for high-frequency trading to be sustained either; it should wash out relatively quickly there, because refresh cycles in that market are measured in months, not years.

Chassis-based Ethernet switches with pluggable cards will continue to be displaced by these fixed-port, modular-interface boxes based on reference designs from the silicon merchants. This transition, limited only by Moore's Law and the pace of productization, is likewise analogous to the move HPC made from custom supercomputer chassis to arrays of commodity PCs. Initial capital cost and ongoing power and space expenses are lowered by dumping switch-fabric backplanes in favor of single-board designs.

Once basic switch functionality becomes commoditized by merchant silicon, vendors will have to differentiate their offerings with features, services, and relationships. That should be a positive development for everyone in the networking space.

Doug Haluza, CTO, Metro|NS
