Sundar Krishnaraj, Technical Leader Field Applications, Microsemi Corporation

Ethernet, the ubiquitous networking technology, is the de facto connectivity standard across the enterprise, service provider, and data center (DC) market segments. Popular Ethernet speeds today run at 1/10/40/100 Gbit/s over fiber or copper. How Ethernet link speeds have evolved since the 1980s is shown below.

Existing Ethernet speeds are no longer sufficient for the growing traffic from cloud services, big data, 5G wireless, video streaming, e-commerce, and augmented/virtual reality. These trends are causing a traffic explosion both within the DC and on the transport networks interconnecting DCs. Google data centers see bandwidth demand doubling every 12–15 months. The situation is not much different at Indian DCs, driven by massive government digitization efforts and exponential growth in smartphones and e-commerce; according to Ovum, India's Ethernet services market is likely to reach approximately USD 1 billion by 2020. There is a clear need to increase the speed of network interconnects in the DC and the transport network, while lowering cost and keeping power consumption frugal.

The newly standardized Ethernet rates of 25/50 Gbit/s are very promising and are gaining market share. 25GbE (25 Gigabit Ethernet), standardized by IEEE 802.3by, is seeing rapid adoption. It offers a direct upgrade path for existing 10GbE ports by reusing the same fiber. According to Crehan Research, 25GbE and 100GbE will comprise over half of all DC Ethernet switch shipments by 2021.

The predominant network architecture inside DCs today consists of 10GbE connectivity between the server and the Top-of-Rack (ToR) Ethernet switch at the access layer. Connectivity between the ToR and the aggregation switch is 40GbE or 100GbE, and core router connectivity is mostly 100GbE.

As server workloads increase, the connectivity to the ToR switch becomes a bottleneck that prevents the full computational power of modern servers from being utilized. Today the server-to-ToR links are multi-mode fibers (LC-terminated OM3 or OM4) designed for 10GbE using SFP+ optical modules. To alleviate the bottleneck, DC managers have the option of adding more 10GbE links to the servers, but that means installing more 10GbE server adapters, ToR switches, and additional cables, all requiring more space, cooling, and power: a significant CapEx/OpEx hit, apart from increased administration complexity. The other option is to move to higher speeds with 40GbE or 100GbE. However, both 40G and 100G are overkill at the access layer (at least for the next few years) and do not justify expensive upgrades. In fact, fiber upgrades to 40/100G are significantly expensive, since 40GbE actually uses four parallel 10G lanes (4×10G) and 100GbE uses ten parallel 10G lanes (10×10G), or more recently four 25G lanes (4×25G). A short sketch of this lane arithmetic follows.
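To make the lane arithmetic concrete, the Python sketch below tabulates the parallel-lane counts mentioned above. The lane widths are the commonly used values for each rate and are given here purely as an illustration, not as a product datasheet.

```python
# Illustrative lane math for common Ethernet rates.
# Single-lane rates (10GbE, 25GbE) run over the same duplex MMF;
# multi-lane rates (40GbE, 100GbE) need parallel fiber or WDM optics.
RATES = {
    "10GbE":          {"lanes": 1,  "lane_gbps": 10},
    "25GbE":          {"lanes": 1,  "lane_gbps": 25},
    "40GbE":          {"lanes": 4,  "lane_gbps": 10},
    "100GbE (10x10)": {"lanes": 10, "lane_gbps": 10},
    "100GbE (4x25)":  {"lanes": 4,  "lane_gbps": 25},
}

for name, cfg in RATES.items():
    total = cfg["lanes"] * cfg["lane_gbps"]
    print(f"{name:15s}: {cfg['lanes']:2d} lanes x {cfg['lane_gbps']}G = {total}G")
```

The single-lane column is the key point: 25GbE, like 10GbE, needs only one lane per direction, which is why it avoids the parallel-fiber upgrade that 40/100GbE requires.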

Naturally, 25GbE became the obvious choice, as it offered an instant 2.5× speed-up without any expensive upgrades, not even to the fibers. The SFP28 optics used for 25GbE is a scaled-down version of the QSFP28 used at 100GbE (Fig. 3). This maintains the same port density as 10GbE while achieving the 2.5× speed-up, and it even works on the same MMF fibers used by 10GbE. This is a significant advantage in reducing TCO, as it means the same rack sizes, rack units, and front panels can be retained. It gives 25GbE a 40 percent lower cost per bandwidth than 10GbE. Mellanox and Intel are the top two suppliers of 25GbE adapters for the server market today; Mellanox alone expects to ship more than two million adapters in 2018.

Beyond 25GbE

Once the bottleneck at the access layer is overcome, the aggregation and core switches will be the next focus. As seen in the figure, the Ethernet Task Force has already begun work on 50/100/200 Gb/s rates. IEEE 802.3cd was created in 2014 to standardize these rates. The task force has already released draft specifications, and a few companies are sampling products. The final specifications are expected by September 2018. The physical medium for 50GbE will be either Twinax copper or MMF/SMF fiber using SFP56 or QSFP56 optical modules. Products are expected to hit the market in late 2018.

Another important trend driving network speeds further is NVMe storage technology. NVMe SSD storage is finding adoption as high-performance storage acceleration for cloud applications. Tests have shown that just three NVMe drives within a server can achieve full line rate on a 100 Gb Ethernet link. With NVMe prices falling, this will only drive demand for massive scale-out clusters in the DCs.
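As a rough sanity check of that claim, the sketch below assumes a representative (hypothetical) per-drive sequential throughput of about 4 GB/s; the exact figure varies by drive and workload.

```python
# Back-of-the-envelope check: can three NVMe drives fill a 100GbE link?
DRIVE_GBPS = 4 * 8      # assume ~4 GB/s per drive, expressed in Gb/s
drives = 3
aggregate_gbps = drives * DRIVE_GBPS
link_gbps = 100         # 100GbE line rate, ignoring protocol overheads
print(f"{drives} NVMe drives ~ {aggregate_gbps} Gb/s vs {link_gbps} Gb/s link")
# -> 96 Gb/s: three drives already come close to saturating a 100GbE port.
```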

Flex Ethernet – Panacea for Service Providers

As DCs and enterprises adopt the newer Ethernet rates, the Data Center Interconnect (DCI) in the transport network becomes equally important for carrying the newer rate clients. It needs to scale and to provide flexibility and efficiency in utilizing fiber wavelengths. It should support a mix of rates, not only the legacy speeds (1/10/40/100G) but also the newer 25G rate and the yet-to-be-standardized rates at 50G, 200G, and 400G.

Flex Ethernet, or FlexE, achieves exactly this purpose for the transport network. FlexE was standardized by the OIF in the first quarter of 2016. Silicon and system vendors such as Microsemi and Ciena are working toward FlexE-compliant transport products that will hit the market in late 2018. FlexE gives service providers the flexibility to efficiently utilize fiber wavelengths and bandwidth at the transport layer.

FlexE also provides higher capacity and flexible DCI options. FlexE defines a shim layer between the Ethernet MAC and PCS layers. This shim defines a TDM structure that offers bonding, multiplexing of clients, and deskew on the line side. It allows client Ethernet rates in multiples of 25 Gb/s to be transported while keeping the same physical layer rates in the transport network. The green lines over the transport cloud are the standard OTU4 PHY rates in today's transport network. By bonding four OTU4 transport pipes, the network is ready for 400GbE, making the transport network ready even before the client ecosystem is in place. This bonding is also very efficient; it is an alternative to link aggregation (LAG) and does not have the inefficiencies associated with LAG protocols. A small sketch of the bonding idea follows.
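A minimal sketch of this bonding and slot allocation is shown below. It assumes the 20 calendar slots of 5 Gb/s per 100G PHY defined in OIF FlexE 1.0 and clients in multiples of 25 Gb/s; it illustrates the idea only and is not an implementation of the shim.

```python
import math

PHY_GBPS = 100
SLOT_GBPS = 5
SLOTS_PER_PHY = PHY_GBPS // SLOT_GBPS   # 20 calendar slots per 100G PHY

def flexe_group(client_gbps: int):
    """Return (PHYs to bond, calendar slots needed) for one FlexE client."""
    if client_gbps % 25:
        raise ValueError("FlexE clients are carried in multiples of 25 Gb/s")
    slots = client_gbps // SLOT_GBPS
    phys = math.ceil(slots / SLOTS_PER_PHY)
    return phys, slots

# A 400GbE client bonded over four 100G (OTU4-carried) pipes:
print(flexe_group(400))   # -> (4, 80): four bonded PHYs, 80 x 5G calendar slots
# A 50G client sub-rated onto part of a single 100G PHY:
print(flexe_group(50))    # -> (1, 10)
```

The second example already hints at the sub-rating service described in the next section: a client smaller than the PHY rate simply occupies fewer calendar slots.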

Another advantage for the service provider is the ability to offer 400GbE connectivity today while still utilizing standard QSFP28 optics at 100G rates. Transport service providers like TATA Communications would see immense cost savings, as a single 400G optic will be very expensive initially at launch.

In addition to bonding, FlexE offers a sub-rating service. The service provider can offer bandwidth in N×25GbE client rates, which improves network efficiency. It also allows the option of underfilling the transport network so that client traffic never sees flow control from the transport layer, as sketched below. With FlexE, end users will have dynamic control to adjust the client service rate to any flexible line rate, giving the service provider simple, scalable service management.
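The tiny helper below is a hypothetical illustration (not any vendor API) of the underfill idea: N×25G client allocations are summed and checked against the bonded group capacity.

```python
# Check that the N x 25G clients placed on a FlexE group leave headroom,
# so client traffic never sees back-pressure from the transport layer.
def underfilled(group_gbps: int, client_n_values: list[int]) -> bool:
    allocated = sum(n * 25 for n in client_n_values)
    return allocated <= group_gbps

# Two 100G PHYs bonded (200G group) carrying a 75G (N=3) and a 100G (N=4) client:
print(underfilled(200, [3, 4]))   # True: 175G allocated, 25G of headroom remains
```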


 
