I started blogging about 100 Gigabit Ethernet (100GbE) in 2016, and gave an update on the 100GbE landscape in 2017. In the past two years, the market has seen tremendous growth in 100GbE in the data center.

“What’s next?” I asked myself.

From the recent Optical Fiber Communication Conference (OFC) 2018 and Open Compute Project (OCP) Summit 2018, I learned that 400GbE will most likely be the next big thing in cloud data center networking. Why?

First, the unquenchable thirst for bandwidth keeps driving higher speed networking technologies. 

Second, the market would like to repeat, with 400GbE breaking out to four 100GbE links, the success of 40GbE breaking out to four 10GbE links.


The third reason lies in the availability of switch silicon with per-lane (SerDes) speeds of 50 or 100 Gigabits per second. These 50/100Gbps lane speeds provide much better scalability for building 400GbE networks. Andy Bechtolsheim, the well-known entrepreneur, investor and self-made billionaire, predicted at the OCP Summit 2018 that 50Gbps and 100Gbps lanes will account for more than 50 percent of the SerDes mix starting in 2020.
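To make the lane arithmetic concrete, here is a minimal sketch (in Python, purely for illustration) of how per-lane SerDes speed and lane count combine into a port speed. The lane configurations shown are common examples, not an exhaustive list.

```python
# Illustration only: per-lane SerDes speed x lane count = port speed.
# Lane configurations below are common examples, not an exhaustive list.

PORT_LANE_CONFIGS = {
    100: [(25, 4), (50, 2)],    # 100GbE: e.g. 4 x 25Gbps or 2 x 50Gbps lanes
    400: [(50, 8), (100, 4)],   # 400GbE: e.g. 8 x 50Gbps or 4 x 100Gbps lanes
}

for port_gbps, lane_options in PORT_LANE_CONFIGS.items():
    for lane_gbps, lane_count in lane_options:
        assert lane_gbps * lane_count == port_gbps
        print(f"{port_gbps}GbE = {lane_count} x {lane_gbps}Gbps lanes")
```

The takeaway is that with 100Gbps lanes, a 400GbE port needs no more lanes than a 4x25Gbps 100GbE port does today, which is why faster SerDes makes 400GbE practical to scale.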

Data center core networking technology is trending from 10/40GbE to 100/400GbE. Data center customers are advised to examine their underlying connectivity infrastructure against the goal of supporting these technology transitions while preserving as much of their existing investment as possible.

400GbE will be required to support several use cases in the cloud data center. From the distance perspective, the use cases include:

  • Short reach (less than 100 meters), e.g. within one server rack row or across multiple rows nearby;

  • Middle reach (100m to 500m/2km), e.g. within a compute hall or across multiple halls;

  • Campus reach (500m/2km to 10km); and

  • Metro reach (40km or longer).  

Each of these use cases may require one or more variations of 400GbE optics. With the combination of reach, media, form factor and cost, the selection of 400GbE optics can easily get complicated. Another side effect of too many 400GbE optics variations is market fragmentation, a lesson learned from 100GbE.
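As a rough illustration of how the reach categories above could map to optics choices, here is a small sketch. The optic names and reaches are examples drawn from the IEEE 802.3bs generation and the OIF 400ZR work, not an authoritative or complete selection guide.

```python
# Illustration only: a rough mapping from reach category to example 400GbE
# optic types. Names and reaches are examples, not a complete selection guide.

REACH_TO_OPTICS = {
    "short (<100m)":           ["400GBASE-SR16 (multimode fiber)"],
    "middle (100m-500m/2km)":  ["400GBASE-DR4 (single-mode, 500m)",
                                "400GBASE-FR8 (single-mode, 2km)"],
    "campus (500m/2km-10km)":  ["400GBASE-LR8 (single-mode, 10km)"],
    "metro (40km or longer)":  ["coherent DWDM optics, e.g. OIF 400ZR-class"],
}

for reach, optics in REACH_TO_OPTICS.items():
    print(f"{reach}: {', '.join(optics)}")
```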

I will keep monitoring the 400GbE market and post updates when appropriate. What do you think the industry should do to make 400GbE optics easy to choose and simple to use? Let me know your thoughts in the comments below.

About the Author

Frank Yang

Frank Yang is manager, Market Strategy Development, for the ISP Fiber business unit of CommScope. Frank leads market strategy development for the data center, central office and enterprise campus markets. Prior to CommScope, Yang worked at Dell, where he was responsible for server hardware development. He serves as Marketing Chair for the Next Generation Enterprise Cabling Subcommittee of the Ethernet Alliance. He received a Master of Electrical Engineering from Texas Tech University and has several patents, articles, white papers and publications to his name. Frank is a frequent speaker at global and national events, including the Data Center Summit, Ethernet Technology Summit, the OFC conference, the Ethernet Alliance's Technology Exploration Forum, Cable Installation and Maintenance (CI&M) webinars and BICSI conferences. Frank holds CloudU, Cisco Certified Network Design Professional (CCDP) and Cisco Certified Network Professional (CCNP) certifications.


Comments

1 comment for "The Next Big Thing in Cloud Data Center Networking"
Dan Kennefick Tuesday, March 27, 2018 6:04 PM

Frank,

What about short switch to server connections using direct attached copper cables. 50G/lane chips are available. QSFP+ X4 or X8 twinaxial pairs???

Thanks,

Dan Kennefick
Daikin America
