Not that it should come as a shock, but cloud applications are mushrooming like mid-summer thunderstorms. For data centers, this is the uneasy calm before the storm.
According to the 2018 Global Cloud Index[i], global cloud data center traffic is projected to reach 19.5 zettabytes (ZB) per year by 2021, up from just 6 ZB in 2016. That 19.5 ZB will account for 95 percent of all data center traffic by 2021. And it's not just data passing through the data center: by 2021, about 1.3 ZB will be stored in the data center, more than five times the storage volume needed in 2016.
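As a quick sanity check on those figures, growing from 6 ZB to 19.5 ZB over the five years from 2016 to 2021 implies a compound annual growth rate of roughly 27 percent. A minimal sketch of that arithmetic (the `cagr` helper is ours, not from the cited report):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end / start) ** (1 / years) - 1

# Cloud data center traffic: 6 ZB (2016) -> 19.5 ZB (2021)
growth = cagr(6.0, 19.5, 2021 - 2016)
print(f"Implied annual growth: {growth:.1%}")  # roughly 27% per year
```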
It’s not just a question of how much data is being generated, but where it is coming from and the network requirements needed to support it. Increasingly, applications at the network edge (the Internet of Things, artificial intelligence, machine-to-machine communications and the like) are generating tremendous amounts of data. Many such applications demand ultra-reliable, low-latency (mid-single-digit millisecond) performance. The challenges of coping with this growing flood of data, to and from the edge, are keeping data center managers awake at night. Here’s what we do know.
According to the Global Cloud Index, 94 percent of all workloads and compute instances will be processed in cloud-based data centers by 2021. To adapt, data centers need to augment their centralized capacity with geographically distributed cloud resources that can promptly transport and process the growing volume of data. This will involve significant investment in decentralized public, private and hybrid cloud infrastructures that can be distributed to the edge, where they can process data using localized resources.
Such a highly decentralized and distributed cloud infrastructure is also critical for enabling the east-west, mesh-type traffic necessary to support ultra-reliable, low-latency performance. As more processing capability is deployed at the edge, the any-to-any connectivity that created new opportunities in cloud data centers will enable parallel scale-out in the access network layer.
At the hyperscale level, data center operators are beginning their migration from 100 Gb/s Ethernet to 400G. For many, this involves deploying distributed network systems inside the data center to handle the crush of internal east-west traffic, which dwarfs the amount of external network traffic.
Yet there are concerns that the increase in data traffic at the edge will overwhelm access network capacity. At some point there simply will not be adequate resources to transport edge-sourced data to central data centers. Herein lies the irony.
5G—and all the data-hungry applications it will enable—will force a radical reimagining of the scope and environment in which data centers operate. Challenges in the data center and access network have the same familiar ring. It comes down to fiber, capacity and the ability to manage and grow physical layer infrastructure. Few partners are better positioned to answer those challenges than CommScope. It’s what we do. Stay tuned and we’ll help you weather the storm.
We also encourage you to catch the replay of the Datacenter Dynamics webinar "Is your data center ready to meet 5G demands?" In this webinar, you will learn how to:
- Enhance connectivity and efficiency to future-proof for 5G
- Meet latency demands and reliable power needs
- Provide the analytics and flexibility that customers require
The panel includes the following experts: Mike Wolfe and Jamie Birdnow, CommScope; Brenden Rawle, Equinix; and Russell Shriver, Digital Realty.
[i] Global Cloud Index: 2016 – 2021; Cisco; February 2018
Chart 1 source: "83% Of Enterprise Workloads Will Be In The Cloud By 2020," Forbes; January 7, 2018
Chart 2: Diagram of Facebook’s Fabric Aggregator. Source: Facebook Engineering