Cloud RAN and the Evolution of Infrastructure

Since the beginning of cellular systems more than 40 years ago, the two forces acting on the wireless network have been efficiency and latency. Efficiency in all its forms tends to drive centralization, while latency, or time to take action, demands localization. Cloud RAN is one stop on the wireless infrastructure continuum that oscillates back and forth between localization and centralization. Morgan Kurk explains more in today’s blog post.

Simple harmonic motion describes a situation in which the restoring force is directly proportional to the displacement, mathematically producing a sinusoid that oscillates continuously between some minimum and some maximum.
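For readers who want the analogy made concrete, here is a minimal sketch of that relationship, assuming illustrative values for the spring constant and mass (the function name and parameters are hypothetical, chosen for this example):

```python
import math

def shm_displacement(t, amplitude=1.0, k=1.0, m=1.0, phase=0.0):
    """Displacement of a simple harmonic oscillator at time t.

    A restoring force F = -k*x proportional to displacement x gives
    x(t) = A * cos(omega * t + phase), with angular frequency
    omega = sqrt(k / m) -- a sinusoid oscillating between -A and +A.
    """
    omega = math.sqrt(k / m)
    return amplitude * math.cos(omega * t + phase)

# Sampling the motion shows it swinging between the two extremes:
samples = [shm_displacement(t * 0.5) for t in range(20)]
```

The point of the metaphor: the network, like the oscillator, never settles at either extreme; each pull toward centralization builds up a restoring pull back toward the edge.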

Did I lose you yet?

My point is that things we observe in the natural world also exist in systems created by humans, including wireless networks. Cloud RAN (radio access network) is one stop on the wireless infrastructure continuum that oscillates back and forth between localization and centralization. Since the beginning of cellular systems more than 40 years ago, the two forces acting on the network have been efficiency and latency. Efficiency in all its forms tends to drive centralization, while latency, or time to take action, demands localization.

Today we sit in transition. Advances in computing power and architectural changes to the latest generation of networks make centralization of many network functions both possible and cost effective. There are three fundamental stages to Cloud RAN:

  1. Centralization of current baseband equipment,
  2. Splitting of processing between commercial off-the-shelf (COTS) equipment and specialized chips, and
  3. Total network virtualization.

Each of these stages has costs and benefits. Centralizing current baseband equipment requires no equipment change, but it demands dedicated high-speed links and yields only limited efficiency benefits, chiefly site-to-site coordination techniques such as eICIC (enhanced inter-cell interference coordination).

In the second stage, most of the call processing that today runs on specialized chips (layers 2 and 3) can move off the baseband unit and onto COTS equipment such as standard servers. At the same time, layer 1 processing will likely move directly into the remote radio head. That shift relaxes the latency requirement, so the link from the remote site to the core can be high speed and low latency without necessarily being dedicated.
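The arithmetic behind that trade-off is simple. A back-of-the-envelope sketch, assuming light travels through fiber at roughly 200,000 km/s and using rough, illustrative one-way delay budgets (not standardized limits), shows how much farther the processing can sit from the radio once layer 1 stays local:

```python
# Why moving layer 1 into the radio head relaxes the transport link.
# Budget figures are assumed, illustrative values only.

SPEED_IN_FIBER_KM_PER_S = 200_000  # roughly 2/3 the speed of light

def one_way_delay_us(distance_km):
    """One-way propagation delay over fiber, in microseconds."""
    return distance_km / SPEED_IN_FIBER_KM_PER_S * 1_000_000

def max_reach_km(budget_us):
    """Largest one-way fiber run that fits a one-way delay budget."""
    return budget_us * SPEED_IN_FIBER_KM_PER_S / 1_000_000

# Assumed one-way budgets for the two functional splits:
FRONTHAUL_BUDGET_US = 100   # layer 1 centralized: very tight timing
MIDHAUL_BUDGET_US = 2_000   # layer 1 in the radio head: looser timing

print(max_reach_km(FRONTHAUL_BUDGET_US))  # 20.0 km
print(max_reach_km(MIDHAUL_BUDGET_US))    # 400.0 km
```

Under these assumed budgets, keeping layer 1 centralized confines the baseband pool to within tens of kilometers of each site, while the higher-layer split lets the COTS servers sit hundreds of kilometers away, reachable over shared rather than dedicated transport.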

The final stage of centralization virtualizes everything from processing to applications. Further advances are needed before COTS processors are efficient enough for certain call-processing functions, and the network's backhaul latency remains an open issue.

As this network centralization goes on, a decentralization is also occurring on the data side of the world. Today's data, from YouTube to Facebook to any other large content provider, has been stored in ever-larger mega data centers, available everywhere, just a click away. However, as consumers demand faster and faster access to data, the distance and access time to these centralized locations become a limiting factor. Cached data is expected to spread from the core far closer to the edge than ever before. Locations that serve a cluster of cells may very well contain on-site storage, constantly updated with predicted-use data.
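A toy sketch of such an edge cache, assuming a simple least-recently-used eviction policy and a hypothetical `fetch_from_core` callback standing in for the round trip to a distant data center (all names here are illustrative, not any particular vendor's API):

```python
from collections import OrderedDict

class EdgeCache:
    """Toy edge cache serving a cluster of cells.

    Hot content stays close to users; misses fall back to the core.
    prefetch() models the 'predicted use' updates described above.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()  # key -> content, in LRU order

    def get(self, key, fetch_from_core):
        """Return content, fetching from the core only on a miss."""
        if key in self._store:
            self._store.move_to_end(key)   # mark as recently used
            return self._store[key]
        content = fetch_from_core(key)     # slow path: core round trip
        self._put(key, content)
        return content

    def prefetch(self, predicted_keys, fetch_from_core):
        """Warm the cache with content predicted to be requested soon."""
        for key in predicted_keys:
            if key not in self._store:
                self._put(key, fetch_from_core(key))

    def _put(self, key, content):
        self._store[key] = content
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used
```

In a real deployment the predicted keys would come from popularity forecasts fed down from the core, so the nightly prefetch absorbs the morning's demand before the first request arrives.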

And so it goes, like a never-ending tide: the oscillation between centralization and localization, in a battle between efficiency and latency.

When do you expect Cloud RAN to be implemented? How fast will the trend toward centralization become virtualization?