Time to Aim Lower: Zero Latency at the Edge Is Achievable at Scale
July 27, 2018
Anton Kapela is the Chief Technology Officer of EdgeMicro.
The edge computing market is full of hype. Megawatts of hype. Gigawatts of hype. But when you push aside the marketing buzz, most of what’s being done in edge computing is rather modest in scope: small micro data centers that will move parts of the cloud a bit closer to users.
Micro data centers at the edge are very important, but they are not enough. What the edge market needs is boldness rather than just hype. We need a vision that truly paves the way for the next wave of the internet. I am talking big vision. Something even John Legere would give an approving nod to.
I want to suggest a vision and goal that is truly worthy of buzz: zero latency at the mobile edge. That would make the future of mobile computing possible today for a long list of groundbreaking applications that are not practical with the kind of latency that exists today. The goal of zero latency is no longer theoretical. It is within reach, and I see three key components to achieving it:
Accelerating LTE airlink speed
Eliminating the latency in network “back ends”
Moving content much closer to end users through a proven peering/colocation model
Reducing LTE Airlink Latency
Let’s start with what’s happening to the LTE airlink. Currently, it adds 20 to 40 milliseconds of system round-trip delay, which is not trivial. That alone has an impact on user experience, but enhancements to that protocol will enable sub-frame access that produces single-digit milliseconds of round-trip delay. That is a significant step toward a zero-latency experience for end users.
And as real 5G deployments happen, on millimeter wave or otherwise, we can expect to see that figure fall to sub-millisecond round trips. This advancement puts wireless data traffic into the realm of wired network speeds, which is truly amazing.
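To make those figures concrete, here is a quick back-of-the-envelope sketch in Python. The airlink round-trip numbers are the ones cited above; the back-end figure and the midpoint choice are illustrative assumptions, not measurements:

# Rough round-trip latency budget per airlink generation. Airlink
# figures come from the numbers cited above; the back-end round trip
# is an assumed placeholder for illustration only.

AIRLINK_RTT_MS = {
    "LTE today": 30.0,               # cited range is 20-40 ms; midpoint assumed
    "LTE sub-frame access": 5.0,     # "single-digit milliseconds"
    "5G (mmWave or otherwise)": 0.5, # "sub-millisecond round-trips"
}

def total_rtt_ms(airlink_ms: float, backend_ms: float = 25.0) -> float:
    """Airlink round trip plus an assumed back-end round trip."""
    return airlink_ms + backend_ms

for label, airlink_ms in AIRLINK_RTT_MS.items():
    print(f"{label:26s} airlink={airlink_ms:5.1f} ms  total={total_rtt_ms(airlink_ms):5.1f} ms")

The arithmetic makes the point: once the airlink drops to single-digit milliseconds, the network behind it becomes the dominant source of delay. That is where we turn next.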
Routing Data through a More Efficient 'Back End'
Making the LTE airlink faster is a critical step toward zero latency, but let’s look at an even bigger source of latency: the network “back end” that delivers data to and from the edge of cell networks. That network is plagued by end-to-end delays that create frustrating jitter and buffering issues for end users. Why? Getting to the LTE antenna is an arduous journey for packets. Most data travels back and forth across miles of fiber, microwave links, outdated gateways, ratty VPNs, xDSL last-mile “hacks” and other third-party transport networks.
But a zero-latency model for this back end has already been proven in vertical markets where latency interferes with operations, including financial trading networks and high-performance computing. To meet the needs of companies in those industries, important advances are being made in off-the-shelf packet-switching ASICs from vendors like Broadcom and Intel, which now achieve sub-microsecond packet-forwarding latencies.
These chips do so while supporting rich Layer 2 and Layer 3 features in hardware, at rates far exceeding hundreds of gigabits per second and tens or hundreds of millions of packet operations per second. That same ultra-high-speed network hardware will be a centerpiece of delivering a zero-latency network experience. Yes, technically there will still be a few hundred nanoseconds of forwarding delay through edge networking hardware, but that is far closer to "zero" than today's 40-plus milliseconds of round-trip delay. To the end user, it will feel instantaneous.
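As a rough illustration of why that hardware matters, consider the cumulative forwarding delay along an edge path. Both per-hop figures below are assumptions made for the sketch; the ASIC number reflects the few-hundred-nanosecond delay noted above, while the legacy number stands in for software gateways, tunnels and similar bottlenecks:

# Cumulative one-way forwarding delay across a chain of identical hops.
# Per-hop figures are illustrative assumptions, not vendor measurements.

LEGACY_HOP_US = 500.0  # assumed: software gateway, VPN tunnel, etc.
ASIC_HOP_US = 0.5      # "a few hundred nanoseconds or so" per hop

def path_delay_ms(hops: int, per_hop_us: float) -> float:
    """Forwarding delay for a path of identical hops, in milliseconds."""
    return hops * per_hop_us / 1000

HOPS = 8  # assumed hop count between the handset's gateway and the content
print(f"legacy path: {path_delay_ms(HOPS, LEGACY_HOP_US):.3f} ms")
print(f"ASIC path:   {path_delay_ms(HOPS, ASIC_HOP_US):.4f} ms")

Eight hops of legacy gear can cost several milliseconds; eight hops of modern ASIC forwarding cost a few microseconds, effectively invisible to the end user.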
Moving Content Closer to End Users
The last key to achieving zero latency is completely dependent on where the most popular content and compute services reside. Currently, high-demand content lives very far away from users. It resides in centralized mega-data centers that are hundreds or thousands of miles away. As data travels from those central hubs to end users, it picks up a significant amount of latency that degrades user experience.
Latency is all about distance. Unless we move content closer to consumers and eliminate the multiple hops through gateways and interconnections, each of which adds delay, the goal of zero latency cannot be met. Despite the great promise of 5G and the advances described above, latency will continue to interfere with those next-generation mobile computing applications. It is simple physics: moving packets too far through too many hops is a latency killer.
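That physics is easy to quantify. Light in fiber travels at roughly two-thirds the speed of light in a vacuum, about 200,000 kilometers per second, so distance alone sets a hard floor on round-trip time before a single gateway or queue is counted. A minimal sketch:

# Best-case round-trip time from fiber propagation delay alone,
# ignoring every forwarding, queueing and gateway hop along the way.

FIBER_KM_PER_S = 200_000  # approximate speed of light in glass fiber

def fiber_rtt_ms(distance_km: float) -> float:
    """Minimum round-trip time over a fiber path of the given length."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

for km in (10, 100, 1000, 3000):  # edge micro site vs. distant mega-data center
    print(f"{km:5d} km of fiber -> {fiber_rtt_ms(km):6.2f} ms RTT minimum")

Content 10 kilometers away carries a propagation floor of a tenth of a millisecond; content 3,000 kilometers away starts 30 milliseconds in the hole before anything else goes wrong.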
The only solution is to move the most frequently used, latency-sensitive content and compute services closer to the consumer. We need a neutral environment where content providers, mobile network operators and users all connect at the network edge, where end users are. The good news? The solution already exists. It is highly proven, cost-effective, repeatable and efficient. It is the same colocation and peering ecosystem that married networks, data centers and users together during the internet boom at the turn of the 21st century. It is the same model that delivered the incredible internet experience many of us take for granted today. Let's just repeat what is already proven and apply neutral micro-site colocation and interconnection at the edge.
The faster all stakeholders in the mobile internet race to adopt what already works, the sooner my goal of zero latency will become a reality. And not just in a handful of large cities, but in every market and for every consumer.
Now that’s a goal worthy of all the hype and buzz surrounding edge computing.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.