AT&T to Put Its 5G Edge Computing Ideas to Test in Silicon Valley
Plans “test zone” to explore practical implications of edge computing model in Palo Alto
November 14, 2017
AT&T, like many others, is betting that 5G holds the key to using edge computing to power latency-sensitive workloads like augmented reality and self-driving cars. It’s trying that out with an edge computing “test zone” centered on its new Palo Alto Foundry innovation center, launching in early 2018, Igal Elbaz, VP of Ecosystems and Innovation at AT&T, told Data Center Knowledge.
Partners and potential customers will use the test zone to try out applications and services, letting AT&T explore the practical implications of the edge computing model and work through what (mainly open source) software will make up the stack – much of it from companies based in Silicon Valley, although Elbaz wouldn’t name any of them yet.
Elbaz described the planned test zone as “a place where we can start testing and identifying and iterating on the right architecture, the right companies to work with, who will be interested in offloading their computation, and what would be the business case.” Building on Palo Alto’s existing startup ecosystem, the plan is to partner with companies that want to offload computation from devices to the cloud without adding significant latency to their applications. “It’s where we believe companies participating in emerging verticals are located. We’re not only in constant dialog with them but also helping by providing a new direction and a learning experience.”
The test zone will start out with a 4G LTE network, but AT&T will upgrade it to 5G, exploring the convergence of computing and the radio network.
Elbaz sees edge computing as a way of overcoming power and computation limits when delivering immersive experiences to mobile devices. Experiences like immersive mobile AR and other real-time applications would be significantly degraded by the latency between mobile networks and the public cloud, but adding more compute to the device itself is often undesirable for reasons of size or power consumption. The core functions of a self-driving car, like braking and acceleration, will always be on board, “but maybe there are some capabilities you want to offload to start saving on the power of the car and how much computation you want to load into your trunk.”
While AT&T won’t talk about speeds for future networks, Elbaz did point out that 5G connections “represent different characteristics in potentially single-digit-millisecond latency.” The question is what latency you might see from the device to the edge computing node and back over 5G connections. “We want to make sure how this moving from one network to another can influence the overall latency perspective. We can have assumptions but it’s too early to tell. Obviously though, we know what is required; people are talking about 15-20 milliseconds in terms of some of the experiences, so that or even better needs to be the goal.”
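To see why the radio hop matters for that 15-20 millisecond budget, it helps to sketch a rough latency breakdown. The figures below are illustrative assumptions for the sake of the arithmetic, not AT&T measurements:

```python
# Hypothetical latency-budget sketch. All numbers are illustrative
# assumptions, not measured values from AT&T's network.

def end_to_end_latency_ms(radio_ms, transport_ms, compute_ms):
    """Round-trip latency seen by the app: radio access hop,
    network transport to the compute node, and processing time."""
    return radio_ms + transport_ms + compute_ms

# Assumed figures: a 4G LTE radio round trip of ~25 ms versus a 5G
# target in the single digits; transport to a distant regional cloud
# versus a nearby MEC node at the base station or central office.
cloud_over_lte = end_to_end_latency_ms(radio_ms=25, transport_ms=40, compute_ms=5)
edge_over_5g = end_to_end_latency_ms(radio_ms=5, transport_ms=2, compute_ms=5)

TARGET_MS = 20  # upper end of the "15-20 milliseconds" cited for immersive experiences

print(cloud_over_lte, cloud_over_lte <= TARGET_MS)  # 70 False
print(edge_over_5g, edge_over_5g <= TARGET_MS)      # 12 True
```

Under these assumed numbers, only the combination of a low-latency radio link and a nearby compute node fits inside the budget, which is the point of converging 5G with edge computing rather than treating them separately.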
Building on AT&T’s existing work in edge computing, the plan is to blend cloud-native computing concepts, like containers, with network-scale SDN and ONAP, the Open Network Automation Platform. Expanding on this idea Elbaz said, “We want to bring and leverage everything we’ve done so far with SDN and ONAP and our roadmap to 5G to understand what we can do with data analytics and distributed data centers, combining it with what we have with cell towers and central offices.”
AT&T’s edge computing network will offer shared computing resources either in the central office or at cellular base stations, a model more formally known as Multi-access Edge Computing (MEC). Treating existing telecom hosting facilities as distributed pocket data centers makes a lot of sense, especially as switches move to Open Compute and other x86-based platforms. The resources for hosting compute are there, and it makes sense for AT&T to understand how it’s going to use them.
But the company is also considering putting compute that it manages onto customer sites. “FlexWare [AT&T’s managed platform for offloading services onto white-box commodity servers] is the representation of what could be at the customer edge,” Elbaz suggested.
Extending software-defined networks from data centers out into the mobile network will be key to a successful MEC deployment, and that requires a model of how applications will operate and how workloads will migrate between the different layers of a hybrid device/edge/public-cloud platform. “We want to start thinking about use cases, and how do you construct an experience where some of the computation is done on the cloud and some on the edge,” Elbaz said.
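One way to picture the split Elbaz describes is a simple placement rule that assigns each processing step to a tier. The tier names, thresholds, and capacity figure below are illustrative assumptions, not an AT&T design:

```python
# Hypothetical placement sketch for a hybrid device/edge/cloud split.
# Thresholds and the device capacity figure are invented for illustration.

def place_workload(latency_budget_ms, compute_units, device_capacity=10):
    """Pick a tier for a workload: keep tight control loops on the
    device, put latency-sensitive heavy work at the edge, and send
    latency-tolerant work to the public cloud."""
    if compute_units <= device_capacity and latency_budget_ms < 5:
        return "device"  # e.g. a car's braking loop stays on board
    if latency_budget_ms <= 20:
        return "edge"    # MEC node at a base station or central office
    return "cloud"       # batch analytics and other tolerant work

print(place_workload(latency_budget_ms=2, compute_units=3))     # device
print(place_workload(latency_budget_ms=15, compute_units=500))  # edge
print(place_workload(latency_budget_ms=200, compute_units=50))  # cloud
```

The interesting engineering questions the test zone is meant to answer sit inside a function like this: what the real thresholds are, and how a workload migrates between tiers as conditions change.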
The network could even begin to predict what bandwidth will be required. “Over time we could develop machine learning capabilities to predict how users will consume the next workloads for their immersive experience and everything that’s happening, in real time.”
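The kind of prediction Elbaz hints at could start as simply as forecasting each user’s next bandwidth demand from recent samples. The exponential moving average below is a minimal sketch of that idea, not anything AT&T has described deploying:

```python
# Minimal bandwidth-forecast sketch: an exponential moving average
# over recent demand samples. Purely illustrative.

def ema_forecast(samples_mbps, alpha=0.5):
    """Return a one-step-ahead bandwidth forecast (Mbps) by weighting
    recent samples more heavily than older ones."""
    forecast = samples_mbps[0]
    for sample in samples_mbps[1:]:
        forecast = alpha * sample + (1 - alpha) * forecast
    return forecast

print(ema_forecast([10, 10, 10]))  # 10.0, steady demand
print(ema_forecast([10, 20, 40]))  # 27.5, rising demand pulls the forecast up
```

A production system would use far richer models, but even this shape shows how a network could provision edge resources ahead of demand rather than reacting to it.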
Using its Foundry centers builds on existing relationships with the Silicon Valley startup ecosystem, helping AT&T identify candidate companies for the program. Those companies will work with AT&T to develop both software architectures and user experiences, which in turn will help the network operator understand how to give developers the right tools and services. As Elbaz put it, “we can explore the level of abstraction we need, do we want APIs, or do some developers need deeper integration or more access? These are the kind of questions we want to understand.”
Computation offload is a good fit with existing development of tools like containers and serverless computing that move compute across the network. AT&T plans to use the test zone and the Foundry to help understand how where those computing resources operate affects applications.
The movement back to the edge isn’t a rejection of hyper-scale cloud; it’s about bringing the advantages and abstractions of the cloud model closer to where the computation is needed. Not every deployment will fit into just one edge computing model, and by engaging with the developer community to find out what works best, AT&T can start to understand the needs of a range of different applications and determine how edge computing resources need to be deployed.