How Enterprises Could One Day Use Their Data Centers to Be Their Own 5G Providers

A demo shows the feasibility of a 5G and LTE edge running on containers and Kubernetes, with no NFV.

Scott Fulton III, Contributor

December 10, 2019

The Lenovo edge cluster used in the NFV-less 5G wireless demo at KubeCon 2019 (Photo: Scott Fulton)

Two of the many burning questions about the continuing evolution of 5G wireless technology have centered around virtualization: Can a large enterprise with abundant campus space attain both the resources and the wherewithal to build and maintain its own wireless telephone networks onsite, in its own data centers? (This is what a telco would call the “customer edge.”) And how will telecommunications operators be able to partition their networks’ virtual infrastructure to prevent incursion from one tenant’s space into another’s, ensuring call security while blocking potential attack vectors?

The debates over both of these questions could determine whether an enterprise data center could attain the equipment to become its own phone company, complete with radio access networks (RAN), transmitters, and virtual base stations. An organization with enough space could potentially provide its own 5G connectivity to employees, and even to roaming guests.

Up to now, the common factor in both debates has been network functions virtualization (NFV) and the low-level partitioning of server infrastructure for running networking functions in isolation. Some engineers with major 5G stakeholders have argued that NFV should not be relied upon to separate customer functions from telco functions on the same platform, or in the same data center. Others have argued there are ways to pull it off, particularly by implementing security and isolation techniques at the lowest level of the virtual network.


Points of Presence

This whole argument may be rendered moot, however, if the latest experiments conducted by collaborative researchers at Red Hat, China Mobile, and France-based telecom research facility Eurecom continue to bear fruit. A public demonstration in late November at the KubeCon 2019 conference in San Diego, involving an isolated communications network plus elements of 5G and 4G LTE wireless technologies, accomplished a video call using consumer-grade phones, x86 server components, and no NFV.

KubeCon 2019 demo

A San Diego-based edge cluster served as the originating point of presence (PoP) for the call. The facilitator was stationed in Montreal. The PoP at the receiving end was located in Eurecom’s research laboratory in Sophia Antipolis, France, involving engineers from both Red Hat and the OpenAirInterface Software Alliance. Though the connection could be described at best as dodgy (the picture above says enough), live metrics corroborated that packets were indeed being transferred.


Heather Kirksey, Linux Foundation; Azhar Sayeed, Red Hat

Each component in the demonstration was physically shielded from all other wireless networks; the phones, for example, were enclosed within Faraday cages. Most importantly, though, no NFV technology and no virtualized network function was used anywhere in the demonstration. Its virtual infrastructure was provided by Red Hat OpenShift, the commercial Kubernetes platform now backed by Red Hat’s new corporate owner, IBM. The software therein was entirely containerized and orchestrated by Kubernetes, in a way that telco engineers were claiming would be impossible as recently as this summer.

“All network functions from different partners were containerized network functions (CNF), from the radio to the core,” confirmed Hanen Garcia, Red Hat’s telco solutions manager, in a note to Data Center Knowledge, “running on the latest release of the Red Hat OpenShift Container Platform deployed on bare metal.” Other partners in the project, said Garcia, included Intel, application delivery controller provider A10 Networks, systems engineering consultancy Altran (formerly Aricent), SD-WAN provider Turnium, and cloud-native network fabric maker Kaloom. Lenovo provided Open Cloud-automated servers for the demonstration.

Qiao Fu, project manager with China Mobile Research Institute

“While we’re building this 5G cloud, there are three major challenges we’re now facing,” Qiao Fu, project manager with China Mobile Research Institute, said from stage at KubeCon. Infrastructure remains insufficiently decoupled, binding one brand of software to the same brand of hardware, she remarked. “That gives us trouble when we want to share global resources of the cloud. We expect that infrastructure to evolve from VNF-bounded to a white box, with the API capability defined by some community-driven efforts.”

Secondly, she continued, telcos lack a common stack of services at the platform level, partly because whole groups of these services are delivered today inside single VNFs.

And thirdly, “It’s funny to say, but operations is actually now quite the obstacle for telco operators. Compared with the increasing complexity of the network, we’re still lacking in sufficient ways to do operations. So now we are thinking we can evolve the operations to not only software-defined but also intelligence-defined, utilizing technologies including automation and artificial intelligence.”

Catalytic Conversion

China Mobile is largely responsible for catalyzing 5G in the first place, conducting the initial research into denser installations of smaller, lower-cost transmitters that got AT&T and others involved while 4G was still supposed to be in its prime. The company is a major contributor to 3GPP, the organization of wireless industry stakeholders that collectively determines the substance and the agenda of 5G technologies. And it has been the backer of an open source laboratory for staging the Open Network Automation Platform. Like Kubernetes, ONAP is a project backed by the Linux Foundation.

Now China Mobile is going forward with experiments into network operations management that would steer ONAP even more in the Linux Foundation’s direction. It could not only put Kubernetes at the heart of telco operations centers but effectively remodel existing enterprise data centers into telecom-capable facilities. Add a transmitter and a radio access network, and any enterprise (a manufacturer, hospital, insurance provider, or energy producer) could be its own phone company.

“We said, ‘How interesting would it be if we were to actually bring an entire PoP right on the stage here with us that is remotely connected to the core in Montreal?’” recalled Azhar Sayeed, Red Hat’s chief architect for service providers. Indicating the server setup at his feet, he continued, “That’s what you see here at the back, with those noisy fans: that full edge PoP with some servers that do edge compute, that do the radio capability with the Faraday cage that was custom-built for this particular demo, and a 5G radio.”

The question bears asking more emphatically: Was this really a 5G technology demonstration? The answer is a firm “kind of.”

5G New Radio (5G NR) technology was involved in the wireless connectivity. However, some networks on the back end maintain their 4G LTE buildout, which 3GPP permits telcos to do when transitioning to 5G. It is permissible, under 3GPP’s guidelines, to mix certain 4G technologies with 5G NR (a configuration 3GPP calls “non-standalone”) and call the resulting setup “5G.”

The one question on the table that was not answered, emphatically or otherwise, was how long the US government would permit US-based companies to continue collaborating in research projects on global standards. Last May, the Federal Communications Commission issued an order blocking China Mobile from providing services through US-based telephone networks. China Mobile’s equipment on stage at KubeCon was shielded by the Faraday cage for legal reasons as well as technical ones.

Brass Ring

Up to now, telcos’ objections to the idea of Kubernetes in their data centers — or any mode of containerization, for that matter — dealt primarily with how containerized architectures conflicted with the goals of infrastructure-level service isolation. One such objection dealt with how enterprise Linux containers (the original “Docker containers”) required the inclusion of network interface card (NIC) drivers to have any visibility into the network on which they were being used. Ironically, Linux containers were initially designed to run in isolation, with network access provided by a single gateway maintained by the Linux kernel hosting them. (Windows containers would later mirror this relationship.) But network management software can’t function under such isolation, making supplemental NIC drivers necessary to establish connectivity. That would be a problem because it would open distributed, multi-tenant systems to threats of incursion at a level that hypervisor-hosted virtualized platforms did not allow.
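
To picture the isolation model in question, consider a minimal sketch (an illustration assuming a Linux host and root privileges, not code from the demo): a process that detaches into a fresh network namespace, as a container does at startup, is left with nothing but an inert loopback device. Reaching any real network from there requires extra plumbing, either the kernel-managed gateway or a NIC driver carried along inside the container.

```c
// Minimal sketch of Linux network-namespace isolation (assumes a Linux
// host and root privileges). A process that unshares its network
// namespace keeps only a loopback device: the blank slate from which
// every container starts.
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    printf("Interfaces before unshare:\n");
    system("ip -brief link");           /* the host's view: eth0 and friends */

    if (unshare(CLONE_NEWNET) != 0) {   /* detach into a new network namespace */
        perror("unshare(CLONE_NEWNET)");
        return 1;
    }

    printf("\nInterfaces after unshare:\n");
    system("ip -brief link");           /* only 'lo' remains, and it is down */
    return 0;
}
```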

Tom Nadeau, Red Hat’s technical director of network virtualization

“We had to have like a Swiss Army knife of drivers, basically, to make this work,” said Tom Nadeau, Red Hat’s technical director of network virtualization, in an interview with Data Center Knowledge. (Nadeau is considered one of the founding fathers of SDN.) He was referring to the plethora of NIC drivers that each container in a distributed network would need to make connectivity feasible at the scale NFV would normally require.

Senior Red Hat engineer Billy McFall

Nadeau’s team is behind the creation of Virtual Data Path Acceleration (vDPA), an evolved version of a framework for managing communications on containerized platforms, including those orchestrated by Kubernetes — and thus, Red Hat hopes, by its OpenShift platform. As senior Red Hat engineer Billy McFall explained, every network interface controller vendor today maintains its own proprietary ring layout — its own mechanism for shuffling packets through memory as they are being transferred. As a result, in order for a container to receive incoming packets, it must include the specific driver that allows data ingestion from that NIC. The more NIC brands are represented in the network, the more drivers each container requires. This not only renders such networks non-portable between server clusters but makes microservices-style distribution — the variety that inspired data centers to adopt containerization in the first place — unwieldy if not impossible.
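
To make “ring layout” concrete: it is the shared, in-memory table of packet descriptors through which a driver and a device hand buffers to one another. The split-ring descriptor published in the virtio specification, the common format vDPA standardizes on, looks roughly like this simplified sketch (field names follow the specification; the demo’s own code was not published):

```c
// Simplified sketch of the split-ring packet descriptor from the
// virtio specification: the vendor-neutral layout vDPA standardizes
// on. Each vendor NIC today defines its own incompatible equivalent.
#include <stdint.h>
#include <stdio.h>

struct vring_desc {
    uint64_t addr;   /* physical address of the packet buffer */
    uint32_t len;    /* length of the buffer, in bytes */
    uint16_t flags;  /* e.g., a "next" flag to chain descriptors */
    uint16_t next;   /* index of the next descriptor in a chain */
};

int main(void) {
    /* One driver that understands this 16-byte format can talk to any
       device that exposes it, which is the portability vDPA is after. */
    printf("virtio descriptor size: %zu bytes\n", sizeof(struct vring_desc));
    return 0;
}
```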

The vDPA provision would amend the existing Data Plane Development Kit (DPDK) framework. In a first-generation, VM-based environment (like those in most data centers), vDPA would provide fast-path access between processes in the user space and the physical NIC by pairing the virtual function (VF) for the data plane directly with the NIC. Software in the user space would recognize a single, virtual ring layout (virtio), with the NIC’s VF sorting out the differences. As Nadeau pointed out, this allows network architects — in many cases, for the first time — to tailor the specifications of control and data plane channels for specific networking purposes.
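
What that pairing buys user-space software is a direct, poll-mode path to the device, with no kernel or hypervisor in the loop per packet. Here is a rough sketch of that fast path in DPDK terms, the framework vDPA amends (illustrative only, assuming a DPDK development environment; the names and sizes are hypothetical, not the demo’s code):

```c
// Rough sketch of a DPDK poll-mode receive loop: the user-space fast
// path that vDPA exposes through a standard virtio ring. Assumes a
// DPDK development environment; names and sizes are illustrative.
#include <stdint.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define RX_RING_SIZE 1024
#define BURST_SIZE   32

int main(int argc, char **argv) {
    if (rte_eal_init(argc, argv) < 0)   /* bind to DPDK's environment layer */
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    struct rte_mempool *pool = rte_pktmbuf_pool_create(
        "mbuf_pool", 8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

    uint16_t port = 0;                  /* first available port */
    struct rte_eth_conf conf = {0};
    rte_eth_dev_configure(port, 1, 0, &conf);   /* 1 RX queue, no TX */
    rte_eth_rx_queue_setup(port, 0, RX_RING_SIZE,
                           rte_eth_dev_socket_id(port), NULL, pool);
    rte_eth_dev_start(port);

    struct rte_mbuf *bufs[BURST_SIZE];
    for (;;) {                          /* poll; no interrupts, no kernel path */
        uint16_t n = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < n; i++)
            rte_pktmbuf_free(bufs[i]);  /* a real app would process here */
    }
}
```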

vDPA and Containers slide

Yet in its full form, vDPA would have a Kubernetes plugin (CNI) inject the DPDK message-passing component vHost-User directly into a container, as a socket. The VF paired with the NIC would communicate directly with this socket, with the result being faster communication speeds. Red Hat touts the opportunity here to achieve wire speed — the equivalent of the peak theoretical bitrate of the same connection set up on all-physical components.
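
If that design pans out, the container side could be as simple as the following minimal sketch: the application binds no physical device at all, only DPDK’s standard virtio_user virtual device pointed at the injected socket (the socket path and program name are hypothetical; the demo’s actual wiring was not published):

```c
// Minimal sketch of a containerized DPDK app attaching to a vhost-user
// socket injected by a CNI plugin. The socket path is hypothetical;
// virtio_user is DPDK's stock virtual device for this arrangement.
#include <stdlib.h>
#include <rte_eal.h>

int main(void) {
    char *eal_args[] = {
        "cnf",                     /* program name (placeholder) */
        "--no-pci",                /* no physical NICs are visible in the pod */
        "--vdev=virtio_user0,path=/var/run/net/vhost0.sock",
        "--single-file-segments",  /* memory layout virtio_user expects */
    };
    int n = sizeof(eal_args) / sizeof(eal_args[0]);

    if (rte_eal_init(n, eal_args) < 0)  /* attach to the injected socket */
        return EXIT_FAILURE;

    /* From here, the app polls virtio_user0 exactly as in the previous
       sketch, while the NIC's paired virtual function serves the other
       end of the ring at close to wire speed. */
    return 0;
}
```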

“You can tailor these channels going up to the container,” said Nadeau. “Without this, today, you either have to use some SR-IOV voodoo thing, or you just have to hope for the best, with the fake VM that’s underneath Kubernetes... When you’re on a $12,000 server, it’s maybe not so critical that this is tuned right,” he told us.  “But as you shrink the form factor down, this is really important. You can exactly tailor, curate the whole thing so that it works exactly the way you need it to.”

Enabling wirespeed latency in an environment with IEEE 1588 synchronization, Nadeau explained, will be critical as engineers build telecommunications capabilities into their edge data centers. When network equipment vendors become involved with edge deployments, he said, they prescribe not only what other hardware to use and how to set it up, but how it should be synchronized, and which VNFs are prohibited from running on its equipment. It’s an issue that Red Hat discovered for the first time as recently as last May.

It’s “brittle,” he continued.  “There’s a lot of telcos now that are asking us to run virtualized RAN workloads, and they don’t want a brittle situation. They want a situation that works, but they want the flexibility to buy whatever parts they need. So at least what we’re doing here is saying, if you pick from these two or three sets of line cards, run it on this kind of server, you can expect this kind of performance, no matter what.”

It’s a solution that could free telcos to build out their own data centers into powerful customer-facing clouds that could compete in some regards with AWS, Azure, and Google. But that same solution could give everyday enterprises the keys they need to steer clear of those telcos altogether. How fast this vDPA solution comes to production, if at all, may depend on whether the telcos that will be its first customers are willing to take the risk.

About the Author

Scott Fulton III

Contributor

Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.

