Edge Computing and Data Center Networks – Understand the Basics

The data center network is transforming, and both the cloud and the edge are becoming absorbed by it.

Scott Fulton III, Contributor

April 17, 2019


We call it “the edge” partly because it sounds cool and partly because it evokes the idea of a wild frontier where the rules are not yet established. But in the context of the data center network, the edge has a definite meaning: those addressable points along the periphery where data is either entering or exiting. Transporting data from the edge to the core, transforming and processing that data at the core, and then transporting the results back to the customer through the edge are all expensive processes.

Are they needless processes? Do they serve only the purposes of the architecture for which they were designed, when re-architecting the network could serve users’ needs more directly and effectively? These are the questions network engineers are still asking. Yet if they look around, they may notice that some competitors have already come up with persuasive answers.

Customers at the Edge

“If you have one data center, and you put all of your efforts and technology into optimizing that for performance and reliability, sure, that’s a good thing to do,” says Steven Carlini, VP for innovation and data center with Schneider Electric. “But the applications that we’re dealing with, from the big box stores like Home Depot — they’re competing with Amazon, and they need to raise the experience of their customers to the point where people are still going to want to shop there.”

Home Depot’s heavily revised application, Carlini says, includes “wayfinding” capabilities, such as a function that can estimate what aisle an item may be located on, based on a picture rather than text. Customers have little tolerance for experiences that present themselves as artificial intelligence but aren’t fast. One way to facilitate such apps is to put processing as close to the customer as possible. A micro data center in the back warehouse may be an attractive option.

“The applications are driving the need to move the processing and content delivery closer,” Carlini says. “Everybody talks about Netflix. I hate to bring it up, but that’s the best example of the quality-of-service that people are demanding. If it takes five minutes to download a movie, they’re not going to be very happy about that.”

Just How Many Edges are There?

Like an oval, the topology of an enterprise data center network has one edge. But, like the Indianapolis Motor Speedway, you can’t see the entire circuit from anyplace you happen to sit.

The authors of the annual State of the Edge report limit their definition of the edge to one type of location, but with two sides: one for infrastructure and the other for servicing client-side devices. At the Open Compute Project Summit last year, a Mellanox representative asserted the existence of four edges: for enterprise computing, media, the Internet of Things and mobile apps.

“Ask 50 people, ‘Where is the edge?’ You’re probably going to get 50 different answers,” says Matt Trifiro, chief marketing officer with micro data center maker Vapor IO. “As a person trying to advance an ecosystem, that’s always been very frustrating to me. Vendors and analysts all grab onto their particular version of the edge, and we all think we’re talking about the same thing, but we’re actually not.”

Trifiro points to one key distinction that, in the end, actually matters: edge applications serve two principal cases:

  • Outbound content delivery to consumers — The internet itself is the most prominent example of a network made orders of magnitude more powerful by processing power at its periphery. Content delivery networks cache frequently accessed content at the edge in order to deliver Web pages and multimedia streams to you faster and with a higher quality of service (QoS). You’re using a CDN right this moment. (Otherwise, you might still be waiting for the page to come up.)

  • Inbound data processing and hygiene to databases — When high volumes of data (especially video) are being acquired from the field, shuttling all that data directly to the central data center for processing can lead to bottlenecks. And consuming all that bandwidth can be a drain on operating expenses. Having processing power at or near the point of data ingress creates the opportunity to reduce or eliminate latencies caused by both the volume and the “raw” state of freshly collected data, shifting much of the burden of workload processing to points in the network that consume the least time.
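That inbound case is essentially a filter-and-forward pattern: clean and summarize data where it lands, and ship only the reduced result to the core. Below is a minimal sketch of that idea in Python; the SensorReading record, the window contents, and the min/mean/max summary are hypothetical illustrations, not any vendor’s API or product.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Iterable


@dataclass
class SensorReading:
    """Hypothetical raw record arriving at an edge site."""
    sensor_id: str
    value: float


def summarize_window(readings: Iterable[SensorReading]) -> dict:
    """Collapse a window of raw readings into one compact summary per sensor."""
    by_sensor: dict[str, list[float]] = {}
    for r in readings:
        by_sensor.setdefault(r.sensor_id, []).append(r.value)
    # Forward only count/min/mean/max per sensor instead of every raw sample.
    return {
        sensor: {"count": len(vals), "min": min(vals),
                 "mean": mean(vals), "max": max(vals)}
        for sensor, vals in by_sensor.items()
    }


if __name__ == "__main__":
    window = (
        [SensorReading("cam-01", v) for v in (0.91, 0.94, 0.89)]
        + [SensorReading("cam-02", v) for v in (0.40, 0.42)]
    )
    # In a real deployment, this summary (not the raw stream) is what would be
    # shipped upstream to the core data center.
    print(summarize_window(window))
```

The point of the pattern is simply that the payload crossing the wide-area link is a fraction of the raw stream, which is where the bandwidth and latency savings come from.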

“I think what we tend to do is separate a couple of very, very key trends into their own silos,” argues Kevin Shatzkamer, VP for enterprise and service provider strategy and solutions at Dell EMC. Specifically, he pointed to these historical trends:

  • Cloud computing drove the broad trend towards consumption-based delivery models.

  • Telecommunications providers are moving away from their legacy, proprietary stacks and infrastructure onto open-source, cloud-inspired delivery models, sparked by the success of network-functions virtualization (NFV).

  • Software-defined networking led to the realization that virtualization could be made elastic to meet fluctuating customer demands. In recent years, SDN has made its way to site-to-site communications with the rapid proliferation of SD-WAN.

  • Hybrid cloud has re-emerged as the savior of data center networking. It isn’t affordable or practical to push everything, especially data, onto public cloud platforms. Meanwhile, Microsoft’s Azure Stack and soon Amazon Web Services’ Outposts are enabling a kind of hybridization in reverse, where the cloud providers are building their own assets inside customers’ facilities. And customers are happy to pay premiums for that privilege.

These issues have all germinated within their own silos. As they have emerged onto the public stage, they give the impression that, suddenly, there are several co-existing edges, perhaps countless ones. These may be tricks of perception, however. As the edges of core data centers, micro or modular data centers, telcos, and cloud service providers all collide, a single edge emerges: one point of contact for inbound and outbound traffic.

“There are lots of edges, but the edge we care about — the one we’re all talking about — is the edge of the last-mile network,” Vapor IO’s Trifiro says. “That can be from the radio tower to the phone or a Wi-Fi hotspot or a car, or it could be fiber underground from the headend to the house — the link between the infrastructure and the device.”

Is Latency Driving a Common Edge Server Form Factor?

Fundamentally, communications networks’ objective is to meet their users’ requirements for QoS, which makes latency the bane of their existence. When a network is operating at its greatest efficiency, latency is minimized.
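Distance alone sets a floor under that latency. The following back-of-envelope calculation assumes signal propagation in fiber at roughly 200,000 km/s (about two-thirds the speed of light); the distances are hypothetical examples, and real round trips add routing, queuing, and processing delays on top of this floor.

```python
# Back-of-envelope propagation delay: why proximity sets a floor under latency.
# Assumes ~200,000 km/s signal speed in fiber; distances below are hypothetical.
FIBER_KM_PER_MS = 200.0  # ~200,000 km/s works out to ~200 km per millisecond


def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip propagation delay over fiber, in milliseconds."""
    return 2 * distance_km / FIBER_KM_PER_MS


for label, km in [("metro edge site", 15), ("regional data center", 400),
                  ("distant core facility", 1500)]:
    print(f"{label:>22}: at least {round_trip_ms(km):.2f} ms round trip")
```

No amount of software optimization recovers the milliseconds spent in transit, which is the basic argument for placing processing closer to where data enters and exits the network.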

Optimizing the processing of incoming data and maximizing the throughput of outgoing data, at the point Trifiro identified, both reduce latency, in some cases greatly. This convergence of the various perceived edges in the network could lead to a single set of standardized form factors, or “white boxes,” for edge server deployments. Indeed, work toward that objective has already begun, spearheaded by contributions from the following:

  • The Telecom Infra Project’s Edge Computing group, co-chaired by representatives from Intel and Spanish telco Telefónica;

  • The Open Compute Project, one of whose members — Nokia — recently suggested a more compact form factor for edge deployments than the OpenRack v2 design that others had previously advanced, probably as a NEBS-compliant, 1U to 3U modular appliance with front-facing ports;

  • The Open19 Foundation, launched by Microsoft-owned LinkedIn, Hewlett Packard Enterprise and Vapor IO to lead their own discussions into standardized or common form factors.

For Dell Technologies’ part (including Dell EMC and VMware), Shatzkamer believes neither telco customers nor enterprises are asking for a single, standardized form factor for every server or appliance at every point along the edge. In fact, enterprise customers see a danger in making edge servers look too much like any other general-purpose system, or even a hyperscale server (in the case of something fitting into OCP OpenRack v2), lest management delegate responsibility for those components to someone other than the network team.

“If I just deploy a standard server at my edge, if I’m enterprise, IT takes ownership of the standard server and does IT things with it,” he says. “If it looks like a networking node, like it did before, IT keeps their hands off of it. Now, what’s inside of that edge computing node? It’s a server. But for SD-WAN use cases, what we see is, ‘I want this looking like a networking node.’” For this and other reasons, Shatzkamer says, Dell EMC offers a portfolio of different server and appliance types, rather than imposing one standard on customers — a standard that may look too hyperscale-ish, if you will, for a particular customer’s tastes.

So there may be a number of persistent reasons — not just architectural but political ones as well — that will perpetuate the appearance of multiple edges in the data center network. However, no matter how many edges you see at once, the shape of the data center network is transforming, and both the cloud and the edge are becoming absorbed by it.

About the Author

Scott Fulton III

Contributor

Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.
