
The Effect of Connectivity on the Colo Ecosystem

The challenge of supporting new real-time streaming applications is prompting enterprises to relocate and redistribute their data center assets.  That shift is forcing colocation providers to rethink their plans.

When the data center colocation market began, the only digital connection that truly mattered was the one between the customer premises and the service site.  Since that time, colo has become a powerful segment of the communications industry, driving traffic at an exponentially increasing rate.

Changing the entire traffic profile for data across the world is the rapidly climbing number of sensor devices and logic-equipped appliances that make up the “Internet of Things.”  Instead of serving as mere conduits, data centers are evolving into harvesting operations for data, which is streamed through parallel pipelines and processed by analytics engines even before it is warehoused.

While the major colocation providers continue to acquire and construct sprawling spaces in strategic metropolitan locations, customers of all sizes are demanding greater flexibility in the distribution of their computing resources.  Instead of centralizing everything in one location, streaming data analytics demands that processors and servers be brought closer to where the data is gathered.

This requirement has given rise to what’s being called edge computing.  The edge model is compelling colo providers to offer less centralized, more distributed spaces where customers can deploy more compact systems with higher processor core densities.  For example:

•  Campus computing models that used to be reserved for branch offices or public sector facilities are being reconsidered for remote locations.  This way, a ground transportation provider can gather logistics data on its shipping operations from locations closer to its sources and destinations, without incurring the latency of outsourcing all of its operations to the public cloud.

•  So-called “compute hubs” are being deployed by colos closer to existing connectivity points, such as carrier hotels where multiple connectivity providers make themselves accessible to tenants.  These hubs contain just enough server power to run highly distributed data services — analyzing, processing, and filtering data as it’s collected from the field, before shipping it to central data stores (a minimal sketch of this pattern follows this list).

•  Modular hyperscale blocks are among the new components that major data center providers and investment trusts are offering to keep their large complexes competitive.  Such blocks are typically preconfigured with rounded-up power capacity (e.g., 2 megawatts at a time), with the goal of attracting mid-size enterprises with more affordable, interlocking “starter” units that can be scaled up over time.
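To make the compute-hub pattern above concrete, here is a minimal, hypothetical sketch in Python of edge-side aggregation: raw sensor readings are reduced to a compact summary near where they are produced, and only the summary is forwarded upstream.  The names (SensorReading, summarize_window, ship_to_core) and the central-store URL are illustrative assumptions, not drawn from any vendor's product.

# Hypothetical edge-side filter/aggregator: reduce raw sensor readings to a
# compact summary before forwarding it to a central data store.
from dataclasses import dataclass
from statistics import mean
from typing import Iterable
import json
import urllib.request

CENTRAL_STORE_URL = "https://central.example.com/ingest"  # placeholder endpoint

@dataclass
class SensorReading:
    device_id: str
    temperature_c: float
    timestamp: float

def summarize_window(readings: Iterable[SensorReading]) -> dict:
    """Collapse a window of raw readings into one summary record."""
    readings = list(readings)
    if not readings:
        return {"count": 0}
    temps = [r.temperature_c for r in readings]
    return {
        "count": len(readings),
        "avg_temperature_c": round(mean(temps), 2),
        "max_temperature_c": max(temps),
        "first_ts": min(r.timestamp for r in readings),
        "last_ts": max(r.timestamp for r in readings),
    }

def ship_to_core(summary: dict) -> None:
    """Send only the summary upstream; the raw stream never leaves the edge."""
    body = json.dumps(summary).encode("utf-8")
    req = urllib.request.Request(
        CENTRAL_STORE_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

if __name__ == "__main__":
    window = [
        SensorReading("truck-17", 21.4, 1700000000.0),
        SensorReading("truck-17", 22.1, 1700000005.0),
        SensorReading("truck-42", 35.9, 1700000007.0),
    ]
    # Print the compact payload that would be shipped upstream via ship_to_core().
    print(summarize_window(window))

The design choice is the point: the raw stream stays at the edge, and the central or “core” facility only ever receives the filtered result, which is what keeps latency and backhaul traffic down.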

In the meantime, new market entrants are capitalizing on the demand for smaller, more distributed computing space.  Content delivery networks (CDNs) have begun offering — mostly on a trial basis — support for limited edge computing services at their customer connectivity points.

HPE and Schneider Electric have already co-produced specifications and working models for micro data centers — preconfigured stations small enough to fit inside a trailer, or at least to be transported in one.  Dell EMC’s Extreme Scale Infrastructure division recently responded with a modular data center (MDC) of its own, designed to occupy only half the width of an ordinary parking space.

And a startup called Vapor IO — founded by some of the people behind the Open Compute Project — has constructed a cylindrical “Vapor Chamber.”  Working with cell tower facilities owner Crown Castle, Vapor IO aims to install thousands of these chambers in the equipment shacks alongside cell towers, where both bandwidth and electrical power are already plentiful.  There, customers can deploy edge computing resources that process data from wirelessly connected devices in the field, without having to ship it to hyperscale data centers first.

While these efforts do not threaten the existence of hyperscale data centers, or of the colo market as a whole, they do strike at the identity of both, at least in its current incarnation.  Dell perceives a forthcoming subdivision of the outsourced computing market into three tiers, which it calls “cloud,” “core,” and “edge.”  The public cloud would become less of a staging area and more of a provider of services that coalesce to build an application, while at the edge, servers run critical-needs functions on freshly arrived data in real time.

The “core” in this model is what remains for outsourced data centers, in both the colo space and what we currently call the public cloud.  It’s less expansive than hyperscale’s original ambitions, but that’s not to say it isn’t both essential and lucrative, especially if colos can position themselves as the core points of customer contact for both the edge and the cloud.

 
