Equinix VP: New Power Models Make Open Source Necessary

Changes to the way newer, highly scalable servers consume power made it necessary for the world’s leading colo provider to change the way networks connect to those servers, says Equinix’ vice president for technology innovation.

Scott Fulton III, Contributor

November 4, 2016

7 Min Read
Inside an Equinix data center in Silicon Valley (Photo: Equinix)

The 100 Gbps router and transponder device called Voyager, announced last Tuesday, may be recorded in history as the first such device ever to have been created by a social network and a colocation provider.  Facebook’s and Equinix’ joint laboratory comprises data centers SV3 and SV8, two of Equinix’ prime Silicon Valley locations.

In an exclusive interview with Data Center Knowledge, Dr. Kaladhar Voruganti, Equinix’ vice president for technology innovation and formerly an IBM researcher, told us his company’s participation in Facebook’s Open Compute Project, and its networking offshoot Telecom Infra Project (TIP), is not some little experiment on the side.  It’s a campaign necessitated by a perfect storm of conditions: the status of the cloud services market, the architecture of servers, and the laws of physics.

“How you design a data center to support this type of new hardware is different than how traditional hardware is supported,” said Dr. Voruganti.

“For example, traditional hardware relies on power from a centralized UPS system in the data center.  However, with the new hardware that is coming in — which is the TIP/OCP hardware — the power distribution has a decentralized model where the batteries are on the racks.  So how you do the AC-to-DC power conversion is different in this model, than how you would do it for traditional hardware.”

The Voyager and the Jeep

For Equinix to support the lower-cost, highly modular OCP class of server hardware, it needed a more adaptable data center model — one that aligned better with modular devices using very localized power sources.  Alternatively, Equinix could have stuck with traditional models for all its data centers going forward.  But that would have made it more difficult, Voruganti said, for customers to move from their traditional workloads to a new class of more scalable, containerized workloads that use distributed servers and more automated orchestration.

There’s a gathering multitude of hardware providers in the OCP space.  At the same time, new Equinix customers are demanding faster provisioning for their services, he told us.  That makes it incumbent upon Equinix to work with a broad number of hardware vendors simultaneously, on behalf of customers who no longer have the option of waiting weeks for their data centers to be provisioned.

The implication here is that Equinix (or a large colo provider like Equinix, of which admittedly there aren’t all that many) must have a bigger stake in negotiations.  OCP gives this entity a role that an ordinary customer could never have had before: an architectural role.  Voruganti denied that Equinix wants to assume the role of “calling the shots”; nevertheless, by participating with Facebook, it is definitely assuming a place for itself at the table.

Not too often in history has the customer been capable of stepping into the architect’s role, or been compelled by economic or other circumstances to do so.

But there is one historical parallel, whose repercussions have affected manufacturing even to this day:  Just prior to World War II, when the War Department was preparing to help arm the Allies in Europe, the U.S. Government awarded contracts to Willys-Overland and Ford Motor Co. for prototyping, and eventually building, the concept we now call the Jeep.  The customer specified the urgency of the timeframe, and had the purchasing power to make design decisions.

Today, open source has given the power to major corporate customers — what Dr. Voruganti classifies as the hyperscale users — to lay down specifications for servers and racks.  Microsoft and Google are among those users.  Because their specs are now becoming the standard for massive data centers, the networking specifications must follow suit, keeping up with newer, less traditional server architectures.

Here is where Equinix seized the initiative: making itself available to Facebook (which leads OCP) as both the architect and the laboratory for a new kind of modular network router — one that functions more like SDN, but is manageable like hardware because it is hardware.  If software can be made smarter, Voruganti believes, hardware can be made faster.

“If you have a centralized UPS, it takes up to a quarter of your data center space, and you need very specialized, skilled operators for the batteries,” he explained.  “So all these hyperscalers — Facebook, Microsoft, Google — said, ‘Let’s redesign this and take a fundamental look differently at how this is done.’  Then they started to publicize their designs for how they’re doing this.”

Customer Demand

Once the Tier-2 cloud service providers got wind of the OCP’s first specifications, they wanted a piece of the action.  These are Equinix’ customers, as Voruganti described them; they want freedom from the hyperscalers’ realm, but they don’t want to own and manage their own data centers.  They want the benefits of OCP’s advanced power distribution model.  And to the extent that Equinix couldn’t answer their demands, one gets the impression they weren’t very happy.

“If that’s where the puck is going,” said the Equinix VP, “we want to be there.”

Voruganti believes in the benefits of disaggregation from a network architecture perspective.  He’s noticed that it’s enabled software developers to enter what had historically been a hardware space, staffed with physicists and guarded by lawyers.  But he knows that customers don’t purchase disaggregated services.

Kaladhar Voruganti - Equinix

“All of these disaggregated components need to be aggregated somewhere,” he said.  “And in many cases, the management software is being provided as a SaaS model.  So we want to make sure that the ecosystem for the disaggregated model actually occurs and resides at Equinix.”

Granted, when new customers become Equinix tenants, they don’t all flock to these new and disaggregated systems.  Voruganti acknowledged that they actually demand more conventional systems first, because they’ll be transitioning their existing, conventional (“legacy”) workloads into leased systems.  Those customers then want Equinix to be the one managing the transition to newer, more modular, more cost-effective systems that handle newer workloads.  If the risk in this software transition can be specified and quantified, they believe, it can be more efficiently marshaled.

Rack Space, As It Were

This creates a new and unanticipated problem, having to do with consistency.  Any major colo provider, but certainly Equinix most of all, must provide customers with consistent service levels across all of its data center facilities.  Equinix can’t afford to render its SV3 and SV8 facilities as some kind of “hard-hat zone” for customers; it has to maintain the same service levels there as for all its other facilities, even as they receive Voyager routers for the first time.

The company’s strategy for addressing this potential customer headache is to equip its servers for classes of use cases, on a per-rack basis.

There will be a small window of time, Dr. Voruganti told us, in which certain metropolitan areas may observe service variation.  But the goal is to immediately ameliorate those effects, staggering some of the variations across particular racks, and assigning preferred use cases to those racks.  “Based on what the customers want, we will give them the proper types of racks and proper types of data center solutions,” he said.

“We are going to help come up with a deployment model and an operational model, but we will not be doing it alone,” said the Equinix VP.  “We will be doing it with the CSPs, the MSPs, the hardware and software vendors, working as part of a consortium.  At the end of the day, the CSPs and MSPs are the major guys deploying their stuff in our data centers, and other software vendors deploying their software need to agree to deploy to that operational model.  And the community as a whole needs to make sure they’re all in agreement — that the model will operate and will be supported.  I think we’ll have a major role to play, but I would not say that we will be calling the shots.”

It’s a sweet sentiment.  But Microsoft’s and Google’s contributions to the OCP have already dramatically altered every aspect of the server business, down to the semiconductor level.  Like it or not, Equinix is now occupying a seat at the same level of the networking table.


About the Author

Scott Fulton III

Contributor

Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.
