How Hyperscale Cloud Platforms Changed Data Center Design and Function

In their designs, hyperscale operators have taken a different approach to risk, reliability, and redundancy. Have their principles propagated outside their small club?

Scott Fulton III, Contributor

February 28, 2019

It was August 2014 when a team of Gartner analysts declared that original design manufacturers (ODMs) were having a greater influence on the implementation of data centers than traditional OEMs. This was on account of the growing purchasing power of hyperscale customers — organizations looking to build big, but also in modular segments that scale more rapidly to changing traffic demands.

Those observations about expenditures have since been validated by financial analysts. It makes sense that the biggest customer in a given market would set its rules, so when that customer’s preferences are motivated by design factors, those factors should take precedence in the market.

History informs us that innovation in any market starts at the premium tier and trickles down from there. “Hyperscale data centers are innovating solutions because of the sheer scale,” Joe Skorjanec, product manager at Eaton, writes in a note to Data Center Knowledge. “They have the buying power to justify custom solutions from vendors, so more often the innovation is occurring there first.”

Yet as other data center design practitioners and experts tell us, just because hyperscale is associated with “big” doesn’t mean it’s necessarily a premium tier. It is a way of implementing resource scalability on a very large scale, and it has been championed by the largest consumers of those resources. Facebook — among the world’s largest consumers of data and the resources that support it — is responsible for catalyzing the Open Compute Project, which has produced what many consider the formula for hyperscale computing.

There’s nothing about scalability as a science that lends itself – or should necessarily gravitate – to the biggest customers. Theoretically, the concept should apply to everyone.

Does it? Has hyperscale impacted the design and implementation of all data centers, everywhere in the enterprise, in the way that conventional wisdom anticipated?

“I think there’s a couple of things that are not so readily apparent, and maybe not so obvious that a lot of people talk about it,” said Yigit Bulut, partner at EYP Mission Critical Facilities. “One of the things most notable from my perspective is the whole approach to risk, reliability and redundancy, as it applies to the data center infrastructure and design. Hyperscale, just by necessity of economics, scale, and fast deployment, really has challenged on multiple levels this whole notion that every data center has to be reliable and concurrently maintainable. Because of that fact, it’s allowed the enterprise designers and operators to rethink their approaches as well.”

This is the point of impact, where the hyperscale blueprint collides with the standards and practices of the enterprise dating back to the client/server era. The influence is undeniable, but the directions that enterprise data centers have taken — which, in turn, define the data center industry as a whole — may not be what anyone predicted.

Scalability from First Day to End State

For many enterprises, hyperscale introduced the notion of incremental scalability: a building-block approach to starting smaller and scaling out. Here, the scale is not so much about space as it is about time. Enterprises that don’t want or need the bigness commonly associated with hyperscale can still adopt a hyperscale customer’s approach to planning, starting with justifiable first-day costs and working toward a desired end state.

“Instead of a megawatt or half-a-megawatt, I’m looking at a hundred kilowatts or 300 kW blocks,” said Steve Madara, vice president of thermal management for Vertiv. In this permutation of the data center market, the enterprise buyer (who typically designs facilities for a single tenant) thinks more like the hyperscale buyer (who plans for multi-tenancy), except in smaller bite sizes.

As Madara’s colleague, Vertiv’s Vice President for Global Power Peter Panfil, tells us, “The enterprise guys say, ‘How can I take all of those things that the hyperscalers have deployed, and apply them to my enterprise location? And I’m not a multi-megawatt user; I’m a 1 MW user, or a couple-of-hundred kilowatt user.’”

It is this reconsideration of scalability, Vertiv’s execs believe, that has the side effect of moving data centers away from 2N+1 and even 2N power redundancy, back toward the more conventional N+1. Panfil told us about one Vertiv customer that, upon accepting a buildout plan involving building blocks sized below 1 MW, adopted a cloud-like structure of availability zones, where high-availability workloads were backed up with 2N power redundancy and less critical workloads stepped down to N+1.
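To make the trade-off concrete, here is a minimal sketch of how the two redundancy schemes translate into equipment counts. The zone names, loads, and 100 kW module size are illustrative assumptions, not figures from Vertiv’s engagement.

```python
import math

def ups_modules(load_kw: float, module_kw: float, scheme: str) -> int:
    """Number of UPS modules needed for a load under a redundancy scheme.

    N is the minimum module count that covers the load; the scheme adds spares:
      N+1  -> one spare module
      2N   -> a fully duplicated set
      2N+1 -> a duplicated set plus one spare
    """
    n = math.ceil(load_kw / module_kw)
    if scheme == "N+1":
        return n + 1
    if scheme == "2N":
        return 2 * n
    if scheme == "2N+1":
        return 2 * n + 1
    raise ValueError(f"unknown scheme: {scheme}")

# Hypothetical availability zones: loads and the 100 kW block size are assumptions.
zones = {"critical": (300, "2N"), "standard": (500, "N+1")}
module_kw = 100  # one 100 kW building block per UPS module

for name, (load_kw, scheme) in zones.items():
    count = ups_modules(load_kw, module_kw, scheme)
    print(f"{name}: {load_kw} kW at {scheme} -> {count} x {module_kw} kW modules "
          f"({count * module_kw} kW installed)")
```

Run with these assumed numbers, the critical zone needs twice its load in installed capacity, while the standard zone carries only one spare block, which is where the capital savings of stepping down from 2N come from.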

A high-power-density, scalable UPS enabled that customer to adopt a smaller power increment of 100 kW (Vertiv proposed 50 kW), which could step up gradually as its capacity and traffic-handling needs grew. That particular UPS is an architectural outgrowth of hyperscale’s influence, but one that directly addresses not the hyperscale customer but the smaller enterprise.

“The bottom line was, their first-day costs for the new system versus the old system is going to be about one-fourth,” continued Panfil. Vertiv managed to convince this customer to move from 2 kW per rack to 5 kW, though the customer would not budge to 10 kW. Its end state, as Vertiv planned, was 800 kW with a mixture of 2N and N+1 redundancy. “Their end-state cost per IT kilowatt went down about a third,” he said.
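As a back-of-the-envelope illustration of why smaller increments shrink day-one spend, the sketch below compares buying the full end-state capacity up front with deploying 100 kW blocks as load arrives. The block size, cost per kilowatt, and end state are assumed numbers for illustration only; they are not the customer figures Panfil cites.

```python
# Rough, illustrative model of first-day spend for an incremental build-out.
# All figures below are assumptions made up for this example.

BLOCK_KW = 100          # power increment per building block
COST_PER_KW = 2_000     # assumed installed cost per kW of UPS capacity (USD)
END_STATE_KW = 800      # planned end-state capacity

def build_cost(capacity_kw: float, cost_per_kw: float = COST_PER_KW) -> float:
    """Installed cost for a given amount of UPS capacity."""
    return capacity_kw * cost_per_kw

# Monolithic plan: buy the full end-state capacity on day one.
day_one_monolithic = build_cost(END_STATE_KW)

# Incremental plan: deploy two 100 kW blocks on day one, add blocks as load grows.
day_one_incremental = build_cost(2 * BLOCK_KW)

print(f"Monolithic day-one spend:  ${day_one_monolithic:,.0f}")
print(f"Incremental day-one spend: ${day_one_incremental:,.0f}")
print(f"First-day spend ratio:     {day_one_incremental / day_one_monolithic:.0%}")
```

With these assumed inputs, the incremental plan commits a quarter of the capital on day one, the same order of first-day saving described above, while the remaining blocks are deferred until the load actually materializes.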

More Resilient Equipment for Longer Buying Cycles

Common sense would tell you that smaller, more incremental building blocks would lend themselves to a design plan that’s more agile and adaptive to near-term needs. But if that phenomenon exists, our experts tell us, it’s being overwhelmed by a more powerful economic force: the dominance of hyperscale purchasing power in the data center market.

“One of the dominant influences of hyperscale is the fact that it’s consuming all of the supply chain on power, cooling, and compute,” said Don Beaty, president of engineering consulting firm DLB Associates. “People who have had good purchase power in the past, don’t have it, so the sphere of influence for a buyer has gone down significantly.”

Arguably, this should be an economic phenomenon rather than a design trend. But every technology market since the rolling stone was first upgraded to a wheel has been a response to economic phenomena. In this case, it leads to what experts perceive as a hardening of enterprise buying cycles, which in turn compels suppliers to adjust their products and services to meet these evolved demands.

“Large projects are always going to be a priority,” remarked Eaton’s Skorjanec, “because of the scale it can bring to builders, consultants and suppliers. The flip side is margin compression. The smaller enterprise projects will generally be more profitable.”

Although smaller building blocks could enable enterprises to make design decisions more frequently, their business managers would remind them that these decisions must align with the needs of the business at the time and the flexibility of the business model. More decisions mean greater risk, explains Bruce Edwards, president of engineering consulting firm CCG Facilities Integration, and enterprise risk managers prefer working with fewer variables.

“In some cases, when you have a problem, you don’t know which of those variables caused the problem,” remarked Edwards. “I’d much rather see individual variables addressed one-by-one in a sequential fashion, so that if something goes awry, you know what variable caused it.”

This leads to a design phenomenon that’s reminiscent of software development in the 1990s and 2000s, when vendors bunched up their major releases in two- or three-year product cycles to minimize risk. As Vertiv’s engineers told us, the same efforts that enterprise facilities managers and IT specialists are making to drive up utilization are being leveraged to maximize service life, and thus to make decisions about data center design last longer, not shorter.

“The enterprise guys have to go through a budget cycle to get the money approved,” noted Vertiv’s Madara. “If I were the CFO and I made a decision to keep it on-prem, I would build as I need and find a way to execute that. There are ways to do that at a smaller scale that some of the hyperscale guys do, but it gets into getting money approved in that shorter increment of time, and re-approved, and a lot of them just don’t want to go through that. So they’re like, ‘I’m going to go get that big chunk of money that’s going to take care of me for the next 10 years.’”

While the success of hyperscale innovations does inform the decisions that enterprise data center builders make, Madara said, “they’re not in a position to keep advancing that at the rapid pace.”

Aligning Design Decisions Around Business Functions

The impression among enterprise data center and IT professionals that they may not be keeping up with the innovation cycles of hyperscale builders, several of our experts believe, may be one more factor driving their organizations to move their workloads and data off-premises.

“The impact is on the supply side,” explained DLB’s Beaty. “Hyperscale’s biggest impact is creating a shortage of equipment supply — generators, UPS, switchboards. It’s a fundamental supply-and-demand scheme.”

As more pressure is exerted upon enterprises to move toward service-based infrastructure, they are forced — in some cases, for the first time — to examine their data centers in the context of the applications and services they provide rather than servers, generators and blowers. Although colocation providers offer space and management as leasing options, even decisions about relocating to colo become discussions about applications and services.

The result for enterprise customers is what Vertiv’s Panfil describes as “a lot of heartburn.” “That requires them to segment their loads,” he continued, “and none of them want to segment their loads.”

Want, however, may be giving way to need.

“When faced with the decision of, ‘Do I rebuild my data center — do I go down that path of an on-prem enterprise data center — or do I go to colo or to a cloud-only environment?’” remarked EYP’s Bulut, “I think by and large, a lot of the enterprise data centers are [saying], ‘It doesn’t make sense to keep doing things the way we’ve been doing.’ There’s been a move more towards colocation.”

But rarely will any enterprise, Bulut notes, move everything off-premises. As a result, organizations’ data center assets may end up fewer in number and less central to their business operations. This move ends up impacting overall data center design in two opposite ways: Colo facilities end up looking and working more like hyperscale, in some cases providing gateways to select services from cloud providers, while at the same time, enterprise facilities tailored to specific needs end up looking less like hyperscale.

Beaty sees an overall benefit to this phenomenon, where the enterprise data center — or what remains of it — ends up more closely aligned with its organization’s business model and business objectives.

He drew a mental architectural picture for us in terms that an office designer might appreciate. When designing an office, an architect refers to “FFE” — fixtures, furnishings and equipment. These are the items that are tailored to the needs of the occupant. Historically, a data center’s occupants have been servers, and applications have been tailored to their needs and capacities for service delivery.

Now those roles have been swapped, said Beaty. Applications are becoming the occupants, which is a situation much more befitting of his ideal state. That could be leading to servers acting more like furnishings and fixtures — quite possibly, the real trend behind hyperconvergence. That suits Beaty just fine, since he believes the finest enterprise data centers are ones that more closely reflect the business models of their operators.

Hyperscale’s impact on the general enterprise data center may leave it downsized, though at the same time re-empowered. Because enterprises may have less purchasing power unto themselves, at least in the near term, their needs are better defined and backed by business objectives. They’re looking more for functionality than for abundant resources.

But it is this realignment that may yet have the greatest impact on data center architecture, including on hyperscale itself. Hyperscale’s founding tenets were based on standard server designs and standardized plans for space and resource consumption. If data center customers as a whole rethink and revise their concepts of requirements and objectives, even hyperscale could take on a whole new dimension.

About the Author

Scott Fulton III

Contributor

Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.
