Be Like the Big Guys: Optimizing Hardware in the Software-Defined Data Center
Just like the big guys, smaller companies can use custom white box servers to fully optimize their hardware.
May 24, 2018
Steve Grady is VP of Customer Solutions at Equus Compute Solutions.
It is only natural for companies around the world to look at Google, Amazon, Facebook, and the other “big guys” whose server infrastructures are customized and fully cost-optimized. There are significant differences in capabilities between most companies and a company like Google, but enterprise IT staff can still learn a lot from what the big guys do when it comes to optimizing hardware in the software-defined data center (SDDC). First, let’s examine what the big guys do to be so successful.
Big Guy Success Factors
The big guys – Google, Netflix, Amazon, Facebook, etc. – use optimized white box servers in their SDDCs. They do this because white boxes are less expensive, far more customizable, and often more effective than standardized servers from big-name vendors. A company like Google has very specific requirements that standardized servers cannot meet, so the ability to buy servers built to its exact specifications allows Google, and anyone else using white boxes, to optimize its infrastructure. Customizing standard off-the-shelf servers to fit the needs of a large company takes a great deal of effort, and servers that do not do exactly what they are intended for will eventually cause problems. Both issues are costly in the long run. By using white boxes, which are cheaper from the outset and meet specifications exactly, the big guys have found a way to save money and build infrastructure that is exactly right for what they want.
However, it makes little sense for most companies to directly emulate the practices of massive companies like Google, because there is simply no comparison in terms of server infrastructure. Google famously has eight data center campuses in the United States and seven more positioned around the world. The largest US facility, located in Pryor Creek, Oklahoma, is estimated to have a physical footprint of 980,000 square feet and cost Google about $2 billion to build and bring live. These facilities support nearly incomprehensible amounts of data: as of March 2017, Google’s data centers processed an average of 1.2 trillion searches per year. Google does not disclose exact numbers about its data centers, but the total number of servers worldwide has been estimated at roughly 2.5 million. All of these facts illustrate the gap between Google and its peers (and every other company).
To nurture white box compute initiatives across many industries, the big guys work together to create standards and release technical information for use throughout the world. Facebook – another big guy in the world of white box servers – launched the Open Compute Project in 2011. The project is now an organization made up of many large corporations (including Apple, Cisco, Lenovo, Google, Goldman Sachs, and others) that encourages the open sharing of data center technology. This sharing promotes innovation and pushes the big guys further ahead of the pack in the server infrastructure race. It also means smaller companies can now leverage the big guys’ expertise in their own data centers.
How 'Not-As-Big Guys' Can Be Successful
Despite the unique capabilities and infrastructures the big guys have deployed, the not-as-big guys can still learn from them. Most companies will never have 15 global data centers, be part of an organization promoting unique and innovative server designs, or be able to spend $2 billion on server infrastructure. However, every company can still adopt perhaps the most important aspect of the big guys’ massive data centers: the custom white box servers inside them.
No matter how customized the Open Compute Project has made the big guys’ server infrastructure, the components inside the servers they use are best-in-class commodity parts, available for purchase by anyone. And while the configurations of Google’s servers are often unique, and sometimes include custom components, virtualization software such as VMware vSphere and vSAN can allow companies much smaller than Google to fully optimize their servers in the same way. The first step for these smaller companies is to invest in white boxes.
White box servers are custom built from commodity parts to meet the specifications of each customer. In the past, the impression of white boxes was that they were of low build quality, with little attention to quality assurance. That may have been the case decades ago, but today white boxes are built to high quality standards, in many cases higher than those of machines from big-brand server companies.
Leveraging the Power of White Box
The power of white box servers is that they are fully customizable. Just as the big guys do, smaller companies can purchase white box servers from a vendor like Equus built to their exact specifications. Perhaps a company needs a lot of storage space but not much compute power. Perhaps it wants dual high-core-count CPUs and numerous expansion slots on the motherboard to accommodate growth. A legacy server company cannot offer servers optimized in these ways, but a white box vendor can build exactly what the buyer wants: for example, a server with eight SSDs and eight rotating disk drives, all in a 1U form-factor chassis. This kind of hybrid storage server is quite common among white box buyers, and it is just one example of how white boxes can lead to total hardware optimization.
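To make that hybrid example concrete, here is a minimal bill-of-materials sketch in Python. The drive counts match the 8-SSD/8-HDD example above; the capacities are hypothetical placeholders, not a real product specification:

```python
# Hypothetical bill of materials for the 1U hybrid storage server described
# above. Drive counts match the example in the text; capacities and tier
# roles are illustrative assumptions, not a real product spec.
config = {
    "ssd": {"count": 8, "tb_each": 1.92},  # flash tier (assumed 1.92 TB SSDs)
    "hdd": {"count": 8, "tb_each": 8.0},   # rotating capacity tier (assumed 8 TB drives)
}

for tier, drives in config.items():
    raw_tb = drives["count"] * drives["tb_each"]
    print(f"{tier.upper()} tier: {drives['count']} x {drives['tb_each']} TB = {raw_tb:.2f} TB raw")

total_raw = sum(d["count"] * d["tb_each"] for d in config.values())
print(f"Total raw capacity in 1U: {total_raw:.2f} TB")
```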
Once an enterprise has made the leap to white box servers, virtualization is the next way to emulate the successful methods of the big guys. Recent progress in hardware virtualization, largely spearheaded by VMware, has enabled the software-defined data center: an entirely virtual data center in which all elements of the infrastructure – compute, security, networking, and storage – are virtualized and can be delivered to users as a service. The SDDC removes the dependence on specialized hardware, along with the need to hire consultants to install and program that hardware in specialized languages. Instead, it allows IT departments to define applications and all of the resources they require – compute, networking, security, and storage – and group those components together into a logical, efficient application.
One virtualization product that enables an effective SDDC is VMware vSAN (virtual storage area network). vSAN is hyper-converged, software-defined storage that pools the direct-attached storage devices across a VMware vSphere cluster to create a distributed, shared datastore. vSAN runs on x86 white box servers, and because it is a native VMware component, it requires no additional software and can be enabled with a few clicks. vSAN clusters range from 2 to 64 nodes and support both hybrid-disk and all-flash white box configurations. The hosts’ storage resources are combined into a single, high-performance, shared datastore that every host in the cluster can use. The resulting white-box-based vSAN SDDC has a much lower first cost and can deliver up to 50 percent savings in total cost of ownership.
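To give a feel for how such a pooled datastore is sized, here is a rough Python sketch. The node count, per-node capacity, mirroring factor, and slack-space reserve are all illustrative assumptions for a small hybrid cluster, not VMware sizing guidance:

```python
# Rough capacity sketch for a small hybrid vSAN cluster on white box nodes.
# Assumptions (not vendor guidance): failures-to-tolerate = 1 with RAID-1
# mirroring stores two copies of each object, and roughly 30% of capacity
# is kept free as slack space for rebuilds and rebalancing.
nodes = 4                      # vSAN clusters range from 2 to 64 nodes
capacity_tb_per_node = 32.0    # assumed raw capacity-tier storage per node
mirror_copies = 2              # FTT = 1, mirrored => 2 copies of every object
slack_reserve = 0.30           # assumed free-space reserve

raw_tb = nodes * capacity_tb_per_node
usable_tb = raw_tb / mirror_copies * (1 - slack_reserve)
print(f"Raw cluster capacity:   {raw_tb:.1f} TB")
print(f"Estimated usable space: {usable_tb:.1f} TB")
```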
Cost-Optimizing a Software-Defined Data Center
Another strategy smaller companies can use to emulate the big guys is to cut licensing costs by deploying VMware intelligently on their white box servers. For example, a company running a standardized server from a legacy manufacturer that ships with two CPUs, along with the legacy software bundled with it, may end up using only 20-30 percent of its total CPU capacity. Even so, the company still pays for software licensing as if it were using 100 percent of its two-CPU capacity, because legacy software on standardized servers is usually priced per CPU (socket), with no restriction on CPU core count.
If that company instead uses a custom white box with a single high-core-count CPU and runs VMware, it can effectively cut its licensing costs in half, because VMware also licenses per socket. Halving licensing costs often frees up a substantial amount of money that can be spent elsewhere to further optimize the servers. Using virtualization software this way, and using it to put virtual backups in place, are both key ways in which smaller companies can approximate the methods of the big guys.
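A quick back-of-the-envelope calculation makes the point. The per-socket price and core counts below are hypothetical placeholders; what matters is that under per-socket pricing, one high-core-count CPU delivers the same total core count at half the licensing cost of two sockets:

```python
# Back-of-the-envelope licensing comparison for the two configurations
# described above. Price and core counts are hypothetical placeholders;
# the point is that per-socket pricing makes socket count, not core count,
# the cost driver.
license_per_socket = 5_000  # assumed per-socket license price in USD

servers = {
    "Legacy 2-socket server":    {"sockets": 2, "cores_per_socket": 10},
    "White box 1-socket server": {"sockets": 1, "cores_per_socket": 20},
}

for name, cfg in servers.items():
    total_cores = cfg["sockets"] * cfg["cores_per_socket"]
    license_cost = cfg["sockets"] * license_per_socket
    print(f"{name}: {total_cores} cores, licensing = ${license_cost:,}")
```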
Be Like the Big Guys in Your IT Infrastructure
Google, Amazon, Facebook, and others do many things with their server infrastructure that not-as-big companies can only dream about. However, companies can still emulate the big guys in significant ways. Just like the big guys, smaller companies can use custom white box servers to fully optimize their hardware. They can also use virtualization software to save large sums of money, both through virtual storage servers and by cutting licensing costs. The outcome will not rival the scale of the big guys’ huge data centers, but in substance it will be the same: your own high-efficiency, cost-optimized software-defined data center.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.