
Servers Will Lead the Data Center Evolution

While virtualization has increased hardware utilization, two-socket servers remain the most commonly deployed server hardware in the data center, yet they offer enterprises little flexibility to swap in parts that best suit emerging workloads, or to procure components from multiple vendors without interoperability concerns, writes Young-Sae Song of AMD. That is now changing, and open source designs are key to the evolution.

Industry Perspectives

September 19, 2013


Young-Sae Song is Corporate Vice President of Product Marketing, Data Center Server Solutions, at AMD. In this role, he leads the outbound marketing, branding, and demand generation functions for AMD’s push into next-generation fabric-based computing systems.


The last decade has seen the data center focus on a number of key technologies to improve efficiency. Since 2000, virtualization has been at the heart of increasing server utilization, allowing businesses to consolidate hardware and reap significant cuts in operating expenses. This was followed by a holistic focus on data center design, from the layout of suites to the efficiency of HVAC and electricity supply. The next step in efficiency, however, will come from turning that design focus onto the server itself.

Data centers have always been evolving, from the placement of cables [1] to the way servers are positioned in a rack to provide hot and cold aisles [2]. In the drive to increase efficiency, however, the server itself has largely been overlooked in favor of low-hanging fruit such as cooling infrastructure and, on a macro scale, data center location.

Virtualization tapped unused resources, and its popularity grew as processor performance gains offset the overhead of running a hypervisor. Servers are set to become the focus of the data center as virtualization expands from general compute to networking and storage, demanding more from hardware alongside a need for increased density and improved manageability.

While virtualization has helped increase hardware utilization, two-socket servers remain the most commonly deployed server hardware in the data center, yet they offer enterprises little flexibility when it comes to swapping out parts that best suit emerging workloads, or the ability to procure components from multiple vendors without interoperability concerns.

The Facebook-initiated Open Compute Project will give power back to enterprises, allowing them to work around the familiar two-socket server platform with silicon they are accustomed to. The advantage of open source hardware such as AMD’s Open Server 3.0 [3] doesn’t end at familiarity: it offers enterprises the ability to shop for a motherboard that meets their hardware and budget requirements without having to purchase a new chassis.

Empowering enterprises to make real decisions about server hardware, beyond simply buying a badge, banishes the notion of ripping and replacing infrastructure whenever new workloads or use cases appear. Instead, capital expenditure can be focused on components that directly result in revenue growth.

The Open Compute Project is about much more than giving enterprises access to open source hardware, because investing in Open Compute servers also brings access to open source management tools. Gone are the days of system administrators having to learn different management tools for different server vendors. Instead, Open Compute offers a single specification for hardware management [4], greatly reducing complexity and enabling system administrators to deal with management tasks quickly and effectively.
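To illustrate what a single management specification means in practice, here is a minimal sketch in Python. It assumes each node's management controller is reachable over standard IPMI over LAN, which the Open Compute hardware management work takes as its baseline, and that ipmitool is installed; the host names and credentials are placeholders rather than anything from a real deployment.

    #!/usr/bin/env python3
    """Poll power state and temperatures across a fleet with one tool and one set of commands."""
    import subprocess

    NODES = ["node01-bmc.example.com", "node02-bmc.example.com"]  # hypothetical BMC addresses
    USER, PASSWORD = "admin", "changeme"                          # placeholder credentials

    def ipmi(host, *args):
        """Run a single ipmitool command against one node's management controller."""
        cmd = ["ipmitool", "-I", "lanplus", "-H", host, "-U", USER, "-P", PASSWORD, *args]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()

    for node in NODES:
        # The same two commands work regardless of which vendor built the board.
        print(node, "-", ipmi(node, "chassis", "power", "status"))
        print(ipmi(node, "sdr", "type", "Temperature"))

The point is not the specific commands but that one script, with no vendor-specific branches, can manage hardware from any supplier that follows the specification.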

Changing the Paradigm of Equipment

While Open Compute is focused on modernizing the ownership of traditional two-socket servers, high-density servers such as the SeaMicro SM15000 [5] offer enterprises the ability to tackle new workloads cost effectively through a step change in the number of sockets and cores that can be placed in a single rack. The upshot is that more compute can be squeezed into a single rack, thanks to power-efficient processors and the ability for servers within a single chassis to serve completely different workloads.
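As a rough illustration of that step change, the back-of-the-envelope arithmetic below uses the SM15000's 10U chassis and 64 server cards, together with two assumptions flagged in the comments (eight cores per card and 40 rack units available for compute); exact figures vary with the configuration chosen.

    # Back-of-the-envelope rack density under the stated assumptions.
    CHASSIS_HEIGHT_U = 10      # SeaMicro SM15000 chassis height in rack units
    CARDS_PER_CHASSIS = 64     # server cards per chassis
    CORES_PER_CARD = 8         # assumption: eight-core processor cards
    RACK_BUDGET_U = 40         # assumption: rack units available for compute

    chassis_per_rack = RACK_BUDGET_U // CHASSIS_HEIGHT_U
    sockets_per_rack = chassis_per_rack * CARDS_PER_CHASSIS
    cores_per_rack = sockets_per_rack * CORES_PER_CARD
    print(chassis_per_rack, "chassis,", sockets_per_rack, "sockets,", cores_per_rack, "cores per rack")
    # -> 4 chassis, 256 sockets, 2048 cores per rack

Even under these conservative assumptions, that is more than three times the socket count of a rack filled with conventional 1U two-socket servers.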

Going High-Density

Dense servers will be expected to do more than undertake menial data center workloads such as serving up web page front-ends. Network virtualization is set to be the next big consolidation in the data center, but it can only happen if the server has the performance and cost effectiveness to make it a viable proposition. Key to meeting those performance and economic goals is the interconnect between the processors, and it is this technology that will differentiate servers, with bandwidth and protocol design playing pivotal roles in overall system performance.

The ability to have thousands of cores in a rack increases the need for tighter integration between bare metal and management software. In the same way that Open Compute is bringing together server hardware and software, dense servers will raise the bar when it comes to offering system administrators greater control over the provisioning and management of bare metal.
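To make the provisioning point concrete, here is a minimal sketch, reusing the hypothetical ipmi() helper from the earlier example, of how a full chassis of cards might be queued for re-imaging in one pass: each node is pointed at the network for its next boot and then power-cycled so it picks up a fresh image from a provisioning server. The node names, the 64-card count, and the PXE workflow are illustrative assumptions, not a description of any particular product's tooling.

    # Sketch: queue every card in a dense chassis for re-imaging. Assumes the
    # ipmi() helper defined above, standard IPMI-over-LAN access, and a PXE
    # provisioning server already configured on the management network.
    NODES = ["card%02d-bmc.example.com" % n for n in range(1, 65)]  # hypothetical 64-card chassis

    for node in NODES:
        ipmi(node, "chassis", "bootdev", "pxe")   # boot from the network on next start
        ipmi(node, "chassis", "power", "cycle")   # reboot so the card pulls its new image
        print(node, "queued for re-imaging")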

The Power Will Be in the Customers' Hands

We have seen significant efficiency improvements in the data center over the last decade, but it is the workhorse of the data center - the server - that will evolve in the coming years. Servers will stop being prescribed pieces of hardware that enterprises have to work around; instead, customers will have greater power to demand hardware that meets their needs.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

Endnotes

[1] How overhead cabling saves energy in data centers – Victor Avelar, APC
[2] Hot and Cold Aisle Layout – ENERGY STAR
[3] AMD Open 3.0 modular server specification – AMD and Open Compute Project
[4] Hardware management specification – Open Compute Project
[5] SeaMicro SM15000 Fabric Compute Systems
