Six Server Technology Trends: From Physical to the Virtual and Beyond
Consider the fact that what we in IT call servers today are vastly different from what we called them just 10 years ago.
December 7, 2016
Satya Nishtala is the Founder and CTO of DriveScale.
Server technology has come a long way in the last 10 years. Why 10 years? No particular reason; I chose the figure pretty much arbitrarily. Whether you pick 20 years or two, as with all things in technology, and in life, change is the only constant. Just consider the fact that what we in IT call servers today are vastly different from what we called them just 10 years ago. In fact, a “server” today isn’t even necessarily a physical device at all. With this in mind, let’s take a look at six of the biggest trends now shaping server technology.
The Move from Single-Processor Systems to Multi-Processor Systems
At the highest level, application and market needs drive trends in server technology. Remember when, decades ago, the performance requirements of enterprise applications like databases, ERP and CAD programs started to stress the capabilities of single-processor server systems? In response, the industry developed multiprocessor servers as well as the programming models to go with them. Not surprisingly, as the needs of large enterprises grew, server vendors responded with larger and larger multiprocessor systems.
Big Data and the Scale-Out Model of Computing
Where are we today? Very much in an environment of big data and the scale-out model of computing. The new breed of applications for the web-based economy, what many call big data applications and the latest generation of NoSQL database applications, has similarly stressed the capabilities of even the largest multiprocessor servers we can build. This gave rise to programming models that let applications use hundreds or even thousands of networked servers as a single compute cluster platform, what Google calls a “warehouse-scale computer.” It is also known as the scale-out model of computing, as opposed to the scale-up model that relies on larger and larger multiprocessor systems. In this scale-out context, a single physical server is a component of a compute cluster, and that compute cluster is, in turn, the new server.
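To make the scale-out idea concrete, here is a toy sketch in Python: the work is split into shards, each shard is handled by an independent worker, and the partial results are merged. A local process pool stands in for the networked cluster nodes a real deployment would use, and the function names are purely illustrative.

```python
# A toy illustration of the scale-out idea: split a job into shards,
# farm the shards out to independent workers, then combine the results.
# Here a local process pool stands in for networked cluster nodes.
from collections import Counter
from concurrent.futures import ProcessPoolExecutor

def count_words(shard):
    """Map step: each 'node' counts words in its own shard of the data."""
    counts = Counter()
    for line in shard:
        counts.update(line.split())
    return counts

def scale_out_word_count(lines, workers=4):
    """Partition the input, process shards in parallel, reduce the results."""
    shards = [lines[i::workers] for i in range(workers)]
    total = Counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(count_words, shards):
            total.update(partial)   # Reduce step: merge per-shard counts
    return total

if __name__ == "__main__":
    data = ["the server is the cluster", "the cluster is the server"] * 1000
    print(scale_out_word_count(data).most_common(3))
```

Adding capacity in this model means adding workers (nodes), not buying a bigger machine, which is exactly the scale-out versus scale-up distinction.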
Advances in High-Performance Networking Technology
The notions of scalability, failure resiliency, online fault repair and upgrade have also moved from server hardware to cluster software layers, enabled by advances in high-performance networking technology. With 10Gb Ethernet, I/O devices that previously had to be integrated directly into servers for performance reasons can now be served over the network. Consequently, the architecture of a single physical server component has been simplified significantly: at the hardware level, the most cost-efficient compute platform is a unit with one or two processors, memory and a network interface. At the same time, Linux has become the most widely accepted base software platform for these servers. The “design” of a server now consists of composing a network of simplified physical servers and I/O devices in software. Such a server can be sized (scaled up or down) as needed, and as often as needed, in software based on enterprise workflow requirements, a capability that was previously impractical. The downside of this model is that many hardware and software components must be configured properly to work together, which requires new management systems and hardware architectural elements that didn’t exist until recently.
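As a rough illustration of what composing a server in software might look like, the sketch below describes a logical server as plain data and resizes it by editing that description. All names and fields here are hypothetical, not any vendor's actual API.

```python
# A hypothetical sketch of "designing" a logical server in software:
# the cluster is described as data, and resizing it is just editing that
# description and reapplying it. Names and fields are illustrative only.
import copy
import json

logical_server = {
    "name": "analytics-cluster",
    "compute_nodes": [
        {"id": f"node-{i}", "cpus": 2, "memory_gb": 128, "nic": "10GbE"}
        for i in range(4)
    ],
    # Drives sit in shared enclosures and are attached over the 10GbE
    # fabric rather than being physically installed in each server.
    "network_drives": [{"id": f"drive-{i}", "capacity_tb": 8} for i in range(12)],
}

def resize(spec, compute_nodes=None, network_drives=None):
    """Return a new spec scaled to the requested node and drive counts."""
    new_spec = copy.deepcopy(spec)
    if compute_nodes is not None:
        new_spec["compute_nodes"] = [
            {"id": f"node-{i}", "cpus": 2, "memory_gb": 128, "nic": "10GbE"}
            for i in range(compute_nodes)
        ]
    if network_drives is not None:
        new_spec["network_drives"] = [
            {"id": f"drive-{i}", "capacity_tb": 8} for i in range(network_drives)
        ]
    return new_spec

# Scale the logical server up for a heavier workflow, then print the spec
# that a (hypothetical) cluster manager would be asked to realize.
bigger = resize(logical_server, compute_nodes=8, network_drives=24)
print(json.dumps(bigger, indent=2))
```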
Virtual Machines and Container Technologies
Virtual machine (VM) and container technologies abstract and encapsulate a server’s compute environment as a software entity that can be run as an application on a server platform, and both are becoming the norm among public cloud providers. Multiple VMs and containers can be deployed on a single physical server, consolidating many servers onto fewer physical machines, which improves hardware efficiency and reduces data-center footprint. In this context, a “server” is a VM or container software image and not a hardware entity at all! Such a “server” can be created, saved (or suspended), or transferred to a different hardware server, concepts that are totally alien to the traditional notion of a server but which create deployment capabilities unavailable with physical servers. Additionally, a VM or container image of a fully configured and tested software stack can be saved and distributed, encapsulating the learning and expertise that went into building it. This enables rapid application deployment and saves manpower costs and time, which is one of the major value propositions of the VM and container model.
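A minimal sketch of the save-and-redeploy idea with containers follows, assuming the Docker Engine and the docker Python SDK are installed; the image names and the configuration step are illustrative only.

```python
# A minimal sketch of the save-and-redeploy idea using containers,
# assuming the Docker Engine and the 'docker' Python SDK are installed.
# The image names and the "configuration" performed here are illustrative.
import docker

client = docker.from_env()

# Run a throwaway container and make a change inside it (configure it).
container = client.containers.run(
    "ubuntu:20.04",
    "bash -c 'echo tuned > /etc/app.conf && sleep 5'",
    detach=True,
)
container.wait()

# Snapshot the configured state as a reusable image: the "server" is now
# a software artifact, not a hardware box.
container.commit(repository="myteam/app-server", tag="configured")
container.remove()

# Redeploy the captured stack as many times as needed, on any host that
# can pull the image; no per-server manual setup is repeated.
output = client.containers.run(
    "myteam/app-server:configured", "cat /etc/app.conf", remove=True
)
print(output.decode())  # -> "tuned"
```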
Much as the fully configured and tested software stack of a VM or container can be managed as an image that is saved and redeployed, the software stack of a scale-out environment, including the configuration of the underlying logical servers, can be abstracted, saved and redeployed. This enables rapid deployment of scale-out applications and helps enterprise end users deal with the complexity of scale-out systems, which is particularly valuable given that the underlying compute platform can be modified based on workflow needs.
Advances in Memory Technology
But let’s not forget entirely about hardware. Advances in memory technology, such as Phase Change Memory and ReRAM, are enabling a new class of memory with access times similar to the DRAM in present-day servers, yet with two to 10 times the capacity, along with cost advantages and persistence. This forthcoming class of memory, known as Storage Class Memory or Persistent Memory, will create a new layer of the memory hierarchy between DRAM and disk storage. Its high capacity, coupled with low latency, will enable an entirely new class of applications with performance orders of magnitude higher than what present-day servers deliver. At the same time, it presents a number of architectural challenges that must be overcome to achieve its full potential and widespread use. These include (a) making applications aware that a region of memory is persistent even while portions of that memory space sit in volatile caches on the processors or in DRAM, and (b) dealing with failed servers that hold persistent, and potentially valuable, data. The Linux community is actively working on these issues, and we should see solutions starting to appear within the next 12 to 18 months, if not sooner.
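Challenge (a) can be pictured with a small sketch: a store to a “persistent” region is not durable until it is explicitly flushed out of volatile buffers. The snippet below uses a file-backed mmap as a stand-in for a real persistent-memory (DAX) mapping, which would instead rely on cache-line flush instructions, for example via libpmem; the path and sizes are illustrative.

```python
# A rough illustration of challenge (a): writes to a "persistent" region
# are not durable until they are explicitly flushed out of volatile buffers.
# A file-backed mmap stands in here for a real persistent-memory (DAX)
# mapping, which would use cache-line flush instructions (e.g. via libpmem).
import mmap
import os

PATH = "/tmp/pmem_region"   # illustrative path, not a real PMEM device
SIZE = 4096

fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, SIZE)
region = mmap.mmap(fd, SIZE)

# The store below lands in volatile caches and page buffers first...
region[0:16] = b"committed-record"

# ...and only becomes durable once the application flushes it.
# Crash before this point and the "persistent" data may be lost.
region.flush()

region.close()
os.close(fd)
```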
Machine Learning and Mobile Applications Linked to Enterprise Databases
Until now, general-purpose processor architectures such as x86 have been used exclusively in the design of servers, with the processor programmed for every application need. However, newer and more demanding applications like machine learning, security functions and high-bandwidth compression perform very inefficiently on general-purpose processors. As a result, newer servers being deployed today are based on a hybrid of general-purpose (GP) processors, GPUs, machine-learning processors and crypto processors, and they offer performance levels orders of magnitude higher than the standard general-purpose architecture alone. Enterprise data centers will therefore move to an increasingly heterogeneous compute environment, with application-specific servers.
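As a simple sketch of this heterogeneous model, the snippet below times the same matrix multiply on a general-purpose CPU and, when one is present, on a GPU accelerator. It assumes the PyTorch library is installed; the matrix size is illustrative.

```python
# A small sketch of the heterogeneous-compute idea: run the same workload
# on a general-purpose CPU, or offload it to a GPU accelerator when present.
# Assumes the PyTorch library is installed; sizes are illustrative.
import time
import torch

def benchmark_matmul(device, n=2048):
    """Time an n x n matrix multiply on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    start = time.perf_counter()
    c = a @ b
    if device == "cuda":
        torch.cuda.synchronize()   # GPU kernels run asynchronously
    return time.perf_counter() - start, c

cpu_time, _ = benchmark_matmul("cpu")
print(f"CPU: {cpu_time:.4f} s")

if torch.cuda.is_available():
    # Offload the same computation to the GPU when the server has one.
    gpu_time, _ = benchmark_matmul("cuda")
    print(f"GPU: {gpu_time:.4f} s")
else:
    print("No GPU present; workload stays on the general-purpose CPU.")
```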
Additionally, mobile applications linked to enterprise databases that must respond in real time are driving the market need for this new kind of server. While the end users of traditional enterprise applications were generally limited to a company’s employees and perhaps some partners, these new applications give millions of online customers access to enterprise applications in sectors such as health care, finance, travel and social media. They demand orders of magnitude higher transactional throughput and millisecond response times. New scale-out applications, such as NoSQL databases, combined with flash-based storage are being deployed to address this need.
Expect to See More Change Ahead
As this cursory summary of current trends in server technology shows, there’s no shortage of change and innovation, of finding new solutions to the new problems that the advances themselves introduce. In the last few years, big data, scale-out computing, advances in high-performance networking, virtual machine and container technologies, advances in memory technology, and machine learning and other advanced applications linked to enterprise databases have all contributed to the evolution of servers. And now that a server isn’t even what we knew as a server just a decade ago, who knows what one will look like 10 years from now?