How Data Center Trends Are Forcing a Revisit of the Database
February 2, 2016
Ravi Mayuram is Senior Vice President of Products and Engineering at Couchbase.
Data centers are like people: no two are alike, especially now. A decade of separating compute, storage, and even networking services from the hardware that runs them has left us with x86 pizza boxes stacked next to, or connected with, 30-year-old mainframes. And why not? Much of the tough work is done by software tools that define precisely how and when hardware is to be used.
From virtual machines to software-defined storage and network functions virtualization, these layers of abstraction fuse hardware components into something greater and easier to control.
That's a startling change from the early days of data center design, when the industry's major players poured billions into data centers that were to be the factories of a then-burgeoning digital economy. Their rise gave birth to standards committees that defined how space and power were to be used, ensuring that major suppliers wouldn't accidentally build a server too tall or too wide to find a home.
Today, in 2016, disaggregated services running on cheap equipment accomplish as much as, or more than, the mighty machines of old. They also occupy less space and form natural connections to external resources accessed in the cloud.
Our approach to infrastructure has morphed in response, and yet databases remain largely unchanged -- processing jobs in the same lowest-common-denominator fashion as they always have and wasting compute and storage resources in the process. Data center operators are handling far too much information for this to go on much longer.
A Data Center Deluge
Oddly, most of the industry is well aware that over-provisioning is a problem. A study last year estimated the value of idle servers -- digital sentries standing by, doing nothing -- at north of $30 billion. In a similar study from last September, The Wall Street Journal uncovered a facility that had more than 1,000 unused machines powered on and ready for work that would never come. How can we justify such an astounding waste of capital when the need for data center efficiency has never been greater?
Information is moving to and through data centers with unmatched volume and variety, and at a velocity never before seen. Cisco tracks the changes in its Visual Networking Index (VNI). The latest figures put Internet-based traffic at 2 exabytes per day last year, equivalent to 40 percent of the words ever spoken by human beings since the dawn of existence. Cisco sees that total more than doubling, to 5.5 exabytes per day, by 2019.
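A quick back-of-the-envelope check puts that forecast in perspective. The short Python sketch below is my own arithmetic on the figures cited above, not Cisco's methodology:

```python
# Rough arithmetic on the Cisco VNI figures cited above.
# Assumption: roughly 2 EB/day in 2015 growing to 5.5 EB/day in 2019.

start_eb_per_day = 2.0   # 2015 traffic, exabytes per day
end_eb_per_day = 5.5     # 2019 forecast, exabytes per day
years = 2019 - 2015

growth_factor = end_eb_per_day / start_eb_per_day    # 2.75x overall
cagr = growth_factor ** (1 / years) - 1              # implied compound annual growth rate

print(f"Overall growth: {growth_factor:.2f}x over {years} years")
print(f"Implied CAGR:   {cagr:.1%} per year")        # roughly 29% per year
```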
Without the tools for software-defining and deploying data center resources efficiently, we'd have no choice but to rely on brute force over-provisioning to handle all that information without suffering downtime. That we still need sheer muscle for most database services makes these crucial systems a bottleneck. Disaggregation is key to solving the problem.
Scaling to Workload, Not Infrastructure
Databases handle three kinds of functions. Data services are core to the system and define the schema used to store information. Index services categorize data for fast retrieval, and query services extract it according to defined parameters. Most systems handle many different types of requests at once.
The difficulty comes in how databases use hardware: it's all blunt force, with requests distributed evenly across the infrastructure. There is no accounting for which systems would be better suited to an I/O-intensive data service, no mechanism to run a memory-intensive index service on a separate system, and no provision to manage compute-intensive queries in a distinct, optimized cluster of machines. Database platforms don't enjoy the separation of compute, storage, and network services that brings efficiency to the modern data center. We need to change that; we need systems to evolve so that database services scale across different subsystems independently and on demand.
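To make that concrete, here is a minimal sketch of the alternative: routing each class of database service to a node pool tuned for its resource profile instead of spreading every request evenly across identical machines. This is a hypothetical illustration in Python, with made-up pool names and node lists, not any particular database's API:

```python
# Hypothetical illustration: route database services to hardware tuned for them,
# rather than treating every node as interchangeable.

from dataclasses import dataclass, field
from enum import Enum


class Service(Enum):
    DATA = "data"      # I/O-intensive: fast storage matters most
    INDEX = "index"    # memory-intensive: large RAM matters most
    QUERY = "query"    # compute-intensive: CPU cores matter most


@dataclass
class NodePool:
    name: str
    nodes: list = field(default_factory=list)


# Assumed cluster layout: three pools with different hardware profiles.
POOLS = {
    Service.DATA:  NodePool("ssd-heavy",    ["data-01", "data-02", "data-03"]),
    Service.INDEX: NodePool("memory-heavy", ["index-01", "index-02"]),
    Service.QUERY: NodePool("cpu-heavy",    ["query-01", "query-02"]),
}


def route(service: Service, key: str) -> str:
    """Pick a node from the pool dedicated to this service type."""
    pool = POOLS[service]
    return pool.nodes[hash(key) % len(pool.nodes)]


# Example: a write lands on an I/O-optimized node, a query on a CPU-optimized one.
print(route(Service.DATA, "user:1234"))
print(route(Service.QUERY, "SELECT ... WHERE region = 'EU'"))
```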
Interestingly, this is as much a problem for most NoSQL databases as it is for all relational systems. Inefficient NoSQL databases may even have it worse because of how often they're paired with massively distributed infrastructure. With no way to assign queries to different nodes, jobs collide with one another, consuming massive amounts of memory and compute in a bare-knuckles fight for resources. Provisioning for that becomes an exercise in disaster planning, which is why we still see so much over-provisioning in the data center.
Yet it doesn't need to be this way. Multi-dimensional database scaling was born from the same spirit that brought us virtualization, software-defined networking, and other disaggregation techniques that have transformed the data center for the better. In its simplest form, multi-dimensional scaling is software-defined workload optimization wherein administrators assemble the compute, memory, and storage needed for their workload characteristics. Systems are then optimally provisioned to avoid idling while maintaining the elasticity required to handle spikes when they occur.
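The payoff is that each service scales on its own dimension. The sketch below illustrates the idea with an invented rebalancing loop and made-up utilization thresholds; it is my own illustration of the concept, not a specific product's autoscaler:

```python
# Hypothetical sketch of multi-dimensional scaling: each database service
# (data, index, query) is scaled on its own bottleneck resource, independently.

SCALE_UP_AT = 0.80    # assumed utilization threshold to add a node
SCALE_DOWN_AT = 0.30  # assumed utilization threshold to remove a node

# Per-service view: which resource is the bottleneck, and the current state.
cluster = {
    "data":  {"bottleneck": "disk_io", "nodes": 3, "utilization": 0.85},
    "index": {"bottleneck": "memory",  "nodes": 2, "utilization": 0.55},
    "query": {"bottleneck": "cpu",     "nodes": 2, "utilization": 0.20},
}


def rebalance(cluster: dict) -> dict:
    """Scale each service independently, based on its own bottleneck utilization."""
    decisions = {}
    for service, state in cluster.items():
        if state["utilization"] > SCALE_UP_AT:
            decisions[service] = state["nodes"] + 1      # add capacity where it's needed
        elif state["utilization"] < SCALE_DOWN_AT and state["nodes"] > 1:
            decisions[service] = state["nodes"] - 1      # reclaim idle hardware
        else:
            decisions[service] = state["nodes"]          # leave well-provisioned services alone
    return decisions


print(rebalance(cluster))
# {'data': 4, 'index': 2, 'query': 1} -- only the I/O-bound data service grows,
# while the underused query tier shrinks instead of idling.
```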
Think of it as scaling according to the needs of the workload -- and optimizing for every piece of available hardware in the process -- rather than designing to the lowest-common-denominator limits of the infrastructure. In a world that's awash with data, that's a change that can't come soon enough.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.