Bringing Hyper Scale-Out to the Masses: The Power of Open Optimized Storage
We’re entering an era when enterprise data storage is stealing the spotlight from the historic stars of the data center, servers and networking, writes Mario Blandini of HGST. Data has become the currency of business insight, and it must be stored and readily accessible for companies to realize its value.
June 11, 2014
Mario Blandini is the senior director of product marketing, storage systems, at HGST, a Western Digital Company.
Data growth estimates may differ from one prognosticator to the next, but everyone agrees that more data will be stored in 2014 than was stored in 2013. So how will infrastructure scale to keep pace with the insatiable growth of unstructured data in the coming years? In short: optimization.
As we enter the age of analytics, data is only valuable if you can get to the information and knowledge locked inside it. Storing data is only part of the challenge; readily accessing massive amounts of unstructured data from archives for analytics or compliance is harder still. Stakeholders therefore need a new data storage architecture with high-density “peta-scale” capacities, accessible to the applications that must leverage it and approachable for organizations of any size.
Web-scale in the enterprise
As outlined by Gartner, the biggest names in “Web-Scale IT” and Web 2.0 have already achieved new storage efficiencies by designing standard hardware that is highly optimized for their very specific software workloads. Few data centers have equivalent human resources to do the same, though the emergence of open software-defined storage options makes optimized scale-out architectures far more approachable. Optimized data storage is about more than hardware: storage software offers just as much opportunity for optimization.
These new technologies will enable enterprise data centers to gain the same CapEx and OpEx benefits enjoyed by “Web-Scale IT” players, an approach Gartner identified as a Top 10 Strategic Technology Trend for 2014. These players are reinventing the way IT services can be delivered, and their capabilities extend beyond scale in terms of sheer size to include scale as it pertains to speed and agility. The suggestion is that IT organizations should align with and emulate the processes, architectures, and practices of leading cloud providers.
Thanks to commercially supported open-source initiatives such as Red Hat Storage Server with GlusterFS, Inktank Ceph Enterprise, and SwiftStack for OpenStack Object Storage, we can expect software-defined storage systems to cross from the cloud into more mainstream enterprise data centers across multiple deployment options. Several new startup-developed software-defined storage offerings will likely emerge from stealth mode in the coming 18 months. With commercial support for open storage software, traditional IT can use the same approaches once limited to the biggest operators. Even today, service providers, enterprises, and early-stage companies are presenting case studies at conferences.
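To make the object-storage model concrete, here is a minimal sketch using the python-swiftclient library against an OpenStack Swift endpoint of the kind SwiftStack manages. The authentication URL, account, and key are illustrative placeholders, not values from any particular deployment.

```python
# Minimal sketch: storing and retrieving an object via OpenStack Swift.
# Requires: pip install python-swiftclient
# Endpoint and credentials below are assumed placeholders.
from swiftclient.client import Connection

conn = Connection(
    authurl="http://swift.example.com:8080/auth/v1.0",  # assumed endpoint
    user="account:user",                                # assumed credentials
    key="secret",
)

# Create a container and write an object into it.
conn.put_container("archive")
conn.put_object(
    "archive", "sensor-log.csv",
    contents=b"timestamp,value\n1402444800,42\n",
    content_type="text/csv",
)

# Read the object back; Swift returns response headers plus the raw bytes.
headers, body = conn.get_object("archive", "sensor-log.csv")
print(headers["content-type"], len(body), "bytes")
```

The same application code works whether the cluster spans three nodes in a lab or hundreds of nodes in production, which is precisely the approachability argument made above.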
Innovation in storage hardware
On the hardware side of the equation, the standard storage building blocks of the software-defined data center are also being optimized. Higher-capacity drives that consume less power improve storage clusters, packing more resources into the same footprint. New technologies like hermetically sealed, helium-filled drives enable denser, more efficient storage in the standard 3.5-inch form factor. Because these drives are lighter and draw less power, vendors of standard server hardware can increase enclosure density to better support software-defined storage. Where 12 to 36 drives was a typical system density, systems with 60 to 80 or more drives are now feasible.
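A back-of-the-envelope calculation shows why that jump matters. The drive capacities, enclosure size, and rack dimensions below are illustrative assumptions based on hardware typical of 2014, not figures from this article.

```python
# Back-of-the-envelope rack density, using assumed 2014-era numbers:
# 4 TB air-filled drives vs. 6 TB helium-filled drives, and enclosures
# growing from 36 to 60 drives in the same 4U of rack space.
RACK_UNITS = 42    # assumed standard rack height
ENCLOSURE_RU = 4   # assumed enclosure height

def rack_capacity_tb(drives_per_enclosure: int, tb_per_drive: int) -> int:
    enclosures = RACK_UNITS // ENCLOSURE_RU
    return enclosures * drives_per_enclosure * tb_per_drive

before = rack_capacity_tb(36, 4)  # 1,440 TB per rack
after = rack_capacity_tb(60, 6)   # 3,600 TB per rack
print(f"before: {before} TB, after: {after} TB, gain: {after / before:.1f}x")
```

Under these assumptions, the same rack holds two and a half times the raw capacity, without touching the network or server footprint.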
On the path to optimized hardware, Ethernet drives bring new ways to distribute software services for scale-out storage. This architecture optimizes the data path so that application services can run closer to where data resides at rest. Developers can take advantage of those drive-resident resources in open architectures without modifying their applications. By virtue of Ethernet, the operators supporting those developers get seamless connectivity to existing data center fabrics and can reuse existing automation and management frameworks. An open Ethernet drive architecture also enables intermixing the new technology with server-based deployments of popular software-defined storage solutions.
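The sketch below illustrates the core idea of drives as first-class network endpoints. It assumes a hypothetical HTTP object interface running on each drive (no specific vendor API is implied) and simply hashes object names across a set of drive IP addresses.

```python
# Conceptual sketch only: treats each Ethernet drive as an independent
# HTTP endpoint and spreads objects across drives by hashing the name.
# The per-drive HTTP API here is hypothetical, not a vendor interface.
import hashlib
import requests

DRIVES = ["10.0.1.11", "10.0.1.12", "10.0.1.13", "10.0.1.14"]  # assumed drive IPs

def drive_for(name: str) -> str:
    """Pick a drive deterministically from the object name."""
    digest = hashlib.sha1(name.encode()).digest()
    return DRIVES[digest[0] % len(DRIVES)]

def put_object(name: str, data: bytes) -> None:
    ip = drive_for(name)
    requests.put(f"http://{ip}/objects/{name}", data=data).raise_for_status()

def get_object(name: str) -> bytes:
    ip = drive_for(name)
    resp = requests.get(f"http://{ip}/objects/{name}")
    resp.raise_for_status()
    return resp.content
```

Because every drive speaks plain Ethernet and HTTP in this model, the same switches, monitoring, and automation that manage servers can manage the drives, which is what makes intermixing with server-based deployments practical.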
Historically, servers and networking have been the stars of the data center. With the volume, velocity, value, and longevity of data today, however, we’re entering an era when enterprise data storage takes over the spotlight as a key enabler of advancements in the data center. It’s not that processing or moving data is easy; it’s that data has become the currency of business insight and must be stored and readily accessible for companies to fully realize its value. For data center architects and storage developers looking to keep pace with next-generation big data processing, analytics, research queries, and other applications that require long-term retention of active data, it is imperative to understand how open software-defined storage will impact (and benefit) the new ecosystem of storage architectures.