Three Storage Trends Shaking Up the Enterprise
As storage becomes a larger and larger part of the overall IT environment, companies are beginning not only to understand but to witness firsthand the expense of the integrated system model.
February 29, 2016
Dave Kresse is CEO of InterModal Data.
The storage industry has historically been left out of conversations about the innovative, ground-breaking feats coming out of the technology world. Now that focus is shifting, thanks to three emerging trends in the enterprise: the move away from integrated systems toward software-on-commodity-hardware architectures, a focus on utilization rates of physical resources, and the growing need to support millions of individual workloads.
Companies such as Facebook, Google, and Amazon have devoted massive resources to building and maintaining customized data center infrastructures from the ground up, and in doing so have achieved tremendous levels of scalability, flexibility, and efficiency. Enterprises today face large and growing data storage requirements and want the same benefits, and that demand is what is driving these trends.
Software Soars
As storage becomes a larger and larger part of the overall IT environment, companies are beginning not only to understand but to witness firsthand how expensive the integrated system model for storage is. The business model of these vendors requires them to significantly mark up the commodity components they bundle with their software, and at scale that markup becomes exorbitant. Until recently, customers had to live with this because software-only storage vendors were not delivering the reliability, availability, serviceability, and supportability required of enterprise-class storage solutions. However, a new generation of software-only, or software-defined, storage vendors has taken a "systems" approach and is delivering the quality and predictability enterprises need. The shift away from integrated systems is underway.
Having the proper storage architecture in place has a direct impact on other aspects of the business. Companies are constantly challenged by the exorbitant cost of integrated systems, because these architectures lock a finite amount of capacity and performance into each controller-and-shelf pair. Since the controllers are physically attached to the shelves, when one resource runs out, both must be replaced. One way companies are fighting back is with software-only models that run seamlessly on commodity hardware, an approach that delivers the reliability, availability, and serviceability needed to operate at maximum capacity in a large-scale environment.
Utilization is Key
When it comes to cost, moving to a software-plus-commodity-hardware model helps, but the real culprit in storage is the high level of unutilized and underutilized resources stemming from the way solutions have historically been architected. Traditional storage architectures physically attach shelves to controllers, isolating a finite amount of performance and capacity within a given system. Unless a customer exhausts the performance and the capacity at the same time, one of the two resources sits underutilized within that system. At scale, this adds up to a tremendous amount of waste: not only of upfront capital dollars, but ongoing operational waste of space, power, cooling, and systems management.

New storage architectures are emerging that are purpose-built to address this utilization issue. One relatively new approach, which started inside Facebook, is a disaggregated architecture, which physically separates the infrastructure into component functions connected over Ethernet. This lets enterprises scale out granularly: if they need more performance, they can add it without having to add more capacity, and vice versa. Increasing resource utilization through disaggregation has in some cases resulted in up to a 10x reduction in the physical resources required to support a given set of performance requirements.
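To make the utilization argument concrete, here is a minimal sketch, in Python, comparing how many hardware units a performance-heavy workload consumes when performance and capacity are bundled versus scaled independently. The per-unit figures are hypothetical, chosen only for illustration, and are not drawn from the article or any vendor.

```python
import math

# Hypothetical per-unit figures, chosen only to illustrate the coupling problem.
IOPS_PER_UNIT = 100_000   # performance each controller/node delivers
TB_PER_UNIT = 100         # capacity each shelf holds

def integrated_units(need_iops, need_tb):
    """Integrated system: performance and capacity ship together, so the
    scarcer resource forces over-buying the other."""
    return max(math.ceil(need_iops / IOPS_PER_UNIT),
               math.ceil(need_tb / TB_PER_UNIT))

def disaggregated_units(need_iops, need_tb):
    """Disaggregated architecture: performance nodes and capacity shelves
    are added independently over the network."""
    return math.ceil(need_iops / IOPS_PER_UNIT), math.ceil(need_tb / TB_PER_UNIT)

# A performance-heavy workload: 1M IOPS but only 150 TB of data.
systems = integrated_units(1_000_000, 150)            # 10 bundled systems
nodes, shelves = disaggregated_units(1_000_000, 150)  # 10 nodes + 2 shelves
print(systems, systems * TB_PER_UNIT - 150)  # 10 systems, ~850 TB of stranded capacity
print(nodes, shelves)                        # capacity shelves bought only as needed
```

The exact ratio depends on the workload mix; the point is that decoupling the two resources removes the stranded capacity (or stranded performance) that bundled systems force a customer to buy.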
A New Definition of Scale
It was not long ago that enterprises defined "scale" in terms of how large a file system a storage solution could support. That was an era when storage was purchased on an application-by-application basis. With virtualization now ubiquitous in the enterprise, the adoption of microservices on the horizon, and a shared storage model becoming the norm for efficiency's sake, enterprises now define scale in terms of how many file systems, or connections, a storage solution can support. As the definition of scale has been turned on its head, traditional storage solutions have struggled to adjust. Solutions today are architected to support hundreds of thousands of workloads, moving toward millions. The better ones let customers define classes of service so that workloads can be given different levels of resources to meet disparate performance requirements.
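As a sketch of that last point, a class-of-service model can be as simple as tagging each workload with a tier that carries its own performance floor and ceiling. The tier names, limits, and workload identifiers below are invented for illustration and are not any particular product's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceClass:
    max_iops: int             # ceiling enforced per workload
    min_iops: int             # floor guaranteed under contention
    latency_target_ms: float  # target the storage layer tries to honor

# Invented tiers for illustration only.
CLASSES = {
    "gold":   ServiceClass(max_iops=50_000, min_iops=20_000, latency_target_ms=1.0),
    "silver": ServiceClass(max_iops=10_000, min_iops=2_000,  latency_target_ms=5.0),
    "bronze": ServiceClass(max_iops=1_000,  min_iops=100,    latency_target_ms=20.0),
}

def assign(workload_id: str, tier: str) -> ServiceClass:
    """Tag a workload (e.g., one VM's volume among millions) with a tier so the
    storage layer can give different workloads different levels of resources."""
    return CLASSES[tier]

print(assign("vm-00042-vol0", "gold"))
```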
The Storage of Tomorrow Starts Today
Today, enterprises demand new levels of scalability, flexibility, and efficiency to meet the challenge of the large and growing amount of data they must manage. At the same time, they still require their storage to be reliable, available, serviceable, and supportable. In response, there has been an unprecedented amount of innovation in the storage market. Traditional storage vendors have not been able to keep pace, and new entrants who can deliver on these demands are winning in the marketplace at an ever-accelerating rate.