Don't Make More Space, Make Storage Smarter
It's no secret that IT shops are under pressure to store more and more data, but is adding more disk capacity the answer?
March 3, 2015
Just like any other sprawling technology, it was only a matter of time before growth caught up with the storage platform. Slowly but surely, cloud and data center administrators started asking: “Just how many more disks can I shove into a shelf?” We were adding more arrays and a lot more physical hardware to address a variety of infrastructure growth challenges. And so, much as it happened with server sprawl, there needed to be a way to better manage and control the storage platform.
Modern data center shops still needed performance for data management, cloud agility and, of course, the end-user workload experience. There was still a direct need to house more information as IT consumerization continued to grow. But is buying more disk really the answer? Do we really need yet another controller?
This is where software defined storage (SDS) technologies start to help – a lot. We’re not talking about eliminating physical storage, just making it smarter! The cloud storage platform needs to be extremely agile. With that in mind, the storage infrastructure supporting a truly distributed environment needs to be agile as well. So, what is the software defined storage revolution doing to the cloud storage model? Let’s take a look.
Creating smarter storage. Work smarter, not harder, right? That’s the message to storage vendors out there. Instead of provisioning additional physical resources, how can we make what we already have work better? Software defined storage is designed to intelligently pass data traffic to the appropriate pools, shares, and aggregates. Consider this: through a smarter storage platform, you have the ability to control all storage components from a virtual appliance. This gives you the ability to create a directly extensible storage infrastructure. With a virtual storage controller layer – utilizing SDS – you’re able to aggregate your storage environment and then distribute it from data center to cloud. Ultimately, SDS platforms won’t care which hypervisor you’re using or which physical controllers you have; they only need to be presented with the appropriate resources. From there, the VMs will be able to communicate with one another while still living on heterogeneous platforms.
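To make that idea a little more concrete, here is a minimal Python sketch of what such a virtual controller layer might look like. The class names and the placement logic are purely illustrative, not any vendor's actual API; the point is that the control layer only cares about the capacity and capabilities each backend pool reports, not what hardware or hypervisor sits underneath it.

```python
# Hypothetical sketch of an SDS-style virtual controller: it doesn't care
# what hardware sits underneath, only what each backend pool reports.

class StoragePool:
    def __init__(self, name, capacity_gb, tier):
        self.name = name              # e.g. "array-a" or "commodity-node3"
        self.capacity_gb = capacity_gb
        self.tier = tier              # "flash" or "spinning"
        self.allocated_gb = 0

    def free_gb(self):
        return self.capacity_gb - self.allocated_gb


class VirtualController:
    """Aggregates heterogeneous pools and places volumes by policy."""

    def __init__(self):
        self.pools = []

    def register(self, pool):
        self.pools.append(pool)

    def provision(self, size_gb, tier="spinning"):
        # Pick the matching pool with the most free space.
        candidates = [p for p in self.pools
                      if p.tier == tier and p.free_gb() >= size_gb]
        if not candidates:
            raise RuntimeError("No pool can satisfy the request")
        target = max(candidates, key=lambda p: p.free_gb())
        target.allocated_gb += size_gb
        return f"{size_gb} GB volume placed on {target.name}"


controller = VirtualController()
controller.register(StoragePool("array-a", 10_000, "spinning"))
controller.register(StoragePool("flash-shelf-b", 2_000, "flash"))
print(controller.provision(500, tier="flash"))
```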
Less disk, more optimization. The biggest impact on cloud storage is that we’re probably going to see less physical disk and a lot more efficiency from the controllers. The software layer is designed to optimize existing resources. Remember, replication and one-to-many configurations are still critical. Now, powerful platforms can reduce the amount of disk while still allowing you to replicate, back up, and manage data, oftentimes without even impacting production systems. Here’s a quick example: thin provisioning. Thin provisioning uses on-demand allocation of blocks of data versus the traditional method of allocating all the blocks at the very beginning. Using this type of storage-optimized solution, administrators are able to eliminate almost all whitespace within the array. Not only does this help avoid poor utilization rates, sometimes as low as 10 percent to 15 percent, but thin provisioning also improves storage capacity utilization. Effectively, organizations can acquire less storage capacity up front and then defer storage capacity upgrades in line with actual business usage.
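Here is a hypothetical Python sketch of that difference (the names are illustrative, not a real array's API): a thin volume advertises a large logical size but only maps physical blocks the first time they are actually written, which is exactly where the whitespace savings come from.

```python
# Hypothetical illustration of thin provisioning: blocks are allocated
# on first write instead of being reserved when the volume is created.

class Backend:
    def __init__(self):
        self.next_block = 0
        self.blocks = {}

    def allocate_block(self):
        b = self.next_block
        self.next_block += 1
        return b

    def store(self, physical_block, data):
        self.blocks[physical_block] = data


class ThinVolume:
    def __init__(self, size_blocks):
        self.size_blocks = size_blocks   # advertised (virtual) size
        self.block_map = {}              # logical block -> physical block

    def write(self, logical_block, data, backend):
        if logical_block >= self.size_blocks:
            raise IndexError("write past end of volume")
        # Allocate a physical block only the first time this block is touched.
        if logical_block not in self.block_map:
            self.block_map[logical_block] = backend.allocate_block()
        backend.store(self.block_map[logical_block], data)

    def consumed_blocks(self):
        return len(self.block_map)       # real capacity used, not advertised


backend = Backend()
vol = ThinVolume(size_blocks=1_000_000)    # advertised as very large
vol.write(0, b"boot sector", backend)
vol.write(42, b"app data", backend)
print(vol.consumed_blocks())               # only 2 blocks actually consumed
```

The gap between the advertised size and the blocks actually consumed is capacity you never had to buy up front.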
Creating the next-gen, cloud-ready storage platform. There is going to be a lot of physical abstraction happening around the data center platform. The software layer allows administrators to better control resources, span data centers, and create an even more robust infrastructure. Storage and the management of data are absolutely critical to the success of the cloud. Software and operating systems residing on controllers are good, but sometimes not enough. Now, integration with cloud-based systems, management tools, and even the hypervisor itself is critical for optimal storage control.
Here’s something else to consider around smarter storage: a big conversation point in the industry is the boom in commodity hardware usage. This is happening at the server level and at other data center levels as well. With software defined storage, there isn’t anything stopping anyone from buying a few bare-metal servers and filling them up with their own spinning disk and flash storage. From there, they can deploy an SDS solution that manages all of these disks and resources. In fact, you can replicate the same methodology across several data centers and cloud endpoints. Even with your own disks and storage arrays, new storage control mechanisms not only allow you to deploy your own commodity storage platform but also your own storage intelligence layer.
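As a rough sketch of how that might hang together, assume a handful of commodity nodes, each contributing its local disks into one pool, with the software layer deciding where replicas land. The node names, sites, and replica count below are hypothetical, not any particular product's configuration.

```python
# Hypothetical sketch: commodity nodes contribute local disks to one pool;
# the software layer spreads replicas across nodes (and sites).

class Node:
    def __init__(self, name, site, disks_gb):
        self.name = name
        self.site = site
        self.free_gb = sum(disks_gb)   # total raw space from local disks


class CommodityCluster:
    def __init__(self, nodes, replicas=3):
        self.nodes = nodes
        self.replicas = replicas

    def place(self, size_gb):
        # Prefer spreading replicas across distinct sites, then by free space.
        ranked = sorted(self.nodes, key=lambda n: n.free_gb, reverse=True)
        chosen, sites = [], set()
        for node in ranked:
            if node.free_gb >= size_gb and node.site not in sites:
                chosen.append(node)
                sites.add(node.site)
            if len(chosen) == self.replicas:
                break
        if len(chosen) < self.replicas:
            raise RuntimeError("Not enough nodes/sites for requested replicas")
        for node in chosen:
            node.free_gb -= size_gb
        return [f"{n.name} ({n.site})" for n in chosen]


cluster = CommodityCluster([
    Node("node1", "dc-east", [4000, 4000]),
    Node("node2", "dc-east", [8000]),
    Node("node3", "dc-west", [4000, 4000]),
    Node("node4", "cloud",   [8000]),
], replicas=3)
print(cluster.place(500))   # one replica lands in each location
```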
What we’re really trying to do is create more intelligence around the modern data center and all of its supporting components. This means abstracting controls into the logical layer and allowing it all to scale. The vast, interconnected state of the current data center model clearly dictates that we need to be using smarter technologies to help us become more efficient. Now, a big part of that is the software-defined platform. VMware is all over it, Atlantis Computing has a product, and even Nutanix now has a patent around software defined technologies. The storage platform is only going to continue to become better, smarter, and a lot faster. Big data, the proliferation of cloud-connected devices, and now the Internet of Things are all making a direct impact on storage.