Have Your Scale, and Object Too
A metadata engine enables enterprises to deploy a more powerful scale-out file system that can seamlessly integrate with on-premises object or public cloud storage.
June 16, 2017
David Flynn is CTO of Primary Data.
The IT department has one of the toughest jobs in the enterprise. While maintaining application performance today, IT teams are increasingly being asked to handle the massive data growth coming tomorrow. Cloud and scale-out storage are top of mind for most IT teams facing these challenges. In fact, Gartner’s 2017 Strategic Roadmap for Storage predicts that “by 2021, more than 80 percent of enterprise unstructured data will be stored in scale-out file system and object storage systems in enterprise and cloud data centers, an increase from 30 percent today.”
Rapid data growth is straining IT budgets, fueling this remarkably fast adoption rate and pushing enterprises to transition to the cloud even sooner. A metadata engine gives enterprises a more powerful scale-out file system that integrates seamlessly with on-premises object or public cloud storage, helping them adopt scale-out systems and the cloud much more rapidly, at far less cost, and with much less risk. Let’s take a closer look at how.
Tomorrow’s Scale-Out System on Your Existing Hardware
As Gartner notes, many enterprises are keen to move from standard NAS systems to a scale-out NAS platform to manage the rise in unstructured data, but are waiting for a solution that can meet both short-term and long-term needs. Over the long term, enterprises are looking for massive scalability of performance and capacity; the ability to tier data across clusters with different services and performance characteristics, according to policy, for flexibility and cost optimization; and the ability to tier data across resources from different vendors, or even commodity x86 servers, to speed adoption of the latest innovations.
Metadata engine software can meet these performance, scalability, and manageability requirements. It does so by decoupling the architecturally rigid relationship between applications and storage, which delivers several key benefits. First, it enables data to be managed and moved across all storage, rather than within a single storage silo, so the right data can be placed on the right storage at the right time to meet data’s performance, protection, and price needs. Second, it moves metadata (control) operations out of the data path to enable parallel I/O, which improves scalability dramatically to support billions of files. Finally, deploying a metadata engine for data management is fast and easy, because only metadata needs to be migrated into the system; data is assimilated instantly, in place, without application impact. This turns complex migrations that might take weeks or months into a simple, non-disruptive process that can be completed in minutes.
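To make the control/data separation concrete, here is a minimal Python sketch of the idea: a metadata service answers only “where does this file live?” while clients read the data directly, and in parallel, from the storage nodes. The class and function names (MetadataEngine, locate, read_extent) and the layout map are illustrative assumptions, not Primary Data’s actual interfaces.

```python
# Hypothetical sketch of out-of-band metadata: the client asks a metadata
# service where a file's data lives, then reads directly from the storage
# nodes in parallel. All names and interfaces are illustrative only.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Extent:
    node: str      # storage node holding this piece of the file
    offset: int    # byte offset within the file
    length: int    # number of bytes in the extent

class MetadataEngine:
    """Control plane: tracks where data is placed, never touches the data itself."""
    def __init__(self, layout_map):
        self.layout_map = layout_map          # path -> list of Extents

    def locate(self, path):
        return self.layout_map[path]          # metadata-only lookup

def read_extent(extent):
    # Data plane: in a real system this would be an NFS/S3/block read issued
    # straight to the storage node, bypassing the metadata engine entirely.
    return f"<{extent.length} bytes from {extent.node}>"

def parallel_read(engine, path):
    extents = engine.locate(path)             # one control-plane operation...
    with ThreadPoolExecutor() as pool:        # ...then parallel data-plane I/O
        return list(pool.map(read_extent, extents))

engine = MetadataEngine({
    "/projects/genome.dat": [Extent("flash-01", 0, 1 << 20),
                             Extent("nas-02", 1 << 20, 1 << 20)],
})
print(parallel_read(engine, "/projects/genome.dat"))
```

Because the metadata engine sits outside the data path, clients fan out their reads and writes across every storage node at once, which is what lets the namespace scale to billions of files without the control plane becoming a bottleneck.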
Making Order of Unstructured Data Chaos
Beyond making NAS highly performant, efficient, and scalable, a metadata engine’s ability to assimilate different types of storage resources into a global namespace extends to object and cloud storage, ensuring enterprises will be well equipped for the coming onslaught of unstructured data. With a metadata engine, admins control whether data moves to on-premises object stores (for example, for compliance purposes) or to one or multiple clouds (for example, to different availability zones for DR), based on the business’s objectives for the data. In addition, data on object or cloud stores remains visible as files.
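As a rough illustration of how such objective-driven placement might be expressed, the sketch below maps hypothetical business objectives to storage targets. The policy fields, target names, and 90-day threshold are assumptions made for illustration; they are not a real product’s policy language.

```python
# Illustrative sketch only: a policy table that maps business objectives to
# placement targets. Fields, targets, and thresholds are hypothetical.
import time

POLICIES = [
    {"objective": "compliance",
     "match": lambda f: f["classification"] == "regulated",
     "targets": ["onprem-object-store"]},
    {"objective": "disaster-recovery",
     "match": lambda f: f["dr_required"],
     "targets": ["cloud-az-east", "cloud-az-west"]},       # copies in two zones
    {"objective": "cost",
     "match": lambda f: time.time() - f["last_access"] > 90 * 86400,
     "targets": ["cloud-archive-tier"]},                    # cold data after ~90 days
]

def place(file_meta):
    """Return the storage targets a file should move to. Files stay visible at
    their original paths because only the metadata records the new location."""
    for policy in POLICIES:
        if policy["match"](file_meta):
            return policy["targets"]
    return ["primary-nas"]                                   # default: leave on NAS

print(place({"classification": "regulated", "dr_required": False,
             "last_access": time.time()}))                   # -> ['onprem-object-store']
```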
This offers several benefits: first, managing files rather than objects is more intuitive; second, retrieved files are instantly usable, with no need to modify applications to consume object data; and finally, data can be retrieved at file granularity, which makes an active archive use case more cost-effective by minimizing bandwidth charges. For example, without a metadata engine providing file-level access, a company that needed to restore a single file from a large backup bundle would have to pay the bandwidth charge to move the entire bundle back on premises and then rehydrate it to recover the file. If the bundle contained video and audio files, those bandwidth charges could be significant. A metadata engine maintains access to data in the cloud as files and can retrieve just the file that is needed.
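For a back-of-the-envelope sense of the savings, the snippet below compares egress charges for retrieving a whole bundle versus a single file. The 500 GB bundle size, 2 GB file size, and $0.09/GB egress rate are assumed figures chosen only to illustrate the point; actual prices vary by provider and tier.

```python
# Back-of-the-envelope comparison of cloud egress charges (assumed figures).
EGRESS_PER_GB = 0.09        # assumed $/GB egress rate
bundle_gb = 500             # assumed size of the whole backup bundle
file_gb = 2                 # assumed size of the one file actually needed

cost_whole_bundle = bundle_gb * EGRESS_PER_GB   # retrieve and rehydrate everything
cost_single_file = file_gb * EGRESS_PER_GB      # file-granular retrieval

print(f"Whole bundle: ${cost_whole_bundle:.2f}")   # $45.00
print(f"Single file:  ${cost_single_file:.2f}")    # $0.18
```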
A metadata engine gives enterprises powerful capabilities that enable IT to meet the challenges of rapid growth in diverse data, including an explosion in the volume of unstructured data. It does so by transforming new and existing NAS resources into an immensely powerful scale-out platform that integrates seamlessly with on-premises object stores and the cloud. These capabilities let IT achieve the flexibility, agility, and cost-saving benefits of the software-defined data center years sooner, keeping their businesses ahead of the pack. Indeed, as the saying often attributed to Socrates goes, “the secret to change is to focus all of your energy not on fighting the old, but on building the new.”
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.