Strategic IT Infrastructure in 2017
Analysts forecast that NVMe, which is enabling the next generation of flash performance and density, will become the leading interface protocol for flash by 2019.
March 13, 2017
Matt Kixmoeller is VP of Products for Pure Storage.
Data is growing at an unprecedented clip, and companies are struggling with how much of it to store and how much they can actually analyze – compromises no one wants to (or should have to) make.
That’s precisely why data management has become the topic du jour and why the perception of IT infrastructure has shifted: What was once considered a commodity has become a platform where businesses build applications and derive insights from mission-critical data.
While an all-cloud future continues to dominate headlines, the reality is quite different. As we move through 2017, we expect the conversation to shift from an all-out push to the public cloud toward a more nuanced discussion of optimizing infrastructure to fit each customer’s specific needs. Here are the top IT infrastructure trends I believe will be top of mind for the balance of 2017:
Renewed Focus on Hybrid Cloud
Contrary to popular belief, the public cloud has not swallowed the majority of workloads and applications; only about 20 percent of workloads run in the public cloud today. And according to IDC, growth is expected to slow after 2017 as businesses pull back from experimentation and optimize their storage strategies.
Multiple factors will contribute to the slowdown, including concerns over vendor lock-in, security, accessibility, and cost. In my view, public and private clouds will co-exist in the long term, and most data centers will run a mix of public cloud (IaaS/PaaS platforms like AWS and Azure) and private cloud. The public cloud can often offer more cost-effective elasticity, experimentation, archiving, and disaster recovery, while private cloud will excel for predictable, performance-critical workloads (latency is higher in the public cloud, and bandwidth is expensive) and for cases where security concerns rule out putting proprietary algorithms or data in the public cloud.
Many customers have been handed almost blind directives to move some element of IT infrastructure to the public cloud without a real understanding of the cost and operational implications. I think we will continue to see closer inspection of public cloud realities, resulting in a first wave of “best practices” that define which services and workloads should remain on highly optimized, resilient on-premises technology, specialized SaaS, or secure private cloud, and which belong in the public cloud.
More and more businesses will begin to see strategic infrastructure as a competitive advantage. The differentiation will lie in having your own private data center to analyze data faster, discover new insights, and deliver new products and experiences.
All-Flash Arrays (AFAs) Take Another Leap with NVMe
NVM Express (NVMe), a next-generation protocol for CPU-to-flash communication over PCIe, is poised to drive a shift across the storage industry to NVMe architectures. Analysts forecast that NVMe, which is enabling the next generation of flash performance and density, will become the leading interface protocol for flash by 2019. A critical mass of consumer devices has already shifted to NVMe, and the enterprise will not be far behind.
Current storage systems use legacy SCSI-based protocols such as SAS, which present a bottleneck and cause performance delays when communicating with flash. NVMe promises to eliminate the SCSI bottleneck, bringing massive parallelism with up to 64K queues and lockless connections that can give each CPU core dedicated queue access to each SSD. Storage arrays with internal NVMe also deliver better performance, with higher bandwidth and lower latency, as well as higher density and greater consolidation – all amounting to lower per-workload costs.
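To make that parallelism concrete, here is a minimal sketch, assuming a Linux host with the blk-mq sysfs layout (/sys/block/<dev>/mq/, where each subdirectory corresponds to one hardware queue). It simply counts the hardware queues the kernel has created for each block device: an NVMe SSD will typically report one queue per CPU core (the protocol itself allows up to 64K), while a legacy SAS/SATA device usually reports just one.

```python
#!/usr/bin/env python3
"""Rough illustration of NVMe's queue parallelism on a Linux host.

Assumes the blk-mq sysfs layout (/sys/block/<dev>/mq/), where each
subdirectory represents one hardware submission/completion queue pair.
Device names (nvme0n1, sda, ...) will vary from system to system.
"""
import os

SYS_BLOCK = "/sys/block"

def hw_queue_count(device: str) -> int:
    """Return the number of hardware queues the kernel created for a device."""
    mq_dir = os.path.join(SYS_BLOCK, device, "mq")
    if not os.path.isdir(mq_dir):
        return 0  # device not managed by blk-mq (older kernel or driver)
    return sum(1 for entry in os.listdir(mq_dir)
               if os.path.isdir(os.path.join(mq_dir, entry)))

if __name__ == "__main__":
    for dev in sorted(os.listdir(SYS_BLOCK)):
        # Skip virtual devices such as loop and ram disks for readability.
        if dev.startswith(("loop", "ram")):
            continue
        print(f"{dev:12s} hardware queues: {hw_queue_count(dev)}")
```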
Today, legacy disk vendors are finally on board the all-flash train, but to play catch-up with purpose-built AFAs, most have adopted a retrofit strategy. Unfortunately for customers, retrofit arrays aren’t designed for NVMe – or any other flash-native innovation – which means another expensive refresh cycle and painful forklift upgrades are on the horizon for anyone investing in a retrofit array today.
The massive parallelism unlocked by NVMe will be required by any business that wants to take full advantage of its data and capitalize on future technological advances such as super-dense SSDs, modern multi-core CPUs, new memory technologies, and high-speed interconnects.
Cost Uncertainty Could Impede Cloud Adoption
Although the public cloud can offer cost-effective elasticity – short bursts of workload activity can be quickly and easily spun up in AWS – in the long term, public clouds will not be able to offer the cost certainty that CIOs need.
The simplicity gained in managing public cloud deployments from an IT perspective has been all but erased by the added complexity of cost management. Essentially, the time admins recouped has been replaced with time spent managing budgets. This trend is evident in the sheer number of fiscal management tools available for the cloud. And more and more, CFOs are attending conferences like AWS’ re:Invent, eager to understand where and how their IT budgets are being spent.
One thing is for certain: Data will continue to grow at an unprecedented rate in 2017 and beyond, making IT infrastructure and data management a boardroom issue for major corporations worldwide. It’s an exciting time to be in the space.