Why You Should Pay Attention to Container-Native Storage

The use of containers is exploding in every vertical, making container-native storage a much-buzzed-about trend in enterprise IT. Here's what you need to know.

Karen D. Schwartz, Contributor

January 11, 2021


Tech has no shortage of buzzy new technologies – and cutting through the hype to see what will actually impact the enterprise can be challenging. We're here to help. Starting in 2021, our contributors will give a rundown on an emerging tech and whether it'll pay off to pay attention to it. For storage in 2021, here’s our look at container-native storage.

To see the other trends highlighted in our IT Trends To Watch series, read our Emerging IT Trends To Watch report.

What Is Container-Native Storage?

Container-native storage (also called persistent storage for container-based workloads) is a software-defined approach to providing storage that persists even after the containers that use it are destroyed. Because the storage software runs on the same standard server nodes that host Kubernetes and other orchestration platforms, the data associated with an application remains intact and available across the nodes of a cluster.

Think of container-native storage as “HCI for containers,” suggests Eric Slack, a senior analyst at Evaluator Group. Like hyperconverged infrastructure (HCI), it provides a comprehensive, scale-out, standardized compute environment that can simplify deployment and operational tasks for IT.

The use of containers is exploding in every vertical. According to IDC, the installed base of container instances will grow at about a 62% CAGR from 2019 to 2023. While it’s possible to use other types of storage for container workloads, such as direct-attached storage, network-attached storage or SANs, these methods are not well suited to the way containers work. SAN and NAS solutions tend to be difficult to scale, expensive and cumbersome for agile development environments. The direct-attached approach can be complicated to manage and relies on manual processes to add storage as containers scale. It’s also possible to use the Container Storage Interface (CSI) to connect traditional storage arrays to Kubernetes, for example, but that approach can run into problems with speed and dynamic provisioning. These approaches also raise portability and consistency challenges, which matters because containers are built with portability in mind.

At a basic level, the software-defined layer virtualizes the physical storage on each node to create a storage pool that can be used by the containers running on the same cluster, Slack explained. This creates a scalable compute infrastructure that can also run in the public cloud, allowing groups of containers to be moved between on-premises clusters and those running in public clouds.
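In Kubernetes terms, that pooled capacity is typically exposed through a StorageClass so that volumes can be provisioned dynamically as containers ask for them. The sketch below, using the official Kubernetes Python client, registers such a class; the class name, provisioner name and parameters are hypothetical placeholders, since each container-native storage product documents its own values.

```python
# Minimal sketch: register a StorageClass backed by a container-native
# storage provisioner, so workloads can draw volumes from the cluster-wide
# pool on demand. Provisioner name and parameters are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside the cluster

storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "container-native-fast"},   # illustrative name
    "provisioner": "csi.example-vendor.com",          # hypothetical CSI driver
    "parameters": {"replicationFactor": "2"},         # hypothetical, vendor-specific
    "reclaimPolicy": "Delete",
    "volumeBindingMode": "WaitForFirstConsumer",
}

client.StorageV1Api().create_storage_class(body=storage_class)
```

Once a class like this exists, a claim that references it is enough for the storage layer to carve a volume out of the pool, which is what enables the self-service model described below.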

How Long Has It Been Around?

When Docker first started making waves in about 2013, nobody thought much about persistent storage since most applications were stateless. Over the next several years, as others jumped into the container world, the need for persistent storage designed for containers became evident. By around 2016, companies like Portworx (now part of Pure Storage) had jumped on board.

Why Are People Paying Attention to It Now?

The need for container-native storage is directly related to the staggering growth in container-based workloads, not only in testing and development but also in production environments. As that happens, more IT professionals are experiencing challenges providing persistent storage for those containers. According to a survey from ESG, more than one-third of organizations cited storage performance as one of the biggest challenges in delivering persistent storage for containers. “We found that people are twice as likely to experience poor performance in container-based workloads,” noted ESG senior analyst Scott Sinclair. The same survey pointed to other challenges with storage for containers, including speed of provisioning and managing container storage across hybrid and multi-cloud environments. Container-native storage addresses these issues head-on.

Who Benefits From It?

Storage and virtual machine administrators save time and effort with this technology. Instead of users submitting a ticket to the IT organization for more storage for an app, and waiting for administrators to size the storage and spin it up, container-native storage allows users to provision their own storage. “If you have 1,000 microservices, each could automatically request storage,” Sinclair explained. “Without this in place, administrators have a lot of extra work to do, and developers have to rework their code.”
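As a rough illustration of that self-service flow, the sketch below has a workload file a PersistentVolumeClaim against a class like the one sketched earlier; the claim name, namespace and size are made-up values, and in practice the claim would usually live in the application’s own manifests rather than a script.

```python
# Minimal sketch: a microservice requests its own storage by creating a
# PersistentVolumeClaim; the container-native storage layer then provisions
# a matching volume dynamically. All names and sizes are hypothetical.
from kubernetes import client, config

config.load_kube_config()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-db-data"},            # hypothetical claim name
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "container-native-fast",    # class from the earlier sketch
        "resources": {"requests": {"storage": "5Gi"}},  # hypothetical size
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

Repeated across hundreds or thousands of microservices, this is the kind of automated request Sinclair describes, with no ticket or manual sizing step in the loop.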

Where Can You Get It?

  • DataCore

  • Dell EMC

  • Diamanti

  • HPE

  • MayaData

  • Microsoft

  • Pure Storage/Portworx

  • Red Hat

  • Robin.io

  • StorageOS

  • VMware

Open source options include Rook, Ceph, Longhorn and OpenEBS.

About the Author

Karen D. Schwartz

Contributor

Karen D. Schwartz is a technology and business writer with more than 20 years of experience. She has written on a broad range of technology topics for publications including CIO, InformationWeek, GCN, FCW, FedTech, BizTech, eWeek and Government Executive.

https://www.linkedin.com/in/karen-d-schwartz-64628a4/
