All-flash Arrays, Storage Tiering and Storage Caching: How Do These Solutions Stack Up?

Many companies look to their storage infrastructure for performance gains. But when comparing options, there are some important differences to take into account.

Industry Perspectives

April 13, 2018


Luke Pruen is Technical Services Director at StorMagic.

IT shops are always on the lookout for ways to improve application performance without negatively impacting availability or adding cost or complexity. Of course, adding more servers to the environment could do the trick, but that comes at a steep cost and, in many cases, added complexity (more servers means more things to manage and maintain). Many companies then look to their storage infrastructure for performance gains – all-flash arrays, storage tiering or storage caching. But when these options are compared, there are some important differences to take into account.

All-Flash Arrays

All-flash arrays are currently getting a lot of attention in the market, and for good reason. They can deliver massive application performance gains in terms of IOPS and sub-millisecond latency. While a chassis full of SSD drives can be very tempting for any IT organization looking to solve performance bottlenecks, this comes at a significant cost.

SSD drives are approximately 10 times the cost per gigabyte of spinning disks (but have a very low cost per IOPS). In addition, not every “performance-hungry” application will benefit from all-flash arrays – in some cases, the performance bottleneck is somewhere else in the system.
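As a rough illustration of that trade-off, here is a back-of-the-envelope calculation in Python. The prices, capacities and IOPS ratings below are assumptions made up for the example, not vendor figures:

```python
# Back-of-the-envelope cost comparison with assumed list prices and IOPS
# ratings (illustrative only, not vendor figures).
drives = {
    "SSD (1.92 TB)": {"price_usd": 400.0, "capacity_gb": 1920, "iops": 90_000},
    "7.2K HDD (2 TB)": {"price_usd": 60.0, "capacity_gb": 2000, "iops": 150},
}

for name, d in drives.items():
    print(f"{name}: ${d['price_usd'] / d['capacity_gb']:.3f}/GB, "
          f"${d['price_usd'] / d['iops']:.4f}/IOPS")
```

With numbers like these, the SSD is several times more expensive per gigabyte but orders of magnitude cheaper per IOPS, which is exactly why it pays off only when the bottleneck really is storage I/O.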

If the IT team is confident that all-flash will solve an application’s performance requirements and the budget supports expensive, all-flash arrays, then these could be a good option.

Storage Tiering

Another approach to performance improvement is storage tiering. With this method, multiple types of drives are configured – SSD, 15K HDD, 10K HDD and/or 7.2K HDD – and intelligent software moves data to the most appropriate tier.

Typically, all writes go to SSD first to maximize write performance, and data is then moved to lower-cost tiers as it ages. Each vendor's implementation differs slightly – some migrate data at the block level, others at the volume level, and some schedule movement by time or priority – but the common denominator is that the actual application data is moved between the tiers, and controller software and CPU cycles are used to control that movement.
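To make the mechanics concrete, here is a minimal Python sketch of age-based demotion under the assumptions above (writes land on SSD, data is demoted as it ages). The tier names, thresholds and block map are illustrative; this is not any particular vendor's implementation:

```python
import time

# Minimal sketch of age-based tier demotion (illustrative assumptions only).
TIERS = ["SSD", "15K HDD", "10K HDD", "7.2K HDD"]   # fastest to slowest
AGE_THRESHOLDS = [3600, 86_400, 7 * 86_400]          # seconds on a tier before demotion

class TieredStore:
    def __init__(self):
        self.blocks = {}  # block_id -> {"tier": index into TIERS, "data": ..., "written": ts}

    def write(self, block_id, data):
        # All writes land on the fastest tier first to maximize write performance.
        self.blocks[block_id] = {"tier": 0, "data": data, "written": time.time()}

    def demote_aged_blocks(self):
        # Background pass: the actual data's home tier changes as it ages.
        now = time.time()
        for blk in self.blocks.values():
            tier = blk["tier"]
            if tier < len(AGE_THRESHOLDS) and now - blk["written"] > AGE_THRESHOLDS[tier]:
                blk["tier"] = tier + 1   # e.g. SSD -> 15K HDD after an hour
```

Note that in a real array this movement is the data itself changing its home, which is why tiering consumes controller and CPU resources on the back end.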

Storage tiering is typically good for larger environments inside a datacenter, where the usable capacity is in the 100 terabyte (TB) range.

Storage Caching

The latest approach IT teams are evaluating is storage caching. This implementation is similar to storage tiering in that multiple types of drives can be utilized, but there are two main differences.

The first difference is that system memory can be utilized as a caching tier for reads. Since adding memory to the server is inexpensive, this is an excellent way to improve storage performance for read-intensive workloads without breaking the bank.

The second difference is that since this is a caching approach, the data being moved between the different tiers is actually a copy of the original source data. This approach minimizes the amount of data being moved around on the back end, because only the most active blocks of data migrate up from HDD to SSD or memory cache. As it turns out, for typical applications only a small percentage of data is actually read frequently, so it's extremely cost-effective to keep this data in a memory or SSD cache.
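A minimal Python sketch of this idea, assuming a simple least-recently-used policy (real products use more sophisticated heat tracking): the backing HDD remains the source of truth, and the cache only ever holds copies of the hottest blocks.

```python
from collections import OrderedDict

# Minimal read-cache sketch with LRU eviction (assumed behavior, not a
# specific product): the HDD stays the source of truth, and only copies
# of the hottest blocks live in a fixed-size memory (or SSD) cache.
class ReadCache:
    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store            # dict-like: block_id -> data on HDD
        self.capacity = capacity_blocks
        self.cache = OrderedDict()              # block_id -> cached copy of the data

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)    # cache hit: served from fast media
            return self.cache[block_id]
        data = self.backing[block_id]           # cache miss: read from the HDD tier
        self.cache[block_id] = data             # promote a copy; the original stays put
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict the least-recently-used block
        return data
```

Because only copies move, evicting a cold block is free: nothing has to be written back down to the HDD tier.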

Storage caching is typically seen in edge computing environments, where the usable capacity is under 100 TB per system.

When it comes down to it, storage caching's performance and cost benefits make it a terrific choice for most use cases. Packaging is the other crucial element to weigh when comparing these three approaches to improving storage performance.

Typically, all-flash arrays and storage tiering solutions require the user to purchase a complete hardware and software solution, which can be expensive. But with storage caching, IT teams can leverage software-defined storage implementations and simply configure their server with any mix of server memory, SSD and HDDs with caching enabled. This allows end users to save a considerable amount of money by using intelligent software to automatically place only the hottest data on the fast, expensive tiers and keep everything else on low-cost spinning disks.
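As an illustration of how simple such a configuration can be, here is a hypothetical, Python-flavored sketch of a mixed-media node with caching enabled. The field names are made up for the example and do not correspond to any specific product's syntax:

```python
# Hypothetical configuration sketch for a software-defined storage node;
# field names and device paths are illustrative only.
node_storage_config = {
    "capacity_tier": {"media": "7.2K HDD", "devices": ["/dev/sdb", "/dev/sdc"]},
    "cache_tiers": [
        {"media": "SSD", "devices": ["/dev/nvme0n1"], "mode": "read/write"},
        {"media": "RAM", "size_gb": 32, "mode": "read-only"},
    ],
    "caching_enabled": True,   # the hottest blocks are copied up automatically
}
```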

At the end of the day, storage caching proves to be a much simpler, more cost-effective approach to improving storage performance, and it can save IT departments a considerable amount of money (and headaches).

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating.