Six Crucial Attributes of a High-Performance In-Memory Architecture
The move from disk-based to memory-based data architectures requires a robust in-memory data management architecture, writes Fabien Sanglier of Software AG Government Solutions. Here are six crucial attributes to address when evaluating in-memory data management solutions.
November 20, 2014
Fabien Sanglier, Principal Architect, Software AG Government Solutions, a leading software provider for the federal government.
The steady drop in memory prices continues to drive the popularity of in-memory computing technology. But while local memory is very fast, it is also volatile, and if a scaled-out application is not architected properly, its in-memory data can easily be lost or become inconsistent.
The move from disk-based to memory-based data architectures therefore requires a robust in-memory data management architecture that delivers high-speed, low-latency access to terabytes of data while maintaining the capabilities previously provided by “the disk,” such as data consistency, durability, high availability, fault tolerance, monitoring and management.
Here are six of the most important concerns to address when evaluating in-memory data management solutions.
From Disk-Based to Memory-Based: Six Areas of Consideration
Predictable, Extremely Low Latency. Working with data in machine memory is orders of magnitude faster than moving it over a network or reading it from disk. This speed advantage is critical for real-time data processing at big data scale. However, Java garbage collection is an Achilles’ heel when it comes to using large amounts of in-memory data. While terabytes of RAM are available on today’s commodity servers, a typical Java application can use only a few gigabytes of that RAM as heap before long, unpredictable garbage collection pauses start to slow the application down.
Look for in-memory management solutions that can manage terabytes of data without suffering from garbage collection pauses.
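To make that concrete, here is a minimal sketch, assuming nothing more than the JDK, of the underlying idea: keep the bulk of the data in a direct (off-heap) ByteBuffer so the garbage collector never has to trace or copy it, with only a small key-to-offset index left on the heap. The class name, the 64MB capacity and the string encoding are illustrative, and the sketch omits eviction and thread safety; it is not any vendor's actual implementation.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Sketch of an off-heap key/value store: values live in a direct ByteBuffer
// outside the GC-managed heap; only the small index (key -> offset/length)
// is an on-heap object the collector has to care about.
public class OffHeapStoreSketch {

    private final ByteBuffer offHeap;                           // bounded by -XX:MaxDirectMemorySize, not -Xmx
    private final Map<String, int[]> index = new HashMap<>();   // key -> {offset, length}

    public OffHeapStoreSketch(int capacityBytes) {
        this.offHeap = ByteBuffer.allocateDirect(capacityBytes);
    }

    public void put(String key, String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        int offset = offHeap.position();
        offHeap.put(bytes);                                     // copy the value off-heap
        index.put(key, new int[] { offset, bytes.length });
    }

    public String get(String key) {
        int[] loc = index.get(key);
        if (loc == null) return null;
        byte[] bytes = new byte[loc[1]];
        ByteBuffer view = offHeap.duplicate();                  // independent position, same memory
        view.position(loc[0]);
        view.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        OffHeapStoreSketch store = new OffHeapStoreSketch(64 * 1024 * 1024); // 64MB, illustrative
        store.put("greeting", "hello from off-heap");
        System.out.println(store.get("greeting"));
    }
}
```

Production-grade stores layer eviction, compaction, serialization and concurrency control on top of a layout like this, but the key point stands: the data itself sits outside the heap that the garbage collector has to scan.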
Easy Scaling with Minimal Server Footprint. Scaling to terabytes of in-memory data should be easy and shouldn’t require the cost and complexity of dozens of servers and hundreds of virtual machines. Your in-memory management solution should be able to scale up as far as possible on each machine so that you’re not saddled with managing and monitoring a 100-node data grid. For example, holding 2TB of data on servers with 256GB of RAM each takes only eight machines, whereas spreading it across JVMs limited to a few gigabytes of usable heap can require hundreds of nodes. By fully utilizing the RAM on each server, you dramatically reduce not only hardware costs but also the personnel costs associated with monitoring large numbers of servers.
Fault Tolerance and High Availability. Mission-critical applications demand fault tolerance and high availability. Because in-memory data is volatile, the data management solution must deliver five-nines (99.999 percent) uptime, which allows only about five minutes of downtime per year, with no data loss and no single point of failure.
Distributed In-Memory Stores with Data Consistency Guarantees. With the rise of in-memory data management as a crucial piece of big data architectures, organizations increasingly rely on having tens of terabytes of data accessible for real-time, mission-critical decisions. Multiple applications (and instances of those applications) will need to tap in-memory stores that are distributed across multiple servers. Thus, in-memory architectures must ensure the consistency and durability of critical data across that array. Ideally, you’ll have flexibility in choosing the appropriate level of consistency guarantees, from eventual and strong consistency up to transactional consistency.
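As a rough illustration of what that flexibility might look like to an application developer, the sketch below uses an invented client API; the Consistency enum, DistributedStore interface, connect() factory and cluster URI are all hypothetical, and the in-process map merely stands in for a remote server array.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical API sketch: choose a consistency guarantee per distributed store.
public class ConsistencySketch {

    // The guarantee levels discussed above, from weakest to strongest.
    enum Consistency { EVENTUAL, STRONG, TRANSACTIONAL }

    // Minimal surface a distributed in-memory store might expose.
    interface DistributedStore<K, V> {
        void put(K key, V value);
        V get(K key);
    }

    // Hypothetical factory: a real client would wire up replication and locking
    // according to the requested level; here a local map stands in for the cluster.
    static <K, V> DistributedStore<K, V> connect(String clusterUri, Consistency level) {
        ConcurrentMap<K, V> local = new ConcurrentHashMap<>();
        return new DistributedStore<K, V>() {
            @Override public void put(K key, V value) { local.put(key, value); }
            @Override public V get(K key) { return local.get(key); }
        };
    }

    public static void main(String[] args) {
        // Reference data can often tolerate eventual consistency...
        DistributedStore<String, String> catalog =
                connect("store://server-array:9510", Consistency.EVENTUAL);
        // ...while account balances call for strong or transactional guarantees.
        DistributedStore<String, Long> balances =
                connect("store://server-array:9510", Consistency.STRONG);

        catalog.put("sku-42", "widget");
        balances.put("acct-7", 1_000L);
        System.out.println(catalog.get("sku-42") + " / " + balances.get("acct-7"));
    }
}
```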
Fast Restartability. In-memory architectures must allow machines to be brought back online quickly after maintenance or other outages. Systems designed to back up and restore only a few gigabytes of in-memory data often exhibit pathological behavior around startup, backup and restore as data sizes grow much larger. In particular, recreating a terabyte-sized in-memory store can take days if fast restartability is not a tested feature. Hundreds of terabytes? Make that weeks.
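One way vendors attack this, sketched below purely as an assumption (the snapshot file name and single-string payload are invented), is to persist the store to fast local storage and map it straight back into memory on restart instead of rebuilding it record by record from the system of record.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Sketch of fast restart via a local snapshot: write the store's contents to
// disk, then memory-map the snapshot on startup instead of recomputing it.
public class FastRestartSketch {

    private static final Path SNAPSHOT = Paths.get("store.snapshot"); // illustrative file name

    // Persist the current contents (here: a single string) as a snapshot.
    static void snapshot(String contents) throws IOException {
        byte[] bytes = contents.getBytes(StandardCharsets.UTF_8);
        try (FileChannel ch = FileChannel.open(SNAPSHOT,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.TRUNCATE_EXISTING)) {
            ch.write(ByteBuffer.wrap(bytes));
            ch.force(true); // flush to disk so the snapshot survives a crash
        }
    }

    // On restart, map the snapshot back in rather than rebuilding the data.
    static String restore() throws IOException {
        try (FileChannel ch = FileChannel.open(SNAPSHOT, StandardOpenOption.READ)) {
            MappedByteBuffer mapped = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            byte[] bytes = new byte[(int) ch.size()];
            mapped.get(bytes);
            return new String(bytes, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        snapshot("warm data that would otherwise take hours to rebuild");
        System.out.println("restored: " + restore());
    }
}
```

The point of the exercise is the access pattern: a restarted node reads sequentially from local disk at hardware speed rather than re-fetching and re-indexing terabytes over the network.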
Advanced In-Memory Monitoring and Management Tools. In dynamic, large-scale application deployments, visibility and management capabilities are critical to optimizing performance and reacting to changing conditions. Control over where critical data is and how it is accessed by application instances gives operators the edge they need to anticipate and respond to significant events like load spikes, I/O bottlenecks or network and hardware failures before they become problems. Your in-memory architecture should be supplemented with a clear dashboard for understanding up-to-the-millisecond performance of in-memory stores, along with easy-to-use tools for configuring in-memory data sets.
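As a minimal sketch of the plumbing behind such a dashboard (the /stats path, port 8080 and metric names are assumptions, and real products ship far richer consoles), the example below exposes a store's hit and miss counters over a tiny HTTP endpoint that a monitoring UI could poll.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.LongAdder;

// Illustrative sketch: expose in-memory store statistics over HTTP so an
// operations dashboard can poll them. Path, port and metrics are assumptions.
public class StoreStatsSketch {

    // Counters a real store would update on every read.
    static final LongAdder hits = new LongAdder();
    static final LongAdder misses = new LongAdder();

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/stats", exchange -> {
            // Simulate some activity so the endpoint has something to report.
            hits.add(3);
            misses.add(1);
            String body = String.format("{\"hits\":%d,\"misses\":%d}%n", hits.sum(), misses.sum());
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(bytes);
            }
        });
        server.start();
        System.out.println("Poll http://localhost:8080/stats from your dashboard");
    }
}
```

In practice you would point an existing dashboard at whatever management interface the product exposes; the takeaway is that up-to-the-millisecond visibility should be available without hand-instrumenting application code.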