In-Memory Computing: What Happens When the Power Goes Out?


As the demand for real-time access to big data accelerates and expectations for optimal performance increase, sophisticated data persistence becomes invaluable.

Chris Steel is chief solutions architect for Software AG Government Solutions, Inc.

As agencies eagerly look for new opportunities for performance improvements and cost reductions, the question of using in-memory computing has definitely shifted from “if” to “when.”

But big data-scale, mission-critical applications bring new considerations when leveraging in-memory computing. Chief among them: persistence.

Persistence -- also known as a “fast restartable store” -- is quickly becoming essential for the 24x7 requirements of mission-critical applications and big data projects.

In-memory computing is a relatively new trend, though the concept is well understood and is as old as the dawn of computers: Accessing data from memory is a lot faster than accessing it from disk or over a network.

In-memory computing uses large amounts of RAM to store as much of an application’s data as possible in memory, thereby increasing application performance and cutting cost by reducing the need to scale horizontally.

In the traditional database off-loading use case, the results of frequently run queries are cached in memory on the application server. Subsequent requests for those results can be returned very quickly, since they are already in memory at the application. This reduces the load on the database and avoids the extra round trip over the network to the database -- resulting in significant savings in response time for the user.
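A minimal sketch of this cache-aside pattern in Java, assuming a hypothetical queryDatabase() stand-in for the real data access layer and using a simple in-process map as the cache:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside sketch: query results are kept in an in-process map so that
// repeat requests never leave the application server.
public class QueryCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String getReport(String queryKey) {
        // Serve from memory if the result is already cached.
        String cached = cache.get(queryKey);
        if (cached != null) {
            return cached;
        }
        // Cache miss: pay the database and network cost once, then keep the result.
        String result = queryDatabase(queryKey);
        cache.put(queryKey, result);
        return result;
    }

    private String queryDatabase(String queryKey) {
        // Placeholder for the real JDBC/ORM call.
        return "result-for-" + queryKey;
    }
}
```

A real deployment would add entry expiration and size limits, but the shape of the read path is the same.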

The Looming Issue: Data Recovery for Big Data

The one potential drawback to IMC is the volatility of RAM: if power is lost, so is all of the data in memory. When bringing an in-memory data set back online after maintenance or an unplanned outage, three concerns must be addressed:

  • Making sure all in-memory data held at the moment prior to the downtime persists;
  • Making sure changes (writes) that occurred during the downtime are not lost; and
  • Getting the data set online as fast as possible to ensure overall availability.

The business impact of failing to deliver on any of these three can be devastating for an agency. Some developers may say, “What’s the big deal? Just reload the in-memory data set from the database!”

It’s true that if you’re only keeping a few gigabytes in a caching tool, reloading from the database might be a perfectly acceptable solution. But when you’re talking about terabyte-scale in-memory stores, rebuilding them from a disk-bound database could take days. Hundreds of terabytes? Make that weeks!
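For a rough, illustrative sense of scale: assuming an effective reload rate of about 100 megabytes per second once database queries, serialization and network hops are accounted for, a 10-terabyte store works out to roughly 100,000 seconds of reload time -- a bit over a day. At 100 terabytes, the same arithmetic stretches to nearly two weeks.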

Aside from the time your server is offline, your availability takes an additional hit because reloading from a central database can severely impact that database’s availability for other processes.

Thus, persistence in your IMC solution is required for success at this scale. But what does that mean, exactly?

Persistence is the ability of the cache to save itself to local disk and reload from that disk after a planned or unplanned shutdown. If power is lost, the application’s cache can be restored very quickly from local disk rather than rebuilt from the database over the network -- a path that is slow and puts unnecessary strain on the database.
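The sketch below shows the idea in plain Java, using simple object serialization and a hypothetical local snapshot file. Production-grade restartable stores use append-only logs and off-heap storage, but the save-on-shutdown, load-on-startup flow is the same.

```java
import java.io.*;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: a cache that persists itself to local disk and
// restores from that file on restart, instead of re-querying the central
// database over the network. The file name is hypothetical.
public class PersistentCache {
    private static final File STORE = new File("cache-snapshot.bin");
    private ConcurrentHashMap<String, String> data = new ConcurrentHashMap<>();

    // Called on shutdown (or periodically) to flush the in-memory state to disk.
    // The same call could also serve as an online snapshot for backups.
    public synchronized void saveToDisk() throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(STORE))) {
            out.writeObject(data);
        }
    }

    // Called on startup: restore from the local snapshot if one exists.
    @SuppressWarnings("unchecked")
    public synchronized void loadFromDisk() throws IOException, ClassNotFoundException {
        if (STORE.exists()) {
            try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(STORE))) {
                data = (ConcurrentHashMap<String, String>) in.readObject();
            }
        }
    }

    public void put(String key, String value) { data.put(key, value); }
    public String get(String key) { return data.get(key); }
}
```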

This fast restartable store capability not only decreases recovery times; it also opens up a slew of new functionality for the application. Consider the ability to store properties and other modifiable data locally, persisting changes without bouncing the server or writing additional file-polling code. Or the ability to take a snapshot of in-memory data -- useful when you need online backups.

How to Achieve the Reliability Mission-Critical Applications Demand

Clearly, when government IT departments pair in-memory computing with a fast restartable store, they can store environment-specific data in the cache using a simple put/get API and manage that data externally through a host of vendor-supplied and open source tools.

This approach can push out clusterwide changes simply by updating the data in the database and then clearing the cache -- locally or clusterwide -- so that one, many or all servers pick up the change at the same time, without bouncing the application (see the sketch below). This is particularly attractive to developers and DevOps shops that are continually trying to balance the need for configuration with the management of that configuration data.
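A hedged sketch of that configuration pattern follows. ConfigStore and its method names are hypothetical; a real deployment would use the put/get API of the chosen caching product and its clusterwide invalidation support rather than a plain map.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the "update the database, clear the cache" configuration pattern.
public class ConfigStore {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Read path used by application code: cache first, database on a miss.
    public String getSetting(String name) {
        return cache.computeIfAbsent(name, this::readSettingFromDatabase);
    }

    // Push out a change: write the new value to the database, then clear the
    // cached entry so servers re-read it -- no application bounce needed.
    public void updateSetting(String name, String newValue) {
        writeSettingToDatabase(name, newValue);
        cache.remove(name); // in a clustered cache this would be a clusterwide clear
    }

    private String readSettingFromDatabase(String name) {
        return "value-of-" + name; // placeholder for the real lookup
    }

    private void writeSettingToDatabase(String name, String newValue) {
        // placeholder for the real update
    }
}
```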

As the demand for real-time access to big data accelerates and expectations for optimal performance increase, sophisticated data persistence becomes invaluable.

The key for agencies supporting big data-scale projects and applications is to evaluate IMC solutions both for their usefulness during normal operations and for their ability to quickly restart terabyte-scale in-memory data stores. Once such a solution is in place and operational, public sector IT will make gains like never before.