Developers: 6 Steps to Revamp Your Apps


A bogged down back-end database can cause timeouts and lead to reliability issues. So, what’s an IT department to do?

Chris Steel is chief solutions architect for Software AG Government Solutions.

As applications age and gather more users, common issues with speed and reliability start to creep in. Slow application response times can quickly wear down users' patience, especially when the requested data has to travel all the way from a back-end database or Hadoop cluster.

And when too many application users request data all at once, the back-end database can get bogged down, causing timeouts and leading to unwelcome reliability issues.

So, what’s an IT department to do?

Many are eagerly embracing in-memory computing for its low-latency access to terabytes of data.

Although these features are appealing, an application’s in-memory data can easily become inconsistent and unpredictable if the application is not architected properly. When moving from a disk-based to a memory-based application architecture, here are six areas to consider:

Predictable, Extremely Low Latency

Working with data in memory is orders of magnitude faster than moving it over a network or getting it from a disk. This speed advantage is critical for real-time data processing at the scale of big data.

However, Java garbage collection is an Achilles’ heel when it comes to using large amounts of in-memory data.

While terabytes of RAM are available on today’s commodity servers, it’s important to keep in mind that Java applications can typically use only a few gigabytes of that memory before long, unpredictable garbage collection pauses cause application slowdowns. Look for in-memory management solutions that can manage terabytes of data off-heap, without suffering from garbage collection pauses.
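
To make “off-heap” concrete, here is a minimal Java sketch using java.nio.ByteBuffer.allocateDirect, which allocates memory outside the Java heap so the garbage collector never has to scan it. The buffer size and record layout are illustrative assumptions; a real off-heap store layers serialization, indexing and eviction on top of this basic idea.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

/**
 * Minimal sketch of off-heap storage: the data lives in a direct ByteBuffer,
 * allocated outside the Java heap, so growing it does not add GC pressure.
 */
public class OffHeapSketch {
    public static void main(String[] args) {
        // 256 MB of native (off-heap) memory; the size is illustrative.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(256 * 1024 * 1024);

        // Write one length-prefixed record directly into native memory.
        byte[] value = "user-42:lastLogin=2024-01-01".getBytes(StandardCharsets.UTF_8);
        offHeap.putInt(value.length);
        offHeap.put(value);

        // Read the record back.
        offHeap.flip();
        byte[] read = new byte[offHeap.getInt()];
        offHeap.get(read);
        System.out.println(new String(read, StandardCharsets.UTF_8));

        // The garbage collector only sees the small ByteBuffer wrapper object,
        // not the 256 MB of native memory behind it -- which is why off-heap
        // stores can hold very large data sets without long GC pauses.
    }
}
```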

Easy Scaling with Minimal Server Footprint

Scaling to terabytes of in-memory data should be easy and shouldn’t require the cost and complexity of dozens of servers and hundreds of virtual machines.

An in-memory management solution should be able to scale up as far as possible on each machine, so the IT team isn’t saddled with managing and monitoring a 100-node data grid.

By fully using the RAM on each server, hardware costs as well as personnel costs associated with monitoring large server networks can be dramatically reduced.
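
As a rough illustration of that footprint difference, the back-of-the-envelope sketch below compares the two approaches; the figures (2 TB of data, 4 GB of usable heap per JVM, 512 GB of off-heap RAM per server) are assumptions chosen for the example, not benchmarks.

```java
/**
 * Back-of-the-envelope comparison of scale-out (many small heaps) versus
 * scale-up (a few servers using their RAM off-heap). All figures are
 * assumptions made for the example.
 */
public class FootprintSketch {
    public static void main(String[] args) {
        long dataGb = 2048;            // 2 TB of in-memory data (assumed)
        long heapPerJvmGb = 4;         // usable heap per JVM before GC pauses hurt (assumed)
        long offHeapPerServerGb = 512; // off-heap RAM usable per server (assumed)

        long jvmsNeeded = dataGb / heapPerJvmGb;
        long serversNeeded = (long) Math.ceil((double) dataGb / offHeapPerServerGb);

        System.out.println("Heap-only JVMs to manage:   " + jvmsNeeded);     // 512
        System.out.println("Off-heap servers to manage: " + serversNeeded);  // 4
    }
}
```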

Fault Tolerance and High Availability

Mission-critical applications demand fault tolerance and high availability. The volatile nature of in-memory data requires a data management solution that delivers five nines (99.999 percent) uptime with no data loss and no single points of failure.

An in-memory solution that replicates data across multiple nodes in active-active clusters ensures that data is not lost and that the application remains fault tolerant.

In fact, in-memory computing can increase availability by replicating user session data so that when an application instance goes down, the users on that instance can be seamlessly redirected to another instance that has access to that session data.
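
A hedged sketch of that session-failover pattern follows. The SessionStore interface is a hypothetical placeholder, not a particular product’s API; in a real active-active cluster its implementation would replicate every write to the other nodes, and a ConcurrentHashMap stands in for that replicated store here.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical interface for a cluster-replicated session store. */
interface SessionStore {
    void put(String sessionId, Map<String, String> attributes);
    Map<String, String> get(String sessionId);
}

public class SessionFailoverSketch {
    public static void main(String[] args) {
        // Stand-in for a replicated, cluster-wide map (assumption, not a real API).
        Map<String, Map<String, String>> replicated = new ConcurrentHashMap<>();
        SessionStore store = new SessionStore() {
            @Override public void put(String id, Map<String, String> attrs) { replicated.put(id, attrs); }
            @Override public Map<String, String> get(String id) { return replicated.get(id); }
        };

        // Instance A writes session state as the user works.
        store.put("session-123", Map.of("user", "jdoe", "cartItems", "3"));

        // Instance A goes down; instance B looks up the same session id and
        // continues serving the user without forcing a new login.
        Map<String, String> recovered = store.get("session-123");
        System.out.println("Recovered session for user " + recovered.get("user"));
    }
}
```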

Distributed In-Memory Stores with Data Consistency Guarantees 

With the rise of in-memory data management as a crucial piece of big data architectures, organizations are increasingly relying on having tens of terabytes of data accessible for real-time, mission-critical decisions.

Multiple applications, and multiple instances of those applications, will need to tap in-memory stores distributed across multiple servers or even multiple data centers.

Thus, in-memory architectures must guarantee the consistency and durability of critical data across that array and support data replication over the WAN. Ideally, there would be flexibility in choosing the appropriate level of consistency guarantee, from eventual to strong to fully transactional consistency.
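
As a sketch of what choosing a consistency level per data set might look like, the enum and configuration class below are hypothetical placeholders that mirror the spectrum described above; real products expose this choice through their own configuration files or builder APIs.

```java
/**
 * Hypothetical sketch of declaring a consistency level per data set.
 * CacheConfig is an illustrative placeholder, not a real product API.
 */
public class ConsistencySketch {

    enum Consistency { EVENTUAL, STRONG, TRANSACTIONAL }

    static final class CacheConfig {
        final String name;
        final Consistency consistency;
        CacheConfig(String name, Consistency consistency) {
            this.name = name;
            this.consistency = consistency;
        }
        @Override public String toString() { return name + " -> " + consistency; }
    }

    public static void main(String[] args) {
        // Reference data can tolerate brief staleness across the WAN.
        CacheConfig catalog = new CacheConfig("productCatalog", Consistency.EVENTUAL);
        // Financial records need every replica to agree before a commit succeeds.
        CacheConfig accounts = new CacheConfig("accountPositions", Consistency.TRANSACTIONAL);

        System.out.println(catalog);
        System.out.println(accounts);
    }
}
```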

Fast Restartability

In-memory architectures must allow for quickly bringing machines back online after maintenance or other outages. Fast restartability enables instances to load the data from local disk, as opposed to a database or other network source, when the application instance recovers.
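
A minimal sketch of that restart path, assuming a simple local snapshot file: on startup the instance loads from local disk when a snapshot exists and falls back to the much slower database rebuild only when it doesn’t. The file name and one-record-per-line format are illustrative assumptions; real products use compact, append-only logs.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of fast restartability: prefer a local on-disk snapshot over
 * repopulating the in-memory store from the back-end database.
 */
public class FastRestartSketch {
    public static void main(String[] args) throws IOException {
        Path snapshot = Path.of("cache-snapshot.txt"); // assumed local snapshot file
        Map<String, String> store = new HashMap<>();

        if (Files.exists(snapshot)) {
            // Fast path: stream the snapshot straight from local disk.
            for (String line : Files.readAllLines(snapshot)) {
                String[] kv = line.split("=", 2);
                store.put(kv[0], kv[1]);
            }
            System.out.println("Restored " + store.size() + " entries from local snapshot.");
        } else {
            // Slow path: rebuild from the back-end database or Hadoop cluster,
            // which can take hours for terabyte-sized stores.
            System.out.println("No snapshot found; repopulating from the database...");
        }
    }
}
```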

Systems designed to back up and restore only a few gigabytes of in-memory data often exhibit pathological behavior around startup or backup and restore operations as data sizes grow much larger. In particular, recreating a terabyte-sized in-memory store can take hours if fast restartability is not a tested feature.

Hundreds of terabytes? Make that days or weeks.

Advanced In-Memory Monitoring and Management Tools

In dynamic, large-scale application deployments, visibility and management capabilities are critical to optimizing performance and reacting to changing conditions.

Control over where critical data is and how it is accessed by application instances gives operators the edge they need to anticipate and respond to significant events like load spikes, I/O bottlenecks or network and hardware failures before they become problems.

An in-memory architecture should be supplemented with a clear dashboard for understanding up-to-the-millisecond performance of in-memory stores, along with easy-to-use tools for configuring in-memory data sets.

Also, given the hybrid environments many organizations face, support for hybrid storage and cross-language clients is a must. Cross-language support enables access to data from multiple client platforms (Java, .NET/C# and C++), while hybrid storage allows SSD and flash technologies to be used alongside DRAM to scale to multi-terabyte data levels, predictably and economically.
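
To make the tiering idea concrete, here is a hypothetical sketch of a DRAM-plus-SSD storage layout; the TierConfig class is an illustrative placeholder rather than a real product API, and the sizes are arbitrary.

```java
/**
 * Hypothetical sketch of a tiered store: a small on-heap layer for the hottest
 * keys, a large off-heap DRAM tier, and an even larger SSD/flash tier.
 */
public class TieredStoreSketch {

    enum Tier { HEAP, OFF_HEAP_DRAM, SSD_FLASH }

    static final class TierConfig {
        final Tier tier;
        final long sizeGb;
        TierConfig(Tier tier, long sizeGb) { this.tier = tier; this.sizeGb = sizeGb; }
        @Override public String toString() { return tier + ": " + sizeGb + " GB"; }
    }

    public static void main(String[] args) {
        TierConfig[] tiers = {
            new TierConfig(Tier.HEAP, 2),            // hottest entries only
            new TierConfig(Tier.OFF_HEAP_DRAM, 512), // bulk of the working set
            new TierConfig(Tier.SSD_FLASH, 4096),    // overflow, still far faster than the database
        };
        for (TierConfig t : tiers) {
            System.out.println(t);
        }
    }
}
```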

With in-memory technology, organizations can keep the capabilities of their existing disk storage while migrating to an in-memory architecture. But to take full advantage of what an in-memory data management solution offers, it’s critical to analyze which of these areas matter most to the application and to architect for them accordingly.