Sean Applegate is director of technology strategy & advanced solutions at Riverbed Technology.
The Obama administration's new 2017 budget tacked on $105 million to expand federal digital services, a very clear sign that citizen engagement is a top priority.
However, a recent report indicates agencies still struggle to improve customer service efforts—only 45 percent of government customers think agency representatives understand their needs, compared with 66 percent of private sector customers.
A key part of the problem is that people have heightened expectations for digital experiences, shaped by the private sector; they need only look at their smartphones to recognize what “good” looks like.
Speed and performance matter, too. Even the best technology, designed specifically with end-user experience in mind, will be cast aside if it’s buggy, sluggish or prone to glitches. With increased cloud adoption, federal agency IT architectures are becoming more complex and harder to operate.
Now, more than ever, the performance of government applications and of the networks that deliver them is vital to citizen experience, as well as to government efficiency.
Enter the new role of IT. Changes to underlying technologies will enable agencies to understand how their citizens interact with their applications and solve problem areas before they negatively impact citizen experience. Agencies should seek real-time customer experience insights and have a strategy to make improvements based on that intelligence.
Start with Understanding
Before optimizing for customer experience, agencies must fully understand the infrastructure they have and how applications are performing across it. That means looking for where things work well and where they don’t.
It seems obvious that detailed performance visibility is an important part of running IT operations, but it isn’t happening. In fact, a survey conducted by Market Connections found 51 percent of federal IT decision-makers said it takes a day or more to detect and fix application performance issues.
One way to gain understanding is with integrated application and network performance monitoring and troubleshooting, which provides functions similar to the warning lights and diagnostic sensors in a car -- but for the applications and networks across an agency. Developers and engineers use these tools to see exactly what isn’t working, so they can fix it.
Much like a car’s instrument panel, performance dashboards provide vital insight into how network resources handle an agency’s apps. There are solutions that automate and streamline the collection of key metrics into one dashboard, empowering federal IT leaders to highlight concerns. They can then task technical teams to forensically examine the root causes of performance limitations, errors and slowdowns.
Have a Strategy for Improving
After gaining visibility, the next step is to develop usability standards and define key performance indicators. The three most useful characteristics to proactively monitor are capacity, latency and errors. Having a single integration point that stitches network and application performance together also fosters a team-oriented culture -- what psychology circles describe as a generative culture.
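To make those three indicators concrete, here is a sketch of how capacity, latency and error KPIs might be derived from one window of request records. The request data and the provisioned-capacity figure are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Request:
    latency_ms: float   # observed round-trip time for the request
    ok: bool            # False if the request returned an error

# Hypothetical one-minute window of requests to a citizen-facing app
window = [
    Request(120, True), Request(340, True), Request(95, True),
    Request(2100, False), Request(410, True), Request(180, True),
]

CAPACITY_RPM = 1000  # assumed provisioned capacity, requests per minute

utilization = len(window) / CAPACITY_RPM                        # capacity KPI
avg_latency = sum(r.latency_ms for r in window) / len(window)   # latency KPI
error_rate = sum(not r.ok for r in window) / len(window)        # errors KPI

print(f"utilization={utilization:.1%} "
      f"avg_latency={avg_latency:.0f}ms errors={error_rate:.1%}")
```

Once each KPI has an agreed target, alerting and team accountability can hang off the same three numbers, which is part of what a single integration point buys.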
The fastest way to success starts with fixing bottlenecks -- the key constraints that have the biggest impact on customer experience. For example, if Veterans Affairs casework is backlogged, the VA needs to fix that first so the system can operate faster and better serve its constituents.
The fix can often be as simple as using solutions that accelerate performance to combat limited capacity or latency. TechValidate surveyed customers and found that network acceleration increased productivity by nearly 300 percent on average.
Application downtime is another example of a constraint that must be overcome. An IDC report on the benefits of an application performance management solution found the amount of customer time lost due to application downtime was reduced by 67 percent. To put that in perspective, the cost of application downtime for an agency ranges from $500,000 to $1 million per hour.
Predict and Control
With clear visibility into application performance and key constraints neutralized, it’s time to predict and control for future constraints. Agencies need mechanisms that enable them to easily anticipate performance constraints and make investments to avoid them.
Today’s modeling solutions help answer crucial “what if” scenarios that arise with digital service apps, allowing IT leaders to decrease risk and quantify end-user experience before making notable investments. These tools work much as a GPS does when a destination is set: estimating drive time and planning a route that avoids traffic.
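As a toy version of such "what if" modeling, an M/M/1 queueing approximation (a deliberate simplification; commercial tools model far more detail) can estimate how response time grows as offered load approaches capacity, before any money is spent:

```python
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time (seconds) for an M/M/1 queue: 1 / (mu - lambda).

    arrival_rate: requests per second offered to the system (lambda)
    service_rate: requests per second the system can process (mu)
    """
    if arrival_rate >= service_rate:
        raise ValueError("system is saturated; response time is unbounded")
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical what-if: a service that handles 100 req/s, tested against
# today's load and a projected surge before committing to new capacity.
for load in (50, 80, 95, 99):
    print(f"{load} req/s -> {mm1_response_time(load, 100) * 1000:.0f} ms")
```

The nonlinearity is the point: going from 50 to 99 requests per second multiplies mean response time fifty-fold, which is why modeling ahead of a surge beats reacting after one.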
Federal agencies face mission-critical performance management challenges when it comes to customer service, but hope is not lost: there are answers out there. By ensuring that applications and networks are optimized, agencies can keep IT resources ready for federal workers and citizens whenever they need them.