How Well Do You Really Know Your Network?


As the complexity of federal IT infrastructures continues to grow, so do the demands placed on government networks.

Sean Applegate is director of technology strategy at Riverbed Federal.

Last week, my 9-year-old son and I were watching the movie “The Croods,” streamed directly into our living room through the magic of Netflix (and the cloud). All was going well until about the 40-minute mark, when the movie abruptly stopped and the dreaded “buffering” began.

“Netflix is broken again,” my son said.

“Well, not exactly,” I said. “It’s probably the network.”

My professional and family worlds were colliding.

I say that because federal agencies deal with the same types of application performance issues every day, resulting in productivity losses they can’t afford. As the complexity of federal IT infrastructures continues to grow, so do the demands placed on government networks. Agencies are operating hybrid environments, with enormous amounts of data being shared across various public and private clouds, data centers and geographically dispersed facilities.

That’s a lot of pressure on network resources. The first step in optimizing performance and avoiding crippling latency and congestion is answering one fundamental question: “What’s going on across my network?”

It sounds simple, but many federal CIOs are facing a network visibility crisis. Isolated systems and a lack of application-aware network performance monitoring tools keep agency leaders from achieving comprehensive situational awareness.

Far too often, CIOs are forced to fly blind, making mission-critical infrastructure decisions without crucial insight into which applications are being successfully delivered, which aren’t, which personnel are using them, and over which network paths. When that data becomes clear, agencies can begin identifying the root causes of performance issues, and understanding how the interplay of these factors impacts mission goals.

The good news is that many federal agencies have this data. The challenge is making sense of it in an efficient and cost-effective way. Government organizations do a nice job of this when it comes to collecting and analyzing cybersecurity information, and the time has come to implement similar strategies on behalf of IT performance.

Network and application management tools automate the collection, visualization and analysis of performance data, delivering streamlined insight into root causes. Through a single dashboard, CIOs can access the intelligence needed to pinpoint problems, drilling down to resolve issues from the data center to the desktop before they ever reach the helpdesk.
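
To make that concrete, here is a minimal, purely illustrative Python sketch of the kind of aggregation such tools automate. The applications, paths, response times and the 200-millisecond threshold are all hypothetical, and a real monitoring product would gather this telemetry continuously rather than from a hard-coded list:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical flow records of the sort an application-aware
# monitoring tool collects: (application, network path, response ms).
flow_records = [
    ("email",     "hq->dc-east",  45),
    ("email",     "field->cloud", 310),
    ("case-mgmt", "hq->dc-east",  120),
    ("case-mgmt", "field->cloud", 480),
    ("gis",       "hq->dc-east",  95),
]

THRESHOLD_MS = 200  # illustrative service-level target, not a standard

# Aggregate response times per application and path, then flag any
# combination breaching the threshold -- the "drill down" that a
# single dashboard automates for the CIO.
stats = defaultdict(list)
for app, path, ms in flow_records:
    stats[(app, path)].append(ms)

for (app, path), samples in sorted(stats.items()):
    avg = mean(samples)
    status = "SLOW" if avg > THRESHOLD_MS else "ok"
    print(f"{app:10s} {path:15s} avg={avg:6.1f} ms [{status}]")
```

Even this toy version makes the point: once the data is grouped by application and path, the slow application-over-path combinations identify themselves.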

This visibility isn’t just essential to ongoing management and operations, but also to planning. Agencies gearing up for large-scale cloud application implementations or data center consolidation must navigate challenges related to workloads being hosted by partners in new environments, with data often traveling farther distances across a variety of network paths.

The best way to truly know if your network and applications are up to the challenge is to model your agency’s future performance needs. This will help you identify and proactively address the risks related to capacity, latency, quality and other common performance constraints within complex federal architectures.
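
As a thought experiment, even a back-of-envelope model shows why distance and application “chattiness” dominate those risks. The Python sketch below uses entirely assumed figures (a fiber propagation speed of roughly 200 km per millisecond, a 500 KB transaction, a 100 Mbps link, 40 round trips) to estimate how a transaction slows as the hosting site moves farther away:

```python
# Back-of-envelope latency model for "what happens if this workload
# moves to a consolidated data center farther away?" All numbers are
# illustrative assumptions, not measurements.

FIBER_KM_PER_MS = 200  # ~200 km per millisecond in optical fiber

def estimated_latency_ms(distance_km: float, payload_kb: float,
                         bandwidth_mbps: float, round_trips: int) -> float:
    """Rough transaction time: round-trip propagation plus transmission."""
    rtt_ms = 2 * distance_km / FIBER_KM_PER_MS
    transmission_ms = (payload_kb * 8) / (bandwidth_mbps * 1000) * 1000
    return round_trips * rtt_ms + transmission_ms

# A chatty application needing 40 round trips per transaction:
for distance in (50, 1500, 4000):  # km to the hosting site
    t = estimated_latency_ms(distance, payload_kb=500,
                             bandwidth_mbps=100, round_trips=40)
    print(f"{distance:5d} km -> ~{t:7.1f} ms per transaction")
```

Under those assumptions, a transaction that takes about 60 milliseconds across town balloons past a second and a half at 4,000 kilometers, with no change in bandwidth at all, which is exactly the kind of risk modeling surfaces before a migration rather than after.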

I’ve been around long enough to know that the network is guilty until proven innocent, but there’s so much more we can do to set ourselves up for success. It starts with really getting to know your infrastructure and apps, the burdens they place on the network, and how they impact performance. Only then can you create an optimization strategy that maximizes your existing resources, prevents unnecessary and costly bandwidth upgrades, and empowers your staff to be as productive as possible.

The next step for me is practicing what I preach at home. I’m sure “Frozen” will be coming to Netflix shortly.

(Image via Inozemtsev Konstantin/Shutterstock.com)