A Tool That Can Keep Federal Data Centers Safe Amid Cloud Chaos


To help secure clouds, agencies need visibility into nebulous infrastructures.

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology and government. He is currently the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys

We all know that the federal government has a love/hate relationship with data centers. First embraced as an efficient way to handle advanced government networks, they eventually grew out of control, leading to wasted resources, overlapping capacity and a loss of visibility into what the government was paying for in its data centers. The Federal Data Center Consolidation Initiative was created in 2010 to try to reverse the historic growth of federal data centers, and the Data Center Optimization Initiative more recently replaced it with similar goals.

One of the best ways the government is reducing its reliance on data centers is by moving to cloud computing, where agencies can buy computing capacity as a service, theoretically paying only for what they need. There are still concerns, especially around security, which have caused government to lag well behind the private sector in cloud adoption. Even so, its cautious efforts have been fruitful. The Government Accountability Office estimated in 2015 that the federal government saved half a billion dollars with its limited use of cloud technology over a period of about four years.


Innovative technologies like segmentation, which I described in an earlier column, are emerging to help secure those government clouds. Those new technologies will help, but often run up against the age-old computing problem of not being able to protect what you don’t know you have. Cloud computing got its name for a reason. It separates the hardware layer from computing needs, creating a nebulous area where users don’t know—and really don’t need to know—how many hardware resources are supporting their applications or hosting their data.

While most cloud computing providers do provide basic security as part of their service level agreement, it’s ultimately the user’s responsibility to protect their own data within the cloud. It’s why government has been so reluctant to embrace the cloud and why Gartner recently named cloud-based security solutions as one of the most important categories in cybersecurity for the immediate future.

One of my projects over this long, hot summer is conducting a series of 15 in-depth cybersecurity reviews for Network World and CSO magazine, examining products that fit into those important Gartner categories. Not all of the products I have reviewed so far have federal applicability, but a recent one stood out as a potential game changer for feds looking to move to the cloud and to ensure security once they arrive.

The biggest problem with cybersecurity in the cloud—and this even applies to some of the newest technology like the aforementioned segmentation—is that it’s difficult to define security policies within what is essentially a nebulous infrastructure where a single app might be running across multiple servers at the same time, might be pulling resources from multiple sources, and might even be physically stored in several different locations. So, for example, in the absence of defined hardware rules and locations, something like segmentation only works if you know exactly how your apps in the cloud are performing and interacting with users.

Getting that important visibility is very difficult within the cloud. Examining the log files—the traditional method of figuring out how everything is interacting—is nearly impossible even in a medium-sized, cloud-based data center. One that I examined recently, a smaller facility with about 400 clients, generated over seven billion logfile events over a six-hour period. I can’t track billions of logfile events every few hours, and neither can most security teams, even large ones. It’s also why attackers can remain hidden for so long once a system is breached: There is too much other chaos going on to quickly unmask them.
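For a rough sense of scale, using only the figures above and assuming the events were spread evenly, a quick back-of-the-envelope calculation shows why manual log review is a non-starter:

    # Seven billion log events over six hours, assumed evenly spread
    events = 7_000_000_000
    seconds = 6 * 60 * 60                                # 21,600 seconds
    print(f"{events / seconds:,.0f} events per second")  # roughly 324,000 per second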

The product that I recently reviewed is called the Lacework Cloud Workload Protection Platform, which is designed to restore visibility to cloud computing, creating a baseline that other security programs can build upon. It's configured for deployment as a service with no need for a hardware console installation, which makes it a natural fit for cloud environments. It works by deploying tiny software agents each time a new virtual machine is spun up within a cloud, essentially stamping that asset as owned by the organization or agency. A federal data center could deploy Lacework by simply adding the agent to the default image for all new virtual machines, and then pushing the same agent out to existing assets.
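As a rough illustration of that "stamp every new asset" idea, the toy agent below simply registers the machine it runs on with an inventory endpoint at boot. The endpoint, record fields and behavior are hypothetical stand-ins of my own, not Lacework's actual agent or API:

    # Illustrative sketch only: a stand-in for an agent baked into the default
    # VM image that marks each new machine as an agency-owned asset.
    # The inventory URL and record format are hypothetical.
    import json
    import socket
    import uuid
    import urllib.request

    def register_asset(inventory_url: str, agency: str) -> None:
        record = {
            "asset_id": str(uuid.uuid4()),     # unique ID for this VM
            "hostname": socket.gethostname(),  # where the agent is running
            "owner": agency,                   # who this asset belongs to
        }
        request = urllib.request.Request(
            inventory_url,
            data=json.dumps(record).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)        # tells the platform "this VM is ours"

    if __name__ == "__main__":
        register_asset("https://inventory.example.gov/assets", agency="EXAMPLE-AGENCY")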

Suddenly, the cloud is not so nebulous. Assets might still be sprawled across multiple hardware platforms, but each one is now identified as owned by the agency. The platform then creates a baseline of all activity occurring within a data center from users, assets and applications, diving into those logfile events and identifying what is normal, authorized activity and what is an outlier or anomaly. It even turns that data into a visual map of how those assets interact, plotting the changes over time. In the most general sense, it lets users define the cloud more concretely, based not on the hardware assets but on the apps, users and interactions that make an agency's computing infrastructure work.
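The baselining concept itself is simple to sketch, even if the real platform is far more sophisticated. The toy code below is my own illustration, not Lacework's algorithm: it learns which process-user-destination interactions are normal during a learning window and flags anything it has never seen before:

    # Minimal sketch of the baseline-then-flag-outliers idea described above.
    # Not Lacework's algorithm; the event fields are hypothetical.
    from collections import Counter

    def build_baseline(events):
        """Count how often each interaction appears during a learning window."""
        baseline = Counter()
        for e in events:
            baseline[(e["process"], e["user"], e["dest"])] += 1
        return baseline

    def find_anomalies(baseline, new_events):
        """Return events whose interaction was never seen in the baseline."""
        return [e for e in new_events
                if (e["process"], e["user"], e["dest"]) not in baseline]

    # Hypothetical events for illustration:
    history = [{"process": "httpd", "user": "www", "dest": "db01"}] * 1000
    live = [{"process": "httpd", "user": "www", "dest": "db01"},
            {"process": "nc", "user": "www", "dest": "198.51.100.7"}]
    print(find_anomalies(build_baseline(history), live))  # flags only the nc call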

Lacework comes with its own threat-remediation abilities. I tested this by adding a malware process to the cloud test environment that eventually tried to spawn a new user and then elevate privileges. It was designed to be somewhat camouflaged in a normal data center or cloud environment, easily lost among the billions of other recorded events. But Lacework identified it as an outlier process that didn't normally happen, shining a light on the rogue process even though it was moving slowly and taking a lot of precautions.
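To show what catching that kind of slow-moving outlier might look like in principle, here is another deliberately simplified sketch of my own, not the product's detection logic: it ignores processes already in the baseline and raises an alert only when an unknown process creates a user and later elevates privileges, however much time passes between the two steps:

    # Illustrative only: flag an unknown process that creates a user account
    # and later elevates privileges, no matter how slowly it moves.
    # Process names, actions and event format are hypothetical.
    def flag_slow_escalation(baseline_procs, audit_events):
        created_by = set()   # processes that have created a new user
        alerts = []
        for e in audit_events:            # events assumed ordered by time
            if e["process"] in baseline_procs:
                continue                  # normal, previously observed behavior
            if e["action"] == "create_user":
                created_by.add(e["process"])
            elif e["action"] == "elevate_privileges" and e["process"] in created_by:
                alerts.append(e)          # the outlier completed the pattern
        return alerts

    events = [
        {"process": "svc-helper", "action": "create_user", "time": "01:10"},
        {"process": "svc-helper", "action": "elevate_privileges", "time": "04:55"},
    ]
    print(flag_slow_escalation({"httpd", "sshd"}, events))  # alerts on svc-helper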

The true value of Lacework for feds might be in helping to redefine the security perimeter and getting real visibility into government clouds. Although its core security components are impressive, being able to map processes and interactions means that other security tools, like segmentation, automation or even security information and event management (SIEM) systems, can more easily integrate into cloud environments. That could help make government clouds more attractive and remove some of the final deterrents to wider government adoption.