The State of Continuous Diagnostics and Mitigation


Many agencies face the same challenges when trying to implement CDM capabilities.

With very little effort and expense, adversaries in cyberspace leverage automation and artificial intelligence to evade large and expensive deployments of commodity network and system security stacks. Federal organizations report new cyber intrusions every day, and defeating these attacks is one of the most important national security challenges. In response, the Homeland Security Department kicked off the Continuous Diagnostics and Mitigation program in 2013 to provide adequate, risk-based, and cost-effective cybersecurity with more efficient allocation of cybersecurity resources.

Although the program has found much success among a select few agencies, most agencies struggle to realize the full vision of the CDM program. In its most recent examination of the CDM program, the Government Accountability Office stated, “Without a means of aggregating and assessing cybersecurity risks arising from their information systems to the organizational level, …11 agencies may be missing opportunities to identify trends or prioritize investments in cybersecurity risk mitigation activities in order to target widespread or systemic risks to the systems and organization.”

Homeland Security’s .gov Cybersecurity Architecture Review program—known as .govCAR—helps agencies move away from implementing capabilities based on perceived risk and instead identify where their coverage is too weak to detect and defeat known threats. This analysis provides a clear return on investment, as each dollar spent should address the most likely and most damaging threats and risks to the agency.

Many agencies are still working on implementing asset management and identity and access management capabilities and have not yet begun focusing on network security management and data protection management. This is due to the common challenges agencies face when seeking to implement CDM capabilities. Although many agencies have a plethora of individual capabilities and tools at their disposal, these are often segmented and siloed, sourced from a variety of providers, and ultimately do not communicate well with each other. 

This is a major barrier for many agencies, as it prevents them from attaining a comprehensive view of their cyber terrain. Without full visibility, agencies aren’t able to fully comply with the goals of the CDM program: identifying what is on the network, who is on the network, what is happening on the network, and how agency data is protected. After all, how can you protect assets if you don’t know where they reside, how they are configured, and how critical they are, or don’t know you own them in the first place?

As adversaries grow increasingly advanced and target an ever-widening attack surface, federal agencies need solutions that detect real threats, empower threat hunting, and prioritize the right responses, rather than generating floods of alerts that slow down response times and reduce overall effectiveness.

In order to protect data, agencies must first understand:

  • Their asset and network terrain.
  • Threats targeting their environments.
  • Sensitive data and where it resides.
  • Adversaries’ actions and their tactics, techniques and procedures.
  • How to alter the terrain to agencies’ advantage.

As previously stated, agencies already have many tools at their disposal, and much of the groundwork has already been laid. The focus going forward will be on automating and operationalizing the work already done, so that the advantages these capabilities provide can be gained and maintained as efficiently as possible.

At the same time, many agencies are burdened with legacy technology that often perpetuates capability gaps and leaves vulnerabilities wide open. As agencies replace outdated technology and meet the government’s modernization objectives, they will need a centralized, connected view of their metadata, network gateways, internal networks, email and web gateways, and cloud virtual machines. This will provide them with a detailed level of visibility and threat intelligence, allowing them to make data-driven, risk-based decisions on how to manage their full terrain, configuration posture, and overall cybersecurity posture.

As agencies establish more of these capabilities, they will become increasingly hardened targets that drive up adversaries’ cost of intrusion and risk of exposure. Best of all, if agencies have properly automated their cyber defenses, their cost of defense will shrink as the cost of attack rises, greatly swinging the balance in favor of the agency. Automation can also help address a problem many agencies struggle with, the cyber talent gap, by providing assisted and automated detection, investigation and response while preventing data leakage across the network.

After establishing this baseline, agencies can get more creative and further obscure their defensive posture by using deception technologies. These allow agencies to deploy decoy assets that look real to attackers but act as virtual tripwires when attackers try to access them. By strategically placing decoys around the network, at all points of the attack lifecycle, agencies can further reduce adversaries’ dwell time on federal networks, detect post-breach activity and identify malicious insiders.

Ultimately, this will grant agencies the ability to understand both the content and context of security events. Combining deep, broad visibility across both network and endpoints with fast, comprehensive detection (backed by specific validation and mature response and prevention capabilities) empowers security operations teams to employ detection techniques such as network traffic analysis, endpoint forensics and payload analysis, and to combine those techniques to counter more advanced and persistent threat actors.

Craig Harber is the chief technology officer of Fidelis Cybersecurity.