Policy would require agencies to scan for network threats every 72 hours and begin patching holes


Continuous monitoring program remains a work in progress, DHS official says.

The original headline on this story was modified for clarity.

Later this month, the Homeland Security Department will present federal computer contractors and cloud suppliers with standards for finding and fixing cyber threats within 72 hours, DHS officials announced on Thursday.

The new approach aims to resolve what some cybersecurity specialists view as a flaw with the principle of automated “continuous monitoring” that the White House called for in 2010. Real-time tracking of potential network threats is intended to identify weaknesses faster and more economically than the old policy of manually reporting on computer inventories and incidents once a year. But spotting all the risks to personal computers and Internet connections in an organization does not make data any safer, critics note. Fixing them quickly does.

Resolving identified weaknesses rapidly is the goal of the new procedures and, according to some government advisers, agencies could eventually be required to adopt them. “We’re initiating the discussion and we are asking for comment,” DHS National Cybersecurity Division Director John Streufert told Nextgov on Thursday.

Homeland Security officials will describe the standards in depth to industry officials June 25-26, Streufert said earlier in the day during a talk co-hosted by Government Executive Media Group, which includes Nextgov. He spearheaded the original continuous monitoring movement in his previous post as chief information security officer for the State Department.

“Think continuous monitoring and mitigation,” said SANS Institute Research Director Alan Paller, who added that the term “continuous monitoring” has been misinterpreted in practice. “Knowing [of a weakness] and not fixing it is dangerous.” SANS, a computer security think tank, co-sponsored the event.

Streufert said the mechanics also will help shape rapid response in the cloud, where agencies do not always have physical control over their data. The standards are not a mandate, he said, but rather a template for how to tailor procedures to every sort of computing environment from federal data centers to corporate offices.

The formulation of the continuous fixing method follows the June 6 launch of the Federal Risk and Authorization Management Program for certifying the security of outsourced computer centers. FedRAMP is a process in which third-party auditors initially check that a cloud service meets a set of uniform, governmentwide security controls, and then offer the resulting accreditation documentation to all agencies for free reuse. Any department can then install the authorized service as long as the cloud vendor continuously monitors those controls for its federal customers.

“Every agency is still fully responsible for the security of their operating environment,” Dave McClure, an associate administrator at the General Services Administration, which manages FedRAMP, said during the talk. The procedures for automated, continuous reporting to agencies are still a work in progress, Streufert said.

McClure said the government, as of June 13, had received 22 applications from cloud companies for certification. FedRAMP is mandatory for all Web services that would have a limited or serious impact on government operations if disrupted. The Office of Management and Budget is responsible for enforcing the program; McClure said he expects enforcement to happen each year when agencies file reports on compliance with federal computer security law. “I think they’ve got some leverage,” he said, without going into details.