Machine learning can help federal agencies quash cybersecurity threats, but it can’t do it without the help of humans.
That’s the crux of a new report commissioned by security giant McAfee, whose Chief Technology Officer Steve Grobman thinks federal agencies need to rethink how humans and machines work together on technology teams. Many federal officials have pointed to a widespread shortage of cybersecurity personnel in government—and agencies' inability to compete with private-sector salaries—though the extent of this shortage has not been quantified.
An ideal model, Grobman said, is using artificially intelligent technology to hand threat reports off to humans, who then work on a response plan.
Security teams can’t simply expect new technology to make decisions about which threats to pursue without human input, Grobman told Nextgov, especially as the malicious actors often perpetrating attacks against the government are humans themselves.
“Security becomes personal when considering the people behind the attacks,” the report said.
But especially in an administration that has already established hiring freezes and proposed shrinking the workforce overall, technology that “is able to work with lower human involvement will give the government a path forward to provide critical protection,” Grobman said.
Broadly, security teams may look to automate simple task-oriented processes, but rely on humans to decide which threat types to investigate, assess the impact of breaches, determine a bad actor’s motivation, or come up with a remediation plan. Those responsibilities would require the government to acquire more skilled IT talent instead of planning to automate cybersecurity entirely, he explained.
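That division of labor can be pictured as a simple triage step: the machine auto-closes routine, rule-matched alerts and escalates everything requiring judgment to a person. This is a minimal sketch of the idea; the alert fields, rule names, and severity threshold are illustrative assumptions, not anything from the McAfee report.

```python
# Hypothetical triage sketch: machines handle simple task-oriented
# processing, humans get the judgment calls (impact, motivation,
# remediation). All field names and thresholds are assumed for the example.

KNOWN_BENIGN = {"scheduled_scan", "patch_rollout"}  # assumed rule names

def triage(alert: dict) -> str:
    """Return 'auto-closed' for routine alerts, 'escalated' otherwise."""
    if alert.get("rule") in KNOWN_BENIGN and alert.get("severity", 0) < 3:
        return "auto-closed"   # machine clears the simple case
    return "escalated"         # human investigates and plans a response

alerts = [
    {"rule": "scheduled_scan", "severity": 1},
    {"rule": "credential_reuse", "severity": 7},
]
print([triage(a) for a in alerts])  # ['auto-closed', 'escalated']
```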
For instance, an automated system might log that a user signed in at one location at a specific time and later at another location, but if neither login violates a security protocol, it might not recognize that together those two events could indicate a breach, he explained. Humans might be better at contextualizing anomalies.
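The two-location example can be made concrete: each login is individually legal, but the travel speed they jointly imply is impossible. The check below is a hedged illustration of that kind of contextual reasoning, not a description of any McAfee product; the coordinates, time gap, and speed threshold are all assumptions.

```python
# Hypothetical "impossible travel" check: two logins that pass every
# per-event rule, yet imply a travel speed no human could achieve.
import math

def implied_speed_kmh(lat1, lon1, lat2, lon2, hours_apart):
    """Great-circle (haversine) distance between two logins / elapsed time."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a)) / hours_apart

# Assumed scenario: login near Washington, DC, then near Moscow an hour later.
speed = implied_speed_kmh(38.9, -77.0, 55.8, 37.6, hours_apart=1.0)
print(speed > 1000)  # True: far faster than any flight, worth a human look
```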
And if a single security analyst takes 15 minutes to clear a security alert, “then that person can only process about 30 alerts per day. This formula dooms security teams into unsustainable reactionary patterns, and it fails to allow security personnel to develop problem-solving skills," Grobman said.
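Grobman's figure follows from straightforward arithmetic, assuming an eight-hour workday:

```python
# At 15 minutes per alert, one analyst clears roughly 30 alerts
# in an eight-hour day (the exact quotient is 32).
minutes_per_alert = 15
workday_minutes = 8 * 60
print(workday_minutes // minutes_per_alert)  # 32 -- "about 30 alerts per day"
```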
But the federal government’s slow hiring process could give adversaries more time to refine their attacks, Grobman told Nextgov. “Government will need to work through procurement processes and cycles ... which are not designed for speed,” he said.
“As new threats are introduced, security teams alone cannot sustain the volume, and machines alone cannot issue creative responses,” the report said.