An inside look at how the intelligence community deals with the exploitable software bugs it finds.
Friday’s global ransomware attack has reignited the debate over how the U.S. intelligence community conceals or reveals knowledge of critical software bugs. As a former National Security Agency official confirmed, WannaCry exploited a vulnerability the agency had stockpiled, one exposed in April's Shadow Brokers dump.
But how much blame should NSA bear for WannaCry's rampage across 200,000-plus computers in 130 countries?
On the one hand, the intelligence community really does keep a trove of zero-day bugs. Spies need them to intercept communications—and much more, according to Michael Daniel, an Obama-era White House cybersecurity coordinator.
"Disclosing a vulnerability can mean the U.S. forgoes an opportunity to collect crucial intelligence that could thwart a terrorist attack, stop the theft of our nation's intellectual property, or even discover more dangerous vulnerabilities used by hackers or other adversaries to exploit our networks. So automatic disclosure is not always the right policy choice," Daniel said in an email to Defense One.
Yet the notion that the NSA simply hoards every bug it discovers is false. Since July 2011, Daniel said, representatives from various agencies and departments have periodically assembled to discuss newly discovered vulnerabilities. They vote on each one: reveal or conceal?
When the vote is to conceal, the decision is revisited in three months to a year, depending on the bug's attributes and, Daniel said, a wide range of surrounding factors. How dangerous would it be if a criminal gang or adversary began using it? Would the intelligence community be able to detect its use? How badly do spies need the intelligence the bug might yield? Can they get it another way? Could they use the bug for a short period of time and then disclose it? Can it be patched?
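Those questions amount to a weighted checklist. A minimal sketch of how such an equities rubric might be scored; the factor names, weights, and threshold below are purely hypothetical illustrations, not the government's actual criteria:

```python
# Hypothetical equities-review rubric. Every factor name and weight here is
# an illustrative assumption, NOT the actual interagency process.
# Each factor is scored 0.0 (argues for concealing) to 1.0 (argues for revealing).
FACTORS = {
    "harm_if_misused": 0.30,     # damage if a gang or adversary starts using it
    "detectability": 0.15,       # could the IC detect the bug being exploited?
    "intel_replaceable": 0.25,   # could the intelligence be gotten another way?
    "short_term_use": 0.15,      # could it be used briefly, then disclosed?
    "patchability": 0.15,        # could vendors realistically patch it?
}

def equities_score(scores: dict) -> float:
    """Weighted average of factor scores; higher favors disclosure."""
    return sum(FACTORS[name] * scores[name] for name in FACTORS)

def decide(scores: dict, threshold: float = 0.5) -> str:
    """Reveal if the weighted score clears the (hypothetical) threshold."""
    return "reveal" if equities_score(scores) >= threshold else "conceal and revisit"

# Example: a dangerous, easily patched bug leans toward disclosure.
example = {
    "harm_if_misused": 0.9,
    "detectability": 0.3,
    "intel_replaceable": 0.4,
    "short_term_use": 0.6,
    "patchability": 0.8,
}
print(decide(example))  # reveal
```

The "conceal and revisit" branch mirrors the article's point that concealment is never final: the same inputs get re-scored months later as circumstances change.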
Former NSA officials have praised the process.
"You’ve heard my deputy director say that in excess of 80-something percent of the vulnerabilities are actually disclosed—responsibly disclosed—to the vendors so that they can then actually patch and remediate for that," Curtis Dukes, NSA's former deputy national manager for national security systems, said at an American Enterprise Institute event in October. "So I do believe it's a thoughtful process that we have here in the U.S."
Dukes said the impetus to conceal an exploit vanishes when it is used by a criminal gang, adversarial nation, or some other malefactor.
"We may choose to restrict a vulnerability for offensive purposes, like breaking into an adversary's network," he said. "But that doesn't mean we're not also constantly looking for signs whether another nation-state or criminal network has actually found that same vulnerability and now are using it. As soon as we see any indications of that, then that decision immediately flips, and we move to disseminate and remediate."
NSA has made no comment on the bug that enabled WannaCry. But Microsoft issued a patch on March 14, a month before the Shadow Brokers dump, which suggests the IC either disclosed the bug or at least did not succeed in keeping it secret.
The problem is many institutions didn't install the patch, Microsoft President Brad Smith wrote Sunday in a blog post: "While [the patch] protected newer Windows systems and computers that had enabled Windows Update to apply this latest update, many computers remained unpatched globally. As a result, hospitals, businesses, governments and computers at homes were affected."
What conclusion can we draw from the WannaCry episode?
For one, the NSA's process for disclosing zero days is not broken. But the decision to disclose a vulnerability is only as good as the intelligence about what various gangs and adversaries are up to. The situation may improve with more of that intelligence, better reporting from the field, and better reporting from vendors to the public.
Daniel offered a few more ways to improve the system: "In the future, I think we need to arrive at some metrics for measuring how severe and exploitable a particular vulnerability is. For example, some zero days may require you to have physical access to a system to exploit it. That's obviously a very different threat than one that can be exploited remotely. I think we need to have a more standardized way to assess zero-day vulnerability severity."
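The civilian world already has a standard along these lines: CVSS, whose Attack Vector metric rates a remotely exploitable flaw far above one requiring physical access. The sketch below borrows the CVSS v3.1 Attack Vector coefficients to illustrate that distinction, but the scoring function itself is a simplified stand-in, not the real CVSS base-score equation:

```python
# Toy severity sketch illustrating Daniel's remote-vs-physical distinction.
# The Attack Vector coefficients match CVSS v3.1, but toy_severity() is a
# deliberately simplified stand-in for the real CVSS base-score formula.
ATTACK_VECTOR = {
    "network": 0.85,   # exploitable remotely over the internet
    "adjacent": 0.62,  # requires the same local network segment
    "local": 0.55,     # requires a local account or shell
    "physical": 0.20,  # requires hands on the hardware
}

def toy_severity(vector: str, impact: float) -> float:
    """Toy 0-10 score: reach of the attack times impact (0.0-1.0)."""
    return 10 * ATTACK_VECTOR[vector] * impact

# The same maximum-impact bug rates very differently by access required:
print(toy_severity("network", 1.0))   # 8.5 -- remotely exploitable
print(toy_severity("physical", 1.0))  # 2.0 -- needs physical access
```

Under a rubric like this, a bug such as the one behind WannaCry, reachable over the network with no user interaction, would sit near the top of the scale, which is exactly the kind of standardized comparison Daniel is calling for.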