So much of our nation's cybersecurity focus is on controlling information -- who can access it, use it, redistribute it, and talk about it. Experts talk about privacy and access controls, authentication and identity management. But what happens when cybersecurity efforts fail, systems are compromised and information is shared on the Internet with millions of others?
There is a growing phenomenon of post-cybersecurity efforts that focus not on technical or policy solutions, but on good old cause-and-effect human influence. Once information is leaked, companies and the government warn Internet users and potential viewers to stay away, or else.
The most significant example of this is in how the government has handled the WikiLeaks disclosures. Several agencies, including the Department of Defense, issued notices and memos warning employees and contractors of the hazards (to their employment) of taking a peek at WikiLeaks. On February 11th, the Defense Security Service sent a memo to "contractors cleared under the national industrial security program" warning that:
Unauthorized disclosures of classified documents (whether in print, on a blog, or on websites) does not alter the documents' classified status or automatically result in declassification of the documents. To the contrary, classified information, whether or not already posted on public websites or disclosed to the media, remains classified and must be treated as such, until it is declassified by an appropriate original classification authority.
The notice then goes on to lay out dos and don'ts for cleared contractors, including instructions on what to do if a contractor happens to access information inadvertently. If contractors fail to follow these rules, they could be found to have committed security violations, and their security clearances could be in jeopardy.
One of the reasons the government gives for barring cleared contractors from viewing the classified information is that, under the responsibilities assumed when they received their clearances, they are obligated to handle classified information properly. Ironically, while those entrusted with our nation's secrets may not look at the site, those without clearances can do so freely. We have, in effect, created more obstacles for cleared personnel than for those without security clearances.
In the private sector, Sony recently issued a similar edict with regard to its PlayStation 3 systems. After discovering a hack of the PlayStation 3 that enabled users to install other operating systems on their consoles, Sony not only sued the hacker but warned its customers:
Consumers using circumvention devices or running unauthorized or pirated software will have access to the PlayStation Network and access to Qriocity services through PlayStation 3 system terminated permanently.
To avoid this, consumers must immediately cease use and remove all circumvention devices and delete all unauthorized or pirated software from their PlayStation 3 systems.
Again, the onus, following a failed cybersecurity measure, falls on a third party.
Both incidents remind me of watching television as a child, my parents telling me to close my eyes and keep them closed when an inappropriate scene came on. Closing my eyes did not make the scene go away, though it did keep me from seeing something that would either frighten me or cause me to ask a million questions. If I did happen to peek and get caught, I would inevitably be sent to my room.
Is the "Close Your Eyes and Don't Peek" approach to a cybersecurity failure a feasible one? In the government's case, it is unclear how it could even police and enforce a policy requiring contractors not to look at a publicly available website, regardless of its content. Government systems can be monitored, but how does the government determine whether Contractor A viewed a WikiLeaks page from his home? Even if the government determined that Contractor A's computer visited the site, how can it show that Contractor A was the individual who took a peek, and not his wife, son, or mother-in-law?
In the Sony example, the company took steps to protect its intellectual property (and that of its game-producing colleagues), as well as control access to the software and hardware that make up its console. Sony can't be faulted for its actions, but the actions do raise questions about whether and how a company can effectively protect its brand and intellectual property once its products reach the public.
While we are building stronger cybersecurity capabilities, we may need to assess how we better protect our networks, software, hardware and intellectual property after a compromise and disclosure. We need better resiliency so that in a time of adversity -- whether corporate or government -- we can create a "steeling effect" to counter the failures of our nation's cybersecurity efforts. Otherwise, we will be left with See No Hack, Hear No Hack, Speak No Hack.