Jason Thompson, formerly of Q1 Labs, is the vice president of worldwide marketing at SSH Communications Security.
Almost as long as computers have been connected, malicious actors have been out to steal sensitive data.
The long human history of encryption was first applied to the digital universe with the National Bureau of Standards’ Data Encryption Standard in 1977. Two decades later, the OpenSSL project was created, giving websites a freely available toolkit for encrypting traffic and user information across the Web.
Without OpenSSL, secure government services delivered via the Internet would be largely impossible. OpenSSL is now the encryption library of choice for roughly two-thirds of all websites.
Yet even though it is the de facto standard for Internet encryption, the OpenSSL project lacks the human and financial resources to address every potential issue with the software. The Heartbleed vulnerability was a direct result of that gap, underscoring the serious threats that arise when open source software cannot keep pace with modern attacks.
Understanding the critical risks such vulnerabilities could pose to its security, Google re-evaluated its use of open source software and created BoringSSL, a fork of the OpenSSL project. The company had been maintaining more than 70 patches to OpenSSL -- with more on the horizon -- which made it difficult to keep multiple code bases consistent and raised security concerns.
This concern was driven home when four hackers took up a challenge by website-security firm Cloudflare and succeeded in exploiting the Heartbleed vulnerability to steal a server’s private SSL keys.
This is why an OpenSSL vulnerability can be so dangerous.
Secure Shell Implementation -- Don’t Forget the Risks
Operating quietly in the background of most government networks, the Secure Shell protocol encrypts the connections through which administrators and automated processes access an agency’s most sensitive data assets.
Secure Shell keys are simply text files that can be easily uploaded to the appropriate system. Associated with each key is an identity -- a person or a machine -- that is granted access to information assets and can perform specific tasks, such as transferring a file or dropping a database, depending on its assigned authorizations. In the case of Secure Shell keys, those basic text files provide access to some of the most critical information within an agency.
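For instance, a single OpenSSH authorized_keys entry carries both the identity (the comment field) and the authorizations. The restriction options below are standard OpenSSH ones; the script path and comment are purely illustrative:

```
# Illustrative entry: this key may only run the backup script -- no
# terminal, no forwarding -- regardless of what the client requests.
command="/usr/local/bin/backup.sh",no-pty,no-port-forwarding,no-agent-forwarding ssh-ed25519 AAAA...key-material... backup-job@app-server-01
```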
Because of this broad access, Secure Shell key management is a significant security concern. In a recent report, IDC specifically called out identity and access management, or IAM, risks within Secure Shell implementations. For example, the use of Secure Shell keys can circumvent IAM controls, and agencies have only a limited ability to identify and remove revoked, orphaned and unauthorized keys.
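Identifying those keys starts with being able to read them. The sketch below is a minimal Python illustration (the function names are my own, and the simple parser skips the edge case of options containing quoted spaces, such as command="a b"): it pulls the options, key type, OpenSSH-style SHA-256 fingerprint and identity comment out of an authorized_keys entry, which is the raw material for any revoked-or-orphaned-key audit.

```python
import base64
import hashlib

# Common OpenSSH public-key type strings; an entry's first field is either
# one of these or an options list.
KEY_TYPES = ("ssh-rsa", "ssh-dss", "ssh-ed25519", "ecdsa-sha2-nistp256",
             "ecdsa-sha2-nistp384", "ecdsa-sha2-nistp521")

def fingerprint(b64_blob: str) -> str:
    """SHA-256 fingerprint in OpenSSH's display format (unpadded base64)."""
    digest = hashlib.sha256(base64.b64decode(b64_blob)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

def parse_entry(line: str):
    """Parse one authorized_keys line into (options, key_type, fingerprint,
    comment). Returns None for blank lines and comments."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    fields = line.split()
    if fields[0] in KEY_TYPES:                       # no options present
        options, key_type, blob = "", fields[0], fields[1]
        comment = " ".join(fields[2:])
    else:                                            # leading options field
        options, key_type, blob = fields[0], fields[1], fields[2]
        comment = " ".join(fields[3:])
    return options, key_type, fingerprint(blob), comment
```

Collecting these tuples across every server makes it possible to ask which identities exist and which options (if any) constrain them.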
IAM control has become even more important with the rapid growth of machine-to-machine activity.
Where Identity and Access Management Comes Up Short
Agencies use identity and access management solutions to help control access to cloud infrastructure, applications, servers, and both structured and unstructured data.
These solutions manage the identities assigned to interactive human users well -- but not the far larger number of identities assigned to the automated processes that drive much of the computing in large-scale data centers.
As nonhuman identities continue to multiply, conventional identity management won’t cover the vast majority of identities present in an agency -- the identities performing most of its operations.
Machine-to-machine data transfers require a secure encrypted channel. For this reason, most of the identities that enable these processes use Secure Shell for authentication and authorization. However, holes exist in the governance of identities that use Secure Shell.
Instead of a centralized provisioning procedure, application developers, application owners and process owners may all hold privileges to create identities and assign authorizations. This often leads to a lack of proper control and oversight over the creation of identities and their authorizations.
Without central management and visibility, government agencies cannot be sure how many Secure Shell identities have been created, what these identities are authorized to perform and what authorizations are in fact no longer needed.
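Central visibility can start small: a sweep of the filesystem for authorized_keys files shows how many key-based identities exist and where. The following is a hedged Python sketch (the function name and the idea of starting from a home-directory root such as "/home" are my own assumptions, not an agency standard):

```python
from pathlib import Path

def inventory_authorized_keys(root: str) -> dict:
    """Map each authorized_keys file found under `root` (e.g. "/home") to
    the number of key entries it contains, skipping blanks and comments."""
    inventory = {}
    for path in Path(root).rglob("authorized_keys"):
        entries = [ln for ln in path.read_text().splitlines()
                   if ln.strip() and not ln.lstrip().startswith("#")]
        inventory[str(path)] = len(entries)
    return inventory
```

Run against user and service-account home directories on every server, the resulting counts give a first answer to the question of how many Secure Shell identities have actually been created.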
The Point Is Not That Open Source Is Bad
Open source vulnerabilities present the opportunity for government entities to reconsider how they use and manage open source technologies, both in their products and within their agencies. And that’s a good thing. The point is not that open source is bad. Rather, it is a call to action for technology leaders to take another look at the critical but oft-forgotten infrastructure their agencies are riding on, especially when it is something as ubiquitous and critical as encryption technologies like SSL or Secure Shell.
Here are important questions to consider:
- Is the agency’s open source technology being properly monitored?
- Can we rotate keys or update to new versions in order to rapidly respond to vulnerabilities?
- Do we know who is creating keys within the agency?
- Who has access to what information? Are we able to see whether someone has acted maliciously?
- Is our open source software supported by either a vendor or internal resources, or are we relying solely on someone’s good will?
Someone Is Bound to Discover Vulnerabilities -- So Be Ready
In general, OpenSSL encrypts government websites and user information just fine.
However, someone with the appropriate skill set is bound to discover vulnerabilities in any software, and OpenSSL is no exception. The problem is that this particular software is in charge of encrypting your data. Hackers who exploit an OpenSSL flaw can steal private keys -- including Secure Shell keys -- giving them access to your network and its sensitive data.
To protect against this kind of exploitation, industry standards for using OpenSSL and implementing the Secure Shell protocol include strong identity and access management controls, centralized provisioning and visibility into who is authorized to create keys -- and for what purpose.
Adopting these standards will go a long way toward protecting critical government and user data.