Demystifying Access Control and Zero Trust

As agencies consider making temporary remote arrangements permanent, officials should start with proven, ubiquitous encryption technology.

The COVID-19 pandemic has forced many organizations to adapt to unprecedented numbers of personnel working remotely. Far from being a temporary deviation, remote work seems likely to persist: a recent Microsoft survey found that 73% of employees want to continue to work from home after the pandemic. However, this "new normal" creates a significant security problem, because IT resources that were usually accessible only within the organization's local area network, or LAN, must now be made available remotely.

In many instances, LAN-based IT resources are available without any special authorization, as these resources can only be accessed within an organization's premises, using physically connected devices, by vetted personnel, working within a physical access-controlled boundary.

To some extent, organizations had already begun to extend these boundaries with on-premises Wi-Fi. Those with their own premises that include outside areas, such as car parking, could largely contain Wi-Fi access within their physical perimeter. Remote access changes the scenario drastically, as the physical boundary disappears. Virtual private networks, or VPNs, are the standard mechanism used to secure remote access, but even when coupled with secondary authentication, they cannot replace traditional physical access controls; network security alone is no longer enough. Access to IT resources now requires a far more granular approach, and this is what Zero Trust is all about.

Zero Trust assumes that all networks, even internal firewalled networks, are insecure. Organizations must individually evaluate each online resource to assess its value, decide who should be allowed access and how access should be regulated. Of course, this does not mean organizations should abandon network security in favor of Zero Trust. Overall, security is strengthened with multiple layers of different types of protection.

Security needs to run from each endpoint (the users and their devices) through to the specific IT resource (server) and, ideally, on to the service or application (end-to-end). Where multiple services share the same resource and security terminates at the resource, the most sensitive service must set the level of security enforcement. TLS proxies such as Nginx and stunnel, underpinned by FIPS 140-2-validated OpenSSL, are tried-and-proven open-source technologies that can add TLS and client certificate support to services retroactively. Endpoints should require encrypted network access (e.g., VPN or HTTPS/TLS) combined with two-factor authentication or, more broadly, multi-factor authentication, or MFA.
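As an illustration of that retrofit pattern, below is a minimal sketch of a stunnel-style TLS front end, written in Go purely for illustration: it terminates TLS with mandatory client certificates and relays the plaintext stream to a legacy service that knows nothing about TLS. The certificate, key and CA file names, the ports, and the backend address are placeholder assumptions, not part of any particular product's configuration.

```go
// Sketch of a TLS-terminating front end that requires client certificates
// and forwards plaintext to a legacy, TLS-unaware backend service.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"io"
	"log"
	"net"
	"os"
)

func main() {
	// Server certificate and key for the proxy itself (placeholder files).
	cert, err := tls.LoadX509KeyPair("server.crt", "server.key")
	if err != nil {
		log.Fatal(err)
	}

	// CA used to verify client certificates.
	caPEM, err := os.ReadFile("client-ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	cfg := &tls.Config{
		Certificates: []tls.Certificate{cert},
		ClientAuth:   tls.RequireAndVerifyClientCert, // reject clients without a valid certificate
		ClientCAs:    caPool,
		MinVersion:   tls.VersionTLS12,
	}

	ln, err := tls.Listen("tcp", ":8443", cfg)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("TLS front end listening on :8443")

	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go forward(conn, "127.0.0.1:8080") // legacy plaintext service
	}
}

// forward relays bytes between the authenticated TLS client and the backend.
func forward(client net.Conn, backendAddr string) {
	defer client.Close()
	backend, err := net.Dial("tcp", backendAddr)
	if err != nil {
		log.Print(err)
		return
	}
	defer backend.Close()
	go io.Copy(backend, client)
	io.Copy(client, backend)
}
```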

To avoid man-in-the-middle MFA attacks, MFA should ideally be integrated with the encrypted communication session itself. Further, to avoid attacks such as the SolarWinds MFA bypass, asymmetric cryptography should be combined with hardware devices issued to end users, such as smartcards or dongles, that can protect the MFA keys. However, this type of approach is typically both inconvenient and costly.

A more practical approach is to tick as many boxes as possible while preserving enough convenience to guarantee widespread adoption, and it makes sense to start with proven, ubiquitous encryption technology. Public key infrastructure, or PKI, with all its limitations, remains the most widely supported means of implementing and managing communications security and is supported by every major browser and operating system. The underlying standard cryptographic layer, TLS, also supports mutual authentication using client-side PKI certificates, which can satisfy the MFA requirement.
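To make the mutual-authentication point concrete, here is a minimal sketch of the client side in Go, again for illustration only; the certificate and key file names and the URL are assumptions. Because the client certificate is verified inside the TLS handshake itself, the proof of key possession is bound to the encrypted session rather than sent across it.

```go
// Sketch of an HTTPS client presenting a client certificate during the TLS
// handshake; "client.crt", "client.key" and the URL are placeholders.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Load the user's certificate and private key (in a real deployment the
	// key would ideally live in a platform or hardware-backed key store).
	clientCert, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		log.Fatal(err)
	}

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				// Offered when the server requests a client certificate.
				Certificates: []tls.Certificate{clientCert},
				MinVersion:   tls.VersionTLS12,
			},
		},
	}

	resp, err := client.Get("https://intranet.example.gov/status")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```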

TLS client certificates are already integrated with the encrypted communication session (check) and are based on asymmetric cryptography (check). With client certificates, the private key associated with a user's public key can be stored safely, as most modern computing devices have platform-secured user key stores, often integrated with hardware security such as a Trusted Platform Module, or TPM (check). Client certificates are also often used to remove the need for user passwords. However, passwords have their own unique properties and, coupled with client certificates, make a potent MFA combination.
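Below is a sketch of that combination on the server side, with file names and the checkPassword helper as illustrative stand-ins rather than a prescribed design: the TLS layer verifies the client certificate (something the user has), and the application then checks a password sent over the already-encrypted session (something the user knows).

```go
// Sketch of an HTTPS service combining a verified client certificate with a
// password check as a second factor. Names and files are illustrative only.
package main

import (
	"crypto/subtle"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"net/http"
	"os"
)

// checkPassword is a stand-in for a real credential-store lookup.
func checkPassword(user, pass string) bool {
	return subtle.ConstantTimeCompare([]byte(pass), []byte("correct horse battery staple")) == 1
}

func handler(w http.ResponseWriter, r *http.Request) {
	// Factor 1: the TLS layer has already verified the client certificate.
	cn := r.TLS.PeerCertificates[0].Subject.CommonName

	// Factor 2: a password supplied over the already-encrypted session.
	user, pass, ok := r.BasicAuth()
	if !ok || !checkPassword(user, pass) {
		w.Header().Set("WWW-Authenticate", `Basic realm="restricted"`)
		http.Error(w, "password required", http.StatusUnauthorized)
		return
	}
	fmt.Fprintf(w, "hello %s (certificate) / %s (password)\n", cn, user)
}

func main() {
	caPEM, err := os.ReadFile("client-ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	srv := &http.Server{
		Addr:    ":8443",
		Handler: http.HandlerFunc(handler),
		TLSConfig: &tls.Config{
			ClientAuth: tls.RequireAndVerifyClientCert,
			ClientCAs:  caPool,
			MinVersion: tls.VersionTLS12,
		},
	}
	log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
}
```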

Purchasing client certificates can be expensive, and owning your own PKI can be onerous, typically requiring a high-security facility. However, client certificates can be managed separately, removing the more significant security burden associated with issuing server certificates. For medium to large organizations, a client-only PKI is not too onerous, and it opens the possibility of using X.509 certificate extensions to carry authorization data, providing even finer-grained access control.
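As a sketch of how such an extension might be consumed, the following Go snippet parses a client certificate and reads a role list from a private extension. The OID (1.3.6.1.4.1.99999.1) and the comma-separated string encoding are assumptions made for illustration; the article does not prescribe a particular scheme.

```go
// Sketch of reading a custom authorization extension from a client
// certificate. The private OID and the "roles" encoding are hypothetical.
package main

import (
	"crypto/x509"
	"encoding/asn1"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"strings"
)

// Hypothetical private-enterprise OID carrying a comma-separated role list.
var roleExtensionOID = asn1.ObjectIdentifier{1, 3, 6, 1, 4, 1, 99999, 1}

// rolesFromCert extracts the role list, if the extension is present.
func rolesFromCert(cert *x509.Certificate) []string {
	for _, ext := range cert.Extensions {
		if ext.Id.Equal(roleExtensionOID) {
			var roles string
			// Assumes the extension value is an ASN.1 string type.
			if _, err := asn1.Unmarshal(ext.Value, &roles); err == nil {
				return strings.Split(roles, ",")
			}
		}
	}
	return nil
}

func main() {
	pemBytes, err := os.ReadFile("client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("subject %q has roles %v\n", cert.Subject.CommonName, rolesFromCert(cert))
}
```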

Organizations can introduce additional end-to-end controls with application-layer security. This is particularly important for multimedia communications such as instant messaging, voice and video calls, conference calls, and file sharing. This type of media should be deciphered and authenticated only by the intended recipients.
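One way to achieve that property, shown purely as an illustrative sketch rather than anything this article prescribes, is an authenticated public-key construction such as NaCl's "box" (golang.org/x/crypto/nacl/box), where a message encrypted to the recipient's public key can be opened only with the recipient's private key.

```go
// Sketch of application-layer end-to-end encryption with NaCl box: relays
// that merely forward the ciphertext cannot read or silently alter it.
package main

import (
	"crypto/rand"
	"fmt"
	"log"

	"golang.org/x/crypto/nacl/box"
)

func main() {
	// Each party holds its own key pair (long-term or per-session).
	senderPub, senderPriv, err := box.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	recipientPub, recipientPriv, err := box.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}

	// A fresh random nonce accompanies every message.
	var nonce [24]byte
	if _, err := rand.Read(nonce[:]); err != nil {
		log.Fatal(err)
	}

	// Encrypt and authenticate for the recipient only, prefixing the nonce.
	ciphertext := box.Seal(nonce[:], []byte("meeting moved to 10:00"), &nonce, recipientPub, senderPriv)

	// The recipient decrypts with its private key and the sender's public key.
	plaintext, ok := box.Open(nil, ciphertext[24:], &nonce, senderPub, recipientPriv)
	if !ok {
		log.Fatal("decryption failed")
	}
	fmt.Println(string(plaintext))
}
```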

Recent attack methods are proving more devastating, and prevention requires some effort, but there is reassurance here: organizations have a host of effective options available to secure their data and communications, and the cost and complexity of those options are a fraction of the cost of a successful compromise.

Mark Currie is vice president and chief network officer of Cellcrypt Inc.