Five Eyes agencies issue guidance on securing AI


The new best practices aim to help organizations deploying AI secure their digital environments, with a particular focus on protecting AI model weights and maintaining extensive system vigilance.

Government security agencies from the Five Eyes alliance issued new guidance for deploying artificial intelligence systems securely, particularly to ensure that existing network vulnerabilities are not further exploited through the emerging technology.

Released on Monday afternoon, the best practices document is coauthored by the Cybersecurity and Infrastructure Security Agency, the FBI, the Australian Signals Directorate’s Australian Cyber Security Centre, the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre and the United Kingdom’s National Cyber Security Centre.

The guidance sets three core goals: improving the confidentiality and integrity of AI systems; ensuring known cybersecurity vulnerabilities are mitigated; and implementing a robust series of safeguards to detect and prevent malicious activity.

Given the prevalence of newer AI software in established digital networks, security authorities have been studying how best to harness the potential of AI while mitigating the risk of disaster. Ongoing security concerns for AI systems generally revolve around the exploitation of the data on which they are trained.

“Malicious actors targeting AI systems may use attack vectors unique to AI systems, as well as standard techniques used against traditional IT,” the guidance reads. “Due to the large variety of attack vectors, defenses need to be diverse and comprehensive. Advanced malicious actors often combine multiple vectors to execute operations that are more complex. Such combinations can more effectively penetrate layered defenses.”

While the best practices touch on familiar advice, such as securing the digital network and tightening the governance of the deployment environment, the guidance also notably recommends extensive software review before and during implementation.

The agencies note that two practices are key to risk mitigation: leveraging cryptographic protocols and digital signatures to confirm the integrity and origin of every artifact passing through the system, and storing all forms of code so that changes can be validated and tracked.
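The guidance does not prescribe particular tooling for this. As a rough sketch of the signing-and-verification idea, the hypothetical Python example below uses Ed25519 signatures from the `cryptography` package to check an artifact's origin and integrity before it moves further through a deployment pipeline; the key handling and artifact contents are placeholders, not part of the agencies' document.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: sign the artifact bytes (e.g., a model file or code package).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

artifact = b"model-or-code-bytes"      # placeholder artifact content
signature = private_key.sign(artifact)

# Consumer side: verify origin and integrity before promoting the artifact.
try:
    public_key.verify(signature, artifact)
    print("Signature valid: artifact may be promoted into the pipeline.")
except InvalidSignature:
    print("Artifact rejected: signature mismatch suggests tampering.")
```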

The guidance recommends automating detection, analysis and response capabilities across any network where an AI model is deployed. The authors say this level of automation can help reduce the workload of IT and security teams, with the caveat that sound judgment should determine when to bring a human perspective into the digital network supply chain.

“When considering whether to use other AI capabilities to make automation more efficient, carefully weigh the risks and benefits, and ensure there is a human-in-the-loop where needed,” the guidance says.
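The document does not spell out how that judgment should be encoded. One minimal sketch, assuming a hypothetical automated triage score, is to act automatically only on high-confidence alerts and route everything else to a human analyst:

```python
# Hypothetical human-in-the-loop gate: automate only high-confidence responses,
# escalate the rest to a human analyst. Threshold and alert fields are invented.
CONFIDENCE_THRESHOLD = 0.90

def handle_alert(alert: dict) -> str:
    confidence = alert.get("confidence", 0.0)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-contained host {alert['host']}"
    return f"escalated alert on {alert['host']} for human review"

print(handle_alert({"host": "db-01", "confidence": 0.97}))   # automated action
print(handle_alert({"host": "web-02", "confidence": 0.55}))  # human review
```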

The need to protect AI model weights is also covered in the document. Weights in AI neural networks determine how a system turns input data into a specific decision, and altering those numerical values changes the model's overall output.
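As a toy illustration of why those values matter, and not a representation of any particular model, the NumPy sketch below shows a single-neuron classifier whose decision shifts when one weight is tampered with:

```python
import numpy as np

# Toy single-neuron model: output = sigmoid(inputs . weights)
def predict(inputs, weights):
    return 1.0 / (1.0 + np.exp(-np.dot(inputs, weights)))

inputs = np.array([0.5, 1.0, -0.3])
weights = np.array([0.8, -0.2, 0.4])             # learned parameters
tampered = weights + np.array([0.0, 2.5, 0.0])   # one altered weight

print(predict(inputs, weights))    # original behavior (~0.52)
print(predict(inputs, tampered))   # shifted decision (~0.93)
```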

Five Eyes security agencies recommend “aggressively” isolating weight storage in a highly restricted digital zone and implementing further hardware protections to prevent malicious actors from manipulating them.
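Hardware isolation itself cannot be demonstrated in a few lines of code, but a complementary software-level check is to record a digest of the weight file when it enters protected storage and refuse to load anything that no longer matches. The path and digest below are hypothetical placeholders, offered only as a sketch of that idea:

```python
import hashlib
from pathlib import Path

# Hypothetical integrity check: refuse to load weights whose hash differs from
# the digest recorded when the file was placed in restricted storage.
EXPECTED_SHA256 = "replace-with-recorded-digest"
WEIGHTS_PATH = Path("/secure/models/model.weights")  # restricted location

def load_verified_weights(path: Path) -> bytes:
    data = path.read_bytes()
    if hashlib.sha256(data).hexdigest() != EXPECTED_SHA256:
        raise RuntimeError("Weight file integrity check failed; aborting load.")
    return data
```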

“AI systems are software systems,” the document concludes. “As such, deploying organizations should prefer systems that are secure by design, where the designer and developer of the AI system takes an active interest in the positive security outcomes for the system once in operation.”