Microsoft to test third-party AI models for incorporation in its security offerings


The announcement follows Anthropic’s debut of its leading-edge Mythos model, which the company says has already found thousands of high-severity vulnerabilities.

Microsoft will evaluate third-party artificial intelligence systems alongside its network monitoring tools to determine whether powerful AI models can meet its internal cybersecurity benchmarking standards.

In a Wednesday announcement, Microsoft detailed plans to partner with other members of the AI industry and eventually integrate their models into the company’s security platforms to defend against external, AI-driven threats.

“Recent advances in AI model capabilities are changing how vulnerabilities are discovered and exploited,” Microsoft Security Chief Architect and Corporate Vice President Ales Holecek said in a blog post. “We are accelerating this work and partnering with the industry to use leading models, paired with our platforms and expertise, to turn AI-driven discovery into protection at scale.”

The announcement follows Anthropic’s debut of its ultra-powerful Claude Mythos Preview AI model earlier this month and the associated Project Glasswing, through which companies like Microsoft have been granted access to Mythos to test the system’s vulnerability discovery capabilities. Microsoft confirmed in its blog post that it is working with Anthropic and other companies to coordinate a defensive response to the model’s discoveries.

Microsoft is testing models like Mythos alongside its own open-source model, CTI-REALM, to monitor its digital network security. The company said it will focus on using the models in three areas: running continuous network vulnerability scans, reducing attack exposure and building new solutions designed to help customers use advanced AI for their own security needs. The vulnerability scans will also cover select open-source code bases.

Each of these three implementation areas incorporates existing Microsoft solutions, such as Microsoft Defender, Microsoft Security Exposure Management and the company’s Security Development Lifecycle framework, to support effective AI model testing and implementation.

Microsoft will also update its Security Exposure Management solution with a new component called Secure Now, offering greater visibility into threat identification along with guidance on proactive next steps customers can take to secure their systems.

“While models are powerful on their own, without prioritization and context, large volumes of results can overwhelm development teams,” the blog post reads. “These new solutions are designed to pair model output with the context and security solutions needed for enterprises to drive security effectiveness at scale.”

Microsoft also said it will continue to share updates as the testing process gets underway. In June, the company is expected to debut a new AI-powered, multi-modal security scanner that leverages multiple types of data to detect sophisticated threats.