Pentagon’s new bug bounty seeks to find bias in AI systems

Defense Department Chief Digital and Artificial Intelligence Officer Craig Martell speaks at the Defense Intelligence Agency’s DoDIIS Worldwide Conference, Dec. 13, 2022.

Defense Department Chief Digital and Artificial Intelligence Officer Craig Martell speaks at the Defense Intelligence Agency’s DoDIIS Worldwide Conference, Dec. 13, 2022. Myles Scrinopskie/Defense Intelligence Agency

The head of DOD’s Chief Digital and Artificial Intelligence Office said “the outcome of the AI bias bounties could powerfully impact future DOD AI policies and adoption.”

The Pentagon’s Chief Digital and Artificial Intelligence Office — or CDAO — announced on Monday that it launched a public bug bounty exercise to detect biases in AI systems, with an initial focus on risks in large language models.

CDAO said its Responsible AI Division will oversee the two separate AI bias bounties “to generate novel approaches to algorithmically auditing and red teaming AI models, facilitating experimentation with addressing identified risks and ensuring the systems are unbiased, given their particular deployment context.”

The first of these exercises will run from Jan. 29 through Feb. 27 and will focus on identifying “unknown areas of risk in Large Language Models (LLMs), beginning with open source chatbots, so this work can support the thoughtful mitigation and control of such risks.” 

CDAO said a second bias-focused AI bounty exercise “will soon follow,” although it did not disclose the date of its launch. 

The active bug bounty exercise is being overseen by software firms ConductorAI and Bugcrowd and “seeks to demonstrate how bias may relate to DOD use cases,” according to the registration website for the program.

A “bounty brief” from ConductorAI and BugCrowd said participants “stand to be rewarded for identifying instances of protected class bias,” with the Pentagon providing “a $24,000 pot that will be distributed to submitters based on a scoring rubric and evaluation” by the two companies. 

“The term bias in this document refers to the legal notion of bias, where the algorithm outputs a disproportionate impact on a certain federally protected class,” the brief said. “Thus, AI bias is the presence of prejudice or unfair inaccuracies within the predictions or behaviors of an AI model, in this case a Large Language Model.”

LLMs power, in part, generative AI chatbots such as OpenAI’s ChatGPT. The Pentagon and CDAO have been examining the use of generative AI tools and have taken broader steps to chart out the safe adoption of these technologies across the Department of Defense. 

The Pentagon rolled out a series of ethical AI principles in February 2020, and CDAO subsequently released a responsible AI toolkit in November 2023 to help guide the department’s use of these emerging capabilities. 

DOD also launched a task force last year — known as Project Lima — to examine potential uses of generative AI technologies. A CDAO official said in November that the task force had already found “over 180 use cases across DOD” where generative AI tools could be helpful.

Chief Digital and Artificial Intelligence Officer Craig Martell said in a statement that his office is “actively monitoring” risks associated with LLMs and added that “the outcome of the AI bias bounties could powerfully impact future DOD AI policies and adoption.”

CDAO’s bug bounty initiative is the Pentagon’s latest effort to bring in ethical hackers to help the department identify and rectify potential cyber vulnerabilities. DOD launched its “Hack the Pentagon” program in 2016 to leverage public security researchers’ knowledge to better secure its systems and networks from outside threats.