DIU Issues Step-by-Step Guide for Defense Stakeholders to ‘Responsibly’ Use AI


The framework is meant to promote transparency and accelerate the Pentagon’s adoption of trustworthy commercial technologies.

The Defense Innovation Unit produced a comprehensive framework to help contractors and federal officials gauge whether their technology and programs align with the Defense Department’s Ethical Principles for Artificial Intelligence.

DIU’s new Responsible AI Guidelines, released on Monday, mark the outcome of a months-long initiative to integrate those governing principles into the unit’s own commercial prototyping and procurement efforts.

“The [RAI Guidelines] are specifically intended for use on acquisition programs run by DIU,” DIU AI and Machine Learning Technical Director Jared Dunnmon told Nextgov in an email on Tuesday. “DIU actively invites others to make use of these guidelines, as well as to provide feedback, comments, or suggested updates by reaching out at responsibleai@diu.mil.”

At this point, DOD deploys AI for a wide range of functions, including military surveillance, weapons maintenance, administrative office operations and cybersecurity. In February 2020, after public controversy surrounding some of its work with the technology, the department formally adopted five principles to guide what it called the ethical development of AI capabilities. The following month, DIU kicked off a strategic effort to explore methods for implementing those principles in prototype projects with DOD collaborators and to learn best practices from experts in industry, academia and federal agencies.

In a 33-page report accompanying its RAI Guidelines, the unit offers materials to help inform how officials apply the measures to the technologies they are developing or purchasing. Step-by-step procedures are laid out for companies, DOD stakeholders and program managers to follow across the three phases of an AI system’s lifecycle: planning, development and deployment.

Visual workflows and accompanying worksheets for each phase walk users through the questions to ask and the considerations to weigh regarding the people who would use the technology, as well as anyone who might be harmed by it and how, before and throughout each stage of the AI system’s existence and use.

Dunnmon noted that those resources are intended to instruct and guide people “on how to properly scope AI problem statements” and provide detailed guidance on notions that each of the various players “should keep in mind as they proceed through each phase of AI system development.”

“The guidance can be incorporated into the statement of work as well as contractual milestones,” he wrote in the email.

DIU’s report also includes sections highlighting its results from applying the guidelines to specific AI-centered projects unfolding among DOD and industry collaborators. They involve using advanced medical imaging, as well as analytics of commercially available data to track and counter transnational criminal groups. “DIU is actively deploying the RAI guidelines on a range of projects that cover applications including predictive health, underwater autonomy, predictive maintenance, and supply chain analysis,” officials added in the report.

Lessons learned from those pilots and the broader initiative are also included. The foundational tenets of responsible AI listed in the report make clear that the guidelines are meant to be adaptive, actionable, useful, concrete and realistic, as well as provocative, in that they are designed to spur deep discussion.

Following the guidelines’ release, some critics said they were not convinced the framework would lead to immediate change. But as Dunnmon noted, they can share those concerns with DIU directly.

DIU officials also emphasized in the RAI report that “this work is not intended to be a final product” and that they are actively seeking input to incorporate into routine updates.