DARPA Wants Artificial Intelligence To Explain Itself

If humans can't understand why AI systems make certain decisions, they probably won't trust them.

The Pentagon wants to incorporate artificial intelligence into more of its systems, but first it needs to ensure its personnel understand what drives AI decisions, a new broad agency announcement suggests.

Potential applications for defense are endless—autonomous aerial and undersea war-fighting or surveillance, among others—but humans won't make full use of AI until they trust it won't fail, according to the Defense Advanced Research Projects Agency. A new DARPA effort aims to nurture communication between machines and humans by investing in AI that can explain itself as it works.

An intelligence analyst who receives recommendations from algorithms about what to investigate further "needs to understand why the algorithm has recommended certain activity," the BAA said. And the personnel overseeing a new autonomous system need to know why it makes decisions so they "can decide how to use it in future missions."


DARPA is looking for researchers who can create prototypes of AI systems that display their decision-making rationale—a capability DARPA calls "Explainable AI," or "XAI." The agency also wants teams to study the psychological theory behind effective explanations—should a machine describe its actions using analogies or examples?—and to assess whether a user's trust in an AI system grows when it explains itself. DARPA hosted a Proposers' Day on the topic Thursday.

The end user, DARPA writes, is any person who "depends on decisions, recommendations or actions produced by an AI system, and therefore needs to understand the rationale for the system's decisions."

Ultimately, the user should be able to understand the answers to questions like "Why did you do that?" "When do you fail?" or "When can I trust you?"

Historically, the inner workings of the most sophisticated machine-learning systems, such as deep learning, have been the least transparent to users, according to DARPA, while the systems that are easiest to understand, such as simple decision trees, tend to be less accurate.
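That trade-off is easy to demonstrate. The short Python sketch below (which assumes scikit-learn; the BAA prescribes no particular library or technique) trains a small decision tree and prints its full decision logic as human-readable rules, the kind of built-in rationale a deep neural network does not offer out of the box.

```python
# Illustrative sketch only: scikit-learn is an assumption, not something DARPA's BAA specifies.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# Every prediction can be traced through these human-readable branch tests,
# which is roughly the kind of rationale an "explainable" system exposes.
print(export_text(tree, feature_names=list(iris.feature_names)))
```

A deep network trained on a harder problem would typically be more accurate, but it offers no comparably simple trace of why it chose one answer over another.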

Explainable AI could help humans apply the technology across more sectors globally, but it's especially important for the Defense Department, which is "facing challenges that demand the development of more intelligent, autonomous and symbiotic systems," the BAA said.