Soldiers Don’t Trust Robot Battle Buddies. Can Virtual Training Fix That?


Allowing soldiers to train their robot wingmen in game environments could be the key to human-machine teaming.

You might think that troops would be eager to incorporate robots and automata into operations, since military robots are intended to save soldiers, airmen, and others from the “dull, dirty, dangerous” jobs that operators are called on to perform in combat settings. But a new survey published in the U.S. Air Force’s Journal of Indo-Pacific Affairs shows that frontline military personnel are actually more apprehensive about the prospect than their commanders.

The paper, based on a survey of 800 officer cadets and midshipmen at the Australian Defence Force Academy, found that “a significant majority would be unwilling to deploy alongside fully autonomous” lethal autonomous weapons systems, or LAWS, and that “the perceived safety, accuracy, and reliability of the autonomous system and the potential to reduce harm to civilians, allied forces, and ADF personnel are the most persuasive benefits,” as opposed to other factors such as cost savings.

This might be a problem for a Pentagon eager to begin incorporating autonomous systems into exercises and operations. Military leaders have been pushing new concepts of operation built around human-machine teaming for years, with the goal of augmenting soldiers’ ability to carry gear, sense their environment, and respond to threats with help from ground robots, small tactical drones, and even armed robots.

So how do you get the troops who have to fight alongside robots to trust them? It’s not a new dilemma, and some excellent scholarship has gone into recognizing and defining it. But a recently published paper from the Naval Postgraduate School offers a fresh look at the problem from an operator’s perspective.

Marine Corps Maj. Daniel Yurkovich presented his paper, “This Is My Robot. There Are Many Like It But This One Is Mine,” on Monday at the National Defense Industrial Association’s annual conference. Yurkovich argues that the “inability to (a) understand artificial intelligence (AI) and (b) train daily, will compound to create an atmosphere of mistrust in valuable systems that could otherwise improve the lethality of Infantry Marines.”

The key to building that trust might be letting operators help train the AI-powered machines that serve beside them, rather than just handing a soldier, Marine, or airman a robot and sending the pair off to war together. “Teaching and developing AI agents within a simulated environment by the end user indicate there is the potential for better trust in the AI agent by the end-user when placed as a teammate” within a human-machine team, Yurkovich wrote.

This is an approach called interactive machine learning. As Microsoft researcher Saleema Amershi and her colleagues described it in a seminal 2014 paper, interactive machine learning is fundamentally different from an algorithm you simply train on data and then let loose on more data.

Interactive machine learning brings not just the algorithm designer but also the user into the process of designing updates. This allows users to “interactively examine the impact of their actions and adapt subsequent inputs to obtain desired behaviors...even users with little or no machine-learning expertise can steer machine-learning behaviors through low-cost trial and error or focused experimentation with inputs and outputs.” 
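In code, the loop Amershi and her colleagues describe is small: the model proposes, the user corrects, and the model updates immediately so the user can see the effect of each correction. Here is a minimal sketch, assuming a toy binary-labeling task and scikit-learn’s incremental SGDClassifier, with a stand-in function playing the role of the human; none of the names or data come from the paper itself.

```python
# Minimal interactive-machine-learning loop (illustrative sketch only).
# The "user" is a stand-in function; in a real system it would be the
# operator confirming or correcting the model's proposal.
import numpy as np
from sklearn.linear_model import SGDClassifier

CLASSES = np.array([0, 1])          # e.g., 0 = "ignore", 1 = "flag"
model = SGDClassifier()             # supports incremental updates via partial_fit

def ask_user_for_label(sample):
    """Stand-in for the operator: here a toy rule, in practice a person."""
    return int(sample.sum() > 0)

rng = np.random.default_rng(0)

# Seed the model with one corrected example so it can make its first prediction.
first = rng.normal(size=(1, 4))
model.partial_fit(first, [ask_user_for_label(first[0])], classes=CLASSES)

agreements = 0
for _ in range(200):
    sample = rng.normal(size=(1, 4))            # a new observation arrives
    proposed = model.predict(sample)[0]         # model proposes an answer
    label = ask_user_for_label(sample[0])       # user confirms or corrects it
    agreements += int(proposed == label)
    model.partial_fit(sample, [label])          # immediate, low-cost update
print(f"Model matched the user on {agreements} of 200 examples.")
```

The point of the pattern is the tight loop: each correction is a single, cheap update whose effect the user can observe on the very next proposal, which is what lets non-experts steer the model by trial and error.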

Of course, operators have enough to do without devoting hours, sometimes hundreds of hours, to training robots in even simple real-world tasks. Yurkovich recommends training the robot’s brain in a virtual game setting first, then transferring it to the physical robot.

“An Infantry Marine is given a robot with a removable AI device (RAID) and a compatible game console,” he wrote. “The RAID is preloaded with a baseline of automation that mimics the baseline of knowledge gained by Marines at the School of Infantry prior to arriving at their first Infantry Battalion. The game console, capable of establishing a connection with the RAID and the LVC-TE [live, virtual, and constructive training environment], is used by the Marine to interactively train with his robot in a virtual environment...When it is time for live training or operations, the Marine installs his RAID into his robot. Now, the functioning robot and Marine have become a live team with calibrated trust and tendencies built within a simulated environment.”
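The division of labor is the important part: learning happens on the game-console side, the resulting behavior is written to the removable device, and the robot side only loads and executes it. The sketch below is a toy illustration of that split, assuming a made-up grid-world stand-in for the simulated environment and a saved NumPy file standing in for the RAID; the environment, file name, and training method are invented for illustration, not drawn from Yurkovich’s paper.

```python
# Toy "train in simulation, transfer to the robot" sketch (illustrative only).
import numpy as np

N_STATES, N_ACTIONS = 16, 4        # tiny 4x4 grid world; actions: up/down/left/right

def simulated_step(state, action):
    """Stand-in for the virtual training environment on the game console."""
    moves = [-4, 4, -1, 1]
    nxt = min(max(state + moves[action], 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0   # goal in the far corner
    return nxt, reward

def train_in_simulation(episodes=500, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Q-learning in the simulated environment; returns the learned behavior table."""
    rng = np.random.default_rng(seed)
    q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        state = 0
        for _ in range(50):
            if rng.random() < eps:
                action = int(rng.integers(N_ACTIONS))   # explore
            else:
                action = int(q[state].argmax())          # exploit
            nxt, reward = simulated_step(state, action)
            q[state, action] += alpha * (reward + gamma * q[nxt].max() - q[state, action])
            state = nxt
            if reward > 0:
                break
    return q

# "Game console" side: train, then write the behavior to the removable device.
q_table = train_in_simulation()
np.save("raid_policy.npy", q_table)    # the saved file plays the role of the RAID

# "Robot" side: load the trained behavior and act on it, with no further learning.
policy = np.load("raid_policy.npy")
state = 0
for _ in range(10):
    state, _ = simulated_step(state, int(policy[state].argmax()))
print("Robot reached state", state)
```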

That’s not the only work to be done in building trust between operators and the robots they will go into battle with, and Yurkovich’s own experiment on the subject was inconclusive. But giving operators experience actually shaping the behaviors they want in their robots is probably better than handing them something straight out of a box that they’ve never interacted with before, as the survey of Australian cadets suggests.

It’s not a far-off problem. Over the weekend, an Iranian nuclear scientist was assassinated by what the Iranian news agency Fars described as a remote-controlled machine gun. ISIS has famously used armed drones in its attacks, and Russia has deployed ground robots to Syria. There’s no indication that any of these machines were fully autonomous, but the way their operators used them suggests the barrier was technical rather than ethical.