On AI and the military, researchers look to draw a line

Google researchers and academics across the world are petitioning organizations on the cutting edge of AI to steer their efforts away from military applications.


Researchers with Google and academics across the world are petitioning organizations on the cutting edge of artificial intelligence research to steer their efforts away from military AI applications, particularly autonomous lethal weapons.

Concerns about the ethical implications of AI are nothing new, but as governments and industry continue to invest in a range of technologies that fall under the artificial intelligence umbrella, the question of where -- and how -- researchers and companies should draw ethical lines around military applications has become increasingly relevant.

In an open letter first published by the New York Times on April 4, Google employees urged CEO Sundar Pichai to end the company's participation in Project Maven, a Pentagon program designed to apply big data analytics and machine learning to aerial imagery captured by drones in order to improve military decision-making. The employees who signed -- more than 3,000, according to the Times -- argued the company “should not be in the business of war.”

“Building this technology to assist the US Government in military surveillance -- and potentially lethal outcomes -- is not acceptable,” the letter read.

A group of nearly 60 AI experts and academics from around the world recently put out a similar letter, calling for a boycott of South Korea-based KAIST University over its decision to partner with Hanwha Systems, a leading South Korean defense contractor, to open a new research center focused on the convergence of AI and military technology.

Marie DesJardins, a professor of computer science at the University of Maryland, Baltimore County and one of the signatories of the KAIST letter, told FCW she is not being alarmist when it comes to AI. DesJardins said she doesn’t believe in “Terminator” scenarios where sentient machines turn on their human masters, and she recognizes that even if the U.S. eschews the use of AI in military operations, other countries won’t necessarily follow suit.

She has concerns about other implications of the technology, but she draws a bright red line at autonomous weapons systems, arguing that, left unchecked, they could one day facilitate killing on a scale that would make them as dangerous as nuclear weapons. DesJardins said she signed the letter because KAIST was not clear about whether the center would draw any ethical boundaries around its research, while the partnership with Hanwha Systems, which traffics in missile systems, tanks and other weapons of war, indicated the potential for work on autonomous weapons.

“We need to be working globally to think about this issue, just as we thought a lot as a species about nuclear proliferation,” said DesJardins. “There’s the same potential here; you can develop essentially weapons of mass destruction very easily if you have autonomous weapons.”

The letter spurred a response from KAIST University President Sung-Chul Shin. In a letter obtained by FCW, Shin assured the signatories that the research center “aims to develop algorithms on efficient logistical systems, unmanned navigation [and] aviation training systems,” but that it will steer clear of “autonomous weapon[s] lacking meaningful human control.”

“I would like to reaffirm that KAIST does not have any intention to engage in development of lethal autonomous weapons systems,” wrote Shin. “KAIST is significantly aware of ethical concerns in the application of all technologies including artificial intelligence.”

Another signatory, University of Michigan computer science and engineering professor Benjamin Kuipers, told FCW via email that, in his reading, Shin’s comments indicate the center will stay within acceptable bounds of military AI research.

“From my individual point of view, as a person not accustomed to parsing diplomatic statements, this sounds like a satisfactory response,” Kuipers wrote -- though he noted that “the intentions of an individual or an organization can change at any time.”

There is little in the way of formally recognized standards to guide researchers, and a looming AI arms race means military officials and policymakers could pressure researchers and companies to push the limits.

In an April 5 post reacting to the Google letter, the Electronic Frontier Foundation advised companies that develop AI technologies for government to “start by recognizing that there are no settled public standards for safety and ethics in this sector yet. [A company] cannot just assume that the contracting military agency has fully assessed the risks or that it doesn't have a responsibility to do so independently.”

DesJardins echoed those thoughts, saying society lacks agreed-upon rules about which forms of military AI work raise unique ethical concerns and which are comparable to any other weapon of war. Setting such standards, she said, will require partnership among the research community, government officials and technology manufacturers, and she believes an international treaty may ultimately be needed.

“We’re deploying these things way faster than we’re developing our understanding of what the limitations are,” DesJardins said.

Not everyone is quite so concerned. Anton Chuvakin, a research vice president at technology consulting firm Gartner, said he does not understand the outrage, particularly in the case of the Google letter. Current AI technologies, he said, are child’s play compared to the versions that regularly scare moviegoers; they are more akin to advanced software systems crunching mathematical formulas.

“To me, a lot of the current AI is mathematics and statistics. There’s no consciousness, there’s no intelligence,” he said. “It isn’t like they built a killer robot for the Pentagon.”

Chuvakin and DesJardins both agreed that such killer-robot technology, if it is even possible, would not rear its head for decades. As for modern applications, Chuvakin said other countries will move forward with boundary-pushing AI research regardless of how the U.S. chooses to proceed. Eschewing military AI applications, he warned, could put the U.S. at a fatal long-term strategic disadvantage.

Still, even Chuvakin acknowledged that one particular form of attainable AI does give him pause and warrants a larger ethical debate.

“Anything that leads to automated decision-making to kill people is probably the line I would draw,” he said.