Google DeepMind Researchers Join Pledge Not to Work in Lethal AI

Google DeepMind CEO Demis Hassabis (Lee Jin-man / AP)

Thousands of AI researchers sign a pledge forswearing work on lethal autonomous weapons.

More than 2,400 people at 160-plus tech companies, including the founders and top researchers at Google’s DeepMind subsidiary, have signed a pledge not to work on autonomous weapons. Their vow is contained in a July 18 letter to “governments and government leaders” that asks for “strong international norms, regulations and laws against lethal autonomous weapons.”

“Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems,” reads the letter, organized by the Future of Life Institute. “The unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.”

Among the signers are Demis Hassabis, Shane Legg, and Mustafa Suleyman, co-founders of DeepMind, the neural-network software company that Google acquired in 2014. DeepMind made headlines around the world last year when its AlphaGo program defeated the world’s best human Go player, a first for an artificial intelligence. The feat went well beyond earlier successes in simpler games such as chess, in part because Go is exponentially more complex: a typical chess position offers roughly 30 possible moves, while a typical Go position offers more than 200.
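To put that gap in perspective with a rough back-of-the-envelope estimate (the figures here are commonly cited approximations, not numbers from the pledge or from DeepMind): the number of possible lines of play grows roughly as b^d, where b is the average number of moves available per turn and d is the length of a typical game. For chess, with b ≈ 35 and d ≈ 80, that works out to about 35^80, on the order of 10^123 possibilities; for Go, with b ≈ 250 and d ≈ 150, it is about 250^150, on the order of 10^360, vastly more than the estimated 10^80 atoms in the observable universe.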

Hassabis, in particular, has not been shy about expressing reservations over the militarization of artificial intelligence; he has signed similar letters organized by the Future of Life Institute.

The U.S. military is keenly interested in advancing AI projects similar to Project Maven, a Pentagon image-classification effort developed with Google’s help. About a dozen Google employees resigned in protest over the contract, and Google leaders decided not to renew it when it expires next year.

Still, Google Cloud head Diane Greene has been courting Pentagon business, in particular the huge Joint Enterprise Defense Infrastructure, or JEDI, cloud contract. Google’s artificial-intelligence prowess is seen as key to its pitch for that contract.

The pledge does not bar DeepMind from doing other work for the military, or even from applying a neural net like AlphaGo to an image-classification problem, although all such work helps the military as an institution become more lethal.

Air Force and other military officials generally contend that neither artificial intelligence nor Google workers are involved in designating, much less striking, targets. The military operates under a 2012 policy, Defense Department Directive 3000.09, which requires appropriate levels of human judgment over any decision to use lethal force.