Do Killer Robots Violate Human Rights?


When machines are anthropomorphized, we risk applying a human standard that should not apply to mere tools.

As bizarre as it sounds, the United Nations just held an arms-control conference to figure out if killer robots might violate the laws of war.

Ten years ago, very few experts were worried about military robots. The technology was just emerging onto the battlefield. Now, several credible groups are waging war against killer robots, officially known as lethal autonomous weapons systems.

The UN returned to the subject last week in a five-day meeting of experts under the Convention on Certain Conventional Weapons. I was invited by the convention’s chairperson, the German Ambassador Michael Biontino, to speak about the problems that lethal autonomous weapons systems may create for human rights. This essay is adapted from my testimony and gives a glimpse at how this important debate is moving along. (These are my opinions alone and don't necessarily reflect the positions of UNIDIR or other organizations.)

The specific issue I was asked to address is whether killer robots, in making kill-decisions without human intervention, violate either a right to life or the "laws of humanity," as protected by the Martens Clause that has been in effect since the 1899 Hague Convention. (The Martens Clause requires nations to consider warfare through the lens of the “public conscience.”)

These concerns are of a different kind than technology-based objections to killer robots. For instance, critics point out that artificial intelligence still can’t reliably distinguish between a lawful target (such as an enemy combatant with a gun) and an unlawful one (such as a civilian with an ice-cream cone), as demanded by the laws of war. Technological limitations like this one may be solvable over time.

But if lethal autonomous weapons are truly an assault on human rights, that’s a philosophical challenge that can’t just be solved with better science and engineering. So it’s worth focusing on human rights as among the most persistent problems for killer robots, and I’ll keep that separate from technical issues so as not to confuse an already complex debate.

* * *

What’s objectionable to many about lethal autonomous weapons systems is that, even if the weapons aim only at lawful targets, they seem to violate a basic right to life. This claim is puzzling at first, since killing is so commonplace and permitted in war. If you’re a combatant, you are legally liable to be killed at any time; so it’s unclear that there’s a right to life at all.

But what we mean is that, in armed conflicts, the right to life is a right not to be killed arbitrarily, unaccountably, or otherwise inhumanely. To better understand the claim, a right to life can be thought of as a right to human dignity. Human dignity is arguably more basic than a right to life, which can be more easily forfeited or trumped. For instance, even lawful executions should be humane in civilized society.

Therefore, I will focus on dignity as a way to understand the right to life in this debate, though the two are more distinct in other contexts. Indeed, critics of killer robots also argue that automated killing violates a right to human dignity.

So, what is human dignity?

Unfortunately, the concept is vague; there’s not much consensus on an answer. We should note that, in many cases, a lack of human dignity does not make something illegal. People do undignified and shameful things all the time legally, such as lying and adultery. And what’s so dignified about going to the toilet or having sex? Sometimes, we’re happy to trade dignity for security, such as allowing airport personnel to search our bodies, as if we were criminal suspects. This is to say that our relationship status with dignity is complicated.

As it relates to killer machines specifically, human dignity is said to be about accountability, remedy, and respect. And these are capabilities that mere machines don’t seem to have, as some experts point out.

These inabilities are related to a lack of meaningful human control. Without that control, it’s unclear who or what is responsible for wrongful targeting and errors. Without identifiable responsible parties, there’s no one from whom to demand remedy or compensation for wrongful injury or death. And the decision to kill another human being, one of the most profoundly serious decisions we can ever make, demands respect in the form of reflection and meaning that machines cannot give.

While these claims are reasonable, they need more explanation. For instance, could the accountability problem be solved by simply stipulating or assigning a responsible party? This may ultimately be unfair to that party and give rise to questions about justice; but the accountability problem, at least, would be solved, if that’s necessary for dignity. And with a responsible party assigned, remedy becomes possible.

The harder requirement for lethal autonomous weapons systems to meet is respect for human life, since AI and computers are incapable of understanding our emotions, and that won’t change in the foreseeable future. But is respect a real requirement, or is it just a romantic notion that doesn’t really exist in modern conflicts? After all, we don’t demand that soldiers consider the gravity of their actions each time they pull a trigger.

Very few soldiers are thinking about the moral worth of their adversaries as human individuals with hopes, dreams, and fears of their own. At most, they consider whether the target is legal or not, if even that. Some soldiers are openly racist and nationalistic, acting out of hate, anger, and other degrading motivations.

Of course, the difference with robots is they can never be motivated by the right reasons—or wrong reasons or any reasons at all (see here for why that’s important)—whereas that possibility at least exists with human soldiers, even if they’re not as virtuous as they ought to be. If autonomous weapons lack the capacity for moral respect, it makes little sense to hold them to a human standard based on that capacity. This is less a technology failure than it is a category mistake.

As an analogy, no one demands that a bullet or guided missile must be considerate and respectful before it agrees to be fired or strikes a target; yet they are legal weapons in war. We also don’t require this of dogs and other animals in war, which have consciousness and come closer to human-like cognition than machines do. This is because those weapons and animals lack the capacity for moral reflection; therefore, we don’t insist on it.

Anyway, if moral reflection is truly required, it could perhaps be located in the design of these weapons, which are programmed and deployed by humans. The responsibility to make thoughtful and ethical decisions arguably falls on their shoulders, as well as on the political leaders who authorize military engagement. Of course, this is easier said than put into practice and enforced.

* * *

To help clarify the notion of human dignity, Germany’s Constitution—or Basic Law—is especially illuminating. Germany has one of the most developed and thoughtful legal conceptions of human dignity. The very first article of its Constitution establishes that human dignity is inviolable, even more important than the right to life, which is established in the second article and can be forfeited under some conditions.

I want to offer two specific legal cases to show that it is not absurd to think that dignity and liberty can trump security:

The first case started in 2005, when Germany’s Air-Transport Security Act authorized the military to shoot down a commercial aircraft suspected of being hijacked by terrorists. If an airplane appeared to be on a collision course with a building, then shooting down the plane and sacrificing the passengers would seem to be a lesser evil than allowing the flight to continue and kill thousands of others on the ground. But in 2006, Germany’s Federal Constitutional Court struck down this law as unconstitutional.

There were many reasons for this decision. One of the most important was that the law violated Article 1 of the nation’s constitution: to respect human dignity. The law would have treated people as objects, as if they were part of the aircraft itself and not individuals who each deserve respect and consideration. The law also would have treated people as numbers or statistics, pre-determining their deaths for the possibility of saving a greater number of unidentified lives. And there’s something wrong—something disrespectful and dehumanizing—with doing ethics by numbers alone.

The second case started in 1999, when a new anti-crime law allowed broad eavesdropping, or acoustic surveillance, inside a suspect’s home. But in 2004, Germany’s Federal Constitutional Court struck down the law, also on the basis that it violated Article 1 of the Constitution on human dignity: people should be at liberty to have free and open conversations inside their own homes, without fear of surveillance.

This idea is reinforced by Article 13 of the German Constitution, which specifically protects the sanctity of a person’s home. Even with probable cause and a court order, German police surveillance must now follow much stricter rules that safeguard privacy as a dimension of human dignity.

Again, the point is that it’s not absurd that dignity should be more important than security, and other legal cases exist to support this point. Security without dignity misunderstands the purpose of civil society and the state. But the question for us is: Do killer robots trigger any of the considerations cited in those legal cases?

The parallels might not be obvious, since criticisms of lethal autonomous weapons aren’t that they sacrifice innocent people—as in the case of a hijacked airplane—or violate protected spaces—as in the case of spying inside a private home.

But similar arguments could be made that killer robots don’t recognize people as people, but merely as objects. To them, some objects are coded as legal targets, and other objects are not. But even that account isn’t completely accurate; it may be worse. Computers don’t really recognize objects as we do, but all they “understand” are numbers—bits of information in the form of 1s and 0s. And civilized society can prohibit treating people as numbers, as German law has reasonably done.
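
To make that point concrete, here is a deliberately simplified, hypothetical sketch (in Python) of how a targeting system might represent what it “sees.” Nothing here is drawn from any real weapon system; the labels, threshold, and function names are my own illustrative assumptions. The point is only that, in this kind of representation, a person exists solely as an array of numbers, a class label, and a confidence score.

```python
# Deliberately simplified, hypothetical sketch; not any real weapon's code.
# The point: in this representation, a person is only numbers and a label.
import numpy as np

# Assumed labels for illustration only.
LEGAL_TARGET_LABELS = {"enemy_combatant", "military_vehicle"}

def classify(pixels: np.ndarray) -> tuple[str, float]:
    """Stand-in for an image classifier: maps an array of numbers (pixel
    intensities) to a label and a confidence score. A real system would use
    a trained model; here we fake an output purely for illustration."""
    mean_intensity = float(pixels.mean())
    label = "enemy_combatant" if mean_intensity > 0.5 else "civilian"
    confidence = 0.87  # placeholder value
    return label, confidence

def is_engageable(pixels: np.ndarray, threshold: float = 0.9) -> bool:
    """The 'kill decision' reduces to a label lookup and a number comparison;
    nothing in this representation encodes dignity, intent, or surrender."""
    label, confidence = classify(pixels)
    return label in LEGAL_TARGET_LABELS and confidence >= threshold

if __name__ == "__main__":
    frame = np.random.rand(64, 64)  # a "person" is just this grid of numbers
    print(is_engageable(frame))
```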

Still, a prior question remains: Should we be applying human standards to machines that don’t have the human capacity for moral concern in the first place? If we could establish a framework for clear accountability, would that help to safeguard human dignity? All these questions require more study.

* * *

The Martens Clause, which requires that armed conflicts proceed under “the laws of humanity and the requirements of the public conscience,” can help us here. It’s an important principle that dates back more than 100 years and is still invoked in modern debates, such as those on nuclear weapons. Unfortunately, the Martens Clause is also very abstract, perhaps even more elusive than the concept of human dignity, though the two seem to be related. So the preceding discussion applies equally well to the Martens Clause.

In its broadness, the Martens Clause can expand our understanding of human dignity. It raises complementary concepts, such as the prohibition of weapons that are mala in se, or evil in themselves: poison, exploding bullets, blinding lasers, nuclear weapons, and other such instruments of war.

Some experts thus argue that lethal autonomous weapons are mala in se. Certainly, it would seem creepy to be hunted by a robot, but a charge this serious requires more than an uneasy feeling or quick gestures at human dignity. What exactly is so evil or undignified about a machine’s automated decision to shoot someone—a machine that could more cleanly hit its target than a human shooter—when it is not undignified to be shot up by another human in a messy way that blows up parts of your body, head, and face? In locating any evil in killer robots, we should be careful to not personify or anthropomorphize them, which risks slipping in a human standard that should not apply to mere tools.

However, a novel argument against killer robots is perhaps that they could be too accurate in their aim, and this could make them mala in se. They could inflict mortal wounds more precisely, causing field and hospital mortality rates to exceed previously established humanitarian guidelines. This is related to the International Committee of the Red Cross’s study on superfluous injury or unnecessary suffering (the SIrUS Project).

Further, if wounding a combatant is better than killing him, do autonomous weapons need to be lethal at all, given that they are not alive and don’t have a moral imperative to protect themselves? It may be superfluous and unnecessary that they shoot to kill. More work is needed to develop these arguments properly.

In its broadness, the Martens Clause is also more susceptible to subjective judgment, because different individuals and cultures can have different moral intuitions. One way to take the temperature of the collective public conscience is to simply conduct a survey of public attitudes about a particular issue, such as nuclear weapons or killer robots.

But we need to be careful again, since public attitudes may be uncritical, uninformed, or simply unethical, such as supporting racist and sexist policies. Doing ethics by survey alone seems to be the same mistake as doing ethics by numbers alone. Nonetheless, surveys do provide important data points in this conversation.  

* * *

The right to life, the right to human dignity, and the Martens Clause are all related. They need to be sharpened if they are to provide practical guidance. Otherwise, appeals to those principles collapse into a “yuck” factor: vague intuitions against something, without much rational support. And the “yuck” factor is often on the wrong side of history, as with opposition to organ transplants and blood transfusions, as well as ignorant prejudice.

We also need to think about the broader implications of invoking those principles here. The debate on killer robots can implicate cyberweapons, which also lack meaningful human control, however that’s defined. And there has been nearly no discussion of this link, even though cyberweapons more clearly elude human control: They often attack autonomously, proliferate indiscriminately, can result in physical harm, and cannot be stopped.

Our judgments here may have a spillover effect on civilian society and industry. For instance, autonomous cars are coming to our roads soon, and they may need to make sacrifice decisions—or “crash optimization” decisions—to minimize overall harm. They may be programmed to judge that crashing into object x (such as one person) is better than crashing into object y (such as five other persons). On some accounts, robot cars that can make potentially lethal decisions violate a right to life and human dignity.
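
To illustrate what such “crash optimization” might amount to, here is a minimal, hypothetical sketch in Python. The harm model, names, and numbers are my own illustrative assumptions, not any manufacturer’s actual logic; the sketch simply scores each unavoidable collision option by expected harm and picks the lowest, which echoes the “ethics by numbers” reasoning that the German court found troubling.

```python
# Hypothetical sketch of a "crash optimization" routine; the names and
# harm model are illustrative assumptions, not any real vehicle's code.
from dataclasses import dataclass

@dataclass
class Obstacle:
    label: str              # e.g., "pedestrian", "group of pedestrians"
    people_at_risk: int     # how many people a collision would likely harm
    injury_severity: float  # rough 0.0-1.0 estimate of harm per person

def expected_harm(obstacle: Obstacle) -> float:
    """Score a collision option purely by numbers: people times severity."""
    return obstacle.people_at_risk * obstacle.injury_severity

def choose_crash_target(unavoidable_options: list[Obstacle]) -> Obstacle:
    """When a crash cannot be avoided, pick the option with the lowest
    expected harm. Note what is missing: the 'objects' are never treated
    as individuals with rights, only as counts and coefficients."""
    return min(unavoidable_options, key=expected_harm)

if __name__ == "__main__":
    options = [
        Obstacle("one person", people_at_risk=1, injury_severity=0.9),
        Obstacle("five persons", people_at_risk=5, injury_severity=0.9),
    ]
    print(choose_crash_target(options).label)  # prints "one person"
```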

War is terrible, but the grounds for weapons prohibitions must be more than that a weapon is terrible—they all are, because war is terrible in the first place. But some weapons are more terrible than others, and we can articulate that difference. For example, anti-personnel mines are indiscriminate and kill or maim lots of children. And nuclear weapons are not only indiscriminate but also kill in horrific ways. Can we clearly identify the inhumanity of lethal autonomous weapons, as we have for other prohibited weapons?

Fears of a dystopian future could be a legitimate reason for a ban, if those fears can be well articulated and defended. To confidently say whether a ban or a moratorium on killer robots is defensible or not, we need to better clarify the notions of human dignity and the Martens Clause, as well as related concepts of meaningful human control and autonomy.

Beyond this, there is still a wide range of possible abuses to safeguard against, as discussed elsewhere. What’s at stake is more than just a new category of weapons; autonomous systems are also an important technology with civilian potential and even humanitarian potential in future war.

(Image via maxuser/Shutterstock.com)