A Brief History of Robot Law


A new paper looks at how courts have handled a few notable conflicts between man and machine.

When the robot found the shipwreck, the humans controlling it must have been elated. The SS Central America, a paddle wheel steamboat, had gone down in a hurricane in 1857, loaded with gold from the California Gold Rush. For more than a century treasure seekers searched in vain. But in 1988, a robotic submersible operated by an outfit called the Columbus-America Discovery Group finally found the wreck roughly 200 miles off the Georgia coast.

Lawsuits soon challenged Columbus-America’s claim to the gold. At the time, the usual way to establish salvage rights to a shipwreck was to send human divers to secure it. But a 1989 court decision established a new precedent, ruling that a robotic submersible could be used in lieu of human divers if certain criteria were met. This new standard of “telepossession” is still used today—it played a role, for example, in establishing salvage rights to the Titanic.

The legal system has been wrestling with what robots can and can’t do for longer than you might think. A new paper by Ryan Calo, a law professor at the University of Washington, paints a surprisingly colorful picture of this history, which Calo dates back to a 1947 plane crash involving an Army fighter plane on autopilot.

Courts have pondered the extent to which a robot can resemble a living person, as in a 1993 lawsuit in which Wheel of Fortune letter-turner Vanna White claimed that a robot lookalike used in a Samsung ad campaign had violated her right of publicity (the trial court said no; an appeals court ruled yes). They've also weighed whether robots count as performers for the purposes of levying an entertainment tax (no, at least not in the case of the animatronic animals that alternately entertain and frighten children at Chuck E. Cheese restaurants).

So far, courts have mostly treated robots as mindless machines and held humans responsible for their actions. What’s changing now, Calo says, is that robots are becoming more capable of acting and thinking for themselves. “What’s exciting about robotics today, in part, is that they’re able to solve problems in ways people wouldn’t, and that’s not something courts have encountered or even imagined,” he says.  

In the Columbus-America case, for example, it’s not clear that an autonomous robot—like one that executes a search pattern of its own design—would meet the criteria for telepossession set out by the 1989 ruling. There, the court emphasized the role of a human operator in directly controlling the robot’s movements.

But these days autonomous submersibles patrol the oceans on behalf of research institutions, navies, and private companies. One company, Liquid Robotics, boasts that its bots have logged more than a million miles collecting data for defense, oil and gas, and other clients.

Then there’s outer space. In November, President Obama signed a bill intended to promote space exploration by private companies, including ones interested in mining asteroids for minerals. That mining would almost certainly be done by robots, Calo says, and it’s not hard to imagine competing claims. In the future, space robot lawyer might be an actual job title.

In the meantime, Calo and others predict that the most interesting cases to confront the courts will involve robots with “emergent” behavior—that is, robots capable of solving problems and behaving in surprising ways. Such bots could complicate the concept of criminal intent, a crucial determination in criminal cases.

An incident last year hints at the kinds of cases that could come up. Police in Amsterdam investigated a web developer named Jeffry van der Goot, who had created a Twitter bot that tweeted an apparent death threat directed at a local fashion show. The bot was an algorithm that remixed random phrases from van der Goot's personal Twitter account.

Van der Goot insisted he hadn’t meant to threaten anyone and hadn’t anticipated that the bot would do so. No charges were filed, but he disabled the bot at the cops’ request.
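Neither the paper nor the news reports spell out how the bot worked beyond "remixing" phrases, but a minimal sketch in Python, with an invented corpus and made-up splitting rules rather than any details of van der Goot's actual bot, shows how such a generator can produce a threatening sentence without anything resembling intent:

```python
import random


def remix(tweets, num_phrases=3, seed=None):
    """Stitch a new 'tweet' together from random fragments of old ones."""
    rng = random.Random(seed)
    # Break each source tweet into rough phrases on simple punctuation.
    phrases = []
    for tweet in tweets:
        for chunk in tweet.replace(",", ".").split("."):
            chunk = chunk.strip()
            if chunk:
                phrases.append(chunk)
    # Glue together a handful of randomly chosen phrases; the program has
    # no notion of what the combined sentence means.
    picked = rng.sample(phrases, k=min(num_phrases, len(phrases)))
    return " ".join(picked)[:140]  # old-style tweet length limit


if __name__ == "__main__":
    # Invented example corpus, purely for illustration.
    corpus = [
        "I am so excited about the fashion show this weekend.",
        "Honestly, I could kill for a decent cup of coffee.",
        "Going to be in Amsterdam on Thursday.",
    ]
    print(remix(corpus, seed=7))
```

Run over thousands of real tweets, a recombiner like this can splice harmless fragments into something that reads as menacing, which is precisely the gap between output and intent that makes such cases hard.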

The issue of anticipating what AI can do could also make it tricky to determine liability in civil cases, Calo says. He cites a classic law-school case study involving a mink farmer who sued a nearby mill company, claiming that the company's use of explosives to clear a roadway had stressed his minks so badly that they'd eaten their young. The court ruled that panicked minks devouring their young were not a foreseeable consequence of the blasting and denied the claim for damages.

As artificial intelligence and robotics advance, “foreseeable consequences” may be a moving target.

“If we have truly emergent systems, it’s not clear that all the kinds of mischief they get up to are going to be anticipated,” Calo says.

Courts may need to get creative, Calo says.

“Maybe there would be a version of a crime that is the robot version of it, a recklessness that involves an artificial agent doing it on your behalf,” he says. “It wouldn’t be quite as bad as you doing it yourself, but it would be enough to make people think twice about deploying certain kinds of emergent systems in certain contexts.”

Based on his historical review, however, Calo worries that judges are too attached to an outmoded view of robots as machines that “do the specific bidding of people.” Unless they update that view, their decisions may not all be wise and just.

“He’s right to point out that this could go badly, but I’m more optimistic,” says Meg Jones, who studies technology law and policy at Georgetown University. “I’m increasingly impressed with some judges’ ability to get into the technical nitty gritty of these systems and how they work.”

What we’re seeing here is the very beginning of a long process in which courts adapt to a new technology, much as they did for the Internet in the 20th century and for railroads in the 19th, says Michael Froomkin, a law professor at the University of Miami.

In both cases, Froomkin says, it took decades to slot these new technologies into the existing legal rules. The next generation of robots will combine the high-speed connectivity of the Internet with the ability of trains to physically hurt people. Robot law already has an interesting history, but its future is likely to be even more so.
