Computers are exceedingly powerful machines, capable of instantly completing tasks that would take humans far longer to carry out. But as useful as they may be, computers are often infuriatingly unable to understand us. If you don’t issue every command, even the most basic, exactly as a computer expects, you’ll get no response; it simply can’t interpret what you want. As robots become an ever-more-common part of our lives, it’s going to be increasingly important that they start to understand us.
A group of researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University may have a solution. The group, overseen by CSAIL director Daniela Rus, built a system that lets a person use their mind to correct a robot’s mistakes. The researchers took a Baxter robot, a two-armed machine designed by Rethink Robotics for use on factory floors, and hooked it up to an electroencephalography (EEG) monitoring cap.
The EEG cap monitors a human’s brain activity, and using a set of proprietary machine-learning algorithms developed by the group, the system analyzes the wearer’s brain waves and relays that analysis to the robot within about 30 milliseconds. This essentially allows a human monitoring the robot to tell it, just by thinking, when it is making a mistake, so the robot can immediately correct course.
“Our objective is to create more natural interactions between humans and machines,” Rus told Quartz.
The group set up the Baxter to complete a simple task: pick up spray-paint cans and spools of wire and drop them into correspondingly labeled buckets. When a human wired up to the EEG monitor notices the robot is about to put an object in the wrong bucket, the system picks up the error signal, and the robot changes direction and drops the object where it’s supposed to go. The group also programmed the robot’s digital face to blush when a human sensed it was making a mistake. Even robots don’t like to make mistakes in public.
The EEG monitor picks up on signals called error-related potentials (ErrPs), which the brain outputs when we notice a mistake. Rus said these are among the easiest brain signals to detect in “an output that is mostly a big noise.” When the system detects an ErrP, it reacts and changes its course of action.
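The group’s actual classifiers are proprietary, but the closed loop described above can be sketched in a few lines of Python. In this hypothetical sketch, a simple amplitude threshold stands in for the ErrP classifier, and a detected error flips the robot’s planned bucket; every name and threshold here is an illustrative assumption, not the team’s implementation.

```python
# Hypothetical sketch of the closed loop described above: classify a window
# of EEG samples for an error-related potential (ErrP) and, if one is
# detected, flip the robot's planned action. A crude amplitude threshold
# stands in for the group's proprietary machine-learning classifier.

from dataclasses import dataclass
from typing import Sequence

ERRP_THRESHOLD = 4.0  # assumed detection threshold (arbitrary units)

@dataclass
class Action:
    target_bucket: str

def detect_errp(eeg_window: Sequence[float]) -> bool:
    """Stand-in classifier: flag an ErrP when the mean absolute
    amplitude of the window exceeds a fixed threshold."""
    mean_amp = sum(abs(s) for s in eeg_window) / len(eeg_window)
    return mean_amp > ERRP_THRESHOLD

def supervise(action: Action, eeg_window: Sequence[float]) -> Action:
    """If the human's brain signals an error, switch to the other bucket."""
    if detect_errp(eeg_window):
        action = Action("wire" if action.target_bucket == "paint" else "paint")
    return action

# A large-amplitude window (the human noticed a mistake) redirects the robot;
# a quiet window leaves the original action untouched.
corrected = supervise(Action("paint"), [5.0, -6.0, 4.5, -5.5])
unchanged = supervise(Action("paint"), [0.1, -0.2, 0.1, 0.0])
```

In the real system the equivalent of `supervise` has to run inside the roughly 30-millisecond budget the researchers describe, which is why ErrPs being easy to pick out of the noise matters.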
“Once you know this signal, you can use this to modify a robot’s behavior or stop it,” Rus added.
The group recently had its paper accepted to the Institute of Electrical and Electronics Engineers’ International Conference on Robotics and Automation (ICRA), taking place in Singapore in May. It plans to further develop the system so it can more reliably detect mistake signals from humans, and potentially to create a system that lets the robot get deeper feedback from its human supervisor.
For example, if the robot isn’t sure it’s registered an ErrP from a human, it could continue to carry out the potentially mistaken task, and if it receives a stronger error signal, it would then make a change. The question they want to answer, according to CSAIL researcher Stephanie Gil, is essentially: “Can you have a two-way conversation with the robot?”
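That proposed back-and-forth can be sketched as a simple confidence check: keep executing while the error signal stays ambiguous, and change course only once a stronger one arrives. The threshold and readings below are illustrative assumptions; the real system’s classifiers and values are not public.

```python
# Hypothetical sketch of the proposed deeper feedback: the robot keeps
# carrying out its task while ErrP classifier confidence stays ambiguous,
# and only corrects once a reading crosses a "strong" threshold.

STRONG = 0.8  # assumed confidence at which the error signal counts as confirmed

def respond(confidence_readings):
    """Scan successive ErrP confidences; return 'correct' as soon as one
    is strong enough, otherwise 'continue' with the original task."""
    for c in confidence_readings:
        if c >= STRONG:
            return "correct"
    return "continue"

# An ambiguous first reading (0.6) is ignored; the stronger follow-up (0.9)
# triggers a correction.
decision = respond([0.6, 0.9])
```

The design choice mirrors the example in the text: rather than halting on every uncertain signal, the robot in effect asks the human to repeat themselves by continuing the questionable action.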
For now the system can handle little more than binary, yes-or-no situations, but the group plans to develop it for more complex tasks. In the future, this type of human-computer interaction could become the basis on which we interact with all sorts of technology.
“One task that humans are good at is knowing when something is about to go wrong, and learning from our mistakes,” CSAIL Ph.D. candidate Joseph DelPreto told Quartz.
The value of that human skill was patently clear in a recent case where a Tesla car in Autopilot mode crashed into a barrier after failing to notice a lane shift. Instead of taking over for a computer, typing instructions onto keyboards, or speaking them aloud, what if you could just think them? Self-driving cars could see as you see, autonomous delivery drones could know where you want your package dropped off, and pizza-making robots would know whether you prefer your crusts well done or not. It’ll be a brave new world, driven by servant robots we don’t even need to talk to for them to do our bidding.