Robotics ethicist calls for stronger US guardrails as automation accelerates
Kate Darling, research lead for robotics, ethics and society at the Robotics and AI Institute, speaks with science journalist Luis Quevedo on March 3 at Talent Arena in Barcelona. (Photo: Camille Tuutti)
Automation is advancing while the rules are not, the research lead for robotics, ethics and society at the Robotics and AI Institute warned.
Robots are moving into warehouses and factories faster than lawmakers are updating federal rules, a mismatch one technology ethics expert said could determine the future of work more than the machines themselves.
Speaking March 3 at Talent Arena in Barcelona, Kate Darling, research lead for robotics, ethics and society at the Robotics and AI Institute, said the trajectory of automation in the U.S. will depend less on what robots can do and more on what policymakers allow companies to do with them.
“Companies aren’t rewarded for making decisions that support people or social goods,” she said. “Companies are rewarded for profit. They’re rewarded for being first to market.”
When robotics cuts labor costs or increases output, deployment becomes a business decision. Whether displaced workers are retrained or supported elsewhere depends on public policy. Darling was blunt about where the U.S. stands.
“We don’t currently have that, in the U.S. at least, or at least not to the extent that we need for the transformation that’s coming,” she said.
Darling, who spent 14 years at the MIT Media Lab, also rejected the idea that regulation slows innovation. Governance, she said, directs innovation rather than stopping it.
“I think what it does is drive innovation in a direction that is more supportive of people instead of in the direction that I fear we’re currently going,” she said.
Too often, the robotics debate focuses on technical capability. Darling said that misses the larger point. Automation doesn’t unfold on its own. It reflects economic and political choices.
Ethics boards haven’t fixed the problem, she added. Companies follow their advice when it doesn’t conflict with profit and ignore it when it does. Her alternative is to embed social scientists inside engineering teams from the start, as the Toyota Research Institute does, so ethics and social impact are built into product design rather than added later.
On accountability, she was equally pointed.
When something goes wrong in a human-machine system, the human operator often takes the blame, even when design, policy or management decisions set the failure in motion. Citing researcher Madeleine Clare Elish, Darling pointed to the concept of the moral crumple zone, where responsibility collapses onto the person closest to the machine instead of being distributed across the system behind it.
She pointed to the 2018 fatal Uber self-driving crash in Arizona as an example, in which the safety driver absorbed much of the blame while broader organizational failures received less scrutiny.
“We need to hold people and organizations accountable and not the machines themselves,” she said.
For Darling, the real question isn’t whether robotics will expand. It will. The question is whether U.S. policy keeps pace. She described Europe’s regulatory push as an experiment other countries are watching and argued the U.S. needs stronger guardrails.
“We need to vote for politicians who care about people and workers,” she said. “We need governance and regulation that supports workers.”
When pressed on what individuals can do, she was even more direct.
“Get involved in politics,” she said. “Politics is very important. And vote for people who support humans and human work and human flourishing.”