Experts Urge Congress to Consider Implications of AI Bias and Human-Robot Interactions


It’s all part of the House Committee on Science, Space and Technology’s work to support the development of a national artificial intelligence strategy. 

As lawmakers consider the impact artificial intelligence will have on America’s workforce, experts across government, industry and academia this week urged Congress to confront and prioritize issues around ethics, bias and the increasing interactions between humans and robots.

“Thanks to AI some weird and wonderful things are beginning to happen: Cars are learning to drive themselves, machines can now recognize your friends’ faces and when you see people walking down the street talking on their phones, you don’t know if they’re talking to another human, or to a machine and expecting the machine to answer,” Erik Brynjolfsson, Massachusetts Institute of Technology professor and director of the MIT Initiative on the Digital Economy, said at a House Committee on Science, Space and Technology hearing in Washington. “Just last week, [the smart assistant] Siri tried to join into a conversation I was having about interest rates.” 

Brynjolfsson researches the impacts that emerging information technologies have on business strategies and productivity. When prompted by a question from committee Chairwoman Haley Stevens, D-Mich., he described how considerations around ethics and bias are becoming increasingly urgent as AI rapidly advances. Ultimately, machine-learning systems are only as strong as the data they’re fed, Brynjolfsson said. 

The educator emphasized that numerous studies demonstrate that machine-learning systems trained to make decisions based on those made by humans consistently end up perpetuating—“or even amplifying”—the same biases that humans often possess. 

“So when it comes to hiring, or making credit loan decisions, or who gets parole—if the humans who were making those decisions had a set of biases, those are going to be captured by the systems and repeated,” he said. 

Part of the problem is that machine-learning systems built on deep neural networks learn from millions of examples and then produce a recommendation based on what they’ve been fed. But the process is something of a “black box,” he said, and the people working with these systems often have trouble tracing back or understanding why the machines make the decisions that they do. 
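To make the point concrete, here is a minimal sketch in Python using scikit-learn and entirely synthetic data (the scenario and numbers are hypothetical, not from the hearing) of how a model trained on biased human hiring decisions absorbs the bias baked into its labels:

```python
# Minimal sketch with synthetic, hypothetical data: a classifier trained on
# biased human hiring decisions reproduces the bias in its training labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: a skill score and a binary group membership flag.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Biased "human" decisions: hiring tracks skill, but group 1 is penalized.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# At identical skill, the model assigns group 1 a lower hire probability:
# it has learned the human bias, not just the skill signal.
probe = np.array([[0.0, 0], [0.0, 1]])  # same skill, different group
print(model.predict_proba(probe)[:, 1])
```

Nothing in the training pipeline flags the problem on its own; the bias surfaces only when someone probes the finished model, which is the kind of inspection Brynjolfsson goes on to advocate.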

However, rather than ruling out machine learning entirely because the technology may produce imperfect outputs, Brynjolfsson said stakeholders should home in on correcting these biases. He added that they should also treat this as an opportunity to look to innovative techniques that could help solve the problem. One relevant technique is known as the Turing Box, which allows researchers to learn what sort of prejudice the systems may have inadvertently picked up along the way, Brynjolfsson said.

“And you can correct those in a way that may actually ultimately, I think, be easier to correct than our own human biases,” he said. “Because after all, it’s not like humans are perfect either.”
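As a rough illustration of the kind of audit such tools enable (a generic sketch, not the Turing Box itself), one can compare a trained model’s selection rates across groups; the ratio of the lowest to the highest rate is a common fairness heuristic, sometimes checked against the “four-fifths” rule used in U.S. employment law:

```python
# Generic audit sketch (not the actual Turing Box tool): compare a model's
# positive-decision rates across groups to surface absorbed bias.
import numpy as np

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Fraction of positive decisions for each group."""
    return {int(g): predictions[groups == g].mean() for g in np.unique(groups)}

# Hypothetical decisions for applicants from two groups.
preds = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

rates = selection_rates(preds, groups)
print(rates)  # {0: 0.6, 1: 0.2}
print(min(rates.values()) / max(rates.values()))  # 0.33, well below 0.8
```

A ratio that far below 0.8 would flag the system for exactly the kind of correction Brynjolfsson describes.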

Rebekah Kowalski, vice president of manufacturing services at the staffing firm ManpowerGroup, said the issues Brynjolfsson raised demonstrate the need for new roles that will support companies in making responsible decisions as they roll out the new tech. In this case, she said, a new workplace role, known as a “digital era enterprise ethicist,” could be hired to help entities make ethical decisions as they deploy emerging technologies—and also decide when inherent bias within the technology goes too far. 

“Traditionally these decisions have been kind of bandied about,” Kowalski said. “Maybe [the information technology department] owns this, or risk owns this, or legal owns it. Well, now, the way organizations are built in the digital era, it does not land neatly in one of those silos. It spreads across.”

She also listed a few other positions that workers can expect to see emerge across America, including machine-learning specialist, collaborative robotics specialist and autonomous mobility engineer. 

“How do you make sure people of different ethnicities are recognized by that autonomous vehicle? How do you make sure that your [human resources] systems are wired, not to filter people out, but actually to bring people in based on potential?” she asked. “So those are some of those roles that we see coming up across all organizations.” 

Arthur Lupia, assistant director of the National Science Foundation’s Directorate for Social, Behavioral and Economic Sciences, has been working fervently to support the nation’s ability to respond to new technologies’ inevitable impacts on the workforce. He noted it’s imperative to address the increasingly prevalent engagements between humans and robots.  

“Understanding the human-robot interaction is really critical to all of the progress we want from manufacturing, and farms and offices of the future,” Lupia said. 

These interactions and the growing automation of manufacturing processes require a great deal of trust between humans and machines, he added. And there should always be an override button, just in case something goes wrong or the technology makes a decision that could lead to dangerous outcomes. 

For that reason, Lupia said NSF is bringing together large groups of people to discuss and better understand “at a pretty fundamental level” how that trust relationship is going to work and how advancements in robotics and machine learning will bring about workplace reorganization across most businesses. 

“So that’s the fundamental question and it takes understanding humans because if you press the override button at the wrong time, right, you can disrupt the process, and if you wait too long, unintended consequences can happen,” he said. 

Ivy Tech Community College President Sue Ellspermann said NSF could provide stronger support to schools outside of prominent research institutions to help them adopt emerging technology. As technology rapidly advances, community colleges’ limited funding and resources can make it difficult to keep up and effectively prepare their students to join the evolving technical workforce, she said. 

“So [we need help] keeping our labs up to date with that front-edge technology at the same time industry is getting it—not a generation later—we really need to have it early,” Ellspermann said.

Also during the hearing, Ranking Member James Baird, R-Ind., indicated that over the course of the next few months, the committee will be working to produce bipartisan legislation to support a national AI strategy.