Non-Discriminatory AI Requires a Human Touch, Experts Say


Interdisciplinary teams that include social scientists can help curb bias, experts told Congress. 

Artificial intelligence systems are increasingly replicating historical patterns of discrimination, and the government must act quickly to address it, technology experts told Congress Wednesday.

To combat that perpetuation of bias, they said, lawmakers must boost funding for interdisciplinary research and work to ensure social scientists are embedded with technical teams from the inception of America’s AI projects.

“We should think about ways that we can study AI that brings together computer scientists, lawyers, social scientists, philosophers, security experts and more—not just 20 computer science professionals and a single lawyer, which is some people’s definition of interdisciplinary research,” Jack Clark, policy director at the research organization OpenAI, said at a House Science, Space and Technology Committee hearing.

Clark and his fellow panelists told legislators that the majority of AI systems deployed today are developed by a small number of people from homogeneous backgrounds (mostly white and male), and that grants are not particularly friendly to large-scale interdisciplinary research projects in the space. Yet the systems those projects produce embed specific values.

“Technologists are not great at understanding human values, but social scientists are and have tools to help us understand them,” Clark said. “So my pitch is to have federally funded centers of excellence where you bring social scientists together with technologists to work on applied things.”

With over a decade of experience working in the AI industry, Meredith Whittaker leads Google’s Open Research Group and co-founded New York University’s AI Now Institute, a research center dedicated to understanding the social implications of artificial intelligence. She’s witnessed firsthand how the AI industry is “profoundly concentrated and controlled” by just a handful of companies that are “notoriously non-diverse.” 

“It is urgent Congress work to ensure AI is accountable, fair and just because this is not what is happening right now,” Whittaker said.

She highlighted some of the severe consequences that stem from uniform groups of scientists encoding their biases, perhaps unintentionally, into AI technologies.

Right now, Whittaker said, “voice recognition hears masculine sounding voices better than feminine voices, facial recognition fails to see black faces and transgendered faces, automated hiring systems discriminate against women candidates, medical diagnostic systems don’t work for dark skin patients and the list goes on, revealing a persistent pattern of gender and race-based discrimination among other forms of identity.”

But as AI is deployed more and more frequently in areas like criminal justice, hiring and education, the technical experts leading the charge don’t always consider the broad historical contexts of those domains, much less incorporate diverse perspectives at every step of development.

“We need social scientists and others with the skills to understand the socio-technical systems within which these technologies are being applied,” she said.

Joy Buolamwini, a research assistant at MIT who founded the Algorithmic Justice League to promote more inclusive and ethical technology, said her own reading of social science literature directly enabled innovation in her work on computer vision.

Buolamwini drew on the insights of a legal scholar studying anti-discrimination law, who found that when researchers performed “single-axis evaluation,” looking at discrimination by race or by gender alone, people at the intersection of both were missed.

“I found that this was the same case for the evaluation for the effectiveness of computer vision AI systems,” Buolamwini said. 

She applied the same critical viewpoint to test gender and skin-type bias in an Amazon computer vision product. When Buolamwini and her research team evaluated along single binaries, men or women, dark skin or light skin, they found discrepancies, but when she applied an intersectional analysis, the team saw error rates of zero percent for white men and 31 percent for women of color.
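To make the distinction concrete, here is a minimal Python sketch of the difference between single-axis and intersectional error analysis. The records, attribute names and numbers below are entirely hypothetical illustrations, not Buolamwini’s actual dataset or results; the point is only that slicing by one attribute at a time can mask disparities that become visible when attributes are combined.

```python
from itertools import product

# Each record is (gender, skin_type, prediction_correct). The values are
# hypothetical and chosen only to illustrate the masking effect.
results = [
    ("male", "lighter", True), ("male", "lighter", True),
    ("male", "darker", True), ("male", "darker", False),
    ("female", "lighter", True), ("female", "lighter", False),
    ("female", "darker", False), ("female", "darker", False),
]

def error_rate(records):
    """Fraction of records the system got wrong."""
    return sum(1 for *_, correct in records if not correct) / len(records)

# Single-axis evaluation: slice by one attribute at a time.
for axis, index in (("gender", 0), ("skin type", 1)):
    for value in sorted({r[index] for r in results}):
        group = [r for r in results if r[index] == value]
        print(f"{axis}={value}: {error_rate(group):.0%} error")

# Intersectional evaluation: slice by both attributes at once. The
# extremes (0% for one subgroup, 100% for another) only appear here.
for gender, skin in product(("male", "female"), ("lighter", "darker")):
    group = [r for r in results if r[0] == gender and r[1] == skin]
    print(f"{gender}/{skin}: {error_rate(group):.0%} error")
```

In a real audit the same slicing logic would run over a large, balanced benchmark dataset rather than a handful of hand-written records, but the structure of the comparison is the same.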

“And it was that insight from the social sciences to start thinking about looking at intersectionality,” she said. “So I would posit that we need not only to look at social sciences as something that’s a help but as something that is integral.” 

Dr. Georgia Tourassi, director of Oak Ridge National Laboratory’s Health Data Sciences Institute, reiterated the panelists’ points. She said America needs a federally coordinated conversation involving not only the STEM fields but also stakeholders from the social sciences, economics, law and public policy to address the emerging complexities of AI use. The scientific community also needs greater research and development investment, she said.

Tourassi also emphasized her belief that the nation can lead the world not just in technical advances but in deploying socially responsible AI.

“We can lead the dialogue—that conversation for the whole world—and that differentiates us from other entities investing in the space,” she said. 

Clark added that it’s imperative for America to lead the charge.

“If we fail here, then the values that our society lives under are partially determined by whichever society wins in AI,” he said.