How NSF and Amazon Are Collectively Tackling Artificial Intelligence-Based Bias


An agency official shared an update on the Program on Fairness in AI.

The National Science Foundation and Amazon teamed up to fund a second round of research projects aimed at promoting trustworthy artificial intelligence and mitigating bias in systems. 

The latest cohort selected to participate in the Program on Fairness in AI includes multi-university projects to confront structural bias in hiring, algorithms to help ensure fair AI use in medicine, principles to guide how humans interact with AI systems, and others that focus on education, criminal justice and human services applications. 

“With increasingly widespread deployments, AI has a huge impact on people’s lives,” Henry Kautz, NSF division director for Information and Intelligent Systems, said. “As such, it is important to ensure AI systems are designed to avoid adverse biases and make certain that all people are treated fairly and have equal opportunity to positively benefit from its power.”

Kautz, whose division oversees the program, briefed Nextgov on the complexities that accompany addressing fairness in AI—and the joint initiative NSF and Amazon are backing to help contribute to the creation of more trustworthy technological systems. 

What is “fair”?

AI is already an invisible variable that touches many crucial aspects of Americans’ lives. Uses range from facial recognition that unlocks smartphones to recommendations about punishments judges should impose for criminal convictions. But there’s still no universal guarantee that the rapidly evolving technology won't be harmful to certain people.

“It is important to note that we are still trying to understand fairness,” Kautz explained. “And once we have a better understanding of the many facets of fairness, the challenge is not just to design AI systems that are as fair as people are, but to actually be even more fair and unbiased—since we know people can make biased decisions, either implicitly or explicitly.” 

Mathematical definitions of fairness can home in on the algorithmic outcomes for different groups using a statistical approach, he noted, so methods in that realm might look to ensure various metrics are consistent across different groups. From a social perspective, on the other hand, officials might consider how AI could improve fairness and equality across society. “An AI system might be used to help determine a novel vaccination or food distribution method or the location of medical resources that users would not have thought of without the analysis from the system,” Kautz noted. Or, in technical approaches to fairness, officials might consider the accountability of the users of an AI system and what information they need to feel confident that informed decisions can be made. 
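To make the statistical approach Kautz describes concrete, here is a minimal sketch of one common group-fairness check, demographic parity, which compares the rate of favorable outcomes across groups. The function names and the decision data are illustrative assumptions, not drawn from any real system or from the NSF-Amazon program itself.

```python
# A minimal sketch of one statistical fairness metric: demographic parity.
# All names and data below are hypothetical, for illustration only.

def selection_rate(outcomes):
    """Fraction of favorable (positive) outcomes within one group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Largest gap in selection rate between any two groups.
    A value near 0 suggests the metric is consistent across groups,
    in the sense Kautz describes."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = favorable outcome) for two groups.
decisions = {
    "group_a": [1, 0, 1, 1, 0, 1, 1, 0],  # selection rate 5/8 = 0.625
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # selection rate 3/8 = 0.375
}

gap = demographic_parity_difference(decisions)
print(f"demographic parity difference: {gap:.3f}")  # prints 0.250
```

Demographic parity is only one of many statistical definitions of fairness; others, such as equalized odds, instead compare error rates across groups, and the definitions can conflict with one another, which is part of why, as Kautz notes, researchers are still working to understand fairness itself.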

“Thus, there are many ways to look at fairness in AI, and that is what NSF and Amazon are trying to do through this joint effort,” Kautz said. “We are making progress but are still in the early stages, where we need to understand the different aspects of fairness, in real-world settings, so that we can in turn understand how we can design our systems with ‘fairness’ built into them.”

Advancing Fair AI 

NSF has been funding research to promote fairness in AI systems “for some time,” according to Kautz, while Amazon recognizes the importance of building systems designed with such approaches. 

“Given our mutual interest in this space, it seemed natural for NSF and Amazon to partner to leverage the resources and expertise that each organization brings,” Kautz said, adding that the two “intend to provide approximately $10 million each, for a total of about $20 million, over the three-year life of the program” they cooperatively steer. 

The first cohort was announced last year, this announcement marks the second, and a third is anticipated to roll out in 2022. 

Amazon does not play a role in the selection of proposals for the research grants; only NSF selects the awardees, Kautz confirmed. 

“Through the partnership, the research community submits proposals to NSF, which in turn uses its standard peer review process to identify meritorious proposals,” he explained. Agency officials complete NSF’s standard award process and provide grants to those chosen while Amazon separately sends its funding contributions. The company additionally offers consultation to the researchers who receive awards. 

“The response to the solicitations has grown, indicating growing importance and interest in the research community in addressing fairness in AI” since the program’s inception, Kautz added. “The award topics have also broadened, and now include projects in natural language processing, computer vision, and applications to criminal justice.”

But what hasn’t changed is the effort’s overall aim and potential to help scientists push forward toward new technical breakthroughs, accelerate the transition of their research results from laboratories to practice—and train the next generation of researchers and practitioners, which Kautz deemed “another dimension that is really important to NSF.”

“We all appreciate there is a real need for competencies in AI across all sectors of our economy. Providing students studying fairness in AI with exposure to industry, and the problems that they are facing, is one way to develop and nurture talent that our research ecosystem is going to need going forward,” he said. “Finally, students participating in this program’s projects may get exposure to future job opportunities as a result of Amazon’s engagement.”