Firm Offers Tips on Eliminating Bias and Other Risks When Deploying Analytics Models

Government leaders are “appropriately” wary of implementing models they don’t understand, a senior expert explained.

Advanced analytics models do what people have asked them to do—so the right kind of human oversight system is needed to eliminate bias in them, according to a new report from management consulting firm McKinsey & Company on de-risking public sector technology.

“I hope our research will help leaders understand that models can be made more transparent, more accurate and fairer—and that these risks can be managed,” McKinsey Senior Expert Eric Schweikert told Nextgov Thursday.

Schweikert served as lead author of the report, released Friday. He briefed Nextgov on some of its main takeaways for government insiders.

Multiple McKinsey experts contributed to the work, which discusses “pernicious risks” associated with advanced analytics in the public sector, such as bias and discrimination. “Examples are not hard to find: advanced analytics models have been shown to sentence people of color more harshly, erroneously accuse low-income and immigrant families of fraud, and award lower grades to students from less privileged neighborhoods,” they wrote in the report, noting that this may be why many public agencies are hesitant to deploy such assets at scale.

“As the complexity of data increases, fewer and fewer people truly understand what models are doing—nor where their bias may lie—which makes that human oversight more difficult. My work often involves helping government agencies kickstart their analytics efforts. I find senior leaders are—appropriately—wary of implementing models they don’t understand,” Schweikert explained. “That’s why bringing transparency and accountability to models is a passion of mine, both by temperament, and in order to help organizations be more comfortable with using analytics models.” 

Artificial intelligence and machine learning models can present more potential risks than other models, depending on the complexity of their algorithms. But techniques like “explainable AI” can calculate the contribution of each input variable to an answer in an otherwise unexplainable AI model, Schweikert said. People can therefore debate a model’s behavior in terms anyone can follow, regardless of expertise.
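
The report does not name specific techniques, but one widely used way to compute such per-variable contributions is Shapley-value attribution. The sketch below is a minimal illustration assuming the open-source shap library and a toy model; none of these specifics come from McKinsey’s report.

    # Minimal sketch of per-prediction variable contributions using
    # Shapley values. The dataset, model, and shap library are
    # illustrative assumptions, not details from the report.
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Attribute a single prediction to its input features so reviewers
    # can see which variables drove that answer.
    explainer = shap.TreeExplainer(model)
    contributions = explainer.shap_values(X[:1])
    print(contributions)  # one contribution per feature (and per class)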

In the report, the experts outline key actions to help mitigate risks of bias and other hazards.

First on that list is to make someone responsible for model risk management. In regulated industries, that responsibility typically sits with a chief risk officer, the report notes, but “in federal agencies that lack robust enterprise-risk-management structures, it could sit with the chief information officer, chief data officer, or the senior-level executive responsible for governance and oversight of technology.”

To Schweikert, the right candidate needs a mix of skills that is both hard to find and much in demand.

“But when you find the right person, it can transform analytics in an agency. Crucially, they need to act as an analytics ‘translator,’” he said. “That means engaging deeply with the mission of the agency—the senior leaders who are asking the questions—yet understanding enough of the data science to spot problems.”

Further, he noted, individuals in such roles must be able to balance process and oversight to minimize risk without inadvertently creating obstacles to productivity. They also need to have the interpersonal skills to communicate the transformative value of analytics and collaborate with skeptical colleagues.

Other recommendations in the report include developing and sharing a clear set of analytical practices and standards; building a model-risk-management infrastructure; considering the formation of algorithm review panels and the appointment of an ombudsperson; and strategizing at the enterprise level.

“These are all ways of ensuring the people building the models have considered potential sources of bias and have done everything to eliminate them,” Schweikert said. “Finally, rigorous measurement of bias in the model’s decisions, in production, could underpin the human effort.”
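
The quote leaves the exact measurement open. As one hedged illustration, the snippet below computes a demographic-parity gap, the difference in favorable-decision rates between two groups, over a batch of decisions; the metric choice and toy data are assumptions, not details from the report.

    # Illustrative production bias check: difference in favorable-decision
    # rates between two groups (demographic parity). Data is made up.
    def demographic_parity_gap(decisions, groups, group_a, group_b):
        def rate(g):
            picked = [d for d, grp in zip(decisions, groups) if grp == g]
            return sum(picked) / len(picked)
        return rate(group_a) - rate(group_b)

    decisions = [1, 0, 1, 1, 0, 1, 0, 0]  # 1 = favorable outcome
    groups = ["a", "a", "a", "b", "b", "b", "b", "a"]
    print(demographic_parity_gap(decisions, groups, "a", "b"))  # 0.0 here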

The study points to a recent report showing that 45% of U.S. agencies were “still only experimenting with advanced analytics” and that 12% “were using highly sophisticated techniques.” Schweikert emphasized that point, noting that “a lot of government agencies are just getting started with AI.” In some cases, they’re turning to chatbots or other simple tools. Still, in the related area of machine learning, he said, “there are agencies that have done deep and serious work with good outcomes.”

Diving deeper into the challenges both government and industry face in this realm, the senior expert said one of the biggest is bias that has existed in data for decades.

“For instance, few people would explicitly use ethnicity as an input variable, but many would use zip code, which—depending on how the model is constructed—could inadvertently become a proxy for ethnicity,” he explained. “And that data also includes historical bias: for instance, home-loan models have had to address the legacy of human-driven redlining that is embedded in loan data.”
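
One way teams sometimes screen for proxies like the one Schweikert describes (an illustrative approach, not one prescribed in the report) is to test how well a candidate feature predicts the protected attribute on its own:

    # Illustrative proxy check: if a feature alone predicts the protected
    # attribute well, treat it as a potential proxy. Data is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    zip_code = rng.integers(0, 10, size=1000).reshape(-1, 1)  # toy zips
    ethnicity = (zip_code.ravel() < 4).astype(int)  # correlated by design

    score = cross_val_score(LogisticRegression(), zip_code, ethnicity, cv=5).mean()
    if score > 0.7:  # arbitrary illustrative threshold
        print(f"feature predicts the protected attribute ({score:.2f}): likely proxy")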

On top of that, some algorithms can be so sophisticated that it’s hard to understand how they are making decisions. In those cases, there could be a need to “simplify” the model “to give up a little precision for the sake of explainability,” Schweikert said. And federal entities with less experience in analytics may also lack the expertise to translate business questions into specific mathematical targets for models.
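
As a rough sketch of that trade-off (the models and data here are assumptions, not the report’s), one can compare an opaque ensemble against a shallow decision tree whose entire decision logic can be printed and reviewed:

    # Illustrative precision-vs-explainability comparison on toy data.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

    opaque = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)
    simple = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X_tr, y_tr)

    print("complex model accuracy:", opaque.score(X_te, y_te))
    print("simple model accuracy:", simple.score(X_te, y_te))
    # The shallow tree usually scores a bit lower, but its full logic
    # is human-readable:
    print(export_text(simple))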

“[L]et’s say I’m overwhelmed with resumes and want to build an AI model to screen them. But I want to make sure I consider demographic characteristics in the outcome—ensuring there is a diverse representation. That can be translated into the math in a surprising number of ways,” he noted. “The agency needs to have someone who can help broker that conversation between agency leadership and the data science team to deliver the right outcomes.”
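
Two of the many possible translations he alludes to, demographic parity and equal opportunity (common fairness criteria chosen here as examples, not drawn from the report), can disagree on the same screening decisions:

    # Two different formalizations of "diverse representation" in
    # resume screening. The toy decisions and labels are assumptions.
    import numpy as np

    selected = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # 1 = resume advanced
    qualified = np.array([1, 1, 1, 0, 1, 1, 0, 0])  # ground-truth label
    group = np.array(list("aabbaabb"))

    def selection_rate(g):
        return selected[group == g].mean()

    def qualified_selection_rate(g):
        mask = (group == g) & (qualified == 1)
        return selected[mask].mean()

    # Demographic parity: compare overall selection rates.
    print(selection_rate("a") - selection_rate("b"))        # 0.50
    # Equal opportunity: compare rates among qualified candidates only.
    print(qualified_selection_rate("a") - qualified_selection_rate("b"))  # 0.75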

Schweikert added that, going forward, McKinsey will continue to do extensive research and development on model risk management and responsible analytics.