How to Stop Fearing AI


The latest fear is that artificial intelligence will pass along or even amplify the biases and inequities that have plagued our society for decades.

Seems like every day there’s another reason to worry about artificial intelligence.

It all started with shrill warnings of an existential threat: Those pesky computers would one day figure out they’re in a broken relationship with their carbon-based bosses; that we need them more than they need us, and that we aren’t growing with them as they hoped. When that day arrives, we’ll find ourselves on the collared end of the leash faster than you can say super-intelligence. At least those of us whom our digital overlords deem worthy of keeping around.

But that concern seemed remote and better left to Twitter debates between tech billionaires. So we decided to focus next on the more prosaic issue of job loss: that the robots’ coming-out party would surely lead to exponential growth in leisure time for us humans, but without the requisite salary and benefits necessary to enjoy our newfound freedom. But, you know, the dire predictions that accompanied the marketing of the electric washing machine in 1904 never came to pass, so maybe we’ll all be OK.

The latest cloud of fear emanating from this tech-enabled dystopia is that artificial intelligence, when used in the decision-making systems we interact with daily, will pass along or even amplify the biases and inequities that have plagued our society for decades, if not centuries. Think mortgage approvals, auto insurance rates, offers of employment, apartment rentals, or even what food Kroger stocks in each of its grocery stores.

It’s reasonable to fear that these AI-powered “deciderers” would inherit the biases present in the humans who engineered them. Or that the historical data sets used to train and tune the algorithms would likewise drag our history of bias into our collective future.

Bias, whether human or AI-inspired, may be a bad idea, but in many circumstances it isn’t illegal. At least not yet. Earlier this month, Sens. Ron Wyden, D-Ore., and Cory Booker, D-N.J., and Rep. Yvette Clarke, D-N.Y., introduced the Algorithmic Accountability Act, which would require companies to study and fix flawed computer algorithms that result in inaccurate, unfair, biased or discriminatory decisions impacting Americans. It’s an important bill that, if written properly, could alleviate much of the concern about AI bias.

But legislators taking aim at AI bias would do well to learn from an industry where bias has been illegal for decades: lending. Almost 45 years ago, Congress passed the Equal Credit Opportunity Act, which, among other things, made it illegal for banks and other lenders to consider an applicant’s race, gender, or ethnicity, as well as other criteria, in deciding whether he or she qualifies for a loan and at what interest rate. What’s more, the resulting fair lending regulations stipulated that a bank could run afoul of the law even if it had no intention of discriminating or “redlining” to avoid a particular class of borrower. If a lender’s decision process results in unfair bias, it can be found illegal, regardless of intent.

Legislators enacting fair lending laws in the 1970s were naturally focused on human bias rather than AI. But shockingly, the substance of fair lending enforcement (monitoring and testing the impact on actual consumers who apply for loans) translates well to the AI-driven world almost half a century later.

With the monsoon-like backlash against Big Tech in recent months, it’s easy to forget that computers can be very helpful. AI algorithms help us on a daily basis and make our lives easier. We rely on these technologies for mapping and navigation, real-time traffic alerts, curated news feeds, and fraud monitoring that flags unusual purchases in our bank accounts. Our music and entertainment streaming services recommend content based on algorithms. But AI isn’t perfect, and the technology sector could learn from the financial services sector in its approach to measuring outcomes and impact.

In 2010, the Consumer Financial Protection Bureau was created by the Dodd-Frank Wall Street Reform and Consumer Protection Act, an agency designed to fix some of the root causes of the financial crisis that led to the Great Recession of 2008. The CFPB examines many creditors to assess compliance with federal consumer financial law. One area of that examination involves regular monitoring and reporting on borrower impact, particularly for protected groups. Because AI models change and grow over time, one-time testing isn’t enough. To address this, Upstart created a series of automated tests to assess our AI models and alternative data as part of a compliance plan we agreed to with the CFPB, and we report applicant outcomes to the CFPB on a regular basis.
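To make that kind of outcome monitoring concrete, here is a minimal sketch of one common test: the “four-fifths rule” used in disparate impact analysis, which compares approval rates across applicant groups. The function name, group labels, and sample data below are hypothetical; this illustrates the general technique, not Upstart’s actual test suite.

```python
from collections import defaultdict

def adverse_impact_ratio(outcomes, protected_group, reference_group):
    """Ratio of approval rates between two applicant groups.

    `outcomes` is an iterable of (group, approved) pairs. Under the
    classic "four-fifths rule," a ratio below 0.8 is a red flag that
    warrants a closer look, regardless of the lender's intent.
    """
    approvals = defaultdict(int)
    totals = defaultdict(int)
    for group, approved in outcomes:
        totals[group] += 1
        approvals[group] += int(approved)

    def rate(group):
        return approvals[group] / totals[group] if totals[group] else 0.0

    ref_rate = rate(reference_group)
    return rate(protected_group) / ref_rate if ref_rate else float("nan")

# Hypothetical applicant outcomes: (group label, loan approved?)
sample = [
    ("A", True), ("A", False), ("A", False), ("A", False),
    ("B", True), ("B", True), ("B", False), ("B", True),
]
ratio = adverse_impact_ratio(sample, protected_group="A", reference_group="B")
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.33 here; below 0.80, so flag it
```

Because the models keep changing, a real monitoring plan would run a test like this automatically on every fresh batch of applications, not just once at launch.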

The credit score itself is a mathematical formula, created by FICO in 1989, that has been shown to propagate racial inequality and perpetuate the wealth gap. Thirty years on, we’ve seen the limitations of the credit score. It’s time for a new approach. We can use AI to bridge that gap, and to help find more ways to prove that consumers are indeed creditworthy.

We don’t need to worry ourselves sick about what unfairness AI might portend. We simply need to monitor consumer outcomes and make sure they aren’t biased. Perhaps Elon Musk is right. Maybe computers and AI will someday rise up to destroy all of humanity. If that’s true, at least they can give us lower interest rates until then.

Dave Girouard is the co-founder and chief executive officer of Upstart.