4 Agencies Pledge Enforcement Against AI Bias


A letter led by the Federal Trade Commission promises vigilance against automated systems used to make critical socioeconomic decisions.

Leaders of four federal agencies doubled down on their commitment to using existing laws to police emerging artificial intelligence systems, as innovation in the field drives AI deeper into critical components of society. 

In a letter announced by the Federal Trade Commission, Chair Lina Khan joined Consumer Financial Protection Bureau Director Rohit Chopra, Assistant Attorney General Kristen Clarke of the Justice Department’s Civil Rights Division, and Equal Employment Opportunity Commission Chair Charlotte Burrows to articulate their agencies’ continued enforcement efforts against bias in AI systems.

“Today, the use of automated systems, including those sometimes marketed as ‘artificial intelligence’ or ‘AI,’ is becoming increasingly common in our daily lives,” the letter says. “Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.”

Citing the growing use of automated technologies in systems that make important societal decisions, including those governing housing and job opportunities, the federal signatories acknowledged both the benefits and the risks of increasingly automated systems. Facing a lack of regulation amid rapid innovation, the four leaders pledged their agencies’ continued advocacy for individual rights. 

“Existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices,” the letter reads. “We take seriously our responsibility to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws, and each of our agencies has previously expressed concern about potentially harmful uses of automated systems.”

Among the design features of most AI technologies that concern federal leaders are model opacity and the lack of transparency around how systems reach decisions; training on unrepresentative datasets, which can lead algorithms to make biased decisions; and the ways a given organization may leverage AI tools. 

“Our agencies reiterate our resolve to monitor the development and use of automated systems and promote responsible innovation,” the letter continues. “We also pledge to vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”

All four agencies represented in the letter, which have previously collaborated to voice concern over bias in AI systems, enforce some form of U.S. consumer protection regulation. The FTC in particular issued a warning in early 2023 cautioning businesses against making inaccurate or unsubstantiated claims about AI products that could lead to improper use. 

“We already see how AI tools can turbocharge fraud and automate discrimination, and we won’t hesitate to use the full scope of our legal authorities to protect Americans from these threats,” Khan said. “Technological advances can deliver critical innovation—but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.”