EEOC Launches Initiative to Ensure AI Doesn’t Exacerbate Civil Rights Issues

The new program looks to educate employers and employees about how algorithms are used in hiring and other employment decisions.

The Equal Employment Opportunity Commission wants to make sure new technologies used in hiring, firing and promotion decisions abide by civil rights laws, even if the people using those technologies don’t have a fundamental understanding of how they work.

Artificial intelligence tools are being used in the public and private sectors to help employers find potential employees and make hiring and retention decisions. But those technologies carry inherent risks, including automating human biases and embedding systemic issues into organizations' processes.

To combat this, the EEOC recently launched a new initiative to catalog those potential harms and educate employers and employees about how best to use these technologies without causing civil rights issues.

“The bottom line here, really, is despite this aura of neutrality and objectivity around artificial intelligence and predictive tools that incorporate algorithms, they can end up reproducing human biases if we’re not careful and aware that we need to check for that,” EEOC Chair Charlotte Burrows said while announcing the initiative during a Genius Machines event hosted by Nextgov and Defense One.

Burrows cited reports of hiring algorithms that suppressed resumes from female applicants and job posting tools that failed to give employers the full scope of potential hires.

“We have always had the role of ensuring that any screen or device that might screen someone out—might be the reason that you didn’t get a job—that those are fair with respect to not having an unjustified impact on people based on race or national origin or age, etc.,” she said. “We are launching a brand new initiative to ensure that tech-based hiring tools that are used in employment decision making—including those that incorporate artificial intelligence in some way—comply with the federal civil rights laws that our agency enforces.”

The initiative launched with five initial goals, according to an EEOC press release:

  • Establish an internal working group to coordinate the agency’s work on the initiative.
  • Launch a series of listening sessions with key stakeholders about algorithmic tools and their employment ramifications.
  • Gather information about the adoption, design, and impact of hiring and other employment-related technologies.
  • Identify promising practices.
  • Issue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions.

To start, the program will focus on educating employers about the AI tools they are buying but might not understand.

“Some [AI tools] are being sold with the promise that if you remove the human element, you remove the bias,” Burrows said. “One of the things that we really want to help educate people about is that’s not necessarily actually the case.”

The EEOC plans to produce educational materials to help employers at every technical level better understand what they are buying and teach them how to audit those tools to identify ingrained biases.

The goal is not to prevent, restrict or roll back the use of AI in hiring and management decisions but rather make sure the people using those tools understand what they’re buying.

“Employers sometimes have a lot more resumes than they can sort through,” Burrows said. “There’s a real need there, and I’m hopeful that people can get the benefit from it while still protecting civil rights.”

That said, under current law, “employers can’t just take a vendor’s word that it’s safe and complies with the law,” she said.

The commission already offers employers a set of uniform guidelines for screening potential employees—guidelines that have proven effective with older processes as well as newer technologies, Burrows said. But the new initiative will enhance those offerings.

“What our value added is now and the reason why we wanted to start this initiative at this moment is to help people understand how [the uniform guidelines] apply now that we have these new tools emerging,” she said. “What we’re hoping to do is have these conversations, learn more about what the questions are, then provide ongoing guidance to the public about it: You’ve got these guidelines, how do you use them in this context?”

The commission also wants to make sure employees are informed about the technologies that are influencing their bosses’—or prospective bosses’—decisions, she told Nextgov.

For instance, Burrows noted that algorithms can shape job opportunities even before someone applies for a job.

“A lot of employers like to target their ads to those whom they believe will be most interested in them or most desirable as applicants,” she said. “If there is some bias that creeps into an artificial intelligence program or other screening tools, that can actually affect the pool from the beginning.”

The initiative is in the early stages now, with a focus on fact-finding and listening sessions.

“Once we get that message out … then the next question is, ‘What do you want me to do?’ And that’s where the initiative is right now,” Burrows said.