Without legal safeguards, this technology will undermine democratic values and fundamental rights.
At protests against police violence across the country, law enforcement authorities have deployed the tanks, tear gas and military equipment that they have at their disposal. Americans have also gotten a look at a more insidious police capability: facial recognition systems with the ability to identify faces in a crowd and track people as they move through the world.
As devotees of any police procedural drama will know, law enforcement agencies have used facial identification software for years, finding suspects caught on security cameras in databases of mugshot photos. But in recent years, software for identifying faces has improved dramatically, high-definition cameras have proliferated to near-ubiquity, and the facial databases that police can access have expanded far beyond the mugshots of convicted felons.
This combination gives law enforcement the ability to conduct real-time surveillance of anyone—or everyone. And this happens in a black box with few limits: Despite the advances in the technology, legal controls on facial recognition have not kept up.
Facial recognition can help find missing children and identify wanted criminals. But without legal safeguards, this technology will undermine democratic values and fundamental rights. Ubiquitous, opaque mass surveillance presents an obvious threat to privacy, chills our freedoms of speech and assembly, and diminishes our ability to be anonymous in public.
What is more, many facial recognition systems have built-in biases: research has demonstrated that commercially available systems disproportionately misidentify dark-skinned people, especially women. And recent reporting on the arrest of an innocent Black man based on a false facial identification demonstrates how this technology can entrench and enhance systemic bias in policing.
In the glare of recent media attention, IBM, Amazon and Microsoft have all suspended their supply of facial recognition software to law enforcement, citing ethical concerns and calling for regulation. But corporate restraint will not prevent the abuse of this technology—even if some firms stand down, others are eager to step in.
In the absence of any national law or policy, cities and states have been leaders in regulating facial recognition. Illinois was ahead of its time in 2008, passing the Biometric Information Privacy Act, which restricts how private entities may collect and use biometric data, including facial scans. Facebook’s use of facial recognition without Illinoisans’ consent recently led to a $550 million settlement under this law.
Earlier this year, Washington state became the first to pass a law addressing use of facial recognition by government authorities—including law enforcement. The law requires transparency into how authorities use the technology and requires human review when automated identification could significantly affect people’s lives. It also prohibits the use of facial recognition for real-time surveillance except with a warrant or under “exigent circumstances.”
Congress has woken up to the risks of this technology. The House-passed police reform bill would restrict facial recognition in police body cameras, and a new bill introduced by House and Senate Democrats would ban the use of facial recognition by federal law enforcement and condition federal funding of state and local law enforcement on the imposition of similar bans. A moratorium on the use of this technology is appropriate, as lawmakers consider more nuanced policy solutions that can address the potential harms of the technology while permitting its more benign uses.
A more durable solution to prevent abuse of facial recognition should include robust transparency measures, which can provide critical insight into how and where the technology is used. Beyond such baseline requirements, any law on facial recognition should address all three of the broad risks presented by the technology.
First, to address privacy risks, Congress should pass a national privacy law that disciplines the collection and use of biometric data, including facial scans. The European Union’s data protection laws have been used to punish misuse of facial recognition systems and can be a model in this regard.
Second, to address bias risks, Congress should establish testing requirements, standard-setting, and certification mechanisms to prevent deployment of biased facial recognition systems. Where automated facial identification could affect individuals’ lives, human review should be mandated. And photo databases used by police for facial recognition must themselves be unbiased, for example by excluding the mugshots of people not convicted of any crime.
Third, laws should limit the use of facial recognition for mass surveillance and real-time identification. Such scanning should require a warrant and should be prohibited in First Amendment-protected settings such as protests and religious observances.
Unchecked, these tools will undermine privacy, entrench bias, and create a surveillance apparatus ripe for abuse. The measures above can help prevent those outcomes, and additional restrictions may be warranted, such as the prohibition of certain especially risky applications of facial recognition. The technology is already here; it is time for policymakers to act.
Sam duPont is the deputy director of GMF Digital, the German Marshall Fund's tech policy program.