A malfunction could mean wrongful identification of an innocent person.
Brian Brackeen, CEO of facial recognition company Kairos, has a message about the technology his firm develops: It’s not yet ready for the burden of upholding the law.
In an editorial for TechCrunch, the tech CEO explains that artificial intelligence algorithms powering facial recognition need massive amounts of data to function—and in his experience these systems have not been given enough data on people of color to function properly. If used in law enforcement, that malfunction could mean wrongful identification of an innocent person.
“Software is only as smart as the information it’s fed,” Brackeen writes. “If that’s predominantly images of, for example, African Americans that are ‘suspect,’ it could quickly learn to simply classify the black man as a categorized threat.”
That particular example could be a reality, as data collected by law enforcement are notoriously biased against people of color.
Brackeen also warns Americans about China's experiments with the technology, where citizens are monitored by cameras on every street corner and authorities make arrests or issue tickets based on automatic identification from that footage.
“Imagine if America and its already terrifying record of racial disparity in the use of force by the police had the power and justification of someone being ‘socially incorrect’?” he wrote.
Yet companies are working to improve the parity of their facial recognition algorithms across skin tone and gender, mainly after a study from MIT Media Lab found huge gaps in commercial systems' accuracy between white men and women of color.
Microsoft, a company shown by MIT to have such a disparity, released a blog post today claiming that it had reduced error rates for its commercial facial recognition algorithm on men and women of color by "up to 20 times." Microsoft sells this facial recognition technology for other companies to use in their own apps and software products.