Law enforcement needs standards for using AI algorithms, GAO official says


The director of Science, Technology Assessment and Analytics at the Government Accountability Office told lawmakers there should be transparency in how such algorithms work and when they are used.

Algorithms have long been used to help sift through data pertinent to law enforcement investigations. But challenges persist in these use cases, particularly related to biased training data, Karen Howard, director of Science, Technology Assessment and Analytics at the Government Accountability Office, told members of the Senate Judiciary Committee on Wednesday.

Howard said that several areas of law enforcement that use AI algorithms could benefit from additional federal oversight, including more training for investigators, standards development for AI in criminal investigations and greater transparency into an algorithm's performance and testing.

“We think national standards would help to drive the conversation if there were standards at the federal level,” she said. “Some of those kinds of standards could start to bring more consistency to this and help reduce the potential for well-meaning human investigators who are doing their best to solve a crime to be able to use the tools more effectively and interpret the results more efficiently.”

Howard added that policymakers could help promote these common standards for nationwide use, along with ushering in more transparency not only into how certain algorithms learn, but also into when an AI-powered algorithm has been used, such as in a facial recognition scenario.

“We believe that transparency step should be known,” she said. “People should be aware when an algorithm has been used in one form or another as part of the evidence collection and assessment process.”

The National Institute of Standards and Technology has established itself as a vanguard in measuring the safety and efficacy of AI algorithms. Howard said that while NIST tests and verifies the accuracy of the algorithms submitted to it, it does not test every algorithm available.

“The NIST tests, which are top-of-the-line, gold-standard testing, they are done only if a vendor chooses to submit its algorithm for testing. There's no requirement,” she said.

Algorithms like those used in probabilistic genotyping software — which is used to link genetic samples to persons of interest — have previously raised concerns among oversight agencies. Howard suggested that one way to ensure such algorithms are used safely and effectively in law enforcement is to require testing by an independent third party, such as NIST.

She also echoed a point that is prevalent in current government-issued AI guidance: maintain a human in the analytics process. Citing the example of an algorithm identifying potential fingerprint matches against an existing database, Howard noted that a human should be involved in interpreting the results.

“All the algorithm can do is find the best matches to whatever quality fingerprint they're given,” she said. “We know that, in order for these tools to be used properly, the analyst's decisions before the algorithm comes into play are critical, as are the analyst's decisions afterwards.”