How Police Departments Can Evaluate Threats by Using an Algorithm



When Tamir Rice was killed by police officers in Cleveland, Ohio, in 2014, some observers assigned a portion of the blame to a 911 dispatcher. She relayed a citizen’s concern that a black male was sitting on a park swing, pulling a gun from his waistband and pointing it at people. But she failed to convey the caller’s observation that the male was “probably a juvenile” and that the gun was “probably fake.” Perhaps that information would have changed the behavior of the cop who shot to kill even as he stepped from his squad car.

As it turned out, the gun was indeed a harmless toy, and Rice, the black male holding it, was just 12 years old.

Every year, incidents like that one underscore how crucial it is for 911 dispatchers to piece together and pass along relevant information to first responders. Cops, firefighters and EMTs react in ways shaped by whatever they’re told.

Of course, even the best dispatchers can only piece together so much context in a matter of seconds. Often all they have to go on is a single, frantic 911 caller of unknown reliability. And that’s where the technology company Intrado sees a market opportunity.

With its product, Beware, a city that gets a 911 call about a known individual or address can plug that information into a proprietary search function and get a “threat assessment” based on publicly available data that a dispatcher wouldn’t have time to find or weigh. The technology would not have helped to save Tamir Rice. But according to an article in The Washington Post, police officers in Fresno, California, believe that it helps them better respond to many emergency calls in that city.

“While officers raced to a recent 911 call about a man threatening his ex-girlfriend, a police operator in headquarters consulted software that scored the suspect’s potential for violence the way a bank might run a credit report. The program scoured billions of data points, including arrest reports, property records, commercial databases, deep Web searches and the man’s social-media postings,” the story begins. “It calculated his threat level as the highest of three color-coded scores: a bright red warning. The man had a firearm conviction and gang associations, so out of caution police called a negotiator. The suspect surrendered, and police said the intelligence helped them make the right call—it turned out he had a gun.”
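Intrado treats its method as a secret, but the Post’s description, billions of data points distilled into one of three color-coded levels, at least suggests the shape of such a system. Here is a deliberately crude sketch of how a color-coded score could be computed. Every field, weight, and threshold in it is invented for illustration and implies nothing about how Beware actually works:

```python
# A hypothetical sketch of a three-color threat score. Every field, weight,
# and threshold here is invented for illustration; Intrado treats Beware's
# actual method as a trade secret.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubjectRecord:
    arrest_reports: int = 0            # prior arrest reports found in searches
    firearm_convictions: int = 0       # the Post's suspect had one
    gang_associations: bool = False
    flagged_posts: List[str] = field(default_factory=list)  # social-media hits

def threat_color(record: SubjectRecord) -> str:
    """Bucket an additive score into the three color-coded levels
    the Post describes: green, yellow, or red."""
    score = (
        2 * record.arrest_reports
        + 10 * record.firearm_convictions
        + (8 if record.gang_associations else 0)
        + 3 * len(record.flagged_posts)
    )
    if score >= 15:
        return "red"
    if score >= 5:
        return "yellow"
    return "green"

# The suspect in the Post anecdote: a firearm conviction plus gang ties.
print(threat_color(SubjectRecord(firearm_convictions=1, gang_associations=True)))  # red
```

A real product’s weighting would presumably be far more elaborate, but the point stands: whoever sets the weights and thresholds is quietly deciding who gets flagged, and the Post’s suspect, with a firearm conviction and gang ties, would land in red under almost any such scheme.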

That sounds like a good outcome in that particular case. And Fresno Police Chief Jerry Dyer is a fan of the product as one part of a larger municipal-surveillance network that’s now at his disposal.

“Our officers are expected to know the unknown and see the unseen,” he told The Post. “They are making split-second decisions based on limited facts. The more you can provide in terms of intelligence and video, the more safely you can respond to calls.”

But as is often the case when police departments adopt new technology on their own initiative, rather than after a sustained period of public comment and civic debate, Dyer appears to underestimate the pitfalls the technology could bring. He has adopted it without key safeguards, making problems all but certain.

One was raised at a City Council meeting:

Councilman Clinton J. Olivier, a libertarian-leaning Republican, said Beware was like something out of a dystopian science fiction novel and asked Dyer a simple question: “Could you run my threat level now?”

Dyer agreed.

The scan returned Olivier as a green, but his home came back as a yellow, possibly because of someone who previously lived at his address, a police official said. “Even though it’s not me that’s the yellow guy, your officers are going to treat whoever comes out of that house in his boxer shorts as the yellow guy,” Olivier said. “That may not be fair to me.”

In fact, depending on how heavily Beware weighs social-media posts, it would seem relatively easy for a hostile person to create a few fake accounts and elevate someone else’s personal threat assessment, and that of their home address, to red status.
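To see why, consider a scorer that counts keyword hits in posts attributed to a name without verifying who actually controls the accounts. The keyword list and counting rule below are invented for illustration; how Beware really treats social media is unknown:

```python
# Hypothetical illustration of gaming a naive social-media scorer: it counts
# keyword hits in posts attributed to a name, with no check of who actually
# controls the accounts. The keywords are invented for illustration.
THREAT_KEYWORDS = {"gun", "shoot", "revenge"}

def social_media_hits(posts: list[str]) -> int:
    """Count threat-keyword hits across all posts attributed to the subject."""
    return sum(
        1
        for post in posts
        for word in THREAT_KEYWORDS
        if word in post.lower()
    )

real_posts = ["Great game last night!"]
# A hostile party registers a few fake accounts in the victim's name:
planted_posts = ["gonna get my gun", "time for revenge", "shoot first"]

print(social_media_hits(real_posts))                  # 0
print(social_media_hits(real_posts + planted_posts))  # 3 -- inflated by fakes
```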

How resistant is this proprietary search capability to being gamed? Fresno’s police department can’t possibly know, not only because the product is relatively new, but because it won’t be told how the technology actually works.

Intrado considers the specifics a trade secret.

Opacity of that sort will make it much more difficult to evaluate the efficacy of the company’s tool. And it could easily obscure egregious civil-liberties violations. For example:

  • The algorithm could assign an elevated threat level to individuals who have social-media accounts registered under names typically given to black or Hispanic people.
  • It could assign an elevated threat level based on tweets or Facebook posts that offer constitutionally protected speech that criticizes police officers or police unions.
  • It could disadvantage low-income people by assigning an elevated threat level to their addresses based on the behavior of past tenants in their high-turnover apartments, while richer folks in single-family homes are less often miscast (a dynamic sketched in the example after this list).
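To make that last failure mode concrete, suppose an address simply inherits the worst score of anyone who has ever lived there. The rule below is invented, not drawn from Intrado, but it shows how a high-turnover apartment could stay flagged long after the person who earned the flag moved out:

```python
# Hypothetical illustration of address scoring: if an address inherits the
# worst color ever assigned to any past resident, high-turnover apartments
# accumulate risk that stable single-family homes never do. The rule is
# invented, not drawn from Intrado.
COLOR_RANK = {"green": 0, "yellow": 1, "red": 2}

def address_color(past_resident_colors: list[str]) -> str:
    """Score an address as the worst color among everyone who ever lived
    there, regardless of who lives there now."""
    return max(past_resident_colors, key=COLOR_RANK.__getitem__, default="green")

# A decade of tenants cycling through one apartment vs. one stable household:
apartment = ["green", "green", "yellow", "green", "red", "green"]
family_home = ["green"]

print(address_color(apartment))    # red   -- the current tenant inherits it
print(address_color(family_home))  # green
```

Under a rule like that, Councilman Olivier’s “yellow” house is exactly what you would expect: the color belongs to the address’s history, not to him.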

If Beware were open-source software, an unjust or flawed criterion or piece of code could be discovered. But the secrecy around Intrado’s approach makes real oversight impossible.

In that way, the company’s product is part of an alarming trend. More and more secret code is being incorporated into the criminal-justice system, making it more opaque and vulnerable to mistakes.

“Proprietary software interferes with trust in a growing number of investigative and forensic devices, from DNA testing to facial recognition software to algorithms that tell police where to look for future crimes,” Rebecca Wexler explained last year in Slate. “Inspecting the software isn’t just good for defendants, though—disclosing code to defense experts helped the New Jersey Supreme Court confirm the scientific reliability of a breathalyzer.”

Dyer downplays the concerns of Beware’s critics, assuring The Washington Post that its scores don’t trigger a particular response. In Fresno, he explained, 911 operators “use them as guides to delve more deeply into someone’s background, looking for information that might be relevant to an officer on scene.” But that just means any faulty information would reach police officers secondhand. Dyer added that street officers never see the scores themselves, although in at least one case they were told about a “red” address and then called in a negotiator.

Police departments that use Beware would seem to have troublingly perverse incentives, insofar as cops who use deadly force are judged based on what a reasonable officer would’ve done in the same situation with the same information. Lawsuits against municipal governments turn on that question. Will the fact that police were responding to a call relating to a house or individual with a red threat level now be used to argue that subsequent force was relatively more reasonable?

I can imagine a “smart” emergency-response system that incorporates publicly available data in a way that enhances public safety without infringing on civil liberties. If I had a schizophrenic son, I would love a way to register that fact with the emergency-response system so that dispatchers would know to send folks trained to deal with the mentally ill. In the event of a fire, I’d love for my local fire department to know automatically that my wife and I have a friendly dog in the house.

But the number and gravity of the unanswered questions surrounding the technology that Fresno and others are using more than justify the skepticism of critics like the ACLU of Northern California’s Matt Cagle, who summed up his objections to me:

Fresno police rushed forward with new surveillance technology without meaningful public input about whether this software is appropriate or what safeguards should be in place. This is yet another surveillance tool being used without transparency or accountability, and it risks targeting communities that are already vulnerable to police misconduct.

If Intrado opened up every aspect of Beware to scrutiny by the public, and the Fresno police force committed to specific protocols, its benefits might prove greater than its drawbacks. But so long as residents aren’t allowed to know what causes their local police force to stigmatize a person or an address as “dangerous,” they’re effectively denied the ability to decide whether the software tool is just or prudent. My threat assessment: Beware of this product and proceed only with great caution.
