CBP’s Outgoing Biometrics Lead on Law Enforcement Use of Facial Recognition


Law enforcement should be careful using facial recognition in public spaces. But with the right applications, authorities and transparency, federal programs could do a lot of good.

After close to three decades in government, John Wagner, the head of Customs and Border Protection’s biometrics program and deputy executive assistant commissioner of the Office of Field Operations (OFO), is set to retire this month.

Wagner joined the Customs Service in 1999 and transitioned over to OFO in 2003 when it was created along with the Homeland Security Department. Since being named DEAC in 2014, Wagner has led CBP’s efforts to comply with a congressional mandate to use biometrics to verify the identity of travelers entering and exiting the country. This effort has culminated in the wide use of facial recognition technologies at ports of entry: land, sea and air.

As he prepares to retire, Wagner joined Nextgov to talk about the evolution of CBP’s Entry/Exit biometric program, why it’s different from some of the scarier facial recognition use cases and the broader implications of law enforcement’s use of the technology in public spaces.

The interview has been edited for length and clarity. 

Nextgov: Why are you leaving CBP now and where are you headed?

Wagner: I'm retiring as of the middle of July, after 28-plus years, and don't know what I'll be doing next.

What prompted you to retire?

The opportunity to finish up my career and look for other things to do.

Were you always interested in technology, and specifically biometric technology? Or is this just something you walked backward into?

I think it's something that grew over time. CBP had always been a very forward-leaning agency as far as computers, databases, data analytics and the use of technology. And I kind of grew up in the organization in that mindset. 

So, when I started getting involved in some of the national program work, we always looked to automation and technology, which was a natural flow into biometrics to support the programs we were doing—the Trusted Traveler programs, Global Entry—through the advent of fingerprinting en masse for the biometric Entry/Exit requirements or for criminal record searches. And that technology really started to come online, where you could run criminal history checks or law enforcement inquiries. It’s just the culmination of years and years of working with similar technological advances. 

Can you give us the history of CBP’s biometric programs, specifically with Entry/Exit? As you mentioned, the agency collects a lot of different biometrics. How did you land on face as the best option?

There were several congressional mandates requiring DHS to build a biometric-based arrival and departure system for most visitors to the U.S. This was a recommendation straight out of the 9/11 Commission. The government had an interest to know—a national security interest to know—who was still in the country and who had overstayed their authorized period of admission. And they felt biometrics were the best way to get accurate data on that. 

Our system up until then was basically biographically based. DHS implemented the arrival processing of visitors with fingerprints in the mid-2000s on entry. But the departure piece was elusive because there was no distinct departure process like in other countries. In Europe, for example, when you depart you go through a border control officer and have your passport stamped. There's no such design in the U.S. where we control departures. And domestic flights commingle with international flights at our airports. 

So, there was no simple place to inject a biometric collection without creating a lot of congestion and confusion and great expense for everyone. 

CBP was given this mission—Congress transferred it in 2013 to CBP. And really the solution for us came about by reevaluating the use of biometrics, the type of biometrics, and reengineering how biometrics are used within our systems. 

We haven't walked away from, say, fingerprints. Every time we encounter an alien for the first time, as the U.S. government, we will take the full set of 10 fingerprints. What we're switching to is a biometric that’s easier to take from a person and to match against the person, and then link it to their other biometrics that are already on file so we can get the results of the fingerprints without actually having to fingerprint someone, because the records are saved in the database. 

When we talk about landing on face as the choice, it was the one biometric we had on file for just about every traveler—including U.S. citizens, who provide to the Department of State the photograph that’s placed in their passport—and we can compare people against that photo, which is something that's commonly done manually today. 

It was the one biometric we had for just about every traveler. It was the simplest biometric and most intuitive for people to provide, because everyone knows how to take a picture. It wasn't seen as being as draconian as fingerprinting, which can make people feel like criminals or as if they did something wrong. It was a different passenger experience. 

And it was just a quick way and a simple transaction for CBP to build: to take a photograph, compare it against the photographs of the people traveling and link that back to the biographical vetting and the other biometric vetting so we have a complete picture—no pun intended. 

We built this on top of a lot of the systems we had built for vetting and national security purposes in the past and the [advanced passenger information] data and the [passenger name records] data we get from the airlines. We’re able to stage the photographs of who's traveling on that day or on that flight into a segmented pool or gallery. And when we take a picture, we only search it against those people that are flying. So, it's a very limited search. That's why it's so fast and it's so accurate. 

And we don't actually have to handle the travel document, which gets us into the reengineering piece when we think about different ways we're using biometrics.
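CBP hasn’t published how this matching works under the hood, but as a rough sketch of the segmented-gallery search Wagner describes—stage the photos of the people expected on a flight, then compare a new capture only against that small pool—the lookup might look something like the following. The traveler IDs, 512-dimensional face embeddings and similarity threshold are all illustrative assumptions, not CBP’s actual system.

```python
import numpy as np

def build_gallery(manifest):
    """manifest: list of (traveler_id, embedding) pairs staged from the day's flight data."""
    ids = [traveler_id for traveler_id, _ in manifest]
    embeddings = np.stack([emb for _, emb in manifest]).astype(float)
    # Normalize once so a dot product equals cosine similarity.
    embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)
    return ids, embeddings

def match(probe_embedding, gallery, threshold=0.6):
    """Return (traveler_id, score) for the best match, or None if nothing clears the threshold."""
    ids, embeddings = gallery
    probe = np.asarray(probe_embedding, dtype=float)
    probe /= np.linalg.norm(probe)
    scores = embeddings @ probe  # compare against only this flight's small gallery
    best = int(np.argmax(scores))
    return (ids[best], float(scores[best])) if scores[best] >= threshold else None

# Usage: stage a toy two-person flight gallery, then match a live capture.
rng = np.random.default_rng(0)
gallery = build_gallery([("traveler-001", rng.random(512)),
                         ("traveler-002", rng.random(512))])
print(match(rng.random(512), gallery))
```

Because the probe is scored only against the handful of people expected on that flight, the search stays small and fast, which is the property Wagner credits for the program’s speed and accuracy.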

I imagine that's also nice during the pandemic, too: cuts down on contact.

Sure. And now that people are a lot more cognizant of what they touch and who they’ve handed their travel documents or boarding pass to, the hygienic benefits of a touch-free, but also secure, environment would be of great interest not only to the travelers but also to the people who work at the airport, whether government, airport or airline employees.

How do you feel about where the program has come and where it's going? Do you feel as though the biometrics program and the facial recognition pilots that you've launched are in a steady enough place? What do you feel like you've finished and what do you feel like you're leaving behind?

I feel like we have finished designing what a feasible solution could look like. It's cost effective. It is secure. It meets the congressional mandates. It's very intuitive for people, which addresses the complexity, the confusion and the congestion concerns. And it actually facilitates travel. So, we've been able to devise a security solution—a very elusive security solution—by improving the passenger experience and the facilitation experience. I think we've defined that. 

It's a question now of how you implement this and what the implementation looks like. That's what we're in the midst of. The technology is very flexible as far as what a camera could look like, whether a gate is attached, or whether a person is standing there monitoring it. We're very flexible in how the airline or airport designs that point of the passenger experience.

I think what's left is addressing the privacy concerns, the surveillance concerns and the government's use of the information and being clearer with the public—or getting a better understanding with the public—of how the information is actually being used and how it's not being used.

What is your position on those issues and what potential solutions are there out there?

The new information we're collecting is a photograph of the traveler. If you're a U.S. citizen, and it's matched to your U.S. passport or your Global Entry photo that's in our database, [the new photo is] deleted. It's not kept by anyone. If you're a foreign national, and your picture is matched to your visa, your passport or your previous arrivals, it goes over to [DHS’s biometrics database IDENT] and is stored by the [Office of Biometric Identity Management] within IDENT as the official system of record. Then, any sharing of that information happens within that system of records notice, or privacy impact assessment. 

So, when you talk about what new data is really being collected, utilized or shared, there's really not much new here. That's where we have to do a better job of explaining that. And then explaining what photos we're actually matching against, which is really just your travel document photos or your previous arrival photos. These are photos a traveler has given to the government for purposes of international travel, and they're matched manually by a person to that photo today. We're just using an algorithm now to help us make better decisions.

The specific use case that we're talking about with CBP, you're doing one-to-one matching not one-to-many, right? It's identity verification, not identification—i.e., ensuring a person is who they claim to be, not identifying who an unknown person is.

It’s one-to-a-small-many.

Right because you're comparing photos to a gallery.

Right, against the people on your flight or the people flying into or out of the airport that day.

What's the accuracy rate of that right now?

What we're seeing is match rates in the 97% to 98% range, sometimes 99%.

The reason for this is we're using good quality cameras and we're using good quality photographs. You have a willing participant having their picture taken. It's not being taken surreptitiously; they're standing in front of a camera. So, you get a good quality photograph, and you're matching it against high quality photographs in a very limited gallery that your picture is already in. 

For instance, we take a high quality photograph of a traveler and search it against a few thousand photographs, a list that's expected to include that traveler's passport or visa photo. 

Right, if you are who you say you are.

Right. So, it's very accurate at answering: who does this picture most look like among the people on this flight manifest, or within this block of flights leaving this terminal at this time? That's very different from taking a photograph and running it against a random set of photographs of varying quality, which your picture might not, and probably shouldn't, be in, where you're a lot more likely to match or resemble someone than no one at all.

This is where NIST gets involved and looks at the different control factors in their studies and what causes higher error rates versus what helps mitigate the error rates.
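Wagner's point about gallery size can be made concrete with a standard back-of-the-envelope approximation that is not from the interview: if each comparison against a non-matching face clears the decision threshold with probability f (the per-comparison false match rate), then searching a gallery of N faces produces at least one false match with probability of roughly

$$P(\text{at least one false match}) \approx 1 - (1 - f)^{N},$$

assuming independent comparisons. At an illustrative f of one in a million, a flight gallery of a few thousand photos gives well under a 1% chance of a spurious hit, while the same algorithm run against a database of tens of millions of faces would surface spurious candidates almost every time. That is the kind of gallery-size effect NIST's identification testing examines, alongside image quality and the other factors Wagner mentions.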

For your program, are you using facial recognition in any other ways? Are you doing any one-to-large-many's or any of these other kinds of identification use cases?

For the biometric Entry/Exit, it's really just the person who's traveling inbound or outbound on an international flight against that gallery of assorted travel-related photos. And that's it. And you're only matched against those photos in that gallery, which is comprised only of travel document or arrival photos.

CBP has that narrow use case in Entry/Exit that you've been talking about. But then there's bringing facial recognition technology to other use cases, like embedding it into body cams to do identification, which is something that Border Patrol was looking at and something other federal agencies and law enforcement are looking at across the board. That's not a specific thing that you are doing within this program. But, as someone who's been around this technology for the amount of time you have and dug as deep as you have into these issues, I wanted to get your opinion on broader law enforcement use of facial recognition technology. From your experience, do you see a point at which it becomes a bad idea to use these technologies? Is embedding them in body cams to do identification matches, for instance, too far in your opinion? Is there a line?

It gets back to your authority to use that information and run law enforcement queries against it. 

When you're talking about a public space, there are much different requirements, much different expectations of privacy and much different law enforcement criteria for having someone identify themselves, versus a place like an airport, where you're required to produce identification and to establish your identity to go through the different processes at the airport or when crossing the border. The legal requirements are already there for you to present a document and have your citizenship, your identity and your admissibility determined by a CBP officer. 

The tools and technologies for how that's done and how those decisions are made—be it a standalone camera or a body-worn camera, facial recognition or fingerprints—are a secondary question, as long as the government follows the Administrative Procedure Act or the Privacy Act and publishes the documents that describe the collection, use, sharing and archiving of that information through privacy impact assessments, regulations and systems of records notices. But it starts with: Do you have a use case to even collect or use the information? 

So, the individual technologies, or the form factor in which you're using them to complete a process that's already grounded in statute, are a completely different discussion than going into public spaces—where there's no requirement that you produce an identity document or prove who you are—for a law enforcement officer to identify you, potentially stop you, ask you for identification or surveil you. There's a different set of requirements, and what we'd like to do is split this discussion: Are you automating a process that's already authorized or required in statute, or are you creating a new condition by which you might be subject to a law enforcement action? Can a match by a facial recognition algorithm lead to reasonable suspicion or probable cause for a law enforcement organization to take an action against you? Or does it form part of the process of how they would build probable cause, like manually reviewing photographs or an anonymous tip somebody calls into a tip line? 

But that's a whole different discussion than automating a check that already goes on today.

There needs to be a lot more discussion on what we're doing in a public space where you have different expectations of privacy and different legal protections. So, that is a very good discussion that needs to be had. 

But it's not so much the individual technology and all use cases of that technology. So I just caution: let's not paint with too wide a brush and shut down the technology where there could be lawful uses of it, separate and apart from the public space.

And that's why I wanted to have that discussion with you, for two reasons. One, to your point about making that distinction: The program that you are running—and are now leaving—is separate and distinct from a lot of the other facial recognition conversations going on out there. And I think that's a worthwhile point to hit. 

But to the other half of your point: You sit in a unique position between law enforcement, homeland security, technology and immigration and diversity issues. So, even though your program is different, I am interested to hear your thoughts on these broader issues of privacy and the use of facial recognition in public spaces.

I think we would support greater public discussion on it, but also identification of the differences in the use cases, starting with a discussion of what the authorities are to use the technology in different settings. And if that's the basis of the discussion, … we would support greater discussion or clarity on how the technology could permissibly be used and under what circumstances. 

It's very powerful technology, and there are a lot of good and justifiable use cases that do serve public safety. That being said, there are also concerns about how it could be misused and whether adequate protections or barriers are in place to prevent its misuse. 

I think until there's some greater articulation of those parameters, people will still be hesitant to support a lot of this. I think the public could benefit from a better, broader discussion of what the potential good uses are and how we lawfully get our law enforcement agencies to a place where they can take advantage of the technology … but also prevent the abuses, or give the public the transparency so they have the confidence that it's not going to be abused.

It's part legal discussion, part technology discussion—the limitations of the technology, the benefits of the technology—and then, are there new laws or requirements needed for this?

Would it need new legislation?

No, I don’t think additional legislation is needed. I think that's more of a legal discussion about permissible actions of a law enforcement agency in public spaces. The courts could rule on it. Congress could certainly rule on it by passing additional legislation. But, as these cases go through the court system, you could see judicial rulings on what's permissible or under what circumstances things might not be permissible.

I'm not really the one to opine on the legal pieces of that, but I think that's where the discussion needs to be held.

As someone who has been in and around this, what advice do you have for your colleagues at, say, Border Patrol or TSA or ICE? How can they get ahead of some of these issues and make sure that their biometrics programs are on the up and up?

Well, first and foremost, follow the requirements of the Privacy Act and the Administrative Procedure Act. File your privacy impact assessments and your systems of records notices, and consult with your privacy offices on how to do that. But it all starts with identifying your legal authority to use the information and how it's used. 

So, once you can establish that, then it's a matter of publicizing, or having the public discussion about, how it's being used, why it's being used and how it's lawfully being used. That can be a difficult conversation sometimes in a law enforcement sense, because certain law enforcement techniques and procedures you would not want to publicize, since the people you're trying to catch are going to see how you're doing things. But it's finding that fine line between disclosing too much and instilling in the public the confidence that you're not misusing it, that you're using it within the parameters of your agency's authority.

More broadly, what have you learned throughout your career, specifically with CBP and the biometrics program? What's your big takeaway as you leave? 

The biggest piece I took away was the consultative nature of what you're doing. Bringing in your stakeholders to help you solve the government's problems. Building that trusting relationship with your stakeholders and the public so you get the right feedback and you're building the right things for the government to serve the public.

Building those relationships, building those networks, having that open dialogue not only within government but also externally with the stakeholders and the public so as you're devising solutions for government, you know what you're building is really the best thing you can build under the circumstances.