European Countries to Test AI Border Guards


A new program replaces human border guards with artificially intelligent avatars to watch for deception in travelers. It follows similar efforts that go back a decade.

Next time you go to Latvia you might get hassled by a border guard who isn’t even human. The European Union is funding a new pilot project to deploy artificially intelligent border guards at travel checkpoints in three countries to determine whether passengers are telling the truth about their identities and activities.

The system, dubbed iBorderCtrl, works like this: an avatar asks the traveler a series of simple questions (name, dates of travel, and so on). The AI software looks for subtle signs of stress as the interviewee answers. If enough indicators are present, the system refers the traveler to a human border guard for secondary screening.

“It does not make a full automated decision; it provides a risk score,” Keeley Crockett of Manchester Metropolitan University in England explains in a promotional video.
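The project’s scoring model hasn’t been published, but the flow Crockett describes (flag individual stress cues, combine them into a single score, refer the traveler above some cutoff) resembles a simple weighted-indicator scheme. Here is a minimal sketch of that pattern; the indicator names, weights, and threshold are illustrative assumptions, not details from iBorderCtrl.

```python
# Hypothetical sketch of the risk-scoring pattern described above.
# Indicator names, weights, and the threshold are illustrative only;
# iBorderCtrl's actual model has not been made public.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    weight: float   # how strongly this cue contributes to the score
    detected: bool  # whether the analyzer flagged it in this interview

def risk_score(indicators: list[Indicator]) -> float:
    """Combine per-cue detections into a single score in [0, 1]."""
    total = sum(i.weight for i in indicators)
    flagged = sum(i.weight for i in indicators if i.detected)
    return flagged / total if total else 0.0

REFERRAL_THRESHOLD = 0.5  # assumed cutoff, not a published value

interview = [
    Indicator("gaze_aversion", weight=0.3, detected=True),
    Indicator("voice_pitch_shift", weight=0.4, detected=False),
    Indicator("response_latency", weight=0.3, detected=True),
]

score = risk_score(interview)
if score >= REFERRAL_THRESHOLD:
    print(f"risk score {score:.2f}: refer to human border guard")
else:
    print(f"risk score {score:.2f}: proceed")
```

Note that a scheme like this never renders a verdict on its own; as Crockett emphasizes, the score only routes borderline travelers to a human for secondary screening.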

The theory is that subtle cues imperceptible to a human interrogator (micro-expressions, pupil dilation, gaze direction, changes in vocal pitch and cadence) can reveal deceptive intent. It’s based on the work of psychologist Paul Ekman, a pioneer of micro-expression research, the analysis of fleeting, involuntary facial expressions to assess truthfulness.

The European Union will test the system at train, pedestrian, and vehicle border crossings in Greece, Hungary, and Latvia.

It’s not the first time that border agencies have experimented with algorithmically analyzed micro-expressions to detect deception in travelers. In 2008, the U.S. Department of Homeland Security funded research into a kiosk equipped with cameras and deception-detection software, a program that came to be known as Automated Virtual Agent for Truth Assessments in Real Time, or AVATAR. It was deployed in pilot programs at several airports in 2013, but never nationwide.

The iBorderCtrl researchers have so far tested the system on only 34 people, and hope to reach an 80 percent success rate as the sample size grows.

As dystopian as AI border guards may sound, there’s evidence to suggest that they can outperform human guards at lie detection. In 2006, DHS experimented with a program called Screening of Passengers by Observation Techniques, or SPOT, which sought to teach security agents to spot suspicious micro-expressions.

When a SPOT-trained agent wound up harassing King Downing, a coordinator for the ACLU, Downing sued. The Government Accountability Office later found that the government had deployed the program without sufficient data to show that it worked.