Artificial intelligence will be a powerful tool for border control agencies, but security must be built in at every level to protect privacy.
With the advent of artificial intelligence and machine learning, there are now many new tools available to help border control agencies identify potential criminals and predict criminal cargo shipping activity.
However, for each step forward, we should be wary of the impact on personal privacy. The data used to fuel these new systems—and the resulting intelligence reports—could be vulnerable to data breaches and other hostile hijackings.
Yet while these concerns are real, the tools themselves bring great value to law enforcement and border security. The ability to better target criminals by identifying suspicious patterns of behavior elevates the quality of border patrol work and enables better protection. Existing mechanisms, which rely on rudimentary pattern matching, are sometimes effective but often yield far too many false leads.
New machine learning tools free up busy security agency personnel and allow them to spend more time stopping legitimate threats and less time pursuing false leads. These artificial intelligence-based systems have the ability to learn and improve through each analysis. If a border agent inspects a cargo shipment flagged as potentially risky and finds nothing, that result goes back into the system and the targeting algorithms are automatically updated to be more accurate.
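The feedback loop described above can be illustrated with a minimal sketch. This is not any agency's actual system; the feature names, the perceptron-style update rule, and the learning rate are all hypothetical, chosen only to show how a null inspection result can automatically make the same pattern score as less risky next time.

```python
class ShipmentRiskModel:
    """Toy online risk model: one perceptron-style update per inspection outcome."""

    def __init__(self, features, lr=0.1):
        self.weights = {f: 0.0 for f in features}  # start with no learned bias
        self.lr = lr

    def score(self, shipment):
        # Higher score = more suspicious, given the shipment's feature values.
        return sum(self.weights[f] * shipment.get(f, 0.0) for f in self.weights)

    def record_inspection(self, shipment, contraband_found):
        # Feedback step: if the agent found nothing, push down the weights
        # that caused the false flag; if contraband was found, push them up.
        target = 1.0 if contraband_found else -1.0
        for f in self.weights:
            self.weights[f] += self.lr * target * shipment.get(f, 0.0)

model = ShipmentRiskModel(["unusual_route", "mismatched_manifest"])
flagged = {"unusual_route": 1.0, "mismatched_manifest": 1.0}
before = model.score(flagged)
model.record_inspection(flagged, contraband_found=False)  # agent found nothing
after = model.score(flagged)
assert after < before  # the same pattern now scores as less risky
```

Real systems use far richer models, but the principle is the same: every inspection result, positive or negative, becomes training signal.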
The fact that machine learning uses massive amounts of data culled from local, state and federal agencies, social media, personal travel histories and police departments raises questions about how the data—which is likely to include sensitive information like bank records and criminal histories—is amassed and stored. Extraordinary precautions must be taken to ensure that storage of and access to the data fully comply with local and international laws. Cutting-edge security must be built into the very foundation of the network.
As we’ve seen many times in recent years, data can sometimes be accessed through third-party systems down the supply chain. To prevent unauthorized users from accessing the information, tools used by customs agents should have security design principles such as “deny by default” and “least privilege” built into their systemic DNA so that data is secure and used only for its intended purpose. This can be achieved in several ways:
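In code, “deny by default” and “least privilege” reduce to a simple rule: a request is refused unless an explicit, narrowly scoped grant exists. The sketch below is a hypothetical illustration; the role, resource, and action names are invented for the example.

```python
# Explicit grants: (role, resource) -> set of permitted actions.
# Anything not listed here is implicitly forbidden.
GRANTS = {
    ("customs_agent", "cargo_manifests"): {"read"},
    ("supervisor", "cargo_manifests"): {"read", "annotate"},
}

def is_allowed(role, resource, action):
    # Deny by default: the absence of a grant means no access at all.
    return action in GRANTS.get((role, resource), set())

assert is_allowed("customs_agent", "cargo_manifests", "read")
assert not is_allowed("customs_agent", "cargo_manifests", "delete")  # least privilege
assert not is_allowed("contractor", "bank_records", "read")          # deny by default
```

The design choice that matters is the direction of the default: the system enumerates what is permitted, never what is forbidden.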
Superior security can be achieved through microsegmentation of the system so trespassers encounter only smaller, compartmentalized environments. In other words, we can mask the broader environment—and hackers cannot hack data or networks they cannot see. This approach is far more secure than using traditional virtual local area networks and firewalls.
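The masking effect of microsegmentation can be sketched in a few lines. This is a conceptual illustration, not a network configuration; the host and segment names are hypothetical. The point is that a compromised host can enumerate only its own segment, so the broader environment stays invisible.

```python
# Each workload belongs to exactly one small segment.
SEGMENTS = {
    "manifest-db": "cargo-analytics",
    "risk-scorer": "cargo-analytics",
    "hr-payroll": "back-office",
}

def visible_hosts(compromised_host):
    # An intruder on one host can see only peers in the same segment;
    # everything outside the segment is masked entirely.
    segment = SEGMENTS.get(compromised_host)
    return {h for h, s in SEGMENTS.items() if s == segment and h != compromised_host}

# A breach of the payroll server exposes nothing in the analytics segment.
assert visible_hosts("hr-payroll") == set()
assert visible_hosts("manifest-db") == {"risk-scorer"}
```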
These networks must also use multifactor authentication to verify that users are truly who they claim to be, and grant access to only one port on an authorized server. The system must confirm that the user is working from a device authorized to access the network before they can even see the servers. Finally, the system should also check for breaches before each interaction.
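Those checks can be expressed as a short gatekeeping sequence. The sketch below is hypothetical (the device ID, the one-time codes, and the port number are invented); it shows only the order of the checks: device first, so an unknown device never even sees a server, then the second factor, then a session scoped to a single port.

```python
AUTHORIZED_DEVICES = {"laptop-7f3a"}
ENROLLED_OTPS = {"agent.smith": "492817"}  # one-time codes from a second factor

def open_session(user, device_id, otp):
    if device_id not in AUTHORIZED_DEVICES:
        return None                        # unknown device: servers stay invisible
    if ENROLLED_OTPS.get(user) != otp:
        return None                        # second factor failed
    # Success: the session is scoped to a single authorized port.
    return {"user": user, "port": 8443}

assert open_session("agent.smith", "laptop-7f3a", "492817") == {"user": "agent.smith", "port": 8443}
assert open_session("agent.smith", "stolen-tablet", "492817") is None
```

A production system would also run the breach check mentioned above before each interaction, not just at login.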
As important as regular vulnerability assessments and segmentation are, resilience must be built into the program so that agencies can respond quickly when a breach does occur. A good example of a standard we should all follow is the European Union’s General Data Protection Regulation, which updates the rules for companies that collect, store or process large amounts of personal information about EU citizens. While these EU regulations don’t cover data relevant to national security or law enforcement, they do set a standard we should all want to follow, such as “data protection by design.”
The Human Factor
Systems should always have human analysts making the ultimate decision so they can apply plain old common sense. If a system flags a traveler who has taken regular flights with several different minors in the past year, a human should examine all available evidence to determine first whether there is a legitimate explanation or whether the traveler should be investigated further.
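Structurally, keeping the human in the loop means the algorithm's only output is a referral with its supporting evidence; it never acts on the flag itself. A minimal sketch, with invented field names and an arbitrary threshold:

```python
def triage(traveler):
    flags = []
    # Hypothetical rule matching the example above.
    if traveler.get("flights_with_minors_last_year", 0) >= 3:
        flags.append("repeated travel with different minors")
    if not flags:
        return {"action": "clear"}
    # The algorithm stops here; a human analyst weighs the evidence.
    return {"action": "refer_to_analyst", "evidence": flags}

case = triage({"flights_with_minors_last_year": 5})
assert case["action"] == "refer_to_analyst"
```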
Intelligence analysis produced by AI tools will be a great step forward for border security agencies and for our safety. These technology tools hold the possibility of detecting and deterring criminals and threats that have for years slipped across borders without fear. However, the public will support the use of these tools only if they are assured their personal data and privacy are protected. So ensuring these programs and networks are impregnable must be among our first responsibilities.
John Kendall is the director of the Border and National Security Program for the global public sector practice at Unisys.