How AI and Automation Can Uncover Attacks in the 2020 Election


Stopping phishing attacks is harder than it sounds.

Adversaries of the U.S., including Russia, China and Iran, “probably already are looking to the 2020 U.S. elections as an opportunity to advance their interests” and “will use online influence operations to try to weaken democratic institutions.”

Those are the words Director of National Intelligence Dan Coats shared in the U.S. intelligence community’s January 2019 Worldwide Threat Assessment. We already know these warnings have merit and that phishing is one of the tactics these adversaries rely upon most. As one of the Mueller indictments showed, the leaked emails from Hillary Clinton’s 2016 presidential campaign were the result of a phishing attack on campaign chairman John Podesta. More recently, both major parties were targets of spear phishing attacks during the 2018 midterm election season.

These threats are serious and show no sign of abating. Federal agencies need to take precautions now to defend against potential phishing attacks, but this is harder than it sounds. Phishing, which often kicks off a broader attack, is difficult to detect because attackers use sophisticated social engineering, spoofing and deception to trick users into clicking emails and links that appear trustworthy. Unlike malware, which carries identifiable signatures, a phishing email looks as if it comes from an expected source, such as a partner or colleague. Officials with high-level clearances are prime targets because one wrong click of the mouse can compromise their credentials and give attackers access to all the sensitive information those officials are entitled to see.

Fully remediating a complex attack takes more than confirming that an email is suspicious. Security teams need to carry out several investigatory steps: identifying who was targeted, determining how many of those people clicked a link, and learning what the targets have in common in order to assess the threat. To answer these questions, teams may spend a day or more examining proxy logs, compiling the IP addresses that visited a link in a phishing email, and cross-referencing multiple data sources to home in on the users likely affected.
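For a sense of what that manual triage involves, here is a minimal sketch in Python, assuming a simplified CSV proxy-log format and a hypothetical phishing domain; real proxy logs, field names and SIEM queries vary widely by vendor.

```python
import csv
from collections import defaultdict

# Hypothetical indicator: the link domain recovered from the phishing email.
PHISHING_DOMAIN = "portal-login.example.com"

def users_who_clicked(proxy_log_path):
    """Scan a web-proxy log (assumed CSV with columns: timestamp, user,
    src_ip, url) and collect each user who requested the phishing link,
    along with the source IPs they used."""
    hits = defaultdict(set)
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if PHISHING_DOMAIN in row["url"]:
                hits[row["user"]].add(row["src_ip"])
    return hits

if __name__ == "__main__":
    for user, ips in sorted(users_who_clicked("proxy.csv").items()):
        print(f"{user} clicked from: {', '.join(sorted(ips))}")
```

Even this toy version shows why the work is slow: each question requires pulling another log, normalizing another format and joining the results by hand.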

Overall, this process can take agencies several days to complete. The problem is exacerbated by the cybersecurity skills gap, which leaves agencies with a limited pool of security analysts, most of whom aren’t equipped to “hunt,” or to work beyond the traditional alert-response model.

To protect against these attacks, resource-strapped security teams should be armed with technologies such as artificial intelligence and analytics. AI excels at cutting through complexity, crunching voluminous data, and augmenting human analysts by surfacing the information needed to make decisions in seconds instead of days. For instance, by automating network traffic analysis, teams can quickly see every user and device that communicated, over any protocol, with servers linked to the attacker’s domains.
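As an illustration of what such automation pivots on, the sketch below joins two hypothetical data sources, a passive-DNS export and a flow log, to list every device that talked to attacker infrastructure over any protocol. The file formats, column names and domains are assumptions made for the example; a real platform would run this kind of correlation continuously against live telemetry.

```python
import csv

# Hypothetical indicators: domains tied to the attacker's infrastructure.
ATTACKER_DOMAINS = {"portal-login.example.com", "cdn.example.net"}

def attacker_ips(passive_dns_path):
    """Map the attacker domains to the server IPs they resolved to, using
    a passive-DNS export (assumed CSV with columns: domain, ip)."""
    ips = set()
    with open(passive_dns_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"].lower().rstrip(".") in ATTACKER_DOMAINS:
                ips.add(row["ip"])
    return ips

def devices_contacting(ips, flow_log_path):
    """Return (device, protocol) pairs for every network flow, regardless
    of protocol, whose destination is attacker infrastructure (assumed CSV
    flow log with columns: device, proto, dst_ip)."""
    seen = set()
    with open(flow_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["dst_ip"] in ips:
                seen.add((row["device"], row["proto"]))
    return seen

if __name__ == "__main__":
    exposed = devices_contacting(attacker_ips("pdns.csv"), "flows.csv")
    for device, proto in sorted(exposed):
        print(f"{device} contacted attacker infrastructure over {proto}")
```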

Further, AI solutions can identify commonalities by uncovering whether an attacker targeted a random set of email addresses or a specific group of people. If the targets share similar access levels or are connected through a particular government project, that dramatically changes the threat assessment and helps establish a motive. Such an attacker is likely to keep trying until the goal is achieved.
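The commonality check itself can be expressed very simply. The toy sketch below, using made-up users and attributes, counts how many targets share a department or project; in practice, an AI-driven platform would correlate far richer attributes from HR and identity-management systems automatically.

```python
from collections import Counter

# Hypothetical directory records for the targeted users; the names,
# departments and projects are invented for illustration.
targets = [
    {"user": "asmith", "dept": "budget", "project": "grant-review"},
    {"user": "bjones", "dept": "budget", "project": "grant-review"},
    {"user": "cdoe",   "dept": "legal",  "project": "grant-review"},
]

def commonalities(records, attrs=("dept", "project")):
    """Count how often each attribute value recurs among the targets.
    A value shared by most or all targets suggests deliberate targeting
    rather than a random spray of addresses."""
    return {attr: Counter(r[attr] for r in records) for attr in attrs}

for attr, counts in commonalities(targets).items():
    value, n = counts.most_common(1)[0]
    print(f"{attr}: {n} of {len(targets)} targets share {value!r}")
```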

Ultimately, this approach avoids investigations that would take weeks or even months if conducted manually, and that too often end inconclusive or are closed before a long-term fix is found and implemented, leaving the attacker in control. Artificial intelligence can drastically cut that time and enable government security teams to establish more robust security postures as the 2020 presidential race starts to heat up and adversaries on phishing expeditions continue to cast a wider net.

Gary Golomb is cofounder and chief scientist at Awake Security. He previously served in the United States Marine Corps’ 2nd Force Reconnaissance Company.
