Here’s where campaigns should focus.
As the election cycle heats up, the potential for domestic and international meddling needs to be a chief concern for all campaigns—in every party and at every level of government but especially for the highest office in the land.
In a recent article for The Hill, we called attention to the severity of the threat that campaign hacking could pose in the 2020 presidential election, and the lack of attention it seems to be getting from many of the serious presidential contenders. Since then, The Washington Post has reinforced this view by shining a light on major candidates who are not being candid about how (or if) they're securing their own campaigns.
It's important to recognize that every campaign has likely already been targeted by attackers. In fact, it's easier for attackers to strike now, before campaigns gain structure and resources. And once attackers have access, they have no problem waiting patiently to use it. As such, candidates across the political spectrum should be treating digital security as a top priority.
Protecting a campaign for any office doesn't have to be rocket science, nor does it hinge on any specific technology. There are a few basic steps anyone protecting an organization, its "inner circle" and its figurehead should implement first to build a strong foundation for security.
Audit the Entire Potential Attack Surface
Cyberattacks can target data and, most dangerous for political candidates, reputations. The vastness of the threat surface poses immense challenges, so the first step to take is to make sure a candidate is “clean” from a cyber perspective.
In this sense, the notion of “cyber” or “digital” will feel similar to how threats are gauged in the physical sense. Just as everyone in a candidate’s inner circle of friends and acquaintances will be screened in the “real world,” footprints and relationships in the digital world need to be audited too. Objectively, candidates have to be open about the good, the bad and the ugly of who they are, what they’ve done in past roles, and how that might make them a target of certain groups.
Establish a Trust Framework
Campaigns are really no different than corporations. In retail, for instance, you have an array of individuals, from hourly store workers (often seasonal employees) all the way up to the CEO. With national campaigns, you have volunteers in regional campaign offices and a hierarchy that stretches to the candidate.
Like a giant retail organization, campaigns are so large and distributed they bear similar risks. The best way to manage a presidential candidate’s cyber safety is similar to how you’d protect an organization’s top executives—by tiering risk layers to protect access. The first step is identifying who has direct access to the individual, and then understanding the second, third and even fourth-party risks.
When figuring out the concentric circles of trust, you also need to develop access and communication protocols. Developing guidelines that keep the outer circles apart from the inner circles is a necessity and should inform communication operations and decisions on security attributes such as disk encryption and network segmentation. The further a person is from the “bullseye,” the less privilege they should have. This is especially significant given the often-federated nature of campaigns between parties, campaign committees and the campaigns themselves. After all, we know from 2016 that the Democratic National Committee was compromised through someone at the Democratic Congressional Campaign Committee.
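As an illustration, the tiered-access idea above can be sketched in a few lines of code. The tier names and the `can_access` helper here are hypothetical, meant only to show the principle that privilege shrinks as distance from the "bullseye" grows:

```python
from enum import IntEnum

class TrustTier(IntEnum):
    # Lower value = closer to the candidate (the "bullseye") = more privilege.
    INNER_CIRCLE = 0   # candidate, campaign manager, senior advisers
    CORE_STAFF = 1     # national headquarters employees
    REGIONAL = 2       # regional campaign office staff
    VOLUNTEER = 3      # volunteers and seasonal workers

def can_access(requester: TrustTier, resource: TrustTier) -> bool:
    """A person may only touch resources classified at or outside their own tier."""
    return requester <= resource

# A volunteer must never reach inner-circle material; senior staff can reach outward.
assert not can_access(TrustTier.VOLUNTEER, TrustTier.INNER_CIRCLE)
assert can_access(TrustTier.CORE_STAFF, TrustTier.REGIONAL)
```

In practice the same tiering would inform concrete controls the article mentions, such as which machines get disk encryption and how the network is segmented between circles.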
Understand Legitimate Behavior
With a full picture of the attack surface in place, and a trust framework in which access to the people at the center of the campaign grows increasingly restricted, the next critical step is to understand legitimate behavior. Only then will a campaign be able to detect the behaviors that might indicate a breach.
This goes beyond simply knowing who people within a campaign interact with. It’s essential to establish an understanding of who they are most like (i.e. similarity analysis), how they interact with different people and organizations, and which digital services and tools they use.
By using artificial intelligence and behavioral analytics, campaigns can identify outliers from a peer group, changes in behavior and other variables. The results minimize the workload on the (potentially volunteer) security team by identifying “interesting” behavior for analysis.
Take, for instance, a situation in which a staffer is observed trying to communicate or share files with another staffer in a manner inconsistent with the trust framework you've established. This could signify a malicious actor within the campaign, or someone whose credentials have been compromised and are being used by external attackers.
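A minimal sketch of that kind of baseline check, assuming a simple contact-history model: the `CommunicationMonitor` class and the staffer names are illustrative only, and a real deployment would use far richer behavioral features (peer-group similarity, timing, services and tools used) rather than contact lists alone.

```python
from collections import defaultdict

class CommunicationMonitor:
    """Flags messages or file shares that deviate from a staffer's learned baseline."""

    def __init__(self):
        # Maps each staffer to the set of contacts seen in their normal activity.
        self.baseline = defaultdict(set)

    def observe(self, sender: str, recipient: str) -> None:
        """Record a legitimate interaction during the learning period."""
        self.baseline[sender].add(recipient)

    def is_anomalous(self, sender: str, recipient: str) -> bool:
        """A contact never seen in the sender's history is flagged for review."""
        return recipient not in self.baseline[sender]

# Learn a baseline from observed, legitimate traffic.
monitor = CommunicationMonitor()
for s, r in [("alice", "bob"), ("alice", "carol"), ("alice", "bob")]:
    monitor.observe(s, r)

assert not monitor.is_anomalous("alice", "bob")      # usual contact: no alert
assert monitor.is_anomalous("alice", "mallory")      # new contact: flag for analysis
```

The point of such a filter is exactly the workload reduction described above: a small, possibly volunteer security team reviews only the "interesting" flagged events instead of all traffic.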
Assume the Worst—And Plan For It
From a security standpoint, businesses have realized that, due to the expansive threat surface of their operations, they will never stop every attack. While it has taken years—and sometimes regulation—corporations have realized transparency is the best policy when it comes to breaches and hacks.
Transparency in messaging post-breach is key, because unclear messaging invites damaging blame-shifting. Even if an incident happens ten levels away from the target (the candidate), you still have to acknowledge it so everyone knows the campaign was in control and handled it effectively. Having a plan for rapid response, and testing that plan, is essential.
By taking these steps, campaigns will put themselves in the best position to stop attacks that will certainly continue between now and election day. We’re at a critical time where democracy depends on digital security, and everyone inside the political apparatus needs to be aware of the threats, have at least a basic understanding of how to defend against them, and act vigilantly to ensure that our representative government remains intact through fair and influence-free elections.
Rahul Kashyap is CEO of Awake Security, former CTO of Cylance and has worked on cybersecurity initiatives with the Defense Information Systems Agency, Department of Defense, and several other government agencies.