Closing the Cyber Skills Gap Will Take New Technologies in Addition to New Talent


The cybersecurity resources gap is no longer a problem we can solve with humans alone.

Though the cybersecurity “skills gap” should be an issue of the past, it remains a problem that companies across industries struggle to solve. Even as the cybersecurity workforce gap has seen its first decrease on record, professionals’ workloads have only scaled up and grown more complex amid a rise in sophisticated, fast-moving attacks. The most critical gap for security teams isn’t any specific skill: it is the gap between the growing number of cybersecurity tasks and the personnel and expertise needed to complete them.

Even as more individuals enter the field, a number of factors will continue to leave the cybersecurity community playing a constant game of catch-up. Without solving this work-to-worker gap, trying to close the skills gap by hiring en masse will only make a small dent in a larger, continuous problem.

These factors include evolving external attack methods, such as the rise of automated attacks and advanced threat tactics. Security teams are also contending with accelerated transformation in workforce practices and technological infrastructure, including the widespread shift to remote work and the adoption of software-as-a-service and cloud platforms.

With these compounded challenges, the cybersecurity resources gap is no longer a problem we can solve with humans alone. Rather than working harder and throwing more people at the problem, we need to empower security teams to work smarter.

To work smarter, we need to rethink the intelligence behind the security tools we give our professionals, and the roles new technologies can play in augmenting and expediting cybersecurity tasks. Cybersecurity professionals, no matter how skilled, are often using legacy tools that spit out a constant stream of alerts. As a result, according to a recent report, many spend more than 25% of their time hunting down “false positives”: benign activity mistakenly labeled as threatening. The right technology, however, can parse this data autonomously, cutting through the noise to surface only the most threatening activity.
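To make the idea concrete, here is a minimal illustrative sketch of score-based alert triage, not any vendor's actual method: alerts carry an anomaly score, and only those above a threshold are surfaced, ranked by severity. The alert names, scores, and threshold are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    anomaly_score: float  # 0.0 (benign) to 1.0 (highly anomalous)

def triage(alerts, threshold=0.8):
    """Keep only alerts whose anomaly score crosses the threshold,
    then rank the survivors from most to least threatening."""
    flagged = [a for a in alerts if a.anomaly_score >= threshold]
    return sorted(flagged, key=lambda a: a.anomaly_score, reverse=True)

# Hypothetical alert stream: most activity is routine noise.
alerts = [
    Alert("vpn-login", 0.35),   # routine remote login
    Alert("data-exfil", 0.97),  # large outbound transfer
    Alert("port-scan", 0.82),   # internal scanning
]

for a in triage(alerts):
    print(a.source, a.anomaly_score)
```

In this toy example the routine VPN login never reaches the analyst; only the two genuinely anomalous events do, in order of severity.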

Machine-speed attacks like ransomware, which can lock down entire digital ecosystems in seconds, also outpace humans’ ability to respond. Even the most talented analyst does not have a superhuman response time; analysts also need to sleep, take breaks, and have weekends. We need security technology that can fight back against these threats without human intervention, spotting and stopping rapid attacks in an instant, before they escalate and do damage.

We need technologies that do more than alert security professionals: technologies that let us get the best out of the human talent available. AI is perfectly positioned for this task. The power of artificial intelligence and machine learning (AI/ML) to conduct the initial triage and response of security incidents is how human teams will regain control.

However, not all AI is created equal. Closing the gap requires a shift in focus from merely replacing humans with automation to augmenting humans with autonomous, self-learning technologies that work in tandem with them. For example, when threatening activity occurs, this AI can connect disparate events to immediately discern the full scope of the incident and recommend remediating action. The human remains fully in control of the investigation and mitigation loop, yet starts from a place of investigated action rather than initial incident evaluation (or worse, incident response), saving hours.
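One simple way to sketch "connecting disparate events" is time-window correlation: events on the same host that occur close together are grouped into a single incident, so the analyst sees one coherent story instead of three separate alerts. This is an illustrative toy, not any product's actual correlation logic; the hosts, timestamps, and 120-second window are hypothetical.

```python
from collections import defaultdict

# Hypothetical event log: (timestamp_seconds, host, description)
events = [
    (100, "laptop-7", "unusual login"),
    (130, "laptop-7", "new admin account created"),
    (150, "laptop-7", "outbound connection to rare domain"),
    (400, "server-2", "failed login"),
]

def correlate(events, window=120):
    """Group events on the same host that occur within `window`
    seconds of the previous event, treating each group as one incident."""
    by_host = defaultdict(list)
    for ts, host, desc in sorted(events):
        by_host[host].append((ts, desc))

    incidents = []
    for host, evs in by_host.items():
        current = [evs[0]]
        for prev, ev in zip(evs, evs[1:]):
            if ev[0] - prev[0] <= window:
                current.append(ev)   # close in time: same incident
            else:
                incidents.append((host, current))
                current = [ev]       # gap too large: start a new incident
        incidents.append((host, current))
    return incidents
```

Here the three laptop-7 events collapse into one incident spanning login, privilege escalation, and exfiltration, while the unrelated server-2 event stays separate.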

Self-learning AI is uniquely positioned to do the heavy lifting for security professionals, freeing them up for higher-level activities. Rather than parsing false positives, analysts can review fleshed-out, AI-generated incident reports, then use their expertise to make high-level decisions about what to do next.

Closing both the resources and skills gaps will take more than new talent; it will also require embracing breakthrough technologies. The true power of self-learning AI lies not only in handling basic tasks for humans but in supercharging, and often going beyond, their existing skill sets, augmenting their abilities to keep pace with the speed and severity of today’s threats.

Marcus Fowler is the director of strategic threat at Darktrace.