The Next Generation of Threat Detection Will Require Both Human and Machine Expertise

There is a debate in the world of cybersecurity about whether to use human or machine expertise. However, this is a false dichotomy: Truly effective threat detection and response need both kinds of expertise working in tandem.

It will be years before machines completely replace the humans who perform typical detection and response tasks. What we predict for the meantime is a symbiotic relationship between humans and machines. The combination means that detection of and response to threats can be faster and more intelligent. It leaves humans to focus on what humans do best, while artificial intelligence (AI) shines at tasks better suited for machine processing.

Threat detection is very much an adversarial problem. Attacks rely on stealth, which often makes detection difficult, especially among billions of data points. Technologies we’ve relied on for the past 20 years are not sufficient to combat threats or sift through the “noise” to find the “signal.” Yet skilled humans can find threats that rule-based systems cannot identify.

Any system that uses AI for the next generation of threat detection will need to harness the power of both human and machine expertise and be able to learn and adapt based on human feedback.

Perfection Is Not the Goal, Human Performance Is

There’s a misconception that AI can’t really make decisions and that we therefore need vastly experienced human experts with irreproducible intuition.

Looking at this through the lens of the classic Turing test, we asked: Can a machine outperform a security analyst in 80% of the work currently done by humans? If the answer is yes, imagine the productivity gains and efficiency for security operations.

We see reason for optimism here. Forty years ago, a chess engine beating a human was unthinkable, but the problem was settled in half that time. Just 10 years ago, automated audio transcription was poor, and humans were better at the task. Now machines can transcribe at least as well as humans.

Teaming Up for the Best Outcome

Most companies can’t hire enough staff to deal with all of the security alerts. The ideal solution to this talent crunch employs intelligent automation to assist security analysts, incident responders, and threat hunters. There are three main ways to successfully apply security automation:

1. Alert triage. Turning millions of alerts and thousands of events into a handful of actionable cases with context about what happened and why helps prioritize tasks for human workers.

2. Incident response. Automating repetitive tasks reduces the mean time to detect (MTTD) and mean time to respond (MTTR). This frees up human analysts to respond to more important threats and make more effective, immediate decisions.

3. Threat detection. Threat detection is an offensive game, focused on identifying and correlating new threats across the network, different endpoints, and applications while prioritizing actions over alerts. Of the three, this is also the main area for improvement: How can we apply automation more effectively to threat detection?
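As a rough illustration of the first application, alert triage (the field names and the correlation key are assumptions for this sketch, not from any specific product), many raw alerts can be collapsed into a few prioritized cases by grouping on a shared entity such as the affected host:

```python
from collections import defaultdict

def triage(alerts):
    """Group raw alerts into actionable cases keyed by affected host.

    Each case carries context: which rules fired and how many alerts were
    collapsed, so an analyst can prioritize at a glance.
    """
    cases = defaultdict(lambda: {"alert_count": 0, "rules": set()})
    for alert in alerts:
        case = cases[alert["host"]]
        case["alert_count"] += 1
        case["rules"].add(alert["rule"])
    # Multi-rule, high-volume cases first: likeliest to be real incidents.
    return sorted(cases.items(),
                  key=lambda kv: (len(kv[1]["rules"]), kv[1]["alert_count"]),
                  reverse=True)

alerts = [
    {"host": "web-01", "rule": "port-scan"},
    {"host": "web-01", "rule": "brute-force"},
    {"host": "db-02",  "rule": "port-scan"},
    {"host": "web-01", "rule": "port-scan"},
]
cases = triage(alerts)  # four alerts become two cases, web-01 on top
```

Four alerts become two cases here; at production scale the same grouping turns millions of alerts into a short, ranked work queue.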

Automating Threat Detection

There are two kinds of automation. The first replicates simple, repetitive human actions and chains them into an AI-driven process. Threat detection, however, is essentially a decision-making process, which calls for the second kind.

The second kind of automation requires us to determine which incidents genuinely require escalation by human security analysts. Automation technology has matured to the point that, in some security operations, machines already exceed human accuracy. The goal is to build a decision engine that makes decisions as well as human beings, if not better.

But how can we trust that machine decision-making equals or supersedes human decision-making? Simple. Look at the data!

Automation may mark an alert as an incident that a human security analyst later closes without escalation. Ask them why, and the analyst will walk you through their thought process. Those “whys” are the basis of what we call a factor. Factors that are not immediately obvious may play an important part in the final decision.

The more factors we gather, the sharper the accuracy of both human and machine expertise. Meanwhile, we can also reduce false positives. Every difference between human and machine may uncover additional factors, or human analysts may combine factors in different ways than the automated system.
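One way to operationalize that comparison (the record fields here are hypothetical, used only for illustration) is to routinely diff machine verdicts against analyst verdicts and queue every disagreement for factor discovery:

```python
def disagreements(events):
    """Return events where the machine verdict differs from the analyst's.

    Each disagreement is a candidate for factor discovery: the analyst saw
    something the model's current factors don't capture. A machine escalation
    the analyst closed is a false positive; a machine close the analyst
    escalated is a missed threat.
    """
    out = []
    for e in events:
        if e["machine"] != e["human"]:
            kind = ("false_positive" if e["machine"] == "escalate"
                    else "missed_threat")
            out.append({**e, "kind": kind})
    return out

events = [
    {"id": 1, "machine": "escalate", "human": "escalate"},  # agreement
    {"id": 2, "machine": "escalate", "human": "close"},
    {"id": 3, "machine": "close",    "human": "escalate"},
]
review_queue = disagreements(events)  # ids 2 and 3 go back to the analysts
```

Asking "why" about each queued item is what surfaces the factors the model is missing.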

Improving the Decision Engine

A rules engine is limited to modeling just the “bad” qualities or behavior we observe in a pool of data. As a result, it can only identify and respond to incidents that fall within those criteria. In contrast, a decision engine teaches the machine both “bad” and “good” and enables the model to progressively learn.
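To make the contrast concrete (a toy sketch, not any vendor's implementation): the rules engine below flags only a pattern someone explicitly wrote down, while the decision engine learns from both "bad" and "good" labeled examples, here as a nearest-centroid classifier over two numeric factors:

```python
def rules_engine(event):
    """Rules engine: fires only on the known-bad pattern it was given."""
    return event["failed_logins"] > 100  # hard-coded threshold, nothing else

class DecisionEngine:
    """Minimal decision engine: learns from BOTH bad and good examples and
    keeps learning as analysts supply more labels. Factors are numeric."""

    def __init__(self):
        self.examples = {"bad": [], "good": []}

    def learn(self, factors, label):
        self.examples[label].append(factors)

    def classify(self, factors):
        # Assign the label whose examples' centroid is closest.
        def sq_dist(label):
            pts = self.examples[label]
            centroid = [sum(dim) / len(pts) for dim in zip(*pts)]
            return sum((a - b) ** 2 for a, b in zip(factors, centroid))
        return "bad" if sq_dist("bad") < sq_dist("good") else "good"

engine = DecisionEngine()
# Factors: [failed logins, fraction of activity outside business hours]
engine.learn([120, 0.9], "bad")
engine.learn([150, 0.8], "bad")
engine.learn([2, 0.1], "good")
engine.learn([5, 0.2], "good")
```

A case with 90 failed logins at 3 a.m. slips past the rules engine's 100-login threshold, but sits far closer to the "bad" examples than the "good" ones, so the decision engine catches it.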

Mimicking a human’s approach to learning and replicating it delivers the same decision, only automated. Hundreds of decisions can be made in just one minute, and resolution time plummets. Instead of running through 20 routine alerts, human analysts could focus their time and energy on one or two actionable cases.

Triage presents thousands of alerts a day. But in threat hunting, the problem is three or four orders of magnitude larger. Hundreds of millions of events mean we’re looking for the proverbial needle in a haystack. So how do we apply the same factor analysis approach to threat hunting as we do to alert triage?

Factors can be mapped to each of these hundreds of millions of events with feature engineering. Once we extract a given factor, we can apply transformations that reduce the number of distinct values the factor can take (its cardinality), which is especially useful when a factor has 100 different values or more.
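As a minimal sketch of this kind of transformation (the bucket names and boundaries are illustrative assumptions), a destination-port factor with tens of thousands of possible values can be collapsed into a handful of buckets a model can actually learn from:

```python
def bucket_port(port):
    """Reduce a high-cardinality factor (destination port, ~65k values)
    to a few coarse buckets. Boundaries here are illustrative."""
    common = {
        80: "web", 443: "web",
        22: "remote-admin", 3389: "remote-admin",
        53: "dns",
    }
    if port in common:
        return common[port]
    # IANA "well-known" range vs. everything above it.
    return "registered" if port < 1024 else "ephemeral"
```

The same pattern applies to any verbose factor: user-agent strings, process names, or geolocations can each be folded into a small, stable vocabulary before scoring.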

This allows us to map each factor to a score and combine them for a final score, which the AI can use to make decisions. But because there will always be differences in decisions made by human analysts and decision engines, the AI must be able to accept human feedback.

This is supervised algorithmic machine learning in action. Humans provide feedback via labeling, and this input “educates” the system to build a model. It’s even possible to build an unsupervised system for tasks that fit it. To work effectively, AI needs to be explainable, customizable, and adaptable.
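The loop described above, per-factor scores combined into a final score, with analyst labels correcting the model, can be sketched as a linear scorer whose weights are nudged whenever the analyst's label disagrees with the machine's decision (a perceptron-style update; the factor values, weights, and threshold are all illustrative):

```python
class FeedbackModel:
    """Human-in-the-loop sketch: each factor maps to a score, a weighted
    sum yields the final score, and analyst labels adjust the weights."""

    def __init__(self, n_factors, lr=0.1):
        self.weights = [0.0] * n_factors
        self.bias = 0.0
        self.lr = lr

    def score(self, factors):
        return sum(w * f for w, f in zip(self.weights, factors)) + self.bias

    def decide(self, factors):
        return "escalate" if self.score(factors) > 0 else "close"

    def feedback(self, factors, analyst_label):
        """Update only on disagreement: the analyst's label is the teacher."""
        if self.decide(factors) != analyst_label:
            target = 1 if analyst_label == "escalate" else -1
            self.weights = [w + self.lr * target * f
                            for w, f in zip(self.weights, factors)]
            self.bias += self.lr * target

model = FeedbackModel(n_factors=2)
# A few analyst-labeled cases, replayed until the model agrees with them.
training = [([1.0, 0.9], "escalate"), ([0.1, 0.0], "close"),
            ([0.9, 0.8], "escalate"), ([0.2, 0.1], "close")]
for _ in range(20):
    for factors, label in training:
        model.feedback(factors, label)
```

This is the "labeling educates the system" loop in miniature: the model starts out knowing nothing, and every analyst correction moves its decision boundary toward human judgment.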

When we build a decision engine with human expertise and incorporate automation wherever possible, this is what the next generation of SOC technology will look like.
