How AI cybersecurity thwarts attacks -- and how hackers fight back

IT leaders are using AI to take security to the next level. But how much security can AI provide? David Petersson examines where AI cybersecurity excels and how it can be fooled.

As our digital lives get more automated, integrated and connected, the security risks increase as well; 2018 was full of hacks and privacy scandals, ranging from healthcare breaches to blunders by Facebook and Google.

Cybersecurity is more important than ever, and many experts are using AI to take that security to the next level. For CIOs, the critical questions are: How much security can AI provide, and what should the realistic expectations of AI cybersecurity be?

Traditional cybersecurity

Currently, cybersecurity relies on file signatures to detect malware and on rule-based systems to detect network abnormalities. Both methods depend on known factors: There has to be an actual outbreak before security experts can isolate the malicious files and extract a unique signature that other systems can then use to recognize -- and block -- the threat.
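
To make the mechanism concrete, here is a minimal sketch of signature-based detection, assuming a hypothetical signature database: hash a file and look the hash up in a set of known-bad values.

```python
import hashlib
from pathlib import Path

# Hypothetical database of known-bad SHA-256 signatures.
# The value below is a placeholder, not a real malware signature.
KNOWN_BAD_SIGNATURES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_signature(path: Path) -> str:
    """Compute the SHA-256 hash of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known_malware(path: Path) -> bool:
    """Flag a file only if its exact hash has been seen before."""
    return file_signature(path) in KNOWN_BAD_SIGNATURES
```

The limitation is built in: change a single byte of the malware, and the hash no longer matches anything in the database.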

The same holds for rule-based systems: We write rules based on our experience of what malicious activity looks like -- or simply lock systems down and restrict access to stay on the safe side.
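
A rule-based check might look like the sketch below. The port list and login threshold are invented for illustration; real intrusion detection rules are far more elaborate, but they share this rigid, hand-written character.

```python
# Hand-written rules with hypothetical thresholds.
SUSPICIOUS_PORTS = {23, 445, 3389}   # e.g., Telnet, SMB, RDP
MAX_FAILED_LOGINS = 5

def check_event(event: dict) -> list[str]:
    """Return the names of any rules this network event violates."""
    alerts = []
    if event.get("dst_port") in SUSPICIOUS_PORTS:
        alerts.append("connection to a restricted port")
    if event.get("failed_logins", 0) > MAX_FAILED_LOGINS:
        alerts.append("possible brute-force login attempt")
    return alerts
```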

The problem with these methods is their reactive nature: Hackers tend to find innovative ways to bypass the known rules, and before a security expert knows about the breach, it is often too late.

Game changer

What makes AI cybersecurity different is its adaptability: It does not need to follow specific rules; rather, it can watch patterns and learn.

"Unlike a signature-based approach that delivers a 1-for-1 mapping of threats to countermeasures, data science uses the collective learning of all threats observed in the past to proactively identify new ones that haven't been seen before," said Chris Morales, head of security analytics at Vectra, an AI threat detection vendor.

Consider a ransomware attack. Once downloaded, the malware scans your files, singles out those it deems important, makes encrypted copies of them, deletes the originals and sends the encryption keys to the ransomware operators, who hold a unique key for every victim.

"That sequence of events is pretty unique; you're not going to see a lot of credible software doing that," said Doug Shepherd, chief security officer at Nisos. This limits the usefulness of traditional antivirus software, which looks for signatures detected in known ransomware in order to block a new attack. AI in cybersecurity, in contrast, can identify actions that look like ransomware.

"Security software that does 'holistic' or 'AI' is going to look for these sequences of events and flag things it doesn't like, versus your traditional 'hash-based' security model, which flags software or bits of software that have been seen before and are 'known-bad,'" Shepherd said.

AI cybersecurity's 'combined intelligence'

Another strength of AI is its combined intelligence. A computer system consists of different software components, each with its own security protection mechanisms and log files. While a human operator would need to monitor them all and examine each individually, an AI system can find patterns across all of them. By comparing the timestamps and users in the log files, the AI can build a complete view of what actions each user is taking, which makes it better at unmasking malicious actions -- and at discounting false alarms.
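
As a sketch, that correlation can be as simple as normalizing every record to a (timestamp, user, source, action) tuple and merging the results into one chronological timeline per user. The log schema below is invented for illustration.

```python
from datetime import datetime
from itertools import chain

def normalize(source: str, record: dict) -> tuple:
    """Reduce a log record to a (timestamp, user, source, action) tuple."""
    return (datetime.fromisoformat(record["time"]), record["user"],
            source, record["action"])

def user_timelines(logs: dict) -> dict:
    """Merge per-component logs into one chronological timeline per user."""
    events = chain.from_iterable(
        (normalize(src, rec) for rec in records)
        for src, records in logs.items())
    timelines = {}
    for ts, user, src, action in sorted(events):
        timelines.setdefault(user, []).append((ts, src, action))
    return timelines
```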

"SecOps teams are being challenged with too many alerts and not enough analyst manpower to address them. Through AI/machine learning, the signal-to-noise ratio is improved and, with it, so is the mean time to detection and mean time to response," said Chas Clawson, cybersecurity evangelist at consultancy and software firm Micro Focus.

The end goal, he added, is a semiautonomous security operations center where analysts deal only with the most complex or most critical events, while the repetitive responses that machines can learn to handle are automated.
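
One common way to improve that signal-to-noise ratio is unsupervised anomaly scoring, so analysts triage the most unusual alerts first. This sketch uses scikit-learn's IsolationForest on hypothetical alert features; a production pipeline would engineer far richer ones.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical numeric features per alert:
# (bytes sent, distinct destinations, failed logins).
rng = np.random.default_rng(0)
normal_alerts = rng.normal(loc=[500, 3, 1], scale=[100, 1, 1], size=(1000, 3))
oddball = np.array([[50_000, 40, 30]])  # one clearly unusual alert

model = IsolationForest(random_state=0).fit(normal_alerts)

# Lower scores mean more anomalous; sort so analysts see those first.
batch = np.vstack([normal_alerts[:5], oddball])
scores = model.score_samples(batch)
for rank, idx in enumerate(np.argsort(scores)):
    print(f"rank {rank}: alert {idx}, score {scores[idx]:.3f}")
```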

AI cybersecurity systems present a brand-new challenge for hackers: Not only do they need to breach enterprise systems, but they also need to breach them intelligently. Brute-force and bot attacks will be a thing of the past, according to AI cybersecurity evangelists like Clawson and Shepherd. Instead, hackers will have to find ways to outsmart the AI itself.

AI vulnerable to data poisoning attacks

AI takes security to a new level, but CIOs must have a proper understanding of its limitations and weaknesses as well. "Unfortunately, [IT and business] personnel become complacent once an AI-based cybersecurity system is implemented," said Peter Purcell, co-founder of EVAN, a network of IT professionals.

"They assume the AI-based cybersecurity system will learn fast enough to protect from all attacks. This simply isn't true," he added.

AI uses statistical data to classify patterns as malicious or benign, and as mentioned earlier, it has the ability to learn and adapt. But this strength can also become AI's weakness.

"Many systems may detect it as an anomaly in the beginning, but after a while, they are trained to accept it as a normal behavior," said Satish Abburi, vice president of engineering at System Soft Technologies.

"Intruders typically will mask their activity by looking like normal behaviors -- e.g., send data via an HTTPS call to a server or printer. This type of tampering with the ingested data streams is known as a 'data poisoning attack.'"

Since AI's strength is pattern recognition, hackers will either slowly adapt their malicious patterns until they register as normal or execute their actions in ways that confuse the AI. For instance, malware can take extra, irrelevant steps to appear benign and delay executing its attack until it has infiltrated the system deeply enough.
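
A toy illustration of that slow drift: a detector that keeps retraining its baseline on recent traffic can be poisoned by ramping exfiltration up gradually, so each day stays inside the learned tolerance. The traffic volumes, window size and 20% tolerance below are invented for the example.

```python
from statistics import mean

window = [100.0] * 50   # rolling baseline: ~100 KB/day of outbound traffic
TOLERANCE = 1.2         # flag traffic more than 20% above the baseline
frozen_baseline = mean(window)

volume, frozen_fired_on = 100.0, None
for day in range(365):
    if volume > TOLERANCE * mean(window):
        print(f"adaptive detector fired on day {day}")
        break
    if frozen_fired_on is None and volume > TOLERANCE * frozen_baseline:
        frozen_fired_on = day
    window.append(volume)   # each accepted sample retrains the baseline
    window.pop(0)
    volume *= 1.005         # attacker ramps up exfiltration 0.5% per day
else:
    print(f"adaptive detector never fired; traffic is now {volume:.0f} KB/day")
print(f"a frozen baseline would have fired on day {frozen_fired_on}")
```

Because it retrains on the attacker's own traffic, the adaptive detector learns to tolerate a roughly sixfold increase that a static baseline would have caught within weeks.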

Another tactic that works on trained machines is to insert strange signals that make no sense to a human but trick the algorithm into classifying the activity as normal. We have seen this with image classification and face detection algorithms, where changing a single -- and seemingly irrelevant -- pixel can cause the AI to classify a taxi as a dog, with high confidence.
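
The effect is easy to reproduce at toy scale. The sketch below, which substitutes scikit-learn's 8x8 digits dataset and a linear classifier for a production image model, brute-forces single-pixel changes and reports how many held-out images can be flipped.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# Train a simple classifier on 8x8 digit images (pixel values 0-16).
X, y = load_digits(return_X_y=True)
clf = LogisticRegression(max_iter=5000).fit(X[:1500], y[:1500])

def one_pixel_flip(image: np.ndarray):
    """Force each pixel in turn to an extreme value; return the first
    perturbation that changes the model's prediction, if any."""
    original = clf.predict(image.reshape(1, -1))[0]
    for pixel in range(image.size):
        for value in (0.0, 16.0):
            perturbed = image.copy()
            perturbed[pixel] = value
            label = clf.predict(perturbed.reshape(1, -1))[0]
            if label != original:
                return pixel, value, original, label
    return None

flips = [one_pixel_flip(img) for img in X[1500:1550]]
print(sum(f is not None for f in flips),
      "of 50 held-out digits flip class with a one-pixel change")
```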

"The machine has no context of what is normal human activity -- you can add all kinds of strange signals that may make no sense to a human, but which will trigger the computer to say the activity looks good," said Mike Lloyd, Ph.D., CTO at security firm RedSeal.

Sophisticated hackers understand this: "If AI has been trained to recognize normal, then the best thing for the attacker to do is make their traffic look normal. If your victim surfs the web a lot and you want to steal a credit card number, make the traffic you send look as much like a webpage download as possible," he said.

AI cybersecurity in perspective

While CIOs should not consider AI a magic bullet, experts stress they should not overlook its unique capabilities either.

According to IT consultant Kayne McGladrey, a member of the Institute of Electrical and Electronics Engineers, one of the unique benefits of AI is its ability to create an individual profile for each user and then consider what would be abnormal behavior for that particular person. This forces hackers to confine their actions to the boundaries of normal activity for a specific target account, making it far harder to mass-attack the system.

Another way to strengthen AI is to give it more data. After all, AI can only be as strong as the data it gets, and the more data it is given, the better it can classify what is normal and what is not.

"If the end user logs on from Seattle, where their mobile phone and laptop is, a connection from New York would be unusual," McGladrey explained. "It is also possible to note the typing style and speed of a user and use that biometric signature to determine if the user is legitimate. These data [points] make it more difficult for a threat actor to operate silently in the environment."

Finally, it is also important to look at the primary risk factor in any security system, and -- as CIOs have heard before -- it is not software.

"The vast majority of insurance claims we see resulting from hacking and cybercrime involve human error in one form or another,' said Joshua Motta, CEO and co-founder of Coalition, a cybersecurity insurance company. "Over 90% of all attacks that we see are simple things like phishing, SQL injection, remote access on the internet and bad passwords."

AI provides an extra layer of security. Even when hackers set out to breach an AI-protected system, the AI will significantly slow them down, so many attackers will simply go for systems without AI protection, which are easier to get into.

Ultimately, no system is safe. The effort hackers invest is proportional to the value of their target. Script kiddies who hack for fame will simply move on to the next, easier target; state-level attackers will persist until they break in. It bears remembering: Security is a process, not a product.
