
Relentless AI cyberattacks will require new protective measures


AI cyberattacks won't be particularly clever; instead, they'll be fast and fierce. Carnegie Mellon University's Jason Hong explains in this episode of 'Schooled in AI.'

Hey, I'm Nicole Laskowski and this is Schooled in AI. One of the bigger questions CIOs are facing when it comes to artificial intelligence may have less to do with the adoption of AI and more to do with protection against it. Makes sense: If AI is poised to affect every facet of enterprise computing, why would security be any different?

Just a few months ago, several publications declared that AI cyberattacks -- loosely defined as attacks that are powered by or enhanced by AI technologies -- are imminent and warned that the aftermath won't be pretty. AI cyberattacks could wreak a kind of havoc never seen before by enterprise CIOs and chief information security officers.


Jason Hong: Oh yeah, it's going to happen. It's sort of a question of when it's going to happen.

That's Jason Hong. He's a security and privacy expert, and his message to CIOs is a little more measured than the doomsday approach the media have taken: In the beginning, AI cyberattacks won't be clever; instead, their real threat will be speed -- and relentlessness.

Hong: If you've ever read Alice in Wonderland, you might remember the scene with the Red Queen, where basically the Red Queen says, 'We have to run faster just to stay in place.' That's essentially what's going to happen, because as the vulnerabilities are exploited faster, as data is being exfiltrated faster and so on, you need better kinds of tools and better kinds of techniques to prevent, as well as to detect, those kinds of attacks.


Transcript - Relentless AI cyberattacks will require new protective measures

We will get to AI cyberattacks in a roundabout way. First, I want to take a step back and introduce Hong properly, because some of his research could help CIOs shore up common weaknesses that currently exist within their security fabric. These are weaknesses that will no doubt be preyed upon by AI cyberattackers. Hong is a co-founder of Wombat Security Technologies Inc., which specializes in cybersecurity training as a service, and he's an associate professor in the School of Computer Science at Carnegie Mellon University (CMU). His research focuses on an area called usable privacy and security.

Hong: The short way of describing it is how do we make privacy and security easy enough so that you don't need a degree in computer science to actually be effective in protecting yourself, your family and also your employees.

All the research he does is housed within CMU's Human-Computer Interaction Institute, which essentially looks at how people and machines can work together and how to improve things like usability. And he's working with psychologists and computer scientists and designers to pioneer an area of research called social cybersecurity.

Hong: Yeah, so the original name we had for the project was 'social security,' but you can easily see why we didn't choose that one.

This isn't quite how he described it, but it's basically about using peer pressure to educate users and encourage changes in their cybersecurity behavior -- for the better. He said the work is grounded in social psychology, which looks at how people influence one another. One technique he's experimenting with is called social proof:

Hong: Which is people tend to do what everyone around them is doing.

So, when you get off a plane, how do you decide what direction to go in -- left or right?

Hong: The simple heuristic is to follow where everyone else is going and you'll probably be going in the right direction.

Facebook experiment tests peer pressure

Hong and his team took their social proof experiment to Facebook. Every year, the social media company launches a cybersecurity campaign in which it pushes out messages to users that say things like, "extra security settings exist; click here to learn more." So, Hong and his team had a Ph.D. intern rewrite those messages to include language like "A hundred of your friends are using extra security settings" or "10% of your friends are using extra security measures."

Hong: It's a simple change, but it did improve the click-through rates and adoption of some of these security settings. And so the basic idea here is that it's a very subtle, but powerful, way of influencing people's awareness, knowledge and motivation to be secure.

Hong believes social proofs can be integrated into the enterprise to take on common CIO security headaches: spurring employees to adopt two-factor authentication, update their software correctly or choose stronger passwords, simply by letting employees know how many of the people around them already do.

Hong: One of the big problems we've seen with respect to cybersecurity: It's pretty invisible. And because it's invisible, it's hard for people to adopt better practices over time, so we can't really learn from each other.
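To make the idea concrete, here is a minimal sketch of how an IT team might turn adoption statistics into visible, social-proof style nudges of the kind Hong describes. The directory records, field names and enrollment link are hypothetical stand-ins for whatever identity or device-management system an organization actually uses.

# Sketch: turn security-adoption stats into social-proof nudges.
# The directory records below are hypothetical stand-ins for data an
# identity provider or MDM system would supply.

from collections import defaultdict

directory = [
    {"user": "avery", "dept": "finance", "uses_2fa": True},
    {"user": "blake", "dept": "finance", "uses_2fa": False},
    {"user": "casey", "dept": "finance", "uses_2fa": True},
    {"user": "devon", "dept": "engineering", "uses_2fa": True},
    {"user": "ellis", "dept": "engineering", "uses_2fa": False},
]

def nudges_by_department(records):
    """Return a social-proof message for each user who has not enabled 2FA."""
    stats = defaultdict(lambda: {"total": 0, "adopted": 0})
    for r in records:
        stats[r["dept"]]["total"] += 1
        stats[r["dept"]]["adopted"] += int(r["uses_2fa"])

    messages = {}
    for r in records:
        if r["uses_2fa"]:
            continue  # only nudge people who have not adopted yet
        s = stats[r["dept"]]
        pct = round(100 * s["adopted"] / s["total"])
        messages[r["user"]] = (
            f"{pct}% of your {r['dept']} colleagues already use two-factor "
            f"authentication. Turn it on here: <enrollment link>"
        )
    return messages

if __name__ == "__main__":
    for user, msg in nudges_by_department(directory).items():
        print(user, "->", msg)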

Criminals co-opt AI for cyberattacks

OK, let's get back to AI cyberattacks. Because, in addition to preventive measures like these, Hong has a pulse on how cybercriminals are using AI -- or if they're using AI -- in cyberattacks. And there's a bit of good news for CIOs because the answer is:

Hong: Not really. Most attacks don't directly use these artificial intelligence and machine learning techniques right now.

Hong said attackers still rely on mass attacks that look for common vulnerabilities or ways to exploit lots of computers. Or the cyberattacks are targeted: They use similar techniques but look for rare vulnerabilities or go after specific individuals inside an organization. But here's where the other shoe is about to drop.

Hong: Now having said that, there are some tools that the hackers will use that do use artificial intelligence techniques.

So, not directly, but they are using AI technologies indirectly.

Hong: So, for example, they might use the Google search engine to try to find openings or common vulnerabilities inside a website or just to figure out who works at your company. So, this is a very indirect use of artificial intelligence techniques because Google uses a lot of these kinds of techniques inside of their search engine.

AI is super fuel for cyberattacks

CIOs can't sit idle because AI cyberattacks are here -- to a certain degree -- and they're only going to become more advanced. Hong said the initial AI cyberattacks won't look or feel new: The types of techniques and the kinds of attacks you're familiar with -- malware, phishing -- will continue. But AI will give them a super fuel to find cracks and weaknesses in your security programs faster -- much, much faster -- as well as require fewer resources to get the work done.

Hong: It used to take maybe a week after a patch came out for an attacker to reverse engineer it and figure out how to release an exploit. But nowadays, they can probably do it in a day or less because the tools are getting better. And so you can even imagine maybe those will come even faster, like on the order of minutes. Same thing with exfiltrating data and trying to find interesting things. Also better kinds of automated attacks.

Soon enough the attacks will start to get more sophisticated, all within the confines of what AI is good at doing, which, Hong said, is repetitive, predictable tasks and not human-like behavior.

Hong: You can imagine that you could create a special kind of search engine that looks for the known employees who are working at your company and they can do this by scanning LinkedIn and looking at your website. And then also generating keywords for each of those employees based on things they might be interested in, and then just automating whole messages. So, sending out fake phishing emails, too.

Again, what I just described, that kind of phishing attack, it's very manual today, and you have a human in the loop -- somebody using some tool to get the scam done. But now imagine trying to automate that process -- you'd probably end up using some of the relatively simple AI techniques to do that.

Another tactic ripe for automation: injecting bad data into a system so that the system then makes a bad recommendation. Again, this isn't new. You've already seen this technique in practice, with spammers coming up with ways to trick spam filter algorithms.

Hong: The spammers were basically just grabbing a few words in the dictionary and trying to poison people's spam filters.

By padding their email messages with words that aren't normally associated with spam, spammers could get junk email through the filter or, as Hong said, actually poison the filter algorithm by tricking it or teaching it to believe nonspam words are actually spam.

Hong: If you train your spam filter on these emails that have lots of legitimate words inside of them, then your spam filter starts having worse performance.
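The degradation Hong describes is easy to measure. Here is an illustrative sketch: hold back a trusted validation set, then compare a toy filter's accuracy before and after retraining on a batch of spam padded with ordinary business words. The tiny corpus is made up, and scikit-learn's bag-of-words Naive Bayes stands in for whatever filter is actually in production.

# Sketch: check a trusted validation set before and after retraining to
# spot the accuracy drop a poisoned batch can cause. The corpus and the
# Naive Bayes model are illustrative stand-ins for a real spam filter.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Trusted, manually reviewed mail (1 = spam, 0 = legitimate).
train_texts = ["win a free prize now", "cheap pills online",
               "meeting agenda attached", "quarterly report draft"]
train_labels = [1, 1, 0, 0]

# Held-out validation set the attacker never touches.
val_texts = ["claim your free prize", "please review the attached report"]
val_labels = [1, 0]

# New batch to retrain on: spam padded with ordinary business words,
# the kind of poisoning Hong describes.
new_texts = ["free prize meeting agenda report attached",
             "cheap pills quarterly report draft agenda"]
new_labels = [1, 1]

def validation_accuracy(train_x, train_y):
    vec = CountVectorizer()
    model = MultinomialNB()
    model.fit(vec.fit_transform(train_x), train_y)
    return model.score(vec.transform(val_texts), val_labels)

before = validation_accuracy(train_texts, train_labels)
after = validation_accuracy(train_texts + new_texts, train_labels + new_labels)

print(f"validation accuracy before retraining: {before:.2f}")
print(f"validation accuracy after retraining:  {after:.2f}")
if after < before:
    print("accuracy dropped; hold the new batch for manual review")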

It's not unlike Gresham's Law, Hong said. If two types of coins were in circulation -- all gold or all silver -- and then a hybrid coin of gold or silver plus cheaper materials appeared, people would hoard the legitimate coins and take them out of circulation. The same is true for bad data, he said.

Hong: So, bad data will drive out the good data because it just sort of makes your good data not as effective.

There are steps you can take to protect yourself and your company from these kinds of attacks.

Hong: It depends on what specific domain your company is working in. But you know there are still some relatively simple things like if you're using user-generated content and you're relying on that, make sure you have usernames or you're recording the IP addresses. … That way, you can filter out bad data after the fact because usually they'll be clustered together. So, it will be just a few usernames that are giving you the bad data.
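A minimal sketch of that after-the-fact cleanup, under the assumption that user-generated submissions are logged with a username and source IP: group records by source and drop sources that send many values far from the overall average. The records, field names and thresholds here are made up.

# Sketch: filter suspected bad data after the fact by looking at which
# sources it clusters under. Submission records and thresholds are
# illustrative assumptions, not a specific product's schema.

from collections import defaultdict
from statistics import mean

submissions = [
    {"user": "u1",   "ip": "10.0.0.5",  "rating": 5},
    {"user": "u2",   "ip": "10.0.0.9",  "rating": 4},
    {"user": "bot7", "ip": "192.0.2.7", "rating": 1},
    {"user": "bot7", "ip": "192.0.2.7", "rating": 1},
    {"user": "bot7", "ip": "192.0.2.7", "rating": 1},
    {"user": "u3",   "ip": "10.0.0.11", "rating": 5},
]

def suspicious_sources(records, min_count=3, max_deviation=1.5):
    """Sources with many submissions far from the overall average."""
    overall = mean(r["rating"] for r in records)
    by_source = defaultdict(list)
    for r in records:
        by_source[(r["user"], r["ip"])].append(r["rating"])
    return {
        src for src, ratings in by_source.items()
        if len(ratings) >= min_count
        and abs(mean(ratings) - overall) > max_deviation
    }

bad = suspicious_sources(submissions)
clean = [r for r in submissions if (r["user"], r["ip"]) not in bad]
print("dropped sources:", bad)
print("records kept:", len(clean))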

Or you can introduce thresholds.

Hong: If you only have a few instances of the bad data, you might just ignore it. And then basically what that means is you're trying to raise the amount of work the attackers have to do to compromise your system. So, it's basically an economic argument, which is that, yeah, eventually the attackers probably will be able to compromise your system with bad data. But it's just going to cost them a lot more in terms of time and money to do that and so they might just look for easier targets.
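The threshold idea can be as simple as refusing to act on a signal until enough distinct sources report it, which forces an attacker to control many accounts instead of one. A sketch with made-up report data:

# Sketch: only trust a signal once enough distinct sources report it,
# raising the attacker's cost. The reports and the cutoff are illustrative.

from collections import defaultdict

# (signal, source) pairs, e.g. a user reporting a defective item.
reports = [
    ("item_42_defective", "u1"),
    ("item_42_defective", "u2"),
    ("item_42_defective", "u3"),
    ("item_99_defective", "bot7"),  # a lone, possibly planted report
]

MIN_SOURCES = 3  # attacker must now control at least this many accounts

sources_per_signal = defaultdict(set)
for signal, source in reports:
    sources_per_signal[signal].add(source)

trusted = [s for s, srcs in sources_per_signal.items() if len(srcs) >= MIN_SOURCES]
print("signals acted on:", trusted)  # only item_42_defective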

Finally, Hong said, the last thing you can do is …

Hong: … make sure you have a human in the loop.

If you're going to automate systems, you want to make sure that, as you're adding new data into the system, some of it is verified to reduce the potential for false positives.
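A sketch of that verification step, assuming new records arrive in batches before they feed an automated system: route a random sample of each batch to a human reviewer and quarantine the whole batch if too much of the sample is rejected. The sample rate, rejection threshold and review function are hypothetical placeholders.

# Sketch: keep a human in the loop by sampling each incoming batch for
# manual review before it feeds an automated system. Thresholds and the
# review function are hypothetical placeholders.

import random

SAMPLE_RATE = 0.1      # fraction of each batch a person actually inspects
MAX_REJECT_RATE = 0.2  # above this, quarantine the whole batch

def human_review(record):
    """Placeholder for a real review queue; returns True if the record looks OK."""
    return record.get("looks_ok", True)

def accept_batch(batch):
    sample_size = max(1, int(len(batch) * SAMPLE_RATE))
    sample = random.sample(batch, sample_size)
    rejected = sum(1 for record in sample if not human_review(record))
    return (rejected / sample_size) <= MAX_REJECT_RATE

incoming = [{"value": i, "looks_ok": i % 7 != 0} for i in range(50)]
if accept_batch(incoming):
    print("batch accepted into the automated pipeline")
else:
    print("batch quarantined for full manual review")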

So, AI cyberattacks aren't a strange new world. They're an extension of the kinds of attacks you're already familiar with. And one of the best things you can do is do what you're doing now -- only better.

Hong: To protect yourself today, all the simple basic things still apply; it's just you need to be better about enforcing them and making sure that employees are following them. Most security attacks are still pretty basic -- attackers will try to just guess passwords; they'll focus on passwords that are very popular and that are used often, are weak or are being reused. That's not going to ever change.

Or they might focus on an old vulnerability that hasn't been patched. There are a lot of studies showing that most of these exploits are actually over a year old. That means that people haven't updated their software.

That's a window of opportunity where attackers with better tools will be able to exploit those vulnerabilities much faster.
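Since most exploited vulnerabilities are more than a year old, even a crude patch-age report shrinks that window. Here is a sketch against a hypothetical asset inventory; the hosts, packages and dates are made up, and a real version would pull from an asset-management or vulnerability-scanning tool.

# Sketch: report software whose available patch has been pending longer
# than a cutoff. The inventory rows are hypothetical placeholders.

from datetime import date

MAX_PATCH_AGE_DAYS = 30

inventory = [
    {"host": "web-01", "package": "openssl",  "patch_released": date(2017, 3, 1)},
    {"host": "db-02",  "package": "postgres", "patch_released": date(2018, 5, 20)},
]

def overdue(items, today=None):
    today = today or date.today()
    return [
        i for i in items
        if (today - i["patch_released"]).days > MAX_PATCH_AGE_DAYS
    ]

for item in overdue(inventory):
    age = (date.today() - item["patch_released"]).days
    print(f"{item['host']}: {item['package']} patch pending {age} days")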

Rather than a top-down approach for stronger passwords, maybe it's time to introduce the enterprise to social cybersecurity.
