ISACA: Build security into artificial intelligence hardware

A new paper on how to fight off malicious AI recommends adding security features to AI chips. ISACA's Rob Clyde explains why that's a good idea.

Rob Clyde, vice chairman of the board of directors at ISACA, an IT governance organization, weighed in for SearchCIO on a recent report, "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation." The comprehensive report, written by 26 academic, industry and policy experts, raises the alarm about the double-edged nature of rapidly evolving AI technologies.

AI breakthroughs have led to such wonders as a computer program that can outcompete champions in the game of Go and cars that can drive themselves; they have also given hackers the potential to weaponize AI applications and overwhelm an enterprise's best cyberdefenses. In part one, Clyde talks about the duality of AI and how the enterprise's current lack of AI skills, combined with rudimentary policing tools, makes companies especially vulnerable to AI-driven cyberattacks.

Here, he analyzes several of the paper's discussion points on how the enterprise can thwart AI-driven cyberattacks, adding this grim caveat: The most enterprising hackers will stand "on the shoulders of the good guys" to dream up yet another way to attack.

Editor's note: This interview has been edited for clarity.

One of the discussion points of the "Malicious Use of Artificial Intelligence" report was the need to build security into artificial intelligence hardware.

Rob Clyde: I like the idea, and I consider it a very natural evolution of what we're trying to do with AI. The lower you can build security into the stack -- and the hardware is the ultimate level -- the harder it will be to find a vulnerability and break through. So, it makes some sense to hardwire certain security features into the hardware. We won't be able to do everything. It'll just be too much to push it down all the way. But there will be some things we could do.

So, building security into artificial intelligence hardware will help. What's stopping that from happening?

Clyde: One, it's going to add cost. And it is going to add complexity. And it could decrease the flexibility -- the way in which we write the artificial intelligence software. We may find that a lot of the software we've written won't actually run [on that security-enabled hardware] and that changes will have to be made. All of which I think can be overcome.

I want to emphasize: This will not fix everything. It will only address some of the concerns. You could still easily have AI that's built for good use -- to detect vulnerabilities in systems -- and a bad guy could use that AI to find vulnerabilities and build another piece to exploit them. And now, you've got a cyberhack, and they actually stood on the shoulders of the good guys.

The authors provided several ways to implement security into artificial intelligence hardware, such as features that prevent copying, restrict access and so on. What stood out to you?

Clyde: One that stands out is a feature that would allow auditing, because this is one of the issues with AI. People ask, 'Why did it do what it did? Did someone come in and mess with something? Did it learn something? How do we know this?' Anything that can enable auditing -- that means keeping a trail -- can enable a better understanding of what it's doing. The copying discussion point is kind of interesting.
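The audit trail Clyde describes could, as a thought experiment, look something like the minimal sketch below: an append-only log in which each record of a model's decision is chained to the previous one by a hash, so tampering after the fact is detectable. The class, field and model names here are hypothetical illustrations, not drawn from the report or from any ISACA guidance.

```python
# Hypothetical sketch of a tamper-evident audit trail for AI decisions.
# Each record is chained to the previous one by a SHA-256 hash, so any
# later modification of the trail can be detected on verification.
import hashlib
import json
import time


class AuditTrail:
    def __init__(self):
        self.records = []
        self.last_hash = "0" * 64  # placeholder hash for the first record

    def log(self, model_id, inputs, output):
        """Append one decision record, chained to the previous record."""
        record = {
            "timestamp": time.time(),
            "model_id": model_id,
            "inputs": inputs,
            "output": output,
            "prev_hash": self.last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.last_hash = record["hash"]
        self.records.append(record)

    def verify(self):
        """Recompute the chain and confirm no record has been altered."""
        prev = "0" * 64
        for record in self.records:
            body = {k: v for k, v in record.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if body["prev_hash"] != prev:
                return False
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True


trail = AuditTrail()
trail.log("vuln-scanner-v1", {"target": "10.0.0.5"}, "no critical findings")
trail.log("vuln-scanner-v1", {"target": "10.0.0.6"}, "outdated TLS detected")
print(trail.verify())  # True unless a record was modified after the fact
```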

That stood out to me, too. The authors explain that it is necessary "to prevent copying a trained AI model off of a chip without the original copy first being deleted." But how would this act as a security feature?

Clyde: I think the idea there is to try to prevent this dual-use issue that we were just talking about. So, maybe if the bad guys can't just lift the trained AI that knows how to detect vulnerabilities, then they can't just use it for evil to find the vulnerabilities and put in the exploits.

The struggle I have with that is: Why couldn't they just use the same AI code, because a lot of this stuff tends to end up in open source? That's why AI has been able to grow so fast [because] the researchers have been very willing to share, which I think is a good thing. But people could easily pick up the same one and say, 'Fine, I'll train it myself. And maybe yours is trained better, but maybe mine is good enough.'
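To make the chip-binding idea more concrete, here is a purely illustrative software analogy, not the report's actual proposal: the serialized model is encrypted with a key that never leaves a device's key store, so a copy lifted off that device cannot be decrypted anywhere else. The SecureElementStub class and the use of the Python cryptography package are assumptions for the sketch; a real implementation would keep the key in hardware.

```python
# Illustrative software analogy (not the report's actual proposal): bind a
# trained model to a device by encrypting it with a key that never leaves
# that device's key store. A file copied to another device cannot be loaded
# there, because the other device holds a different key.
from cryptography.fernet import Fernet  # third-party package: cryptography


class SecureElementStub:
    """Stands in for a hardware key store; the key is never exported."""

    def __init__(self):
        self._key = Fernet.generate_key()

    def seal(self, model_bytes: bytes) -> bytes:
        return Fernet(self._key).encrypt(model_bytes)

    def unseal(self, sealed: bytes) -> bytes:
        return Fernet(self._key).decrypt(sealed)


device_a = SecureElementStub()
sealed_model = device_a.seal(b"trained-model-weights")

# On the original device the model unseals and can be loaded.
assert device_a.unseal(sealed_model) == b"trained-model-weights"

# A copy lifted onto another device is useless without device A's key.
device_b = SecureElementStub()
try:
    device_b.unseal(sealed_model)
except Exception:
    print("copied model cannot be unsealed off the original device")
```

As Clyde points out, though, this kind of binding only raises the bar: an attacker with access to the same open source code can still train a model of their own.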

You bring up a discussion point that seemed to be raising some eyebrows. It was the recommendation to delay publication of research on new AI technologies until the results could be vetted for the potential malicious use of AI.

Clyde: A slight delay doesn't bother me. If somebody is not yet finished with their work on the AI, there might be a good reason to delay it for a short period of time. And I want to emphasize short. It's really hard to keep a secret, especially in a field like AI, and it's going to leak out. And the danger is that it won't leak out to the people who need to know so that they can be protected.

But there's an intellectual exchange that happens once the results are published. And when you delay, are you then stunting progress?

Clyde: If you look at some of the newer open source AI work published, they say to share all capabilities and algorithms by default. Capabilities and algorithms don't sound like vulnerabilities to me. So, I have less concern with the sharing of those, and the history has been to share them. I think the bigger issue is: If those capabilities and algorithms are leading someone to break into systems, or if you've found vulnerabilities in an AI, then I think the delay makes a lot of sense. I'm not overly worried about this, because I can tell you the nature of this field is that most people are going to be biased toward being open.

Do you have a threshold in mind? How long should the delay be?

Clyde: I think it's one to three months. It's certainly not years. Once you go past that kind of a time frame, it leaks out, especially if it's big. And publishing is better than leaking it deliberately.
