Not only are data breaches like Equifax and ransomware like WannaCry on the rise, but signs of AI-assisted cyberattacks are also surfacing. U.K.-based cybersecurity startup Darktrace recently discovered an AI-powered attack in India that used machine learning algorithms to mirror user behavior within the network and evade detection.
"We're seeing campaigns where the truly bad guys are using AI. They are using machine learning to attack machines, to turn your machine against you," said Cameron Armstrong, director of commercial sales at Darktrace.
Attacks like these are worrying because there isn't a great way to understand how they infiltrate and attack an organization's network. This also makes it difficult to detect the early warning signs, Armstrong said at the recent InfoSec World conference.
"Traditional tools and technologies aren't doing what the sales rep told you they were going to do. They are great to have, but they're not showing us new ways to stay in front of that curve of innovation. Legacy tools are being constantly outpaced," she said.
Not only is there a need to detect these modern-day cyberthreats early on, but organizations also have to figure out a way to stop them once they happen -- and in real time, she added.
"There really is only one great way to identify and to combat something like a machine learning attack, and that's with machine learning," Armstrong said.
Putting machine learning algorithms to work
Fighting new-age digital threats calls for deploying new-age techniques: Having an autonomous response ready is incredibly important because it can save time, she said.
Organizations should think beyond putting up firewalls and deploying antivirus software to keep the corporate network secure. They should stop relying on outdated ways of running their security teams and instead find new ways to understand the behavior of machines, she said.
Using both supervised and unsupervised machine learning algorithms within a tool can help establish a pattern of life for every single user, IP address and the network as a whole, which would take an "insane" amount of time if done manually, she said.
"Those are millions and millions of pattern-of-life you would have to establish," Armstrong added.
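The idea of a per-entity pattern of life can be sketched simply: learn a statistical baseline of each user's normal activity, then flag observations that fall far outside it. The minimal example below is illustrative only -- the feature (hourly event counts), the z-score test and the threshold are assumptions for the sketch, not Darktrace's actual model.

```python
from statistics import mean, stdev

def build_baseline(history):
    """history: dict of user -> list of hourly event counts.
    Returns each user's learned (mean, stdev) pattern of life."""
    return {
        user: (mean(counts), stdev(counts))
        for user, counts in history.items()
        if len(counts) >= 2
    }

def is_anomalous(baseline, user, observed, z_threshold=3.0):
    """Flag activity far outside the user's learned pattern of life."""
    mu, sigma = baseline[user]
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > z_threshold

history = {
    "alice": [10, 12, 11, 9, 10, 13],
    "bob": [100, 95, 110, 105, 98, 102],
}
baseline = build_baseline(history)
print(is_anomalous(baseline, "alice", 11))   # within her normal pattern
print(is_anomalous(baseline, "alice", 400))  # far outside it
```

A real deployment would track many features per user, IP and subnet -- which is exactly why Armstrong describes the manual version as establishing "millions and millions" of such baselines.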
Darktrace's self-learning platform uses unsupervised machine learning and recursive Bayesian estimation -- which uses the calculus of probability theory -- to spot anomalous activity within the enterprise.
"When you put these two technologies together ... first, you are going to get a full, unbiased view of behaviors for your entire organization, IPs and users," she explained. "The second piece of that is where the really advanced piece of math is going to come in. It is going to start to cluster all these different behaviors together and what you are left with are true anomalous behaviors."
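Recursive Bayesian estimation, the second technique Armstrong names, can be illustrated with a toy example: a belief that a device is behaving anomalously is updated with each new observation via Bayes' rule, with today's posterior becoming tomorrow's prior. The likelihood values below are made-up placeholders for the sketch, not Darktrace's actual parameters.

```python
def bayes_update(prior, p_obs_given_anom, p_obs_given_normal):
    """One recursive step of Bayes' rule: P(anomalous | observation)."""
    evidence = p_obs_given_anom * prior + p_obs_given_normal * (1 - prior)
    return p_obs_given_anom * prior / evidence

belief = 0.01  # start with a low prior that the device is compromised
# Each observation: (P(obs | anomalous), P(obs | normal)) -- placeholders
observations = [(0.7, 0.2), (0.6, 0.3), (0.8, 0.1)]
for p_anom, p_normal in observations:
    belief = bayes_update(belief, p_anom, p_normal)

print(f"posterior belief the device is anomalous: {belief:.3f}")
```

The recursion is what makes the approach "self-learning" in spirit: the estimate sharpens as evidence accumulates, without needing labeled attack data up front.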
Deploying unsupervised machine learning algorithms can help organizations identify early signs of cyberthreats and disruptions and respond to them, Armstrong said.
But while machine learning is great at telling organizations what it sees, it doesn't have a moral compass, she reminded the audience. Deploying such tools won't make a security professional's job redundant, because there is still a need for people who know how to use them.
"If you are utilizing a tool that specializes in machine learning, all that tool is going to do is enhance the level and ability of your security team, security analysts and network folks. You have a full view and full clarity of your network: how it's set up from the inside out," Armstrong said.