Artificial intelligence isn't just for the law-abiding. Machine learning algorithms are as freely available to cybercriminals and state-sponsored actors as they are to financial institutions, retailers and insurance companies.
"When we look especially at terrorist groups who are exploiting social media, [and] when we look at state-sponsored efforts to influence and manipulate, they're using really powerful algorithms that are at everyone's disposal," said Yasmin Green, director of research and development at Jigsaw, a technology incubator launched by Google to try to solve geopolitical problems.
Criminals need not develop new algorithms or new AI, Green said at the recent EmTech conference in Cambridge, Mass. They can and are exploiting what is already out there to commit fraud and manipulate public opinion.
The good news about weaponized AI? The tools to combat these nefarious efforts are also advancing. One promising lead, according to Green, is that bad actors don't exhibit the same kinds of online behavior that typical users do. And security experts are hoping to exploit the behavioral "tells" they're seeing -- with the help of machines, of course.
Variations on weaponized AI
Cybercriminals and internet trolls are adept at using AI to simulate human behavior and trick systems or peddle propaganda. The online test used to tell humans from machines, CAPTCHA, is continuously bombarded by bad guys trying to trick it.
In an effort to stay ahead of cybercriminals, CAPTCHA, which stands for Completely Automated Public Turing Test to Tell Computers and Humans Apart, has had to evolve, creating some unanticipated consequences, according to Shuman Ghosemajumder, CTO at Shape Security in Mountain View, Calif. Recent data from Google shows that humans solve CAPTCHAs just 33% of the time. By comparison, state-of-the-art machine learning optical character recognition technology has a solve rate of 99.8%.
"This is doing exactly the opposite of what CAPTCHA was originally intended to do," Ghosemajumder said. "And that has now been weaponized."
He said advances in computer vision technology have led to weaponized AI services such as Death By CAPTCHA, an API plug-in that promises to solve 1,000 CAPTCHAs for $1.39. "And there are, of course, discounts for gold members of the service."
A more aggressive attack is credential stuffing, where cybercriminals use stolen usernames and passwords from third-party sources to gain access to accounts.
Sony was the victim of a credential-stuffing attack in 2011. Cybercriminals culled a list of 15 million credentials stolen from other sites and then used a botnet to test whether they worked on Sony's login page. Today, an outfit by the good-guy-sounding name of Sentry MBA -- the MBA stands for Modded By Artists -- provides cybercriminals with a user interface and automation technology, making it easy to test the validity of stolen usernames and passwords and even to bypass security features like CAPTCHAs.
"We see these types of attacks responsible for tremendous amounts of traffic on some of the world's largest websites," Ghosemajumder said. In the case of one Fortune 100 company, credential-stuffing attacks made up more than 90% of its login activity.
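The telltale pattern behind numbers like that 90% figure is many login attempts against many distinct usernames with almost no successes. A minimal sketch of that idea in Python follows; the function name, thresholds and log format are illustrative assumptions, not Shape Security's actual method:

```python
from collections import defaultdict

def flag_stuffing_sources(login_events, min_usernames=20, max_success_rate=0.05):
    """Flag source IPs that try many distinct usernames with a very low
    success rate -- a classic credential-stuffing signature.
    `login_events` is an iterable of (source_ip, username, succeeded) tuples.
    """
    stats = defaultdict(lambda: {"users": set(), "attempts": 0, "successes": 0})
    for ip, user, ok in login_events:
        s = stats[ip]
        s["users"].add(user)
        s["attempts"] += 1
        s["successes"] += int(ok)

    flagged = []
    for ip, s in stats.items():
        # Many different usernames + almost all failures => likely stuffing
        if (len(s["users"]) >= min_usernames
                and s["successes"] / s["attempts"] <= max_success_rate):
            flagged.append(ip)
    return flagged
```

Real defenses layer signals like this with device fingerprinting and behavioral analysis; a single counter is easy for attackers to evade by rotating IPs.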
Behavioral tells in weaponized AI
Ghosemajumder's firm Shape Security is now using AI to detect credential-stuffing efforts. One method is to use machine learning to identify behavioral characteristics that are typical of cybercriminal exploits.
When cybercriminals simulate human interactions, they will, for example, move the mouse from the username field to the password field quickly and efficiently -- in an unhumanlike manner. "Human beings are not capable of doing things like moving a mouse in a straight line -- no matter how hard they try," Ghosemajumder said.
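That straight-line observation can be checked directly. The sketch below, a hypothetical illustration rather than Shape Security's code, flags a recorded mouse path as bot-like when every sampled point lies almost exactly on the line between its start and end:

```python
import math

def is_suspiciously_straight(path, tolerance=2.0):
    """Return True if a mouse path looks machine-generated: every point
    lies within `tolerance` pixels of the straight line from start to end.
    `path` is a list of (x, y) coordinates sampled during the movement.
    Humans introduce small perpendicular jitter; simple bots do not."""
    if len(path) < 3:
        return True  # too few samples to show human jitter
    (x0, y0), (x1, y1) = path[0], path[-1]
    length = math.hypot(x1 - x0, y1 - y0)
    if length == 0:
        return False
    for x, y in path[1:-1]:
        # Perpendicular distance from the point to the start-end line
        dist = abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / length
        if dist > tolerance:
            return False  # human-like deviation detected
    return True
```

In practice, sophisticated bots add synthetic jitter, so production systems combine many such features (timing, curvature, acceleration) in a trained model rather than relying on one rule.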
Jigsaw's Green said her team is also looking for "technical markers" that can distinguish truly organic campaigns from coordinated ones. She described state-sponsored actors who peddle propaganda and attempt to spread misinformation through what she called "seed-and-fertilizer campaigns."
"The goal of these state-sponsored campaigns is to plant a seed in social conversations and to have the unwitting masses fertilize that seed for it to actually become an organic conversation," she said.
"There are a few dimensions that we think are promising to look at. One is the temporal dimension," she said.
Looking across the internet, Jigsaw began to understand that coordinated attacks tend to move together, last longer than organic campaigns and pause as state-sponsored actors wait for instructions on what to do. "You'll see a little delay before they act," she said.
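One crude way to capture that "moving together" signal is to bucket posts into short time slots and see how many distinct accounts pile into the same slots. The score below is a hypothetical sketch, not Jigsaw's method:

```python
from collections import defaultdict

def coordination_score(posts, window=60):
    """Rough temporal-synchrony score for a set of accounts.
    `posts` is a list of (account, unix_timestamp) pairs. Posts are
    bucketed into `window`-second slots; the score is the average number
    of distinct accounts active per slot. Organic activity spreads out
    over many slots; coordinated campaigns concentrate in a few."""
    slots = defaultdict(set)
    for account, ts in posts:
        slots[ts // window].add(account)
    if not slots:
        return 0.0
    return sum(len(accounts) for accounts in slots.values()) / len(slots)
```

Five accounts all posting within the same minute score far higher than five accounts posting an hour apart, which is the asymmetry a detector would look for.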
Other dimensions include network shape and semantics. State-sponsored actors tend to be more tightly linked together than communities within organic campaigns, and they tend to use "irregularly similar" language in their messaging.
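The "irregularly similar" language signal can be approximated with a simple text-overlap measure. As an illustrative assumption (not Jigsaw's technique), Jaccard similarity over word sets flags near-duplicate messages posted by nominally independent accounts:

```python
def message_similarity(a, b):
    """Jaccard similarity of the word sets of two messages, in [0, 1].
    Coordinated accounts often post near-duplicate talking points, so
    unusually high pairwise similarity across many accounts is a red flag."""
    words_a = set(a.lower().split())
    words_b = set(b.lower().split())
    if not words_a and not words_b:
        return 1.0
    return len(words_a & words_b) / len(words_a | words_b)
```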
The big question: Can behavioral tells -- identified by machines and combined with automated detection -- effectively expose state-sponsored campaigns? No doubt, time will tell.