
The uphill battle of beating back weaponized AI

AI experts at the recent EmTech conference in Cambridge, Mass., painted a frightening picture of weaponized AI.

Artificial intelligence isn't just for the law-abiding. Machine learning algorithms are as freely available to cybercriminals and state-sponsored actors as they are to financial institutions, retailers and insurance companies.

"When we look especially at terrorist groups who are exploiting social media, [and] when we look at state-sponsored efforts to influence and manipulate, they're using really powerful algorithms that are at everyone's disposal," said Yasmin Green, director of research and development at Jigsaw, a technology incubator launched by Google to try to solve geopolitical problems.

Criminals need not develop new algorithms or new AI, Green said at the recent EmTech conference in Cambridge, Mass. They can, and do, exploit what is already out there to commit fraud and manipulate public opinion.

The good news about weaponized AI? The tools to combat these nefarious efforts are also advancing. One promising lead, according to Green, is that bad actors don't exhibit the same kinds of online behavior that typical users do. And security experts are hoping to exploit the behavioral "tells" they're seeing -- with the help of machines, of course.

Variations on weaponized AI

Cybercriminals and internet trolls are adept at using AI to simulate human behavior and trick systems or peddle propaganda. CAPTCHA, the online test used to tell humans from machines, is continuously bombarded by bad guys trying to defeat it.

In an effort to stay ahead of cybercriminals, CAPTCHA, which stands for Completely Automated Public Turing Test to Tell Computers and Humans Apart, has had to evolve, creating some unanticipated consequences, according to Shuman Ghosemajumder, CTO at Shape Security in Mountain View, Calif. Recent data from Google shows that humans solve CAPTCHAs just 33% of the time. That's compared to state-of-the-art machine learning optical character recognition technology that has a solve rate of 99.8%.

"This is doing exactly the opposite of what CAPTCHA was originally intended to do," Ghosemajumder said. "And that has now been weaponized."

He said advances in computer vision technology have led to weaponized AI services such as Death By CAPTCHA, an API plug-in that promises to solve 1,000 CAPTCHAs for $1.39. "And there are, of course, discounts for gold members of the service."

A more aggressive attack is credential stuffing, where cybercriminals use stolen usernames and passwords from third-party sources to gain access to accounts.

Sony was the victim of a credential-stuffing attack in 2011. Cybercriminals culled a list of 15 million credentials stolen from other sites and then used a botnet to test whether they worked on Sony's login page. Today, a tool with the good-guy-sounding name of Sentry MBA -- the MBA stands for Modded By Artists -- gives cybercriminals a user interface and automation technology, making it easy to test the validity of stolen usernames and passwords and even to bypass security features like CAPTCHAs.

"We see these types of attacks responsible for tremendous amounts of traffic on some of the world's largest websites," Ghosemajumder said. In the case of one Fortune 100 company, credential-stuffing attacks made up more than 90% of its login activity.

[Figure: Shuman Ghosemajumder shares a snippet of traffic from a Fortune 100 retailer. 'We see that on a 24/7 basis, more than 90% of the login activity was coming from credential-stuffing attacks,' he said.]
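
The article doesn't detail how defenders separate that traffic from legitimate logins, but one common first-cut heuristic -- and only a heuristic -- follows from the economics of the attack: stuffing bots fail far more often than humans, because most stolen credentials don't match any account on the targeted site. A minimal sketch in Python; the thresholds and event format are illustrative assumptions, not Shape Security's method:

```python
from collections import defaultdict

# Illustrative thresholds: humans mistype passwords occasionally,
# while stuffing bots fail on most attempts because most stolen
# credentials don't match accounts on the targeted site.
FAILURE_RATE_THRESHOLD = 0.80
MIN_ATTEMPTS = 20  # don't judge a source on a handful of logins

def flag_stuffing_sources(login_events):
    """login_events: iterable of (source_ip, succeeded) pairs.
    Returns the set of source IPs whose failure rate looks bot-like."""
    attempts = defaultdict(int)
    failures = defaultdict(int)
    for source_ip, succeeded in login_events:
        attempts[source_ip] += 1
        if not succeeded:
            failures[source_ip] += 1
    return {
        ip for ip, total in attempts.items()
        if total >= MIN_ATTEMPTS
        and failures[ip] / total >= FAILURE_RATE_THRESHOLD
    }
```

A botnet spreads its attempts across thousands of IP addresses precisely to stay under per-source thresholds like these, which is why the behavioral signals described below are the more durable line of defense.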

Behavioral tells in weaponized AI

Ghosemajumder's firm Shape Security is now using AI to detect credential-stuffing efforts. One method is to use machine learning to identify behavioral characteristics that are typical of cybercriminal exploits.

When cybercriminals simulate human interactions, they will, for example, move the mouse from the username field to the password field quickly and efficiently -- in an unhumanlike manner. "Human beings are not capable of doing things like moving a mouse in a straight line -- no matter how hard they try," Ghosemajumder said.
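
Ghosemajumder didn't share code, but the straight-line tell he describes is easy to sketch: compare the straight-line distance between the first and last mouse samples to the total distance actually traveled. The function names and threshold below are illustrative, not Shape Security's implementation (requires Python 3.8+ for math.dist):

```python
import math

def path_linearity(points):
    """points: list of (x, y) mouse samples along one trajectory.
    Returns straight-line distance divided by total path length:
    1.0 is a perfectly straight path; human paths score lower."""
    if len(points) < 2:
        return 0.0
    path_length = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    if path_length == 0:
        return 0.0
    return math.dist(points[0], points[-1]) / path_length

def looks_scripted(points, threshold=0.999):
    """Flag a username-to-password mouse move that is suspiciously straight."""
    return path_linearity(points) >= threshold
```

A production detector would combine many such features -- timing, acceleration, click patterns -- in a trained model rather than relying on a single threshold.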

Jigsaw's Green said her team is also looking for "technical markers" that can distinguish truly organic campaigns from coordinated ones. She described state-sponsored actors who peddle propaganda and attempt to spread misinformation through what she called "seed-and-fertilizer campaigns."


"The goal of these state-sponsored campaigns is to plant a seed in social conversations and to have the unwitting masses fertilize that seed for it to actually become an organic conversation," she said.

"There are a few dimensions that we think are promising to look at. One is the temporal dimension," she said.

Looking across the internet, Jigsaw began to understand that coordinated attacks tend to move together, last longer than organic campaigns and pause as state-sponsored actors wait for instructions on what to do. "You'll see a little delay before they act," she said.

Other dimensions include network shape and semantics. State-sponsored actors tend to be more tightly linked together than communities within organic campaigns, and they tend to use "irregularly similar" language in their messaging.
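
Green didn't describe Jigsaw's detectors in implementation terms, but the semantic marker -- "irregularly similar" language -- can be illustrated with a toy measure: average pairwise word overlap across the accounts in a suspected campaign. Everything below (the Jaccard metric, names, input format) is an illustrative stand-in, not Jigsaw's method:

```python
from itertools import combinations

def jaccard(a, b):
    """Overlap of two token sets: 0.0 means disjoint, 1.0 identical."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def language_similarity(accounts):
    """accounts: dict mapping account id -> list of message strings.
    Returns the average pairwise word overlap between accounts.
    Organic conversation is lexically diverse, so an unusually high
    score is one marker of a coordinated campaign."""
    token_sets = [
        set(" ".join(messages).lower().split())
        for messages in accounts.values()
    ]
    pairs = list(combinations(token_sets, 2))
    if not pairs:
        return 0.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

The temporal and network-shape dimensions Green mentions would feed the same kind of scoring: correlation between accounts' posting timestamps, and the density of links within the suspected cluster compared with an organic community.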

The big question is whether behavioral tells -- identified by machines and combined with automated detection -- can be used to effectively identify state-sponsored campaigns. No doubt, time will tell.

Next Steps

Machine learning's training data vulnerable to attack

Machine learning mitigates, doesn't eliminate cyberthreats

The 'threat hunter' arrives


Join the conversation

2 comments


How does your company protect itself against weaponized AI?
unhumanlike manner. "Human beings are not....
Obviously, criminals can read things like this, recognize these tells from their own human experience, and make bots that mimic human behavior.

seed-and-fertilizer campaigns
This seems no different from the clickbait used by mainstream advertisers, media and politicians.

Solution? We could set up a multi-tier system. Tier 1: The sender's real geo address is displayed and verifiable. Tier 2: Anonymous. Traffic that pretends to be Tier 1 and isn't, because it can't be verified, is blocked. Traffic in Tier 2 still gets through, but is stigmatized and easy to send to the junk folder.

Ultimately, users, we humans, need to not be so gullible.  There was fake news that Clinton was a sure win and Trump a sure loser.  Any US human in touch with reality knew that was fake.  The gullible need to face their gullibility honestly.  That is the only real solution.


