
AI attacks are coming soon to a network near you

The rapid development of artificial intelligence and machine learning is a double-edged sword: the technologies are becoming cheaper and easier to apply in the enterprise, which also makes them easier for bad actors to exploit.

Twenty-six researchers and policy and industry experts from institutions such as Oxford, Cambridge and Yale, and from nonprofit organizations such as OpenAI and the Electronic Frontier Foundation, are sounding the alarm on AI attacks. In a new report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation,” published last week, the authors warn that AI will pose serious threats to the safety and security of systems, people and governments in the next few years. They advise researchers and policymakers to act now, as part of an IT security strategy, by collaborating closely, discussing the risks and implementing new policies, including possibly delaying the publication of new AI breakthroughs.

The report states that as AI technologies become more commoditized, their low cost and fast-paced evolution will expand the reach and power of existing security threats and give rise to new ones, the likes of which CIOs have never faced before.

Specifically, the authors predict that the kinds of attacks companies already face will come at a faster rate, have a broader focus, be harder to attribute and involve more attackers. For example, spear phishing, the practice of sending bogus emails to specific individuals that appear to come from a trusted source, is largely a manual effort today and could be sped up with AI, threatening a company’s cybersecurity strategy.

The authors also predict that new kinds of attacks will emerge as AI is used to automate tasks and even analyze human behavior. Attackers will also exploit weaknesses in AI systems themselves, for example by synthesizing a person’s speech to gain access to a system. These attacks won’t just threaten today’s digital security practices; they could also threaten a person’s physical security by, for example, causing autonomous vehicles to crash. AI attacks could also threaten a country’s political security through targeted propaganda, automated surveillance or deception techniques such as manipulated videos.

The 101-page paper, which grew out of a two-day workshop at Oxford a year ago, includes recommendations on how to prepare for the impending future of AI attacks. But the recommendations are more a call to action than actual solutions, including such anodyne advice as the need for close collaboration between policymakers and researchers, continued discussion, and curating and sharing best practices on how to handle the exploitation of the technology.

A section on “open questions and potential interventions” provides a forum for more radical ideas, including the suggestion to delay the publication of some findings so they can undergo a “risk assessment” to “determine what level of openness is appropriate.” Another suggestion is to borrow techniques from the more mature domain of cybersecurity, such as red teaming, in which a group of AI developers seeks out and fixes security vulnerabilities, and to apply those best practices to other targeted areas. The authors also suggest developing a code of ethics that holds AI researchers to standards of social responsibility.
