When AI is used for evil


Artificial intelligence (AI) holds out the promise of solving some of the world’s most persistent challenges and pernicious problems: preventing and curing disease, reducing traffic-related deaths and injuries, fighting crime and terrorism, conserving energy, and easing chronic food shortages.

On a less-dramatic level, AI and machine learning are being used to transform how enterprises operate by improving business processes, supply-chain management, financial modeling, customer service, workplace safety, and digital security.

It’s almost as if AI is a super weapon that can be harnessed for good! But if you’ve seen enough Marvel movies (and I have a 13-year-old son, so I’ve seen them all), you know that super weapons and superpowers can also be used for evil. For every Spider-Man, Captain America, and Black Panther, there’s a Green Goblin, Red Skull, and Erik Killmonger.

That’s the point of a recent 100-page report released by a group of academic, public-interest, and technology organizations from the U.S. and England. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation surveys the “landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats.”

The authors say they expect that the growing use of AI systems — particularly the use of AI to automate attacks — will expand existing threats, introduce new threats, and change the typical character of threats across three specific security domains:

  • Digital security. The report authors say they “expect novel attacks that exploit human vulnerabilities (e.g. through the use of speech synthesis for impersonation), existing software vulnerabilities (e.g. through automated hacking), or the vulnerabilities of AI systems (e.g. through adversarial examples and data poisoning).”
  • Physical security. Researchers “expect novel attacks that subvert cyber-physical systems (e.g. causing autonomous vehicles to crash) or involve physical systems that it would be infeasible to direct remotely (e.g. a swarm of thousands of micro-drones).”
  • Political security. The research team anticipates “novel attacks that take advantage of an improved capacity to analyse human behaviors, moods, and beliefs on the basis of available data. These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates.”

That last one is particularly relevant given the low-tech (yet highly effective) efforts of a Russian troll factory to influence the 2016 U.S. election. Couldn’t that same type of attack be used on shareholders of large corporations to influence their votes or put pressure on boards of directors and CEOs regarding policy changes, mergers and acquisitions, and other crucial corporate decisions?

Not scared yet? Well, check out these potential scenarios, as described in the report:
  • “As AI develops further, convincing chatbots may elicit human trust by engaging people in longer dialogues, and perhaps eventually masquerade visually as another person in a video chat.”
  • “Large datasets are used to identify victims more efficiently, e.g. by estimating personal wealth and willingness to pay based on online behavior.”
  • “AI-enabled automation of high-skill capabilities — such as self-aiming, long-range sniper rifles – reduce the expertise required to execute certain kinds of attack.”
  • “Highly realistic videos are made of state leaders seeming to make inflammatory comments they never actually made.”

The researchers devote the second half of their report to recommendations and potential responses, none of which involve contacting Tony Stark or the X-Men. Rather, the report urges policymakers, researchers, and engineers to take seriously the threat of AI being deployed with bad intentions, instead of pretending our future will be filled only with human-friendly machines, bots, and drones eager to do our benign bidding (“Watson, analyze these cancer genomic data sets. Also, please play my Chris Stapleton station on Pandora.”).

For enterprises simply struggling to get a chatbot to answer a customer query, the notion of defending against AI-powered attacks may seem overwhelming. Well, as the report authors note, it is. That’s why they’re telling us to wake up and get in the game.
