The dangers of stealth AI in the enterprise


All new technologies bring with them new and sometimes unique security concerns. And while some of these concerns can be overblown by security vendors (and, let’s face it, tech writers), a couple of recent news items regarding artificial intelligence (AI) raise some genuinely frightening prospects for enterprise IT security pros and enterprise employees.

Item No. 1: AI soon may get crazy good at conversing with humans. Researchers at Osaka University have developed a method of dialogue that enables chatbots, voice assistants and other AI-powered platforms to learn information from humans that isn’t already in their knowledge base. Right now, voice assistants such as Alexa and Google Home learn from humans “by asking simple repetitive questions,” the researchers write. Consequently, users begin to lose interest because it’s like talking to a nice but simple-minded relative.

But the research team headed by Professor Kazunori Komatani developed a dialogue method called “lexical acquisition through implicit confirmation.”

“This method aims for the system to predict the category of an unknown word from user input during conversation, to make implicit confirmation requests to the user, and to have the user respond to these requests,” researchers explain. “In this way, the system acquires knowledge about words during dialogues.”
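The loop the researchers describe — guess an unknown word’s category, slip the guess into a reply as an implicit confirmation, and treat the user’s reaction as the answer — can be sketched roughly as follows. This is a toy illustration, not the Osaka University system; every name here (`predict_category`, `KnowledgeBase`, and so on) is hypothetical.

```python
# Minimal sketch of "lexical acquisition through implicit confirmation,"
# assuming a toy category predictor and a yes/no signal from the user.
# All names below are illustrative, not from the actual research system.

class KnowledgeBase:
    """Maps words the system has learned to their guessed categories."""

    def __init__(self):
        self.entries = {}  # word -> category

    def knows(self, word):
        return word in self.entries

    def add(self, word, category):
        self.entries[word] = category


def predict_category(word):
    # Stand-in for the system's statistical predictor, which would infer
    # a category for an unknown word from conversational context.
    return "place" if word.istitle() else "thing"


def implicit_confirmation(word, category):
    # Embed the guess in a natural-sounding reply instead of asking a
    # direct, repetitive question like "What is Dotonbori?"
    return f"Ah, {word} — sounds like a nice {category}."


def acquire(kb, word, user_agrees):
    # If the user's next utterance doesn't contradict the guess,
    # treat it as confirmed and store the new word.
    if kb.knows(word):
        return None
    category = predict_category(word)
    prompt = implicit_confirmation(word, category)
    if user_agrees:
        kb.add(word, category)
    return prompt


kb = KnowledgeBase()
acquire(kb, "Dotonbori", user_agrees=True)
print(kb.entries)  # {'Dotonbori': 'place'}
```

The point of the implicit confirmation is that the user never notices a quiz is happening — which is exactly what makes the social-engineering angle below worth worrying about.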

In other words, AI voice assistants may become so smooth at extracting information from humans during seemingly innocuous informational conversations that the humans won’t even know they gave away that password/secret project/strategic plan! They will be socially engineered by robots!

Item No. 2: The headline in the UK version of Wired pretty much tells the story here: AI cyberattacks will be almost impossible for humans to stop. Lovely.

“As early as 2018, we can expect to see truly autonomous weaponised artificial intelligence that delivers its blows slowly, stealthily and virtually without trace,” writes Mike Lynch.

Like cyber-Manchurian candidates, sleeper cells, or aliens disguised as humans, these AI algorithms controlled by cyber criminals will blend into their surroundings while absorbing useful information and awaiting the opportunity to attack.

“They will learn how the firewall works, the analytics models used to detect attacks and times of day that the security team is in the office,” Lynch warns. “They will then adapt to avoid and weaken them. All the while, it will use its strength to spread, creating inroads for compromise and contaminating devices with brutal efficiency.”

It gets worse. These AI programs, Lynch writes, will be able to impersonate people (by studying writing styles, usage patterns, etc.) and take over your trusted AI assistant — without you even knowing it!

Welcome to 2018.




  1. Ronald Sonntag says:

    Good article that gets right to the punch. This reminds me of the Kessler syndrome. At some point, all this “junk” has the potential for rendering the Internet useless. The flip side, of course, is designing AIs that look for malicious AIs and clean them out.


