Deception as a cyber defense: More critical than ever


As more devices are connected to networks, and as more “smart” technologies such as voice assistants and chatbots become digitally indistinguishable from humans, bad actors are gaining new opportunities to breach enterprise systems and data using forms of deception both time-honored and new.

For example, researchers in Japan have developed a method of dialogue that enables chatbots, voice assistants and other AI-powered platforms to learn information not yet in their knowledge base through conversations with humans. That’s great if you’re trying to “onboard” the technology by freeing up information from human silos; not so great if the voice assistant or chatbot is being controlled by someone with a sinister agenda.

Then again, it’s not like you wouldn’t notice something a little “off” about your trusted voice assistant, right? (Sorry . . . we’re only human.)

“We already have AI assistants that do our scheduling, email on our behalf and ask us what we’d like to order for lunch,” writes Mike Lynch in Wired. “But what happens if your AI assistant gets taken over by a malicious attacker? Or, indeed, what happens when weaponised AI is refined enough to convincingly impersonate a real person who you trust?”

In other words, you can be socially engineered by an algorithm. (Let that sink in.)

Even if the attackers don’t resort to impersonation, they still can use AI to infiltrate a network “for months, perhaps years, without getting noticed,” Lynch warns. “They will learn how the firewall works, the analytics models used to detect attacks and times of day that the security team is in the office. They will then adapt to avoid and weaken them. All the while, it will use its strength to spread, creating inroads for compromise and contaminating devices with brutal efficiency.”

A suboptimal security situation, to say the least! To counter the increasingly sophisticated breach threats posed by AI, enterprises need to increase their focus on deception technology, argues Security Boulevard contributor Tony Cole.

Cole isn’t talking about technology that allows IT to detect nefarious activities in the network (though that’s part of it); instead, he means technologies that deceive the intruders by leveraging the defender’s internal tools, architecture and knowledge.

“Among the major home field advantages deception technology provides is that it enables the security defender to quickly identify attackers or policy violations, close the detection gap and shrink dwell time by rapidly detecting the growing number of in-network threats that other security controls miss,” Cole writes. “To accomplish all of this, deception must be highly authentic so the attacker cannot discern the difference between true production assets and deceptive assets.”

If the deception is authentic enough, intruders will stumble into the trap, revealing their network presence in the process.

“Since the deceptive environment has no employee production value, security teams know that every alert, based upon deception engagement, indicates a real threat or vulnerability,” Cole explains. “Once an alert is raised, security analysts can either remediate the threat or monitor the adversary and collect intelligence based on their activity.”
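That high signal-to-noise property is the core of the approach: because a decoy has no legitimate production traffic, any engagement with it can be treated as an alert. A minimal sketch of the idea, in Python, is a decoy TCP listener that records every connection as a high-confidence event (the port, alert fields and naming here are illustrative assumptions, not any vendor’s product):

```python
import socket
import threading
from datetime import datetime, timezone

# Every entry here is high-confidence: decoys receive no production traffic,
# so any connection indicates a probe, a policy violation or an intruder.
alerts = []

def run_decoy(host="127.0.0.1", port=0, max_hits=1):
    """Listen on a decoy port; log each connection as an alert."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))  # port=0 lets the OS pick a free port
    srv.listen()
    bound_port = srv.getsockname()[1]

    def serve():
        for _ in range(max_hits):
            conn, addr = srv.accept()
            alerts.append({
                "time": datetime.now(timezone.utc).isoformat(),
                "source": addr[0],
                "decoy_port": bound_port,
            })
            conn.close()
        srv.close()

    worker = threading.Thread(target=serve, daemon=True)
    worker.start()
    return bound_port, worker

# Simulate an intruder probing the decoy.
port, worker = run_decoy()
probe = socket.create_connection(("127.0.0.1", port))
probe.close()
worker.join(timeout=5)
```

After the simulated probe, `alerts` holds exactly one entry, and a real deployment would route it straight to the security team rather than into a queue of noisy, low-confidence detections.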

In a way, deception technology combines the old concept of “honeypots” with AI and machine learning, creating a more proactive intrusion detection posture. Rather than set a static trap and hope an intruder stumbles into it, today’s deception technology is adaptable.

Deception technology “structurally learns and adapts to your organization’s network and cloud environments,” TNW’s Doron Kolton explains. “Decoys change to match the real environment as it changes. Additionally, solutions that use ‘breadcrumbs’ can strategically lure attackers and malicious insiders to the decoys. This ‘personalization’ is critical to a modern deception defense – to ensure that the deception components always look and feel real to bad guys.”
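The breadcrumb idea can be sketched in a few lines: plant decoy credentials that mimic the real environment’s naming conventions, and treat any use of a planted credential as a deception engagement. The naming scheme, field names and helper functions below are hypothetical, chosen only to illustrate the concept:

```python
import secrets

def plant_breadcrumbs(real_hosts):
    """Generate decoy credentials keyed to decoy hostnames that mimic
    real ones (hypothetical scheme: 'db01' spawns decoy 'db01-backup')."""
    breadcrumbs = {}
    for host in real_hosts:
        decoy_host = host + "-backup"  # echo the real naming convention
        breadcrumbs[decoy_host] = {
            "username": "svc_" + host,
            "token": secrets.token_hex(16),  # never issued to real users
        }
    return breadcrumbs

def is_deception_hit(breadcrumbs, host, token):
    """True only when a planted token is used: legitimate credentials
    can never match, so a hit is a real engagement, not a false positive."""
    cred = breadcrumbs.get(host)
    return cred is not None and secrets.compare_digest(cred["token"], token)

crumbs = plant_breadcrumbs(["db01", "fileserv"])
decoy = "db01-backup"
hit = is_deception_hit(crumbs, decoy, crumbs[decoy]["token"])
```

Regenerating the breadcrumbs as the real host list changes is what keeps the lures looking current, which is the “personalization” Kolton describes.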

In a digital world where bad actors could be anywhere or even posing as anyone, enterprises need a way to leverage what Cole calls home field advantage. Deception technology gives enterprises adaptive, proactive security tools and techniques to smoke out the malicious intruders lurking in their networks.
