Is your enterprise vulnerable to ‘deepfake’ technology? (The answer is yes)


You get an audio message from your CEO urgently requesting a money transfer. Or the VP of marketing leaves a voice message asking you to immediately send that file containing the company’s product launch schedule to an ultra-secure email address because she’s presenting to the board of directors in 20 minutes.

Are you going to demand proof that your CEO and VP of marketing are who they say they are? You might. Then again, you might not. After all, time is of the essence! Plus, how could it not be them? What, were they deepfaked?

Maybe. As Axios reports, “In the first signs of a mounting threat, criminals are starting to use deepfakes — starting with AI-generated audio — to impersonate CEOs and steal millions from companies, which are largely unprepared to combat them.”

Most of us are familiar with deepfakes in the celebrity realm, in which artificial intelligence (AI) and deep learning (hence the “deep” part of the portmanteau) are used to fake the voices — and sometimes the faces and bodies — of famous people. Deepfake audio is even easier to produce than deepfake video (not that any deepfake is easy; all require some skill, time, and expense on the part of the creators). If the target is a prominent CEO, there’s a wealth of audio available online — interviews, earnings calls, conference speeches, and panel appearances — that can be fed to an AI system capable of “learning” to imitate not just a voice, but a personality.

This isn’t just a theory: Security vendor Symantec last summer revealed that deepfake perpetrators stole millions of dollars from three companies whose chief financial officers were duped by fake CEOs into transferring large amounts of cash.

“I don’t think corporate infrastructure is prepared for a world where you can’t trust the voice or video of your colleague anymore,” Henry Ajder of Deeptrace, a deepfakes-detection startup, told Axios.

Sure, Ajder works for a deepfake-detection vendor, but it’s hard to argue that he’s wrong. Deepfakes are one of many AI-related technologies for which societies and entire nations — never mind businesses! — are unprepared.

While Symantec, Deeptrace, and others are working on solutions to detect deepfakes, as is often the case with evolving digital security threats, the vendors are playing catch-up. In the meantime, as deepfakes become more sophisticated and harder to detect, enterprises may be forced to revise communications protocols to build procedural safeguards against digital deception.
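One such procedural safeguard is an out-of-band verification rule: no matter how convincing a voice message sounds, certain requests must be confirmed on a second, pre-registered channel before anyone acts on them. Here is a minimal sketch of that policy as code; the action names, threshold, and function are illustrative assumptions for this article, not any vendor's actual product or standard.

```python
# Illustrative sketch of an out-of-band verification policy for
# high-risk requests (e.g., the "urgent CEO wire transfer" scenario).
# Action names and the dollar threshold are hypothetical examples.

HIGH_RISK_ACTIONS = {"wire_transfer", "send_sensitive_file"}

def requires_callback(action: str, amount: float = 0.0,
                      verified_out_of_band: bool = False) -> bool:
    """Return True if the request must be confirmed on a second,
    pre-registered channel (e.g., a known phone number) before it
    is executed."""
    if verified_out_of_band:
        return False          # already confirmed on a trusted channel
    if action in HIGH_RISK_ACTIONS:
        return True           # always verify these, regardless of amount
    return amount >= 10_000   # hypothetical threshold for everything else

# The voicemail scenario from above: an urgent transfer request
print(requires_callback("wire_transfer"))  # True: hold until verified
```

The point of encoding the rule is that it removes the "time is of the essence!" judgment call from the employee: the deepfaked voice can be perfect, but the transfer still waits for the callback.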

Finally, the best defense for enterprises against deepfakes is awareness and, perhaps ironically, human instinct. Most of us have the ability to sense when something is a little “off,” whether it’s a person or a person’s story. It’s part of our survival instinct, built up and reinforced over thousands of generations. It’s the one edge we have over machines — for now. Let’s use it.
