How to reduce employee fears about artificial intelligence

Humans fear a seemingly endless list of things, but on the broadest level, what they fear most is the unknown. And one of the greatest fears throughout history is fear of new technology. There probably were some Mesopotamians who thought the abacus was a tool of the devil! In the more recent past, people have feared cameras, automobiles, planes, microwaves, and computers.

We now can add artificial intelligence (AI) to the ever-growing list of technologies that inspire dread. Will AI make my job easier or harder? Will AI make me irrelevant? Will I be evaluated by an AI algorithm? Perfectly reasonable and understandable questions.

Nonetheless, AI is becoming a reality in the workplace, and enterprises would be well-served to help employees overcome their trepidation. Alec Gardner of Think Big Analytics has some practical advice for enterprises “to help users understand how AI works so they can trust AI-generated insights.”

“Showing is always more powerful than telling, so to increase understanding, project leaders need to demonstrate the important variables and trends around the outputs the AI tool is targeting,” Gardner writes.

Indeed, demystification is a classic technique for helping people overcome their fear of the unknown. Demonstrate to people how a camera works, and they’ll cease worrying that taking their pictures will steal their souls. (Fools!) Similarly, showing enterprise workers how AI works by walking them through the process can negate the “black box” effect.

In a nutshell, here’s what Gardner recommends:

Change the variables that feed the algorithm

“It’s possible to reveal the inner workings of the tool by showing that the outputs of the algorithm are sensitive to changes in certain variables.”
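To make that concrete, here’s a minimal sketch of such a sensitivity demo in Python. Everything in it (the toy data, the feature names, the random-forest model) is an assumption invented for illustration, not Gardner’s actual setup:

```python
# Sensitivity demo sketch: nudge one input variable at a time and show
# how the model's prediction responds. All names and data are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy data: two variables, but only the first one actually drives the output.
feature_names = ["years_experience", "office_floor"]
X = rng.uniform(0, 10, size=(500, 2))
y = 3.0 * X[:, 0] + rng.normal(0, 0.5, size=500)

model = RandomForestRegressor(random_state=0).fit(X, y)

# Nudge each variable in turn and watch the prediction move (or not).
baseline = X.mean(axis=0)
for i, name in enumerate(feature_names):
    nudged = baseline.copy()
    nudged[i] += 1.0  # bump this one variable by a unit
    delta = model.predict([nudged])[0] - model.predict([baseline])[0]
    print(f"+1 to {name}: prediction changes by {delta:+.2f}")
```

The prediction jumps when the meaningful variable moves and barely budges for the irrelevant one, which is exactly the kind of before-and-after a skeptical employee can see with their own eyes.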

Change the algorithm itself

“Removing a layer of nodes and then assessing the impact can show people how it works. Sometimes, a slight change in one variable leads to a significant change in the output.”
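Gardner presumably means surgically ablating a trained network; a rough stand-in that’s easier to reproduce is training the same scikit-learn network twice, once with a hidden layer dropped, and comparing the results. The data and layer sizes below are invented for the demo:

```python
# "Remove a layer and assess the impact," approximated by training the
# same small network with and without one hidden layer. Purely illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Identical setups, except the second network drops one hidden layer.
deep = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
shallow = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)

for label, net in [("two hidden layers", deep), ("one hidden layer", shallow)]:
    net.fit(X_train, y_train)
    print(f"{label}: test accuracy = {net.score(X_test, y_test):.3f}")
```

Watching how much (or how little) the score moves when a layer disappears gives people a concrete feel for what each piece of the network contributes.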

Build global surrogate models

“Where the AI algorithm is complex, you can build a surrogate model in parallel, which is simpler and easier to explain. While the results won’t necessarily align perfectly, the surrogate model’s results should strongly echo the AI tool’s results.”
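Here’s one way that might look in code: a sketch, assuming scikit-learn and synthetic data, where a shallow decision tree stands in as the surrogate for a gradient-boosting “black box”:

```python
# Global surrogate sketch: a depth-3 decision tree is trained to mimic a
# complex model's predictions, then explained in its place. Data is synthetic.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = make_regression(n_samples=1000, n_features=5, random_state=0)

# The complex model we want to explain.
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

# The surrogate learns to reproduce the black box's outputs, not the raw labels.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity check: how strongly does the surrogate echo the original?
fidelity = r2_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity (R^2 vs. black box): {fidelity:.3f}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
```

The printed tree is something a non-specialist can actually read, and the fidelity score quantifies the “strongly echo” part of Gardner’s advice.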

Build LIME models

The acronym stands for “local interpretable model-agnostic explanations.” According to Gardner, LIME models allow project leaders to focus on one event, thus breaking down the process into more understandable segments.
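There’s an open-source Python package, lime, that implements the technique (pip install lime). A minimal sketch, with the iris dataset and a random forest standing in for the real model, might look like this:

```python
# LIME sketch: explain one single prediction of a model. The dataset and
# classifier here are stand-ins chosen purely for the demo.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Focus on one event: why did the model call this one flower class 0?
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4, labels=(0,)
)
for feature, weight in explanation.as_list(label=0):
    print(f"{feature}: {weight:+.3f}")
```

Each line of output pairs a human-readable condition (something like “petal width <= 0.3”) with its weight for this one prediction, which is the bite-sized, single-event view Gardner describes.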

Gardner goes on to outline further steps for building trust in AI so that projects can get off the ground more easily. If you’re ready for hands-on advice on easing employee concerns about AI, it’s a good read.
