When your corporate strategist is a machine

Many enterprise leaders are eager to leverage artificial intelligence (AI) to better understand and serve customers, to optimize efficiency, and to help with decision-making.

But what if AI itself actually sets corporate strategy? It’s an intriguing notion, one which Forbes contributor Daniel Shapiro explores in a recent column.

Shapiro goes into great detail about how AI is evolving to better analyze situations in which “quality and quantity of information used to make decisions varies wildly,” something with which most enterprise decision-makers are painfully familiar.

“Corporate strategy crafted by artificial intelligence will become more popular,” he writes, urging enterprise leaders not to “freak out.”

“Don’t freak out” is almost always good advice! That said, I have to imagine some enterprise decision-makers might have serious qualms about handing the strategic reins to intelligent machines. Fortunately, they have some time to get their heads around the idea: despite impressive progress in teaching AI to formulate strategies amid unknown variables, obstacles remain before AI can autonomously devise and execute business strategy. According to Shapiro, those obstacles include “interpretability, dataset handling tools, automated machine learning, and bias.”

It’s the last one, in my opinion, that poses the greatest danger. Bias means that you (whether the “you” is a person or a machine) see reality in a skewed, non-objective way. That matters because the whole premise of using AI is to keep human bias from undermining the decision-making process.

“Artificial intelligence systems are very good at learning and perpetuating bias that exists in observed data,” Shapiro says. “It is the process of verifying that certain biases do not exist in the generated advice that adds significant cost to machine learning projects of this nature.”
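
To make that verification step a little more concrete, here is a minimal, hypothetical sketch of one kind of check it might involve: comparing a model's recommendation rates across groups, in the spirit of a demographic-parity audit. The data, column names, and tolerance below are invented for illustration and are not taken from Shapiro's column.

```python
import pandas as pd

def recommendation_rate_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return the gap between the highest and lowest positive-recommendation
    rates across the groups in `group_col`."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical advice generated by a strategy model: 1 = "invest", 0 = "divest".
advice = pd.DataFrame({
    "region": ["NA", "NA", "EU", "EU", "APAC", "APAC"],
    "invest": [1, 1, 1, 0, 0, 0],
})

gap = recommendation_rate_gap(advice, group_col="region", outcome_col="invest")
if gap > 0.2:  # an arbitrary tolerance, chosen only for this example
    print(f"Warning: recommendation rates differ by {gap:.0%} across regions")
```

A check like this is only one slice of the verification Shapiro describes, but it hints at why the work adds cost: every axis along which the advice could be skewed needs its own audit.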

A much bigger cost would come from not understanding that AI can perpetuate biases, and then allowing AI to set strategy without questioning its conclusions or submitting them to an intensive review by relevant stakeholders.

So how can you avoid bias in AI? Ensuring a diversity of data is a start, but by itself is hardly a panacea. James Golden, CEO of WorldQuant Predictive, argues on the World Economic Forum website that we must rethink our entire approach to AI.

“Rather than top-down approaches that seek to impose a model on data that may be beyond its contexts, we should approach AI as an iterative, evolutionary system,” Golden writes. “If we flip the current model to be built-up from data rather than imposing upon it, then we can develop an evidence-based, idea-rich approach to building scalable AI-systems. The results could provide insights and understanding beyond our current modes of thinking.”

Yes, that is undoubtedly easier said than done. The key here is patience and perspective; it’s still relatively early in the AI era. For now, enterprise leaders should view AI as a handy tool to help them make decisions, provide better products and services to customers, and streamline operations. There’s no need to hand over the strategy-making keys to an AI system whose own abilities and view of the world are still evolving. Best to walk before you run. That goes for humans and machines.
