Machine learning models can’t always handle reality (but most humans can)


A growing number of enterprise leaders view artificial intelligence (AI) and machine learning (ML) as transformational technologies that can enable better decision-making, increase efficiency, eliminate human error, and lower costs.

For many enterprise workers, however, the relentlessly consistent performance promised by intelligent machines looms as a threat to their jobs. After all, what human can match the productivity and effectiveness of an ML system that doesn’t get tired, take lunch breaks, or goof off on Instagram?

But when the theoretical collides with reality, reality usually wins. And that’s because reality is messy and imperfect, full of unanticipated flaws and plot twists. Translated into the AI and ML realms, this means machine learning models are fed a lot of incomplete, confusing, and inaccurate data, which corrupts the learning process, distorts their perception, and leads to incorrect predictions about patterns and outcomes.

“The moment you put [an ML] model in production, it starts degrading,” writes Forbes contributor and Pacific AI CTO David Talby. “Your model’s accuracy will be at its best until you start using it. It then deteriorates as the world it was trained to predict changes.”

While Talby’s the expert here, my take is slightly different: the problem is not that the world an ML model was trained to predict changes. Rather, the problem is that the model was never trained to operate in a world that changes. Talby relates his experience with a hospital project in which machine learning was deployed to predict 30-day readmissions. Within three months the project had degenerated into a disaster, with the model’s predictions getting progressively worse.

“Changing certain fields in electronic health records made documentation easier but made other fields blank,” he says. “Switching some lab tests to a different lab meant that different codes were used. Starting to take one more type of insurance changed the kind of people who went to the ER. Each of these changes either breaks the features the model depends on or changes the prior distributions the model was trained on, resulting in degraded accuracy in prediction.”

If this were a scene in a movie, smoke would be coming out of the confounded machine.
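To make the failure mode concrete: when a feature’s vocabulary changes underneath a deployed model, as with the switched lab codes, the damage is usually silent. Here’s a minimal sketch, not Talby’s system, of a check that would surface it, assuming a pandas pipeline and hypothetical column names:

```python
import pandas as pd

def unseen_category_report(train_df: pd.DataFrame,
                           live_df: pd.DataFrame,
                           categorical_cols: list[str]) -> dict[str, set]:
    """For each column, report category values seen in live data but never in training.

    Column names like "lab_code" are illustrative assumptions, not fields
    from the hospital project described above.
    """
    report = {}
    for col in categorical_cols:
        train_vals = set(train_df[col].dropna().unique())
        live_vals = set(live_df[col].dropna().unique())
        unseen = live_vals - train_vals
        if unseen:
            report[col] = unseen
    return report

# Example: a new lab's codes would show up here before they quietly break
# a model that was fit on the old codes.
# report = unseen_category_report(train_df, live_df, ["lab_code", "insurance_type"])
```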

Talby offers some practical advice, the best of which is to keep humans in the loop after the project is launched. Don’t be lulled into thinking that your ML models are all grown up and ready to make rational adult decisions. Data scientists and engineers should keep a close eye on changes in input data to minimize the chances of skewed data leading to inaccurate predictions.
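What “keeping a close eye on changes in input data” might look like in practice: a minimal sketch that compares the production distribution of each numeric feature against the training distribution and flags drift for a person to review. The data structures and threshold here are illustrative assumptions, not anything from Talby’s project.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alerts(train: dict[str, np.ndarray],
                 live: dict[str, np.ndarray],
                 p_threshold: float = 0.01) -> list[str]:
    """Return the features whose live distribution differs significantly from training."""
    drifted = []
    for name, train_values in train.items():
        # Two-sample Kolmogorov-Smirnov test: small p-value means the
        # production values no longer look like the training values.
        stat, p_value = ks_2samp(train_values, live[name])
        if p_value < p_threshold:
            drifted.append(name)
    return drifted

# The model doesn't decide what to do with the alert; a human does:
# drifted = drift_alerts(train_features, last_week_features)
```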

The larger lesson here is that humans can still contribute valuable insights, knowledge, and experience to enterprise AI and ML initiatives. Machines may be able to collect and analyze far more data, far faster, than people, but they can still be fooled by bad information, missing information, lack of context, and inadequate programming. Yes, humans can be misled as well, but we still have that elusive mix of intuition, experience (both direct and indirect), skepticism, and a far longer collective history of thinking than intelligent machines. We’re better equipped to anticipate and adjust to the chaos, noise, and confusion of reality. That’s no small thing.
