Should we be preparing for an AI bust?


In my last blog, I talked about the danger of entering an AI winter, a period when disappointment over the lack of progress toward artificial intelligence leads to a reluctance to fund and pursue new research in the area.

This is a concern today because our community is repeating cycles of the past. We have not fundamentally changed our approach to how we create AI. We still use the same artificial neural network technology that was developed in the 1980s. What we do have that’s different is a few new machine learning tricks and much faster computers.

But we did not solve any of the fundamental problems I talked about in my last blog. We did not find a new way to create AI. We did not invent a machine that could act intelligently through understanding. We still rely on syntax — granted a very “deep” syntax in deep neural nets — but still only syntax. We have no technology that implements semantics into machines.

Some may argue that this approach is OK. We have the speed now, and the memory. And we are publishing algorithms on a daily basis. But how can one be sure? If we failed to achieve AI goals before — and repeatedly ran into a wall — how can we know this will not happen again?

What makes things more challenging is that, since our last gold rush, our ambitions have grown much higher. In the 1980s, we hoped to get computers to reliably understand a few spoken words (something Siri kind of does today, though it still struggles). Now, many companies are announcing far headier goals: artificial general intelligence (AGI).

So we have, fundamentally, the same technology as the one that drove us into the last AI winter. We simply have more of it and with faster computers. Add to this the fact that we have much higher goals than the ones we failed to reach last time, and it makes me think: Good luck to us.

Are we in an AI bubble?

In fact, the situation reminds me of economic bubbles, a phenomenon that occurs when the value of certain goods or services is grossly overestimated, and the prices paid on the free market exceed their reasonable value.

We often use the term “bubble” for housing markets, to describe a scenario in which prices climb far beyond what buyers can afford. Bubbles also happen when the companies of an entire market segment are valued as if they served a market far larger than they could ever reach.

With bubbles, the law of supply and demand stops working properly, and investors bet on a demand that does not really exist. But one thing is true of all bubbles: They burst sooner or later.

The burst corrects the values and prices suddenly, returning them to more realistic levels. The bigger the bubble, the bigger the damage. If an economic bubble has been allowed to grow for a long time, its burst can be followed by a recession: jobs are lost and investments stall. It’s something like an economic winter.

In the past, AI winters did not result in significant economic winters, simply because the business side of AI was small. A few startups may have closed, and a few big companies may have shut down some research. That was about it.

But today, things are different. Companies are pouring billions into AI. And this trend will, in all likelihood, continue for a while. So, it is reasonable to ask whether there is a chance of the current AI boom having a ripple effect on the greater economy, if the bubble were to burst.

Dotcom anyone?

In this scenario, we may witness at least two types of bubbles. One would be the classical kind — a repetition of the dotcom bubble that burst in the year 2000 as the real value of the commercial Internet became known. (There are already signs of those times returning; Google paid a reported $600 million for DeepMind, an AI startup that had no commercial products at the time.)

The second type of bubble could merge with the one above, accelerating its growth and significantly delaying its burst. And while we have extensive experience with dotcom-type bubbles, we lack experience with this second type, and therefore could suffer dramatic consequences.

This much scarier type of bubble has to do with unrealistic expectations of how AI technology will scale. And this is what I’ll delve into in my next post.


  1. Khaled Soubani says:

    Some of the readers of your posts are shocked to read that there is any relevance or relation between AI/robotics and ethics. Why do these people have to interfere in a purely technical field that did not even exist a couple of decades ago?

    Here are a few extremely valuable references that address Artificial Intelligence and Ethics. The first one contains several resources.

    1. Ethics and Governance of Artificial Intelligence

    Artificial intelligence and complex algorithms, fueled by the collection of big data and deep learning systems, are quickly changing how we live and work, from the news stories we see, to the loans for which we qualify, to the jobs we perform. Because of this pervasive impact, it is imperative that AI research and development be shaped by a broad range of voices—not only by engineers and corporations—but also social scientists, ethicists, philosophers, faith leaders, economists, lawyers, and policymakers.

    2. Top 9 Ethical Issues in Artificial Intelligence

    Tech giants such as Alphabet, Amazon, Facebook, IBM and Microsoft – as well as individuals like Stephen Hawking and Elon Musk – believe that now is the right time to talk about the nearly boundless landscape of artificial intelligence. In many ways, this is just as much a new frontier for ethics and risk assessment as it is for emerging technology. So which issues and conversations keep AI experts up at night?

    3. Why the biggest challenge facing AI is an ethical one

    Artificial intelligence is touching our lives in ever more important ways – it’s time for the ethicists to step in, say our panel of experts.

  2. “We did not find a new way to create AI. We did not invent a machine that could act intelligently through understanding.” We need to incorporate complexity into AI. Systems based on AI should provide answers, options, strategies and choices based on complexity — i.e., suggest, whenever possible, the least complex solutions.

