Are we heading toward an AI winter?


Most people understand “strong AI” as a term interchangeable with Artificial General Intelligence (AGI), or an artificial intelligence (AI) that has the ability level of an average human adult.

The idea is that, one day, we will be able to build an AI that is as smart as we are, and that AGI will eventually replace humans at tasks such as driving a car, making medical diagnoses, prescribing medicine, making business decisions, designing a new product, writing poetry and who knows what else. Many of the current efforts in the AI field are working toward this goal.

However, historically, the term “strong AI” means something else. The original meaning, as introduced by the philosopher John Searle, describes hypothetical technology that works in the same way biological intelligence works. Searle did not place requirements on the amount of intelligence a machine must display, but rather on how the intelligence works.

According to Searle, only biological systems can consciously understand what they are doing; machines cannot. Instead, machines produce intelligent behavior based on something like a long look-up table: if X, do Y. This is fundamentally different from what humans do. Searle thus distinguished syntax (how machines work) from semantics (how biological systems work).

One place where Searle’s definition of strong AI collides with AGI is in the intelligence of animals.

Consider, for example, a dog. A dog cannot do mathematics or understand many words. With today’s approach to AI, researchers would not gain much from studying the mind of a dog. But, according to Searle’s definition, a dog is a very valuable area of study, because dogs understand, while machines don’t. Dogs operate on the basis of semantics, and machines only work from syntax. (For more, check out this video of a talk by John Searle, in which he describes his dogs as strongly intelligent.)

Thus, to exhibit strong AI according to the original definition, machines don’t have to reach human levels of intelligence. The level of a bee or a worm could suffice to start. What’s important instead is how the machines accomplish that intelligence.

So what’s the problem?

With the way we think and approach AI today, only human-level intelligence fits the bill. We expect the AI we create to be as smart as us, or even more so. Does it matter if today’s definition of AI doesn’t quite match Searle’s original thinking, as long as the engineers and scientists can successfully create intelligent machines?

Well, the AI world is not that simple. The problem is that, if Searle was right in his critique, then modern technology may create intelligence not only in a different way, but in an inferior way. How we implement AI may not be good enough to match biology. Thus, the deep learning architectures and other cool tools we envision may never achieve AGI.

At its heart, Searle’s critique worries that an AI approach based on symbolic representations and manipulations of those representations will be insufficient to create machine intelligence as we wish to have it.

Another philosopher, Hubert Dreyfus, made this point even clearer in his 1972 book, “What Computers Can’t Do.” He laid out, in clear and entertaining language, the limitations of symbolic machines, essentially predicting that the near-term promise of AI would not come true. The resulting disappointment – when the hyped promises fail to materialize – leads to what’s known as an AI winter.

Brrr…bundle up!

An AI winter follows a period of excitement over the technology. When the promises do not come true and people become largely disillusioned, not only does interest in AI decline, but the very term becomes something to avoid.

During this time, engineers and researchers interested in AI are perceived as naive, unprofessional and, possibly, incompetent. Investments in the field go down, and it’s practically impossible to get research funding for the topic.

AI winters have happened in the past. And they’ve occurred more than once.

Yet we don’t seem to learn from AI winters. The enthusiasm for the technology tends to come back, only to die again. In 1992, 20 years after his first book, Dreyfus wrote a follow-up, “What Computers Still Can’t Do.” The critique of AI had not been solved. The dream of AI faced trouble once again.

A summer day

Today, we are again enjoying an AI summer. The investments and funding for AI have never been larger. The enthusiasm has never been higher.

Unlike in the past, today’s companies are finding numerous commercial applications for AI, and the technology is becoming a significant part of our economy. For the first time, AI is a real business.

At the same time, new advances are emerging every day. News about groundbreaking technological achievements and applications is all around us, a direct result of companies pouring billions of dollars into the field.

So it looks like this time it is for real. This time, strong AI will happen.

AI is on the rise and there seems to be nothing in its way. What could possibly go wrong?


  1. Khaled Soubani says:

    “The idea is that, one day, we will be able to build an AI that is as smart as we are … replace humans at tasks such as driving a car, making medical diagnoses, prescribing medicine, making business decisions, designing a new product, ….”

So, where is the wisdom in introducing technologies that replace human economic activity? I am very much for science and technology and strongly believe that they are capable of solving complex human and environmental problems. Having said that, AI/Robotics is a field that I believe should have a plan and ethical oversight. Take a look at the jobs that you describe in the paragraph above. These are all middle class jobs. Do you really think that there exists a society that is willing to lay off all these humans and replace them with machines? By the time we even get to these jobs, entire classes of working class jobs would have already been lost! We all know now what kind of political and social systems emerge when unemployment rises considerably. So, what is being described with this ultimate achievement of AI/Robotics goals is enormous gains for AI/Robotics professionals and economic devastation for everyone else. Do AI/Robotics professionals think this is sustainable or even possible?

    This is not a simple matter to consider by the science and technology community and, unfortunately, I have not read a satisfying answer yet.

    • Dave Knight says:

Ethical oversight? Holding back technology because it might take jobs away from humans is a bad idea. The technology is inevitable and resistance to it is futile. Painters’ unions wanted to ban the paint roller when it was first introduced, out of fear it would eliminate jobs and make those who did not adopt the new tech incapable of competing with those who did. This being said, there will be turmoil. There will be the first generations of permanently unemployed humans. Governments and economies will have to adapt to new tech, or they will fail. And failing isn’t necessarily the worst thing. Profound rebirths come from the most fiery of crashes. Don’t fear the changes that new tech will bring – instead, embrace all they have to offer.

  2. Yasin Kara says:

Sad to hear Hubert Dreyfus died just a few days ago.

  3. An interesting problem when trying to compare the relative intelligence of humans, animals, AI and AGI is that we don’t have an agreed-upon definition of intelligence. For example, maybe dogs aren’t intelligent in the same way we are, but they’re very tuned in to our emotional states, which either makes them seem intelligent or lets them “piggyback” on our intelligence. We could perhaps say the same thing for many machine learning applications. Maybe they’re not intelligent; they’re just “simple rote learning” applied in intelligent ways by people.

Whether you agree with those ideas or not, the fact that we don’t have a good definition of intelligence makes it difficult to come to a conclusion either way.

    I think we should try to narrowly define intelligence, for example, as a measure of the ability to make comparisons.

  4. Leomar D. Perez says:

Really interesting article; I didn’t know the term AI winter before.


  1. […] my last blog, I talked about the danger of entering an AI winter, a period of time when disappointment in a lack […]

  2. […] – The AI technology of today is still no match to the intelligence capabilities of a real brain, human or otherwise. What we have is weak AI. And we still have not found a way to begin building […]

  3. […] Are we heading toward an AI winter? […]

  4. […] for scaling the intelligence of our machines. But there may be a solution I have hinted at in other posts — and we can learn it from […]

  5. […] is an AI winter in the cards, as my colleague Danko Nikolic recently wrote about? In my view, based on what I have seen and the […]

  6. […] The York sun that swept away the AI winter is deep learning, a variety of artificial intelligence that actually has its roots precisely in those cold 1990s, but which has flourished only in recent years, accumulating scientific and practical milestones. It is so successful that it is eclipsing alternative approaches, leading some to wonder whether the field of artificial intelligence is not taking on too much risk by putting all its eggs in one basket. Specifically, whether it is not risking a new overdose of promises impossible to keep, which would lead straight to a new winter. […]
