Enrolling in Artificial Intelligence Kindergarten

Building truly intelligent machines, or so-called strong artificial intelligence (AI), is a daunting technological challenge.

I’ve worked on an approach that requires two technological novelties: One is a new way to organize knowledge; the other is a new way to acquire knowledge. Both heavily mimic biology.

The resulting method, called AI-Kindergarten, allows for the development of AI whose intelligence mimics biological processes.

While almost everyone is glad to see the recent progress in AI, some people take it as a sign of tremendous development to come. Will AI advance to the level of intelligence comparable to that of a human — or higher?

In general, it seems that those who work every day to develop and improve AI are more conservative in their predictions — perhaps because, for them, the discrepancy between what we hope for and what we actually have is far too obvious.

This variety of opinion is due, in large part, to the fact that we do not have an overarching theory of intelligence. We do not have a theory that tells us what intelligence is, what it takes to build it and what the limitations are.

You may expect these questions to be answered by the brain and cognitive sciences. However, despite the optimistic picture that popular articles sometimes paint, as an insider, I can tell you that science does not yet properly understand how our brain works. Our understanding of the whole picture is, to say the least, incomplete. We certainly do not have an adequate explanation of how our mental operations—that is, our abilities to perceive, decide, think and act—emerge from our physiological hardware.

The state of the brain-mind sciences reminds me of chemistry in the age of alchemy. There were a lot of cool phenomena to be observed and replicated in labs, but no one had an overarching insight into what was really going on. The lack of a theory made it difficult to know what chemical substances were, what it took to create new ones and what the limitations were. This produced fertile ground for speculation and led to unfulfillable promises and ungrounded fears. The modern situation in the brain-mind sciences is similar; we witness both far-reaching promises and doomsday fears.

But this may change soon.

A theory has been proposed recently that can, for the first time, offer an overarching account of how biology manages to create intelligence. This theory, named “practopoiesis,” proposes a whole new set of experiments for studying the brain and mind and, by doing so, paves a new path toward understanding the physiological substrate of human consciousness. Equally, the theory has implications for understanding what intelligence is, how to create it artificially and where the limitations lie.

AI in history and theory

Before I go there, I want to walk through a critique of the classical approach to AI, the one we still rely on today, and discuss why it may be fundamentally insufficient.

One critic is philosopher John Searle, who formulated the Chinese Room thought experiment to show that looking up entries in a database and computing outputs from that database is not equivalent to human thought.

He claimed that humans do not just execute programmatic statements or look up information in databases. Instead, humans understand what they are doing, while databases cannot possibly understand. Hence, computers built on such databases cannot understand either.

He then distinguished syntax (what computers do) from semantics (what humans actually do). For example, a computer would apply certain rules to a picture to decide whether there is a car in it; a human would understand that the presented object is a car. Importantly, Searle did believe that, in principle, human mental operations are based solely on physical processes and, thus, that machines could mimic them. But this would require an approach different from databases. He thus distinguished weak AI, what machines did at the time, from strong AI, a hypothetical machine that would be able to understand in the way humans and animals do.

Searle was by no means alone in calling for better approaches to AI. A popular list of the limitations faced by the dominant approach was compiled by philosopher Hubert Dreyfus in his 1972 book “What Computers Can’t Do,” and reiterated 20 years later in “What Computers Still Can’t Do.”

But to see what it means to understand, let’s walk through an exercise.

Take a look at the following picture. Can you see a car in it?

[Image: an abstract drawing in which one can see a car]

It should be no problem for a human. A human can understand that this picture can represent a car. For today’s AI, this is a problem. Unless the AI was trained explicitly on this type of picture, it cannot “see” a car in this image. An AI trained only on real-life photographs of cars is hopeless in this case.

Humans, in contrast, can understand how this image can represent a car even if they are seeing it for the first time. Children can do it without any problem, too. Moreover, a child does not need thousands of examples of real cars in order to see a car in the above image. He or she has the power to understand what the car is about. This power of understanding the car (semantically), rather than extracting statistical properties of pictures of cars, is the difference between strong, biological intelligence and the weak, machine intelligence of today.

But this is not where the advantages of strong AI stop. Consider the next picture. If the preceding one was a car, then what is this?

[Image: a similar abstract drawing, elongated]

It immediately becomes a truck. Even a child who has played with just one toy truck can immediately generalize from the car above to the truck here.

And a similarly easy generalization happens here:

[Image: a similar abstract drawing of a train]

The image becomes a train. A child who has seen and understood the concept of just one train can make this interpretation with ease.

Humans understand. Today’s AI does not. There is a big difference between how we go about our lives and how machines do their jobs. To be intelligent in a biological-like way means to be able to use previous knowledge to immediately understand and interpret novel situations.

But there is even more to our understanding. As humans, we can flexibly change our interpretations. For example, let us, for a moment, consider the image above not as a car any longer, but instead as a table with two chairs:

[Image: the first drawing again, now seen as a table with two chairs]

No problem!

If that was a table, how about this one?

[Image: a similar drawing suggesting a restaurant setting]

Is this a setting in a restaurant? Again, no problem for a human.

And do you see a classroom here?

[Image: the train drawing again, now seen as a classroom]

Everyone does.

A change of perception is easy if you are strongly intelligent and impossible if you are one of today’s machines. We acquire understanding from the relations between the functional and structural aspects of tables, chairs, cars, trucks, trains and so on. This is what makes up semantics.

Today’s AI, on the other hand, just learns stimulus-response associations from a set of images; this is the kind of processing Searle called syntax. That approach is insufficient for tasks that probe domains outside those images.

Although AI has made admirable progress, it has a history of overpromising, which leads to waxing and waning enthusiasm. Today, we have hype again — an AI “summer.” A winter may come again if we don’t find a way to create a new type of technology, one that would allow solving Searle’s problem of understanding.

The breakthrough may be in the theoretical insights that the theory of practopoiesis can bring for understanding the nature of intelligence.

What is the nature of (strong) intelligence?

Any intelligent computer software needs two components. The first is what we call algorithms: programs conceptualized and specified by humans. Algorithms have the purpose of collecting, storing and manipulating knowledge. But algorithms alone cannot make a machine intelligent.

The second component involves an extensive interaction with the outside world. A smart machine must acquire a bulk of its knowledge by its own work. In other words, it has to learn.

For example, an artificial neural network may need to be exposed to thousands of images of cars before it learns to classify them accurately.
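As a concrete illustration, here is a minimal sketch of that classical, T2-style setup in Python. Everything in it is a toy stand-in of my own: random feature vectors replace images of cars, and a fixed logistic-regression update replaces whatever learning algorithm a production system would use. The structural point is that the learning rule is fixed and human-designed, while all acquired knowledge ends up in one store of weights, the “box.”

```python
import numpy as np

# A toy sketch of the classical (weak, T2) setup: a fixed, human-designed
# learning rule pours all acquired knowledge into one store of weights.

rng = np.random.default_rng(0)

# Stand-ins for thousands of labeled car / not-car images:
# 200 random 16-dimensional feature vectors with a linear ground truth.
X = rng.normal(size=(200, 16))
true_w = rng.normal(size=16)
y = (X @ true_w > 0).astype(float)

w = np.zeros(16)   # the "box": everything the system comes to "know"
lr = 0.1           # the fixed learning rule (plain gradient descent)

for _ in range(500):                       # learning = repeated exposure
    p = 1.0 / (1.0 + np.exp(-(X @ w)))     # predict P(car) for each example
    w -= lr * (X.T @ (p - y)) / len(y)     # gradient step: data -> weights

accuracy = np.mean(((X @ w) > 0) == (y == 1.0))
print(f"training accuracy: {accuracy:.2f}")
```

Show such a learner a structurally different kind of picture, like the sketches above, and it has nothing to fall back on; all of its knowledge lives in w.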

The classical approach to AI can be illustrated like this:

[Diagram: classical AI, with most memory resources in the “box” and only a small set of learning rules feeding it]

But strong AI organizes its knowledge in learning rules, and not in the box. The “box” contains only the knowledge needed to deal with a current situation — the one occurring right now. In strong AI, it is the learning rules that take up most of the memory resources, and not the box:

[Diagram: strong AI, with most memory resources in the learning rules and only a small, situation-specific “box”]

Strong AI perceives and decides through learning. For example, perceiving a car may involve several quick learning steps: Is that a car? Maybe it is not a car. Oh, I think it is a car. But what kind of car? Could it be my car? No, it doesn’t seem to look like my car. It is definitely not my car.

This cognitive process, based on learning, changes the box continually, and it is the very reason biological intelligence has working memory while today’s machines do not. This fast learning process is also what makes our cognition situated.
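One way to picture this, purely as an illustration and not as the actual mechanism practopoiesis proposes, is perception as a handful of rapid belief updates. The current belief plays the role of the transient box, i.e., working memory, while only the update rule persists across situations. The function below is a hypothetical stand-in of my own.

```python
# Illustrative sketch only: perceiving as a few fast learning steps.
# The belief is the transient "box"; the update rule is what endures.

def update_belief(prior: float, likelihood_ratio: float) -> float:
    """One quick learning step: Bayesian update of P(it is a car)."""
    odds = (prior / (1.0 - prior)) * likelihood_ratio
    return odds / (1.0 + odds)

belief = 0.5  # "Is that a car?" -- start undecided
for evidence in [0.7, 2.5, 4.0, 0.9, 3.0]:  # likelihood ratios from glances
    belief = update_belief(belief, evidence)
    print(f"P(car) = {belief:.2f}")

# The box is rebuilt for each new situation; the rule carries over.
```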

Knowledge stored in learning rules is more general than knowledge stored in a box. This is what gives strong AI the capability of semantics and understanding. For example, it is the application of such learning rules to the pictures above that makes us recognize a car or a truck.

Therefore, the problem of creating strong AI is the problem of making machines learn new learning rules. In biological systems, this happens through an inborn set of learning rules stored in our genes. What we normally mean by “learning” — as in “I learned to drive a car” — is an application of these genetically encoded learning rules to create our long-term memories in the form of more specialized learning rules.

Therefore, a more complete illustration of how to organize the resources of strong AI is:

[Diagram: strong AI with three components: inborn learning rules, specialized learning rules and the “box”]

According to practopoietic theory, an intelligent agent that has these three components is called a T3-intelligence. In contrast, today’s AI is a T2-intelligence as it has only two components.
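The three-level organization can be sketched structurally as well. Again, this is a hedged illustration: practopoietic theory does not prescribe any particular code, and every name below is mine. Each level adapts the one below it, on a progressively faster timescale.

```python
from dataclasses import dataclass, field

@dataclass
class T3Agent:
    """A structural sketch of a T3-intelligence (names are illustrative)."""
    genome: dict = field(default_factory=lambda: {"plasticity": 0.5})  # slowest level
    learning_rules: dict = field(default_factory=dict)  # long-term memory
    box: dict = field(default_factory=dict)             # the here-and-now

    def develop(self, concept: str) -> None:
        """Slow loop: genome-level rules create specialized learning rules."""
        self.learning_rules[concept] = self.genome["plasticity"]

    def perceive(self, situation: str) -> None:
        """Fast loop: learning rules rebuild the box for the current moment."""
        strength = self.learning_rules.get(situation, 0.0)
        self.box = {situation: strength}  # discarded when the situation changes

agent = T3Agent()
agent.develop("car")    # "I learned what a car is"
agent.perceive("car")   # the box holds only the current interpretation
print(agent.box)        # {'car': 0.5}
```

A T2-intelligence, by contrast, would consist of only the bottom two levels: fixed learning rules and the box.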

Limitations of strong AI: No bootstrapping

According to practopoiesis, intelligence cannot bootstrap; it cannot explode rapidly to become smarter and smarter. An intelligence explosion would be for intelligence what a perpetual motion machine is for energy: one cannot get something from nothing.

The fundamental property of intelligence is the need to learn from the environment. An intelligence cannot predict what knowledge it will receive from the environment before it has actually received that knowledge. Hence, it is not possible to boost intelligence with a new algorithm such that no time needs to be invested in learning.

This also means that our understanding of the world is limited, and this holds true for us and for machines alike. Hence, neither we nor machines can directly engineer a strong AI the way we engineer, for example, a car. Higher levels of intelligence can only be evolved.

AI-Kindergarten: How to build strong AI

The problem of building strong AI boils down to creating a proper set of learning rules at the level of “machine genome.” For biological evolution, it took millions of years to create proper genetic knowledge, and we don’t have that much time.

That means that we have to “steal” knowledge from biology. AI-Kindergarten is about transferring knowledge from our biological genome into a “machine genome.” AI-Kindergarten works through interactions between the machines that need to acquire knowledge and the humans who have that knowledge.

To accelerate the evolution of machines by a factor of millions, machines need to be given the proper challenges, in the correct order and with quick feedback. This process is somewhat similar to the transfer of our civilization to a new generation of kids. The 20,000 or so years it took to bring our civilization to its present level, we manage to transfer in just a few years through adequate schooling, teaching, training, play and so on.

AI-Kindergarten provides adequate training and feedback to evolve machine genomes, and bring them gradually up to our own level. Hence, AI-Kindergarten deals in total with four learning components (a T4-system):

[Diagram: AI-Kindergarten as a T4 system with four learning components]

The details of how AI-Kindergarten works are described in this video and are specified in a provisional patent.
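Since the full procedure is specified in the video and the provisional patent rather than spelled out here, the sketch below is only my reading of the outer, T4 loop: candidate “machine genomes” are varied and selected under human feedback instead of natural selection. Every function name and number in it is a made-up placeholder.

```python
import random

# Hypothetical outer (T4) loop: evolve "machine genomes" under human feedback.

def mutate(genome: list[float]) -> list[float]:
    """Introduce small random variations into a candidate genome."""
    return [g + random.gauss(0.0, 0.1) for g in genome]

def human_feedback(genome: list[float]) -> float:
    """Stand-in for human-in-the-loop scoring of the behavior a genome
    produces; here we simply pretend the desired genome is all ones."""
    return -sum((g - 1.0) ** 2 for g in genome)

population = [[0.0] * 4 for _ in range(20)]   # start from "bacteria"
for _ in range(200):                          # generations of guided evolution
    ranked = sorted(population, key=human_feedback, reverse=True)
    parents = ranked[:5]                      # selection by feedback
    population = [mutate(random.choice(parents)) for _ in range(20)]

print("best genome:", [round(g, 2) for g in max(population, key=human_feedback)])
```

The feedback function is the crucial ingredient: in AI-Kindergarten it is supplied by humans who already hold the knowledge, which is also what keeps the process under our control.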

The approach functions similarly to the selective breeding of animals — for example, when we turn wolves into dogs. The difference is that AI-Kindergarten is more efficient and that it starts from the earliest stages of evolution. It would be something like selectively breeding bacteria with a plan to eventually create dogs.

It is possible to apply AI-Kindergarten to the development of commercial products based on T3-technology. This starts with a specialized strong AI system for applications in time series analysis and forecasting. Later, it can expand to sound and image processing and other applications, with the long-term goal of gradually approaching human-level intelligence.

This technology is safe because intelligence cannot grow without extensive training and is completely controlled by the contents of that training. Thus, it will be solely up to us to develop, or not develop, AI that obeys Asimov’s laws of robotics.
