Building AI: How exactly did we do it?


Editor’s note: This is a series of blog posts on the topic of “Demystifying the creation of intelligent machines: How does one create AI?” You are now reading part 6. For the list of all, see here: 1, 2, 3, 4, 5, 6, 7.

As I discussed in my last posts, I have been working with colleagues at DXC to build an artificially intelligent fan, one that can monitor its operations, report issues and even, sometimes, fix problems itself.

In my last post, I discussed how we taught the fan to classify errors. Because we prepared the data well in the first stage, we were able to implement the second stage with a relatively simple classifier, for which we used logistic regression.

Now you may wonder why we would choose simpler tools. One might think that more complex tools, such as neural networks, should always be preferred over simpler ones such as logistic regression. After all, neural networks are more powerful and can model non-linearities. But in practice, the opposite is often true.

One should use the simplest tool that does the job. The reason can be found in what we already discussed about inductive biases and learning theory: if your data fits the tool well (i.e., the tool is a good model of the data), training will be more efficient, and you can work with smaller data sets.

This brings us to one more advantage of our architecture. If we want to expand the intelligence of the classifier and add more error categories later on, this is easier with simpler tools. Retraining touches only the fairly quick learning of the second component of the AI – doable even on a Raspberry Pi, if necessary.

Logistic regression is a cousin of linear regression, with one extra twist: its output is the likelihood that the input belongs to a given category.
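The "twist" is the sigmoid function, which squashes the unbounded output of a linear model into the range (0, 1) so it can be read as a probability. A minimal sketch with made-up weights (not the fan model's actual parameters):

```python
import numpy as np

def sigmoid(z):
    """Map an unbounded linear score to a value in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Linear part: z = w·x + b, just as in linear regression
w, b = np.array([0.8, -1.2]), 0.5   # illustrative weights only
x = np.array([1.0, 0.3])            # illustrative input features

p = sigmoid(w @ x + b)              # likelihood that x belongs to the category
print(round(p, 3))                  # ≈ 0.719
```

Training adjusts `w` and `b` so that these probabilities match the labelled examples; at prediction time, one typically thresholds `p` (e.g., at 0.5) or picks the most probable category.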

We fed the logistic regression only a selected set of values from the Fourier transform, rather than the entire spectrum as we did for anomaly detection. This required an analysis to identify the frequency bands carrying the most error-related information. The result was a component that was computationally cheap, fast to train and trained well even with a relatively small number of data points.
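The pipeline of band-limited Fourier features feeding a logistic regression can be sketched as follows. The sample rate, the frequency bands and the synthetic signals are all illustrative assumptions, not the values used in the actual fan:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
fs = 1000                          # assumed sample rate in Hz
t = np.arange(fs) / fs             # one second of samples

def band_features(signal, bands):
    """Mean FFT magnitude in each selected frequency band (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    return [spectrum[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands]

# Hypothetical bands identified as carrying error-related information
bands = [(20, 60), (60, 120), (120, 250)]

# Synthetic "normal" vs. "obstructed" vibration signals for illustration
X, y = [], []
for label, freq in [(0, 50), (1, 100)]:
    for _ in range(20):
        sig = np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(fs)
        X.append(band_features(sig, bands))
        y.append(label)

clf = LogisticRegression().fit(X, y)
print(clf.score(X, y))             # separable synthetic data, so near-perfect
```

The key point is dimensionality: three band averages instead of hundreds of raw spectrum bins, which is exactly what lets a small model train quickly on few examples.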

For our demo, we chose to train the AI with the following categories:

  1. Semi-obstructed air flow
  2. Fully obstructed air flow
  3. Physical obstruction of rotation

Semi-obstructed air flow was simulated by positioning a flat piece of plastic board to block the air intake of the fan. Fully obstructed air flow was created by completely covering the opening for air flow. Physical obstruction was simulated by using an ordinary business card and gently touching the fins of the fan while they were rotating.

With these data, we could train the AI effectively to classify these three types of errors.

In theory, one can add more categories relatively easily, retraining only the second stage of detection; the reliability of anomaly detection is unaffected by additional error categories. But as we add categories, the confusion matrix of the second stage grows, and the number of misclassifications will tend to increase. There is therefore a limit to how many error categories can be added before tools more elaborate than logistic regression become necessary.
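To make the growth concrete: with k categories, the confusion matrix has k·(k−1) off-diagonal cells where misclassifications can land, so the opportunities for confusion grow quadratically. A small illustration with made-up labels for the three error categories (not the fan's actual results):

```python
from sklearn.metrics import confusion_matrix

# Illustrative ground truth and predictions for categories 0, 1, 2
y_true = [0, 0, 1, 1, 2, 2, 2, 0, 1, 2]
y_pred = [0, 0, 1, 2, 2, 2, 1, 0, 1, 2]

cm = confusion_matrix(y_true, y_pred)
print(cm)
# Rows are true categories, columns predicted; off-diagonal entries
# are the misclassifications. With k categories there are k*(k-1)
# such cells, so the table gets "richer" quadratically as k grows.
```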

Final component: GOFAI

The history of AI started with good old-fashioned AI, also known as GOFAI. In simple terms, GOFAI is intelligence directly programmed into a machine by a human. Machine learning, on the other hand, is a mixture of human work and a machine learning on its own. GOFAI is often implemented as a set of if-then statements.

Historically, we have seen two phases of AI development. In the first, we thought we could simply program AI as a large set of if-then rules. When this did not work – a failure that arguably contributed to the first AI winter – the door opened for the second phase, in which machines learn by themselves. We are now exploring that second phase.

There is even a possibility that there will be third and fourth phases in which machines will learn to learn and evolve, rather than being programmed.

If you are developing an AI for a real-world application, you will likely use state-of-the-art technology alongside some old-fashioned components. In other words, for a part of your solution, you may have to use GOFAI.

In fact, it is hard to imagine an elaborate AI solution that does not, in some way, include directly programmed decisions in the form of if-then rules, a look-up table or similar.

We used GOFAI in some important parts of the overall solution. In our case, the GOFAI component consisted of only a small number of rules. We chose hand-coded if-then statements for:

  1. Deciding whether the threshold for a detected anomaly had been reached.
  2. Deciding the severity of an error: once an error had been identified, the fan could either alert the user but continue operating (low severity) or stop operating (high severity).
  3. Experimenting autonomously with the environment: after having shut off for a while, the fan could try restarting on its own to see whether the problem persisted or had gone away.
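The three rules above can be sketched as a small decision function sitting on top of the learned components. The threshold, the severity mapping and the retry delay are illustrative values, not the ones used in the actual fan:

```python
# Hand-coded GOFAI rule layer on top of the learned detector/classifier.
ANOMALY_THRESHOLD = 0.8                       # assumed threshold (rule 1)
HIGH_SEVERITY = {"fully_obstructed_airflow",  # assumed severity map (rule 2)
                 "rotation_obstruction"}
RETRY_AFTER_S = 60                            # assumed retry delay (rule 3)

def decide(anomaly_score, error_category):
    """Return an action for the fan controller: 'run', 'alert', or 'stop'."""
    if anomaly_score < ANOMALY_THRESHOLD:     # rule 1: below threshold, all is well
        return "run"
    if error_category in HIGH_SEVERITY:       # rule 2: severe error, halt the fan;
        return "stop"                         # the controller may retry after
                                              # RETRY_AFTER_S seconds (rule 3)
    return "alert"                            # low severity: warn but keep running

print(decide(0.5, None),
      decide(0.9, "semi_obstructed_airflow"),
      decide(0.95, "rotation_obstruction"))   # run alert stop
```

This is exactly the sense in which GOFAI complements machine learning here: the hard perceptual work is learned, while the final, auditable decisions stay in plain if-then code.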

The final schematic diagram of the AI architecture looks like this:

[Figure: schematic diagram of the AI architecture]


The actual physical implementation of the hardware placed in a suitcase can be seen in this picture:


[Figure: the hardware implementation mounted in a suitcase]

As one can see, the fan can be easily carried to remote places, for example, for a demo at a customer site.




