Singularity in AI: Are we there yet?


Are you familiar with the concept of singularity in artificial intelligence?

The concept traces back to Hungarian-American mathematician John von Neumann, who, as his colleague Stanislaw Ulam recalled in 1958, spoke of ever-accelerating technological progress approaching “some essential singularity.” British mathematician I.J. Good developed the idea in 1965, suggesting that artificial intelligence (AI) will continue to grow exponentially through rapid redesign and self-improvement cycles. Each cycle, he argued, would shrink in time span but improve in capability, until the technology reaches a point where it is self-sustaining and capable of continuous upgrades.

It’s at this point that a powerful “super intelligence” emerges, surpassing all collective human intelligence. This is singularity – and it’s a big deal.

The pace of technological progress leading to singularity is thought to be so fast that unenhanced human intelligence would be unable to follow it. The intelligence explosion would result in exponential, disruptive changes, affecting all aspects of human life as we know it.
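The redesign-and-self-improvement loop described above can be sketched numerically. In this toy simulation, every parameter (starting capability, cycle length, the gain and speedup factors) is an illustrative assumption, not a figure from the research:

```python
# Toy model of recursive self-improvement. Each cycle multiplies
# capability by `gain` and shrinks the next cycle's duration by
# `speedup`. All numbers are illustrative assumptions.

def simulate_cycles(n_cycles, capability=1.0, cycle_time=10.0,
                    gain=1.5, speedup=0.8):
    """Return (total elapsed time, final capability) after n_cycles."""
    elapsed = 0.0
    for _ in range(n_cycles):
        elapsed += cycle_time
        capability *= gain      # each redesign improves capability...
        cycle_time *= speedup   # ...and the next redesign comes sooner
    return elapsed, capability

elapsed, capability = simulate_cycles(20)
print(f"after 20 cycles: {elapsed:.1f} time units, capability x{capability:,.0f}")
```

Because the cycle times form a geometric series, total elapsed time converges toward a finite limit (here 10 / (1 - 0.8) = 50 time units) while capability grows without bound, which is a compact way to see why the process is described as an "explosion."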

There’s been a lot of research dedicated to understanding how singularity may be achieved.

The well-known Moore’s Law, put forward by Gordon Moore in an article published on April 19, 1965, observes that the number of transistors on an integrated circuit doubles roughly every two years. The growth in computing power has, by and large, followed this course, making it a very useful reference for defining the pace of technological innovation and expansion.
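Under that doubling model, transistor counts compound geometrically. A quick sketch (the 2,300-transistor starting point is roughly the Intel 4004 of 1971; the fixed two-year doubling period is an assumption of the model, not of this article):

```python
# Transistor count under Moore's Law: one doubling every `period` years.

def transistors(start_count, years, period=2.0):
    """Projected count after `years`, doubling every `period` years."""
    return start_count * 2 ** (years / period)

# A 2,300-transistor chip (Intel 4004-class, 1971) projected
# forward 40 years at one doubling per two years:
print(f"{transistors(2300, 40):,.0f}")  # ~2.4 billion
```

That projection lands in the right ballpark for the billions of transistors on chips shipping four decades later.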

In 1950, Alan Turing proposed the “Turing Test” to assess whether a machine can exhibit behavior indistinguishable from a human’s. For AI and machine learning to solve a problem, the system needs an algorithm that lets it generate and compare options to come up with the best answer.
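At its simplest, that generate-and-compare step is a scoring loop. A minimal sketch, with a made-up set of candidates and a made-up scoring function standing in for whatever the real objective would be:

```python
# Score each candidate option and pick the best one.
# The routes and their scores below are hypothetical.

def best_option(candidates, score):
    """Return the candidate with the highest score."""
    return max(candidates, key=score)

routes = {"A": 30, "B": 22, "C": 41}  # e.g. minutes saved per route
print(best_option(routes, score=lambda r: routes[r]))  # prints C
```

Real systems differ enormously in how candidates are generated and scored, but the compare-and-select core looks like this.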

With the expansion of big data, cloud, IoT sensors and devices, it is now possible to train artificial intelligence to learn from cognitive experiences. But there is still a risk that AI can choose options that are not necessarily “desirable.” This can create a “value alignment” problem.

Remember, AI and machine learning “learn” by observing patterns in data. Human decision making is influenced by human values and human behavior, which we know is not always rational. While it is generally expected that AI can help remove unconscious bias, there is always a risk of prevalent biases creeping in. Microsoft’s Tay, the Twitter chatbot that had to be shut down after picking up racist language from users, proved this.
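A tiny example makes the mechanism concrete. This frequency-based “model” (the dataset is invented purely to show skew) predicts whatever label it has seen most often for each group, so it faithfully reproduces any bias baked into its training data:

```python
from collections import Counter

# A naive majority-vote learner: for each group, predict the label
# seen most often in training. The skewed dataset is invented.
training = ([("group_x", "approve")] * 9 + [("group_x", "deny")] * 1
          + [("group_y", "approve")] * 2 + [("group_y", "deny")] * 8)

def learn_majority(data):
    counts = {}
    for group, label in data:
        counts.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = learn_majority(training)
print(model)  # {'group_x': 'approve', 'group_y': 'deny'}
```

The learner is doing nothing wrong statistically; it has simply absorbed the pattern it was shown, which is exactly how unwanted bias creeps in.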

The question of values highlights the impact AI can have on security and safety. For the first time, in its 2017 Global Risks Report, the World Economic Forum recognized and addressed the emergence of AI and its related risks.

According to the report, AI can be classified as follows:

Strong AI – Artificial General Intelligence (AGI)
- Capability: Equivalent to human-level awareness
- Status: Does not exist yet; timeline projections point to the 2040s
- Risks: Poses an existential threat to human society

Weak/Narrow AI – Artificial Specific Intelligence (ASI)
- Capability: Directed at solving specific problems or taking actions within a limited set of parameters, some of which may be unknown and must be discovered and learned
- Status: Exists now – trading stocks, keeping cars in their lane on the highway (self-driving cars), flying military aircraft
- Risks: Operating in unforeseeable ways with unanticipated real-world impact* or operating outside human control

*Impact on employment, economy, social inequality

 

While we may not be at the point of singularity yet, the growing capability of AI to make decisions, learn and correct its own decision-making process does raise moral, ethical, social and security concerns. Consider the dilemmas already confronting self-driving cars: insurance companies are questioning who owns the liability and risk and who carries the insurance policy, while developers face agonizing decisions about whose life gets saved in a deadly collision.

A few other aspects of singularity worth exploring are:

The human impact

The post-singularity, or post-human, era will see bits and bytes merging with flesh and blood, enhancing human capacity through man-machine collaboration as never before. Man-machine hybrids will have capabilities far exceeding those of humans alone.

Gene editing and splicing, or tweaking DNA directly, will allow us to eliminate genetic disorders, but it could also erode the genetic variation that underlies the diversity of human personalities and experiences. This could alter human identity itself. These technologies also run the risk of dividing society into an enhanced and a non-enhanced human race, leading to “haves” and “have nots,” conflict and societal unrest.

Warfare

Currently, most defense systems are built around deterrence and defense against attack rather than pre-emptive strikes. Post-singularity, AI-driven Autonomous Weapon Systems (AWS) could wipe out those defenses through swarm-type coordinated, concentrated attacks (think scenes from the movie “Independence Day”). Swarm strikes in the style of “Ender’s Game” present a real risk, as does their “winning at all costs” objective.

Economic impact

Currently, machine intelligence needs money (via humans) to source the power that runs it. But futurist Ray Kurzweil predicts that by 2045, AI will achieve singularity and be able to control its own power generation. This would eliminate its dependence on humans and money, and the current global economy, based on capitalism, runs the risk of becoming obsolete.

The worldwide stock of industrial robots is expected to grow by 1.4 million (on top of the existing 1.8 million) by 2019. A 2016 Forrester study found that 6% of jobs in the U.S. alone are expected to be taken over by robots by 2021. A similar study by the OECD says 9% of jobs across 21 of its member countries could be automated.

As robotic capability increases, there’s a real risk of the human workforce becoming redundant. The nature of jobs will change, and people will need to be retrained. This will affect not only employment but the economy and society. A few countries, such as Finland and Canada, are already experimenting with Universal Basic Income (UBI) to see how a basic safety net can be provided so that people are not left destitute.

With all of these uncertainties around artificial intelligence, one thing is clear: The future will be changed by AI. It’s unlikely to happen through the big-bang arrival of singularity; more likely, it will be a gradual evolution of machines enhancing human capability.

Either way, now’s the time to think about and prepare for this tomorrow, before the limits of human intelligence startle us like a soft whisper.


Annu Singh is an advisor at DXC. Connect with her on LinkedIn.

 

RELATED LINKS

Honey, where’s my super suit?

How much would you pay a robot or AI to do your job?

Are we heading toward an AI winter?

Comments

  1. I heavily recommend reading this very interesting and in-depth article by Kevin Kelly, that covers the most common misconceptions and assumptions we tend to make when talking about Artificial General Intelligence


  2. geosupergirl says:

    Once again Anu another great and thought provoking Blog.


Trackbacks

  1. […] But how far can the robots go? When will their intelligence overtake human intelligence? When will we reach singularity and what will its impact be? That’s the question posed by a colleague Annu Singh: […]

