Why artificial intelligence needs real empathy


Artificial intelligence (AI) systems have the potential to become both smarter than humans and smart in ways we’re not. I anticipate that AI will quickly bring tremendous benefits to both citizens and the government. Even so, today’s AI systems lack the capacity to understand and share the feelings of others. In short, AI systems lack empathy.

This lack of empathy within AI has the potential to create ethical, policy and legal challenges that the public sector needs to explore, understand and address. This conversation should start now, in a public forum, before the effects of AI on government and social processes become pervasive.

As AI systems augment and replace human work, they may undertake and change public sector processes. And they may do so in ways we won’t be able to either anticipate or understand.

For example, imagine an AI system that’s used to help decide who qualifies for various human services benefits. Underlying the system’s decisions would be certain algorithms. These algorithms, in turn, would embody specific values. But what are these values? Who gets to choose them? And are they values others would endorse?

Now add a complicating factor: Over time, AI systems can learn and modify their behavior. What if this same human services system learns practices that discriminate against certain racial or ethnic groups? No, the system wasn’t programmed to be racist. But over time, it certainly could develop what we’d call a bias. How would we detect this bias? And once the bias was detected, could we even remove it?
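To make the detection question concrete, here is a minimal sketch of one common audit technique, a disparate-impact check over a system's decision log. Everything in it (the column names, the toy data and the four-fifths threshold) is an assumption made for illustration, not part of any real human services system.

```python
# Hypothetical sketch only: a simple disparate-impact check over a decision log.
# Column names, toy data and the 0.8 threshold are assumptions for illustration.
import pandas as pd

def disparate_impact_ratio(decisions: pd.DataFrame,
                           group_col: str = "group",
                           outcome_col: str = "approved") -> float:
    """Ratio of the lowest group's approval rate to the highest group's."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return float(rates.min() / rates.max())

log = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

ratio = disparate_impact_ratio(log)
if ratio < 0.8:  # the "four-fifths" rule of thumb used in some fairness audits
    print(f"Possible bias detected: approval-rate ratio is {ratio:.2f}")
```

A check like this only flags a disparity; deciding whether the disparity is justified, and whether it can be removed, remains a human judgment.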

Fortunately, these examples are still largely hypothetical. But the public sector cannot afford to leave this issue for the future. The time to begin coping with these ethical, legal and policy challenges is now.

Paths to bias

You may wonder how an AI system could develop biases. As it turns out, there are several ways.

Data bias is one way. Data in an AI system may be incomplete, and this can be difficult to detect. For example, an AI system may be learning only from its “yes” judgments, neglecting important lessons from the “no” decisions. But the reason that one person was denied a service may be just as important as why that same service was granted to another.
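To make the point concrete, here is a small, purely illustrative sketch (the figures and field names are invented) of how filtering out the "no" decisions removes exactly the information a learning system would need.

```python
# Hypothetical sketch only: data bias from learning on "yes" cases alone.
# All figures and field names are invented for the example.
import pandas as pd

history = pd.DataFrame({
    "income":   [18, 22, 30, 55, 62, 75],   # in thousands; made-up values
    "approved": [0,   0,  0,  1,  1,  1],
})

# A naive pipeline that keeps only the records the old process approved.
approved_only = history[history["approved"] == 1]

# The full history still contains the boundary between "yes" and "no";
# the filtered set contains a single outcome and nothing to learn from.
print("Outcomes in full history: ", sorted(history["approved"].unique()))
print("Outcomes in filtered data:", sorted(approved_only["approved"].unique()))
```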

Algorithm bias is another way. In a perfect world, no algorithm would have a bias. But AI systems are developed by people. And people have their biases. Even without malice, biases can appear in a system.

There is another bias lurking in the background: digital bias. It begins with the assumption that the decision or outcome of a digital system is correct, and it occurs when decisions from expert systems are accepted and given preference over those from human experts. Over time, this bias should give way to well-founded trust between the system and its users; the three smart traits discussed below form the foundation on which that trust must be built.

Yet another issue is the possibility of system manipulation. As we’ve seen, government-run systems are attractive targets for hackers and other cyber criminals. AI systems won’t be an exception. In fact, adversarial AI systems will be used to execute the attacks, but that’s a topic for another day.

Smart traits

Three smart traits speak to the foundation on which AI trust can be built. They’re important characteristics the public sector should add to its AI systems now:

  • Transparency: How does an AI system make its decisions? Using what data? And if there is a bias, how can it be identified?
  • Predictability: Because advanced AI systems do not think the way humans think, their decisions can be difficult for us to predict. For example, just because an AI system acted without bias in the past does not mean it will always do so in the future. We need logical “guardrails” to ensure that AI decisions are kept within the bounds of accepted ethics, laws and policies (a minimal sketch of one such guardrail follows this list).
  • Resistance to manipulation: Too many of AI’s designers and users alike default to the system, trusting it to make good decisions on our behalf. Instead, we need the ability to question AI’s perceived infallibility — and to block, detect and remediate any attempts at covert manipulation.
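As a minimal sketch of what a guardrail could look like in practice, an automated decision is applied only if it passes explicitly coded policy checks; anything outside those checks is escalated to a human reviewer. The benefit types, limits and field names below are assumptions for illustration, not a real policy.

```python
# Hypothetical guardrail sketch: automated decisions are checked against
# explicit, human-readable policy rules before taking effect. Benefit types,
# limits and field names are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    benefit: str
    amount: float
    approved: bool

POLICY_LIMITS = {"housing": 2000.0, "food": 500.0}  # assumed monthly caps

def within_guardrails(d: Decision) -> bool:
    """True only if the decision stays inside the coded policy limits."""
    limit = POLICY_LIMITS.get(d.benefit)
    if limit is None:            # unknown benefit type: never auto-apply
        return False
    return (not d.approved) or d.amount <= limit

def apply_decision(d: Decision) -> str:
    if within_guardrails(d):
        return f"Auto-applied decision for {d.applicant_id}"
    return f"Escalated {d.applicant_id} to a human reviewer"

print(apply_decision(Decision("A-001", "housing", 1800.0, approved=True)))
print(apply_decision(Decision("A-002", "housing", 9500.0, approved=True)))
```

The design choice here is that the rules are written and owned by people, not learned by the system, so they remain auditable even as the system's behaviour changes.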

The U.S. and UK governments have already taken early steps toward formulating AI-adoption guidelines. Now it’s time for the public sector in Australia and New Zealand to join them. With empathy, AI can help shape government; without empathy, AI may misshape it. The choice is ours.


Jack Story is DXC’s Chief Technologist for the Public Sector in Australia and New Zealand. With more than 27 years’ experience in the outsourcing and service provider industry, working across a broad client base to help customers build innovative business solutions focused on business value creation, Jack provides thought leadership on the appropriate application of technologies, balanced with the advancement of technology innovation.
