The ethics of AI are up in the air, and that’s dangerous


If you had a superpower, would you use it for good or evil?

That’s a question every organization faces as it begins to use artificial intelligence (AI) to do business. And though AI isn’t a superpower per se, it is like having the Batcomputer or Tony Stark’s J.A.R.V.I.S. on hand to thwart the criminal antics of Two-Face or Crimson Dynamo.

AI can make your organization super-insightful about customers, markets, marketing techniques, competitive research, and much more. So what happens when someone in your organization (the CEO, say) observes that, while a certain AI-based activity may be “sketchy,” it isn’t illegal (yet)? Not only that, who’s to say what’s sketchy anymore? You can be sure your competitors are going to use AI to their every advantage!

How to handle the inevitable ethical questions surrounding AI’s use in business, healthcare, government, science, the military, and other realms that impact civilization is the subject of considerable thought and debate these days, as it should be. Over at MIT Sloan Management Review, Tom Davenport and Vivek Katyal make the argument that a lack of regulations means “AI-oriented companies must establish their own ethical frameworks.”

This strikes me as a recipe for trouble, but the authors correctly observe that, until there are agreed-upon rules governing ethical AI use, there is no other reasonable alternative. The development of AI technology is simply outpacing efforts to create and implement (however loosely) some kind of ethical best practices.

Davenport and Katyal propose seven actions AI-oriented organizations should take to create their own AI ethics operating principles, including making AI ethics a board-level issue, avoiding bias in AI applications, and striving for transparency by disclosing to users (customers and/or employees) the use of AI.

“Perhaps the most important AI ethical issue is to build AI systems that respect human dignity and autonomy, and reflect societal values,” they conclude. “It may be difficult to anticipate all the ways in which AI might impinge on people and society before implementation — although certainly companies should try to do so. But when signs of harm appear, it’s important to acknowledge and act on emerging threats quickly.”

Getting back to the recipe for trouble: I’m not terribly confident that decision-makers of a company facing disastrous financial consequences if they divulge an AI-related danger will do the right thing — unless there’s a greater cost to not doing so. And that’s why regulations as well as frameworks for ethical behavior are so important.

Which brings us to a set of AI ethics guidelines developed by the High-Level Expert Group on Artificial Intelligence (AI HLEG), a group appointed by the European Commission. AI HLEG released its first draft in December 2018, with a final version expected in March 2019. It’s 37 pages of suggestions for setting the “fundamental rights, principles, and values” with which AI should comply, but here’s the high-level take:

Having the capability to generate tremendous benefits for individuals and society, AI also gives rise to certain risks that should be properly managed. To ensure that we stay on the right track, a human-centric approach to AI is needed, forcing us to keep in mind that the development and use of AI should not be seen as a means in itself, but as having the goal to increase human well-being. Trustworthy AI will be our north star, since human beings will only be able to confidently and fully reap the benefits of AI if they can trust the technology.

The conversation over ethical AI is only beginning, and it’s important that enterprise decision-makers, employees, and consumers are part of it. The stakes are high.

