All ready to grow up: Fostering AI’s growth in insurance

The use of artificial intelligence (AI) technologies has spread across the insurance value chain. In product development, it enables insurers to create more profitable and effective products based on insights from past claims and product uptake in the market. In underwriting, it creates a better understanding of risk for new and underserved markets. In claims processing, it is improving fraud detection, and in customer service, it enables conversational interfaces.

Even so, the potential of AI has yet to be fully realized. For now, AI’s role in the industry is largely limited to optimizing existing business processes rather than developing new and disruptive business models. There are a few key reasons for this.

Insurers’ AI challenges


Defining AI

To begin, AI is an umbrella term that encompasses several disparate technologies. In most solutions, AI is limited to detecting patterns in data and using robotic process automation (RPA) to automate tasks. Other applications use machine learning/deep learning (ML/DL) models to glean insights from the enormous amount of data insurance companies generate. The lack of a common definition makes AI difficult to explain and its value less apparent to executive sponsors.

AI’s complexity

Second, AI is often highly centralized, which makes it expensive. The increasing complexity of ML/DL models requires enormous amounts of compute power. (By some estimates, the computing power required by newer ML/DL models increased 300,000-fold between 2012 and 2018.) Further, only a few niche technology companies have the skilled data scientists needed to develop these complex models, the enormous datasets required to train them, and the infrastructure required to deploy them at scale.

A matter of data

There’s not a lack of data in the insurance industry — quite the contrary. However, insurance entities do struggle with getting that enormous amount of data ready for use in AI. Data needs to be cleaned, integrated, moved to appropriate infrastructure, governed and managed continuously. It also has to be labeled to be useful to the models AI relies on for decision making. This labeling process is time-consuming and expensive. Furthermore, negative data is not easily available to train ML/DL models in failure scenarios. For example, you would never send a fleet of self-driving cars out on a mission to crash on purpose just to help an AI model decide what went wrong at a crash scene.

Data interpretation models

Models are a bit like a black box. Data goes in, decisions come out. For industries such as insurance that operate in strict regulatory environments and where impartiality is important, the opaqueness of these models is an issue. It’s possible for creators to imbue models with unintentional bias that skews decisions in unexpected ways. Further compounding this is the fact that the technology to explain why and how AI models made their decisions is still in its infancy.

Overcoming AI obstacles

Judging by the pace of AI-related activity in the industry, these issues are temporary obstacles, not deal killers. And there are clear ways to overcome them. Tech companies are pursuing multiple strategies to increase AI’s value and boost its adoption. Some solutions involve combining emerging technologies. Other strategies involve customers in the solution by creating incentives to contribute new information or filter data that feeds AI applications. Either way, the broader concept is the same — to move AI away from monolithic solutions managed solely by large tech companies and put it into a decentralized, democratized form. Here’s how that can happen.

Leverage distributed ledger technologies to create unique capabilities

Distributed ledger technology (DLT), commonly referred to as “blockchain,” is founded on a couple of concepts critical to AI decentralization. DLT enables the proper attribution and audit of data usage in a fully decentralized architecture. This means insurers can gain access to datasets that would have been inaccessible in a typical siloed data architecture. Access to unique data means insurers will be able to train their ML/DL models in ways that distinguish their business, a real advantage when peers are applying AI to similar use cases.
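To make the attribution idea concrete, the following is a minimal sketch of an append-only, hash-chained log that records which party used which dataset and for what purpose. It is illustrative only: the class and field names are hypothetical, and a production system would rely on an actual DLT platform rather than an in-memory structure like this.

```python
import hashlib
import json
import time


class DataUsageLedger:
    """Toy append-only ledger: each entry is hash-chained to the previous one,
    so tampering with historical data-usage records becomes detectable.
    (Illustrative only; a real deployment would use a DLT platform.)"""

    def __init__(self):
        self.entries = []

    def record_usage(self, dataset_id, consumer, purpose):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "dataset_id": dataset_id,   # which dataset was accessed
            "consumer": consumer,       # who accessed it (attribution)
            "purpose": purpose,         # e.g. "train-fraud-model-v3"
            "timestamp": time.time(),
            "prev_hash": prev_hash,     # link to the previous entry
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body["hash"]

    def verify(self):
        """Recompute the chain and confirm no entry was altered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            payload = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True


ledger = DataUsageLedger()
ledger.record_usage("claims-images-2023", "insurer-A", "train-fraud-model")
ledger.record_usage("claims-images-2023", "insurer-B", "underwriting-model")
print(ledger.verify())  # True while the recorded history is intact
```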

Put algorithms on a data diet

Many algorithms require large, expensive datasets to identify patterns and generate insights. Humans, on the other hand, can often make decisions at a glance. Developing ML/DL algorithms that can function on comparatively small datasets — as people do — will make them less expensive and more widely available. One solution of this type draws on social physics, a concept that can predict how groups of people make decisions by analyzing how information and ideas flow from person to person.

Leverage the community to build data and models

Data needs to be scrubbed and labeled before most AI implementations can use it. That’s time-consuming and expensive. Creating models is expensive, too, largely because the data scientists skilled in creating them are in short supply. The answer is to find ways to engage the community to help with these tasks. For example, an insurer could employ a community of policyholders to help identify images of water heaters and potential water heater issues (such as rust around a gas line) that could be used to inform an AI-driven model that performs underwriting or evaluates claims. Contributors could be compensated with cryptocurrency for contributing images, labeling them and helping the models learn how to interpret the data.
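As a rough illustration of how such a community workflow might be organized, the sketch below tracks contributed labels, pays out a placeholder reward per label, and settles on a label for each image by majority vote. All names, reward amounts and the voting rule are assumptions made for illustration, not a prescribed design.

```python
from dataclasses import dataclass, field


@dataclass
class LabelSubmission:
    """One contributed label for one image (illustrative structure only)."""
    image_id: str
    contributor: str
    label: str          # e.g. "rust-around-gas-line"


@dataclass
class LabelingCampaign:
    """Aggregates community labels and tallies rewards per contributor.
    The reward amount and consensus rule are placeholder assumptions."""
    reward_per_label: float = 0.5     # hypothetical token amount
    submissions: list = field(default_factory=list)

    def submit(self, submission: LabelSubmission):
        self.submissions.append(submission)

    def rewards(self):
        totals = {}
        for s in self.submissions:
            totals[s.contributor] = totals.get(s.contributor, 0.0) + self.reward_per_label
        return totals

    def consensus_label(self, image_id: str):
        """Majority vote across contributors dampens individual labeling bias."""
        votes = {}
        for s in self.submissions:
            if s.image_id == image_id:
                votes[s.label] = votes.get(s.label, 0) + 1
        return max(votes, key=votes.get) if votes else None


campaign = LabelingCampaign()
campaign.submit(LabelSubmission("wh-001", "policyholder-17", "rust-around-gas-line"))
campaign.submit(LabelSubmission("wh-001", "policyholder-42", "rust-around-gas-line"))
campaign.submit(LabelSubmission("wh-001", "policyholder-88", "no-visible-issue"))
print(campaign.consensus_label("wh-001"))  # rust-around-gas-line
print(campaign.rewards())                  # token totals per contributor
```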

A community-based approach not only speeds the task and lowers the cost; it can also reduce bias, because models are taught how to make decisions from many perspectives. Concerns about data privacy can be managed using homomorphic encryption, which allows data to be analyzed in its encrypted form, protecting user privacy and keeping data secure.
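To illustrate the underlying idea, the toy example below uses unpadded (“textbook”) RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the hidden values. This is purely a demonstration of the homomorphic property with deliberately tiny, insecure parameters; real deployments would use a purpose-built scheme (such as Paillier or CKKS) through an established library.

```python
# Toy illustration of the homomorphic property: unpadded ("textbook") RSA is
# multiplicatively homomorphic, so computation on ciphertexts corresponds to
# computation on the underlying plaintexts. Insecure demo parameters only.

p, q = 61, 53                 # tiny primes for demonstration only
n = p * q                     # 3233
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent (modular inverse, Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

m1, m2 = 12, 7
c1, c2 = encrypt(m1), encrypt(m2)

# Multiply the ciphertexts without ever seeing the plaintexts ...
c_product = (c1 * c2) % n

# ... and decryption recovers the product of the original values.
assert decrypt(c_product) == (m1 * m2) % n
print(decrypt(c_product))     # 84
```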

Push computing to the edge

The enormous amount of computational capability needed to develop, train and run sophisticated ML/DL models is creating demand for solutions that leverage computing power available at the edge, mitigating the need for large-scale centralized compute infrastructure. These include platforms that tap unused graphics processing unit (GPU) clusters, spare compute infrastructure and the computational capacity of users’ mobile devices, letting owners monetize their idle machine time.

Build an AI ecosystem

As AI decentralizes, insurers will find benefits in creating an AI ecosystem in which datasets and models can be shared among partners and even other insurers. To get a sense of the advantages, consider the process for detecting fraudulent claims. An individual insurer would typically build its own datasets to train ML/DL models to process handwritten notes by adjusters, estimation documents, images and videos, and the plethora of information online about the insured. Doing this alone confers some benefit to the company.

In a decentralized AI model, however, all participating companies could benefit even more by sharing datasets using DLT and employing pretrained models. In this approach, the ecosystem improves every company’s ability to detect fraud because each participant has access to more data. It can also lead to the development of new and disruptive business models. Insurers can also combine their own datasets with those from the ecosystem to create new value-added features in their ML/DL models.

For example, a commercial insurer of a wind turbine farm could use drones with ML capabilities to check for potential issues with the blades on a turbine. Combined with data coming from the turbine and other data, such as wind forecasts, the company could offer its customers a service that improves turbine performance and power generation. This provides new revenue streams, while mitigating the risk of a breakdown and a resulting loss for the insurer.
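Returning to the pretrained-model idea, here is a hypothetical sketch of how an insurer might adapt an ecosystem-shared encoder to its own claims data: the shared weights are frozen and only a small, insurer-specific classification head is trained. The model architecture, file name and data shapes are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: adapt a shared, pretrained claims encoder with an
# insurer's own data. "shared_encoder.pt" and all shapes are assumptions.

encoder = nn.Sequential(              # stand-in for an ecosystem-shared encoder
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
)
# encoder.load_state_dict(torch.load("shared_encoder.pt"))  # shared weights

for param in encoder.parameters():    # keep the shared representation fixed
    param.requires_grad = False

head = nn.Linear(32, 2)               # new, insurer-specific fraud/not-fraud head
model = nn.Sequential(encoder, head)

# Toy stand-in for the insurer's own labeled claims features.
X = torch.randn(256, 128)
y = torch.randint(0, 2, (256,))

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                # only the new head is updated
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```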

Realizing AI’s true potential

Insurers understand that AI technologies have the potential to create much more value than they contribute today, but the way to achieve that has been less obvious. Taking a decentralized approach will provide the compute resources, data management and training needed to enable the technology to grow. Improving model making, sharing datasets and encouraging contributions from policyholders will make this fast-evolving technology truly useful and valuable.


Chak Kolli is the global chief technology officer for insurance at DXC Technology. In this role, he is responsible for DXC’s global insurance technology strategy and vision and helps guide clients in their digital transformations.
