How AI Is Changing the Future of Insurance

From customer service to automated underwriting and even fraud detection, artificial intelligence promises to be a game changer in the insurance industry. It is a double-edged tool, with the potential both to advance and to disrupt the insurance sector.

Artificial intelligence is already present in the insurance industry and will take numerous forms in the future. Large language models similar to ChatGPT, along with natural language understanding and machine learning, promise to perform a variety of functions. With its ability to create and analyze images, video, and audio, generative AI has enormous potential across insurance use cases.

While artificial intelligence has recently received a lot of attention in several industries, the insurance industry has been employing AI tools in some form for years.

"From the carrier side, we have been seeing predictive modeling since it really started to grow 10-15 years ago," said Bill Holden, senior vice president of executive perils for The Liberty Company Insurance Brokers. "This is where they use vast amounts of data to make predictions."

Even more familiar tools, such as the telematics devices used in usage-based insurance policies and the basic chatbots available on most insurance company websites, rely on artificial intelligence under the hood.

Emerging Threats

Although AI tools have been commonplace in the insurance industry for many years, some of the new artificial intelligence capabilities emerging in the market are introducing new risks that may need to be covered.

Take, for example, large language models. AI companies train these models to answer queries on their own using predictive text based on the data they have been fed. Problems arise because the AI isn't designed to answer prompts accurately, but rather to make its best forecast of which word is likely to come next. And no one is watching what it says in real time.

This has led to a phenomenon known as AI hallucination, in which the program simply makes up part of a response out of thin air because it seemed plausible.
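
To make that mechanism concrete, here is a minimal sketch of next-word prediction in Python, using an invented two-sentence corpus rather than any vendor's actual model: the program chooses whichever word has most often followed the previous one, and nothing in it checks whether the resulting sentence is true.

    import random
    from collections import Counter, defaultdict

    # Toy next-word predictor: count which word follows which in a tiny corpus,
    # then generate text by repeatedly sampling a likely next word.
    # Nothing here checks whether the generated sentence is factually true.
    corpus = ("the policy covers water damage and "
              "the policy excludes flood damage under the endorsement").split()

    follows = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current][nxt] += 1

    def generate(start, length=8):
        words = [start]
        for _ in range(length):
            options = follows.get(words[-1])
            if not options:
                break
            # Sample in proportion to how often each word followed the last one.
            choices, weights = zip(*options.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # Fluent-sounding output with no guarantee of truth.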

There is little harm if the hallucination is merely a gibberish response, but one lawsuit has already been brought in Australia after an AI accused a mayor, who had been a whistleblower in a case, of being the one who committed the offense.

Published defamation of this kind falls under the legal category of libel, and judgments for those damages can easily reach millions of dollars. The key to determining who must pay in those circumstances is who was responsible for the publication, which is where the interesting insurance question comes in.

Was it the fault of the person who asked the question that prompted the statement in the first place? Was it the fault of the company that hosted the AI chatbot's code on its servers? Was it the programmer's fault? Which insurer will have to pay to defend the lawsuit and, if the defendant is found liable, pay the judgment?

A similar question of malfunction arises when an AI operates autonomous vehicles. Who is to blame if an AI-driven autonomous drone, delivery truck, or taxi collides with another vehicle or, worse, kills a pedestrian? Would it be the owner? The programmer? The manufacturer?

These issues have yet to be settled in the courts or in statehouses.

Copyright infringement has also prompted lawsuits against generative AI. A generative AI model is trained to create art or music by feeding it examples of existing works. The original creators of those works have taken issue with this, arguing that their work and styles are being stolen, and they are suing as well.

Deepfakes raise a comparable liability issue.

As these issues play out, policy language in personal lines, umbrella policies, commercial general liability policies, errors and omissions policies, directors and officers policies, media liability policies, and other areas will need to be modified.

"Liability is like a pebble in the pond," Holden explained. "It ripples out, and things that you don't think about come into play."

Operational Impact

Aside from the risks that must be addressed, artificial intelligence has changed, and will continue to change, many aspects of how insurance firms operate, from the first point of contact with the consumer to the way policies are underwritten and claims are processed.

The most noticeable effect will be on paperwork. By streamlining services and automating tasks, AI has the potential to reduce the risk of human error.

Everyday insurance functions such as filling out forms, filing certificates of insurance, checking policies, and other clerical tasks will be shifted to AI tools as soon as practicable.

"I'm sure they're already working on briefs," Holden added. "I'm guessing that if they aren't already, it won't be long before they start writing coverage opinions."

Historically, insurance businesses depended on customer-supplied data, some commercial databases, and minimal human research to aid in policy underwriting.

Underwriters can use artificial intelligence to read through things like Yelp reviews and thousands of public filings and records, as well as scrape social media feeds, to build profiles of applicants that help assess risk.

The next phase would be to remove the human underwriter entirely and use the automatically gathered data to generate a coverage determination and rate almost instantly. That step, however, must be taken with caution to avoid unintended results.
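
As a rough illustration of what such a pipeline could look like, the sketch below folds a few publicly gathered signals into a single risk score and prices from it. The profile fields, weights, thresholds, and base premium are invented for the example, not any carrier's actual rating model, and the high-risk case falls back to a human underwriter as the kind of safeguard described above.

    from dataclasses import dataclass

    # Hypothetical applicant profile built from the kinds of public signals
    # described above (review sentiment, public filings, time in business).
    @dataclass
    class ApplicantProfile:
        review_sentiment: float   # -1.0 (very negative) to 1.0 (very positive)
        public_filings: int       # count of liens, judgments, or violations found
        years_in_business: int

    def risk_score(p: ApplicantProfile) -> float:
        """Combine signals into a 0-1 risk score (weights are illustrative only)."""
        score = 0.5
        score -= 0.2 * p.review_sentiment          # good reputation lowers risk
        score += 0.05 * min(p.public_filings, 6)   # each adverse filing adds risk
        score -= 0.02 * min(p.years_in_business, 10)
        return max(0.0, min(1.0, score))

    def quote(p: ApplicantProfile, base_premium: float = 1200.0):
        """Turn the score into an instant decision and premium."""
        r = risk_score(p)
        if r > 0.8:
            return "refer to human underwriter", None   # cautious fallback
        return "accept", round(base_premium * (1 + r), 2)

    print(quote(ApplicantProfile(review_sentiment=0.6, public_filings=1, years_in_business=8)))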

Insurance businesses may use machine learning and modeling to automate many of the functions that were previously done through labor-intensive, hands-on processes.

Once a claim is filed, generative AI can step in to assess photographs and video of the damage and pull in data from sensors. It can then compare that information against the policy documents, returning coverage determinations and payment offers in a fraction of the time it would take a human to do so.
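
A simplified sketch of that comparison step might look like the following, where the damage estimate is assumed to come from an upstream image model and the policy limit and deductible are invented for the example:

    # Assume an upstream vision model has already estimated repair cost from photos.
    def settle_claim(estimated_damage: float, policy_limit: float = 25000.0,
                     deductible: float = 500.0):
        """Compare an AI damage estimate against the policy and propose a payment."""
        if estimated_damage <= deductible:
            return {"decision": "covered, below deductible", "offer": 0.0}
        payable = min(estimated_damage, policy_limit) - deductible
        return {"decision": "covered", "offer": round(payable, 2)}

    print(settle_claim(estimated_damage=7800.0))  # {'decision': 'covered', 'offer': 7300.0}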

Machine learning can also help detect fraud by studying patterns that might go unnoticed by a human reviewer, flagging questionable claims or behaviors that suggest something isn't quite right.
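
One hedged sketch of that pattern-spotting idea uses scikit-learn's IsolationForest on made-up claim features (claim amount and days since the policy started); a production fraud model would draw on far richer data and keep a human reviewer in the loop.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Made-up claim records: [claim amount in dollars, days between policy start and claim].
    claims = np.array([
        [1200, 300], [900, 410], [1500, 220], [1100, 365], [1300, 280],
        [950, 500], [1250, 340], [14000, 3],   # the last claim is unusually large and early
    ])

    # IsolationForest flags points that are easy to isolate from the rest, i.e. outliers.
    model = IsolationForest(contamination=0.1, random_state=0).fit(claims)
    labels = model.predict(claims)  # -1 marks a suspected anomaly, 1 marks normal

    for claim, label in zip(claims, labels):
        if label == -1:
            print(f"Flag for human review: amount=${claim[0]}, days_since_inception={claim[1]}")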

Many of these tools are already available, while others are being phased in. Many more are undoubtedly on the way.

Job Gains & Losses

With all of that automation, many in the business will be looking over their shoulders, wondering whether their own job is the one at risk.

The most monotonous and repetitive jobs will most likely be lost at the start of the AI revolution, as will many front-line, customer-facing roles that were previously outsourced to call centers.

Chatbots will play an increasingly essential and expanding role as large language models capable of returning complete, polished responses become available. With generative AI able to understand and produce human-sounding voice responses, many phone-based customer support tasks that haven't already been automated almost certainly will be.

But that doesn't mean every insurance job is in jeopardy.

Anyone who has dealt with an automated call center knows the frustration of asking for a human agent because the AI simply isn't up to the task.

And while a drone can capture images of post-disaster damage and a phone's camera can feed video and photos to an insurer's AI to assess the aftermath of a car crash, Holden believes something is still missing when people aren't involved in the process, at least for the time being.

"AI can't adjust claims on its own until it can emulate emotion and empathy," Holden added. "It's still working on its bedside manner."

Bias and Discrimination

When it comes to automating insurance jobs that were previously performed by humans, there is a ghost in the machine: bias and discrimination. There are strict regulations limiting discrimination in insurance, but many AI tools' decision-making processes are hidden inside a black box.

Many researchers have pointed to AI producing systemically racist outcomes in a variety of scenarios.

According to Bob Gaydos, CEO of Pendella Technologies, while AIs can process information considerably faster than humans, that speed is often their undoing.

"You must protect it from its own speed." "Speed is a wonderful thing, but it is also a dangerous thing, and AI makes assumptions at breakneck speed," Gaydos says.

He said that if an AI picks up a hint that pushes it toward a biased coverage judgment, the nature of the technology is to compound that hint and double down, inflating a small signal into an entrenched bias.
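
A stylized simulation of that compounding effect, with invented numbers, shows how a model that treats its own earlier decisions as fresh evidence can widen a small initial gap on every retraining cycle unless a deliberate guardrail is added.

    # Stylized feedback loop: a model starts with a slight skew against one group
    # and then retrains on its own approvals, so the skew feeds on itself.
    approval_rate = {"group_a": 0.50, "group_b": 0.48}  # tiny initial gap (invented)

    for cycle in range(5):
        gap = approval_rate["group_a"] - approval_rate["group_b"]
        # Each retraining cycle treats the previous gap as evidence and widens it.
        approval_rate["group_b"] = max(0.0, approval_rate["group_b"] - gap * 0.5)
        print(f"after cycle {cycle + 1}: gap = "
              f"{approval_rate['group_a'] - approval_rate['group_b']:.3f}")

    # A deliberate guardrail, as the article suggests, would cap or audit the gap
    # on every cycle instead of letting it compound.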

A human may rely on wisdom or experience to recognize that a prejudiced judgment was incorrect, ill-informed, or even illegal, but deliberate steps must be taken to ensure that the AI does not discriminate.

Underwriters will need to be especially aware of the bias and discrimination implications of automated underwriting. If they are not, they will face an onslaught of political oversight and regulation.

Colorado has already proposed legislation to ban AI-driven discrimination in insurance.

"With Colorado, the political door is open." "State regulators are going to say, 'If you're using AI, you're going to have to show us how you're going to use it,'" Gaydos predicted. "But that will open Pandora's Box."

The Future of AI & the Insurance Industry

In the insurance industry, AI is already adept at automating repetitive and predictable operations, but for the time being the human touch is still required. As AI advances, more complicated tasks will be delegated to it, potentially opening up additional opportunities for supervisory roles and efficiencies.

From the customer's perspective, the AI future is a vision of an automated, frictionless experience: getting into a car that displays real-time insurance costs for several routes to work that morning, based on traffic and road conditions. Then, if there is an accident, the claim can be filed instantly with a tap in an app, and the car can drive itself to the repair shop while a replacement makes its way to the driveway, with no time off work required.

As data and feedback models become more available and help drive real-time decisions, a world with more usage-based and real-time pricing becomes increasingly likely.
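
A minimal sketch of how per-trip, usage-based pricing might be computed from live inputs such as mileage, traffic, and weather follows; the per-mile rate and adjustment factors are illustrative, not any insurer's actual rating plan.

    def trip_premium(miles: float, traffic_index: float, rain: bool,
                     base_rate_per_mile: float = 0.06) -> float:
        """Price a single trip from live conditions (illustrative factors only)."""
        factor = 1.0
        factor *= 1.0 + 0.5 * traffic_index   # heavier traffic, higher price (0..1 index)
        if rain:
            factor *= 1.2                      # wet roads carry more risk
        return round(miles * base_rate_per_mile * factor, 2)

    # Compare two routes to work that morning, as in the scenario above.
    print("highway:", trip_premium(miles=18, traffic_index=0.7, rain=False))
    print("back roads:", trip_premium(miles=12, traffic_index=0.2, rain=False))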

Risk management and mitigation will become as important to customer service as underwriting is now, with aerial imagery combined with generative AI analysis giving agents the insight to help their customers prevent problems like roof leaks before they happen.

But for the time being, things are changing quickly, and AI's potential appears to be everywhere.