
AI Risks and Coverage Evolution

As businesses begin to use generative AI in a variety of ways, they need to consider new risks.
By Zach Ewell Posted on September 27, 2023

John Farley, managing director of the cyber practice at Gallagher, says several coverages could help protect a company against the risks of using large language model (LLM) applications. “You think about the technology companies that may be starting to build these platforms,” Farley says. “There can be a technology errors and omissions policy that may come into play here to help transfer that risk. But, that said, the cyber insurance market is always evolving, and there is no such thing as a standard cyber policy.”

According to Tod Cohen, a partner at Steptoe & Johnson and an expert on internet law and policy, coverage for LLM and AI chatbot risks will migrate from errors and omissions policies to cyber policies until those risks become specific and well understood enough to warrant their own line of business. “Any time there’s a new technology, the risks first go into the errors and omissions insurance lines,” Cohen says. “That’s where cyber insurance came out of. Originally, it was part of E&O, and then it became its own line because it was divisible across industries. Everyone was adopting technology. There’s no company that doesn’t have a website. Right now you’re seeing a similar new technology that will come out of E&O. It may stay within cyber, but my guess is—and because it’s a different type of technology—it’ll become separated in its own line.”

Another issue is where responsibility lies for collecting the data, the inputs, that users agree to share when using LLMs. And on the other side, the outputs: how are the companies that create chatbots held accountable if the information they produce is false? “The cyber insurance risks are breaches and misuses of data,” Cohen says. “This is much more about both input and output. For input, my data flows into these LLMs. Who has the power to take my data? Where does it go into? Can somebody misuse my data? Can I insure against that? Or is it on the other side, on the outputs? What are the outputs that it produces, and who bears the liability for mistakes or for errors or for damages caused by that?”

Reputational risks can also come from using AI chatbots. Last spring a lawyer representing a man suing an airline made national news when he was caught citing fictitious court cases, fabricated by ChatGPT, in an effort to support his client’s case. The lawyer claimed he didn’t realize ChatGPT had invented the cases, leaving his firm facing accusations of fraud and reputational damage as a judge weighed sanctions against him.

“There’s a term that gets used quite frequently called hallucination,” says Marcus Daley, co-founder of NeuralMetrics, an insurtech that applies AI to underwriting. “It’s when ChatGPT responds back to you as though it’s hallucinating because it’s saying something that sounds convincingly true but, in fact, is not true. We’ve been so conditioned, particularly by Google, where we type in a question and we usually get back relatively accurate information. With ChatGPT or other large language models, it could come back with something not true. In fact, sometimes it can be horrifically wrong.”

To combat the risk of absorbing and then repeating false information from an LLM, Daley suggests double-checking any facts an application such as ChatGPT provides. “The reputational risks are really about common sense,” Cohen says.

