
Ill-Fitting AI Coverage

Artificial intelligence-related risks can fall across multiple lines of coverage. The challenge is sifting through policies to determine what’s covered, excluded, and silent.
By Russ Banham Posted on October 31, 2025

The wide-ranging business benefits of AI come with increased financial, operational, legal, and reputational risks. Algorithmic bias and discrimination, intellectual property infringement, more sophisticated cyberattacks, lack of transparency in agentic AI decisions and actions, and sweeping ethical and liability issues are among the primary risks.

Artificial intelligence (AI) has several features that may lead to or enable significant liability, including inherent flaws such as hallucinations, as well as algorithmic bias and discrimination, intellectual property infringement, lack of transparency in decision-making, and more.

AI losses can show up in multiple lines of coverage, requiring brokers and employers to work closely together to ensure the employer’s insurance policies provide proactive coverage for related risks. At the same time, insurers should determine whether they face exposure to “silent AI”—ambiguities in older policies that may leave an insurer unexpectedly on the hook for an AI-related loss.

Insurance risk management professionals play a key role in helping clients develop governance and compliance frameworks that minimize liability from AI-related risks.

The potentially massive threats were put on full display last year in news reports out of Hong Kong. A finance employee in the local branch of a U.K.-based engineering firm received an email from the company’s chief financial officer requesting a confidential transfer of money.

The employee initially suspected it was just another phishing attempt. But after attending a video conference call with the CFO and other executives, his suspicions eased. Since everyone attending the online meeting looked and sounded exactly like his colleagues, the employee remitted the money in 15 transfers totaling approximately $25 million. Only later, when checking with U.K. headquarters, did he learn that the CFO and the other executives had never actually been on the call. Rather, the participants were AI-generated deepfakes—synthetic video, images, or audio that appear to be real but have been manipulated.

The success of the social engineering attack was an eye-opener. Reports suggest the hackers most likely used publicly available video and audio recordings of the executives to create the deepfakes and convince the employee that the money transfer request was legitimate.

“The conventional wisdom in phishing attacks is to target employees who may not be as sophisticated as other employees,” says Sean Scranton, a consultant on insurance broker WTW’s cyber risk solutions team for financial and executive risk. “As the Hong Kong case illustrates, the ability to effectively impersonate a senior officer using AI-generated deepfake technology takes this to a new level of urgency.”

To protect against deepfake frauds, companies should require more than a single email and video conference to authorize a significant financial transfer, says John Romano, a principal at Baker Tilly and leader of the firm’s internal audit and enterprise risk management service line.

“The employee ended up trusting a conference call with what looked like real people but were actually deepfakes. I’m not sure what the multifactor verification protocols were at the firm, but had the employee followed up the conference with phone calls to the people in the meeting, the scam would have been evident,” Romano explains.
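For illustration only, here is a minimal Python sketch of the kind of out-of-band approval gate Romano describes; the threshold, field names, and verification channels are hypothetical assumptions, not any firm’s actual controls.

```python
# Hypothetical sketch: an out-of-band approval gate for large transfers.
# Threshold, field names, and channel labels are illustrative assumptions.
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 100_000  # transfers at or above this need extra verification


@dataclass
class TransferRequest:
    requester_email: str
    amount: float
    # Channels on which the request has been independently confirmed,
    # e.g. {"email", "video", "callback", "second_approver"}
    channels_verified: set = field(default_factory=set)


def may_execute(request: TransferRequest) -> bool:
    """Release a large transfer only after verification on independent channels."""
    if request.amount < APPROVAL_THRESHOLD:
        return True
    # Email and video can both be spoofed or deepfaked, so require a callback
    # to a phone number held on file plus sign-off from a second approver.
    return {"callback", "second_approver"} <= request.channels_verified


# An emailed request backed only by a video call is refused until a callback
# and a second approval are recorded.
request = TransferRequest("cfo@example.com", 2_500_000, {"email", "video"})
print(may_execute(request))  # False
```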

Coverage Clarity

The insurance markets are moving toward covering the costs of phishing of this type, says John Farley, managing director of the cyber liability practice at insurance broker Gallagher. “We’re seeing many cyber insurers add affirmations [to insurance policies] that there is coverage for incidents like the one that happened in Hong Kong. That definitely was not your father’s phishing attack.”

Insurance policies absorbing the diverse loss exposures associated with AI include general liability, product liability, product recall, errors and omissions (E&O) liability, bodily injury coverage, media liability, cybersecurity, and copyright, trademark, and patent infringement, among others.

“Insurers try to fit all the liabilities into narrow, well-defined buckets, but AI does not fit into narrow, well-defined buckets; some policies offer affirmative AI coverage or endorsements while others exclude specific AI perils or are silent about it, which results in ambiguity,” says Kevin Kalinich, who heads insurance broker Aon’s global intangible assets team, which identifies and manages clients’ AI and other emerging technology exposures. “We have had actual AI cases that have impacted eight different insurance policies.”

A challenge for brokers is sifting through each policy’s coverages, exclusions, and endorsements to determine whether it meets the client’s needs. Take deepfakes. In a 2025 report on AI risk management, Aon said deepfake-related losses are explicitly excluded under general liability, media liability, product liability, and directors and officers (D&O) liability policies unless custom contingent liability coverage is added. Coverage is generally available under cybersecurity, technology E&O, and miscellaneous professional liability insurance.

Other traditional insurance policies may or may not cover different risks. Standard policies like D&O and E&O may exclude claims for algorithmic bias (repeatable errors in AI systems that create inequitable outcomes for certain groups); inaccurate or misleading outputs known as “hallucinations”; and AI decisions that cannot be explained or justified because of opaque “black box” errors. (See sidebar: How Bad Is Algorithmic Bias in AI?) A much-cited example of a black box error involves a self-driving car that makes an incomprehensible and dangerous mistake after misinterpreting its surroundings. In a crash involving an Uber test vehicle, the system’s sensors struggled to classify a pedestrian walking a bicycle as an object requiring the vehicle to stop.

Other coverage challenges include “AI drift,” the performance decay that occurs over time in an AI model’s predictions, and “shadow AI,” the use of unauthorized or unvetted AI tools, models, and applications by employees. In one example of drift, online real estate firm Zillow used an AI model that generated automated valuations to set the cash offer price for each home. The model, however, was trained on historical data from the prepandemic real estate market, which operated under relatively stable conditions; when pandemic-era volatility upended those conditions, the model’s valuations drifted from actual prices, and the company overpaid for homes before shutting down its home-buying business. In a case involving shadow AI, Samsung employees began using the AI assistant ChatGPT for work without formal approval, and the data they submitted leaked proprietary Samsung information to a third-party server. It is unknown whether the companies were insured for the resulting damages; in many cases, existing insurance policies specifically exclude such losses.
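As a rough illustration of how drift monitoring can work in practice, the sketch below compares a model’s recent predictions against a training-era baseline using a population stability index on synthetic data; it is a generic example under stated assumptions, not a description of Zillow’s actual system.

```python
# Generic sketch of drift monitoring with a population stability index (PSI);
# the data is synthetic and the 0.25 alert level is a common rule of thumb.
import numpy as np


def population_stability_index(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Measure how far the distribution of recent predictions has moved from the baseline."""
    cuts = np.quantile(baseline, np.linspace(0, 1, bins + 1))[1:-1]  # interior bin edges
    b_frac = np.bincount(np.digitize(baseline, cuts), minlength=bins) / len(baseline)
    r_frac = np.bincount(np.digitize(recent, cuts), minlength=bins) / len(recent)
    b_frac, r_frac = np.clip(b_frac, 1e-6, None), np.clip(r_frac, 1e-6, None)
    return float(np.sum((r_frac - b_frac) * np.log(r_frac / b_frac)))


rng = np.random.default_rng(0)
baseline = rng.normal(300_000, 50_000, 5_000)  # valuations when the model was trained
recent = rng.normal(360_000, 90_000, 5_000)    # valuations after market conditions shifted

psi = population_stability_index(baseline, recent)
print(f"PSI = {psi:.2f} -> {'drift alert: route to human review' if psi > 0.25 else 'stable'}")
```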

Insurance considerations for online healthcare providers are equally daunting for brokers. “If the AI platform provides medical advice that harms a patient, the provider can be exposed to a medical malpractice claim—if the policy in fact is written to cover that type of loss,” Gallagher’s Farley says. “At this point, we’re not seeing insurers tack on broad exclusions [for medical AI platforms], but we need to keep a close eye on it.”

Another major concern for insurance brokers is “silent AI”—hidden risks that may lurk in an insurance policy but aren’t explicitly covered or excluded. Older insurance policies that weren’t written with AI technologies in mind may have unforeseen ambiguities that expose insurers to unexpected financial losses if a system fails. “Carriers could be covering a number of claims they didn’t foresee,” says Farley.

Risk managers collaborate closely with their insurance brokers to determine whether insurance policies specifically cover, exclude, or are silent on different AI risks, says Lynn Haley Pilarski, chair of the Public Policy Committee at the Risk and Insurance Management Society (RIMS). “The brokers are a pathway to new, up-and-coming insurance policies related to silent AI,” she says, pointing to managing general agent (MGA) Armilla’s recent announcement of a specialized liability policy providing affirmative coverage for AI risks that other policies leave silent.

Fortunately, the softness of today’s insurance market gives brokers greater opportunity to find alternative coverage when AI-related exclusions surface. “I always say [to clients], if you’ve seen one insurance policy, you’ve seen one insurance policy. In today’s buyer-friendly market, we can negotiate expansions in coverage to address most AI risks,” Farley says.

AI Governance Framework

Like other interviewees, Romano does not fault insurance companies for their hesitance in providing comprehensive coverage for many AI risks. “When an AI system causes harm, it is often unclear who is at fault—the AI developer, deployer, or the provider of the data the system was trained on. Insurers are mostly trying to be safe. Some are trying to understand the context, others are updating their policy language to exclude certain aspects of AI use, and some have decided not to assume any AI risks,” he says.

In the meantime, brokers are advising clients to assemble a comprehensive AI governance framework that prioritizes transparency, accountability, and “human-in-the-loop” oversight. “By their very nature, many AI platforms lack transparency, resulting in an inability to interpret what went into the models,” Farley says, citing the potential for algorithmic bias and discrimination. “When you have a self-learning platform that can train itself on potentially discriminatory data sets, it amplifies the [risk of] discrimination. That’s a big problem if you don’t catch it, hindering accountability and complicating our efforts to figure out what went wrong to ensure compliance with state AI standards.”

To solve such puzzles, Pilarski says risk management professionals must collaborate with their organization’s IT and cybersecurity teams in developing a proactive global AI governance and compliance framework. “To identify and prioritize AI risks and vulnerabilities, a culture of compliance—who has ownership of specific controls, makes employees aware of potential threats, creates and tests response plans, and the specific training this requires—is critical to ensure a coordinated response,” she says.

Timothy Shields, a partner at Kelley Kronenberg who leads the law firm’s data privacy and technology business unit, offers a similar perspective. He believes companies should build governance and compliance procedures into their AI systems before implementing them to avoid costly retrofits. To ensure accurate and unbiased results, the training data needs to be cleansed and the system regularly audited for discriminatory outputs.
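As a simple illustration of the kind of audit Shields describes, here is a minimal Python sketch that computes a disparate impact ratio from a hypothetical decision log; the data, column names, and threshold are illustrative assumptions, not taken from any actual deployment.

```python
# Illustrative fairness audit on a hypothetical decision log; the "four-fifths"
# threshold of 0.8 comes from U.S. employment-discrimination guidance.
import pandas as pd


def disparate_impact_ratio(log: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of favorable-outcome rates between the least- and most-favored groups."""
    rates = log.groupby(group_col)[outcome_col].mean()
    return float(rates.min() / rates.max())


# Hypothetical log of automated decisions: 1 = approved, 0 = denied
log = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

ratio = disparate_impact_ratio(log, "group", "approved")
if ratio < 0.8:
    print(f"Disparate impact ratio {ratio:.2f}: flag the model for human review")
```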

“Even though we don’t know how the regulations ultimately will play out, organizations nonetheless need to establish a robust governance framework, where people are given roles and responsibilities overseeing the development and deployment of AI systems and the continuous monitoring of these platforms, making sure the outcomes are aligned with ethical standards,” Farley says. “At the end of the day, that AI system is an agent of your organization making decisions on your behalf. If things go wrong, you’re likely to be stuck in litigation and the subject of a regulatory investigation.”
