Product Liability Evolves Alongside Technology
Innovations in autonomous mobility—aka “self-driving” cars, trains, or other vehicles—and artificial intelligence have led to new approaches to doing business across disparate sectors, from Waymo taxis to agentic AI hard at work in a brokerage’s back office.
While much has been written about the new risks these technologies introduce for users, equally important for the insurance industry is the liability for manufacturers, sellers, and distributors of those products, says Ashley Moffatt, senior vice president of brokerage primary casualty at Nationwide.
Typically, product liability policies treat covered items as static after being sold; as such, those policies assume discrete defects traceable to a single point of failure in the manufacturing chain, Moffatt says. But “smart devices and autonomous mobility differ from traditional products because they’re no longer just a static product,” she explains. “They rely on software, connectivity, and AI, and those things continuously update and change the product after it’s sold.”
This means that liability risk now continuously evolves over the product’s entire life cycle. New types of failure beyond a single tangible defect with a clear chain of causation emerge; responsibilities for failure blur between hardware makers, software developers, and data providers—and carriers must evolve their approach to product liability to meet this new paradigm, Moffatt believes.
Smart Devices and AI
AI-enabled smart devices, like toys, fitness trackers, and home security systems, are perhaps the most tangible example of this shift. By design, Moffatt points out, AI models keep learning and updating, so a device's operation can change over time as the system develops and ingests more training data.
This creates an intersection with cyber risk that traditional product liability approaches may not consider, Moffatt explains. "Cyber data poisoning, model tampering, or hacked sensors can trigger both bodily injury and data or privacy harm in a single event," she says. "Practically, that means we have to [treat] secure design, monitoring, and patching of the AI as a core product safety obligation and not just IT hygiene." AI systems can also learn in ways their manufacturers cannot fully control or make decisions that lack clear explanations.
This "dynamic mix of software, data, cyber, and shared control risks," as Moffatt describes it, is far more complex to insure, which means brokers should choose the right product liability carrier partners: carriers that monitor industry trends to stay abreast of how the technology is changing and are committed to understanding the technical aspects of the risks, Moffatt says. That does not necessarily demand technical experts, but it does require underwriters who ask the right questions about how connectivity, ongoing updates, and third-party dependencies can shift exposure over a product's life cycle.
Autonomous Complexity
In turn, autonomous mobility illustrates how responsibility for product failure can extend across manufacturers, software developers, and data providers. When a self-driving vehicle crashes but investigations show that the autonomous mobility system functioned exactly as designed, identifying the liable parties becomes more difficult under the traditional approach of looking for a discrete failure, subsequent discrete loss, and clear chain of causation.
As such, Moffatt believes carriers should consider how the autonomous mobility system behaves over time. For her, this means assessing whether the vehicle's intended performance is adequate for all reasonable scenarios, how humans may interact with it, and how and when the system is updated.
This gives rise to new theories of liability that focus on inadequate performance of the system under edge conditions, poor integration of the technology into a company's workflows, or failure to anticipate downstream use of the system. Approaching liability in this way allows for better determination of where fault lies: with the manufacturer for not building for edge conditions, with the software developer for not anticipating downstream usage, and so on.
As with AI, this new approach to product liability requires carriers with enough knowledge to ask the right questions and understand key risk drivers, in this case system training and updates, interactions with humans, and the gray areas between product, technology, and operational exposures. As Moffatt puts it: The “best partners are curious, adaptable, and follow the regulatory and industry trends closely so that their underwriting keeps pace with how autonomy is actually being used.”