Charting the Course of AI Governance
As artificial intelligence systems proliferate and grow more powerful, the challenge of governing them becomes increasingly urgent and complex.
Regulatory frameworks are struggling to keep pace with the rapid development of AI, leaving gaps and inconsistencies in the law. The technology’s distributed nature further complicates matters, making it difficult to assign responsibility for decisions made or facilitated by AI. At the same time, novel ethical dilemmas require those in the insurance industry to confront difficult questions about fairness, safety, and trust.
Against this backdrop, policymakers at the state level, as well as at the National Association of Insurance Commissioners (NAIC) and National Conference of Insurance Legislators (NCOIL), are racing to establish AI guardrails without sacrificing innovation.
NAIC and NCOIL Developments
AI regulation remains a top priority for both NAIC and NCOIL. In December 2023, the NAIC issued a model bulletin entitled Use of Artificial Intelligence Systems by Insurers. The bulletin relies on existing laws and regulations, including the NAIC’s own Unfair Trade Practices Model Act and Unfair Claims Settlement Practices Model Act, for enforcing AI standards. Among other things, it provides that insurers should have a governance framework and risk management protocols designed to ensure that AI use does not result in unfair trade or claims settlement practices. As of November 2025, it has been adopted by 24 states.
In May 2025, the NAIC issued a request for information to explore converting the bulletin into a model law, but many industry stakeholders and regulators believe that any such effort is premature and that the NAIC should focus on continued adoption of the bulletin. Discussions are ongoing.
At its summer meeting in July, the NCOIL Financial Services and Multi-Lines Issues Committee released its own draft Model Act Regarding Insurers’ Use of Artificial Intelligence. The model focuses closely on claim denials, providing that AI may not serve as the sole basis for determining whether to adjust or deny a claim. It remains in its early stages and has not been adopted.
State Developments
While about half the states have adopted the NAIC model bulletin, others have taken different approaches to regulating use of AI in insurance.
- California: In June 2022, the California Department of Insurance issued Bulletin 2022-5, which addresses allegations of racial bias and discrimination in the marketing, rating, underwriting, and claims practices of insurance companies and other licensees. It directs those licensees to avoid both conscious and unconscious bias or discrimination that can result from the use of AI, as well as other forms of “Big Data,” in marketing, rating, underwriting, claims processing, or fraud investigation in any insurance transaction affecting California residents, businesses, and policyholders.
- Colorado: In May 2024, Colorado enacted the nation’s first comprehensive law governing private-sector AI. The law imposes a risk-based regulatory framework that primarily focuses on “high-risk artificial intelligence systems,” which it defines as a system that, “when deployed, makes or is a substantial factor in making a consequential decision.” A “consequential decision” is one that has a material legal or similarly significant effect on the provision or denial of insurance to any consumer or on the cost or terms of various services. The law requires both AI developers and deployers to use reasonable care to avoid algorithmic discrimination in high-risk systems and creates a rebuttable presumption that a developer or deployer is using reasonable care if it takes specified compliance actions when deploying the system. It now takes effect in June 2026. The new law supplements existing Colorado anti-discrimination provisions that govern life insurers’ use of algorithms and predictive models relying on external consumer data and information sources.
- New York: In July 2024, the New York Department of Financial Services published Insurance Circular Letter No. 7, which addresses the use of artificial intelligence systems and external consumer data and information sources in insurance underwriting and pricing. The letter requires insurers that operate in New York to establish a risk management framework to manage their use of AI and to ensure that these systems are not used to unfairly discriminate.
- Texas: In June 2025, the state enacted the Texas Responsible Artificial Intelligence Governance Act (TRAIGA). Unlike Colorado’s law, TRAIGA does not provide an AI governance compliance road map. Instead, it prohibits specific actions, including deploying an AI system with the sole intent to infringe, restrict, or otherwise impair an individual’s constitutional rights. While the Texas attorney general holds enforcement authority, insurance entities remain regulated solely by the state Department of Insurance.
- Utah: In March 2024, the state enacted the Utah Artificial Intelligence Policy Act (UAIP), which imposes disclosure requirements on entities using generative AI tools with their customers. The UAIP establishes liability for use of the technology that violates consumer protection laws if not properly disclosed and requires disclosure when an individual interacts with AI in a regulated occupation.
Federal Developments
Comprehensive legislation at the federal level remains largely stalled. The Trump administration has taken a deregulatory approach in favor of accelerating American AI development. The administration’s recently issued action plan, America’s AI Action Plan, directs the White House Office of Management and Budget to “consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding.”
Meanwhile, in July, the U.S. Senate voted 99-1 to strip a controversial 10-year federal moratorium on state AI regulation from the One Big Beautiful Bill Act. The proposal would have barred states, for a 10-year period, from enforcing laws and regulations targeting AI models, AI systems, or automated decision systems entered into interstate commerce. The moratorium’s champion, Sen. Ted Cruz (R-Texas), remains committed to pursuing it in future legislation.
Strategies for Effective AI Governance
Understanding the current patchwork of laws and regulations is the first step. For insurance agents and brokers, the real challenge lies in translating these rules and the gaps between them into effective governance strategies. Below is a breakdown of key considerations.
- Regulatory Compliance: Insurance leaders should stay apprised of evolving AI regulatory developments and ensure their firm’s AI use aligns not only with insurance regulations, data privacy, and other state and local technology laws, but also with broader laws of general applicability. This includes compliance with federal and state civil rights laws, state tort laws, and, increasingly, national security laws.
- Fairness and Improper Bias Testing and Audits: Organizations should establish internal policies and procedures to mitigate the risk of AI rendering problematic decisions, such as an AI model or agent that improperly adjusts claims based on protected personal characteristics or that hallucinates and relies on inaccurate information when processing claims and recommending or making decisions. Third-party AI tools and consulting services are available to test and audit algorithms and systems to mitigate improper bias and help ensure fairness in operations.
- Data Privacy and Security: AI governance should include strong data protection measures. Firms should collect only the data they need, ensure AI systems do not use data in unintended ways, and take appropriate cybersecurity precautions.
- Transparency and Explainability: Firms should understand AI outputs enough to verify their accuracy and be able to adequately explain how a system reached a decision, particularly in claims processing, prior authorization, and the provision or denial of insurance, areas that have received increased attention at the state level and at the NAIC and NCOIL.
- Human Oversight: There should be sufficient human supervision and intervention in any consequential decisions and clear procedures for when human review is needed.
- Vendor and Third-Party Risk: Firms using third-party AI tools should ensure the tools meet their governance standards and satisfy any applicable legal requirements, and that contractual safeguards, including compliance, data handling, and liability provisions, are in place.
- Continuous Monitoring and Auditing: To maintain compliance with applicable laws and regulations and minimize liability, firms should track system performance and watch for harmful errors or drift from intended behavior. As noted above, firms should review or test systems to ensure they do not result in algorithmic discrimination, and they should have a clear plan to investigate and fix problems as they arise. Audit processes should be designed and deployed to test AI governance controls and to report on compliance or any needed corrective action to senior management.
- Training and Education: Firms should design, deploy, and track completion of employee training and education on the responsible use of AI, and should provide targeted training to employees who use AI in making insurance-related decisions.