AIS Policies for Underwriting


This installment explores a structured approach for developing AIS policies specifically for P&C Underwriting, featuring the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF 1.0), the Generative AI Profile (NIST AI 600-1), and the NAIC Model Bulletin on AI Systems.

Consider this concise overview of NIST core functions and the NAIC Model Bulletin guidance:

NIST AI RMF Core Functions

  • Govern: Define oversight roles and responsibilities for AI systems. 
  • Map: Identify potential risks and stakeholder impacts of AI use in Underwriting, claims, and fraud detection.
  • Measure: Develop quantifiable metrics to assess fairness, accuracy, and resilience in AI models.
  • Manage: Implement mechanisms to mitigate identified risks, ensuring compliance with state and federal guidelines.

NAIC Model Bulletin Guidance

  • Transparency: Insurers must disclose the role of AI in decisions affecting consumers.
  • Accountability: Entities using AI systems are responsible for ensuring compliance with applicable laws, including prohibitions on unfair discrimination.
  • Fairness: AI systems should not rely on proxies for prohibited characteristics such as race or gender.
  • Regulatory oversight: Insurers should maintain documentation detailing AI model development and use, as regulators may request this during audits.

Let’s break this down into specific action steps and implementation examples for integrating NAIC Model Bulletin principles and NIST AI RMF guidelines into AIS policies for Underwriting: 

Step 1: Governance & Transparency

NIST Guideline: Govern 

  • AIS Policy Action: Establish an AI governance committee to oversee the underwriting system, ensuring all AI model changes are reviewed for compliance with ethical and legal standards.
  • AIS Policy Implementation Example: The AI governance committee reviews proposed underwriting algorithm updates, including risk assessment criteria changes, to ensure they do not inadvertently introduce bias or violate regulatory standards before approving deployment.
  • Supporting Technology Stack Must: Ensure the AI governance committee can effectively review, monitor, and approve updates to underwriting systems while maintaining ethical, regulatory, and operational standards.
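To make the governance review concrete, here is a minimal sketch of a deployment gate the committee might enforce before an underwriting model update goes live. The sign-off names (`bias_review`, `legal_review`, `actuarial_review`) are illustrative assumptions, not requirements drawn from NIST or the NAIC.

```python
from dataclasses import dataclass, field

# Hypothetical sign-offs a governance committee might require
# before approving an underwriting model update for deployment.
REQUIRED_SIGNOFFS = {"bias_review", "legal_review", "actuarial_review"}

@dataclass
class ModelUpdate:
    model_id: str
    version: str
    signoffs: set = field(default_factory=set)

def approve_for_deployment(update: ModelUpdate):
    """Return (approved, missing sign-offs) for a proposed update."""
    missing = REQUIRED_SIGNOFFS - update.signoffs
    return (not missing, missing)

# An update missing actuarial review is blocked, not deployed.
update = ModelUpdate("uw-risk-score", "2.4", {"bias_review", "legal_review"})
approved, missing = approve_for_deployment(update)
print(approved, missing)
```

The point of the gate is that deployment is impossible without every designated reviewer's explicit approval, which gives the committee an auditable veto over each algorithm change.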

NAIC Principle: Transparency

  • AIS Policy Action: Create a customer-facing explanation of how AI is used in Underwriting, including a plain-language description of the data considered and the rationale behind risk scores.
  • AIS Policy Implementation Example: Embed the explanation into policy documents; make it available through the website and customer service channels.
  • Supporting Technology Stack Must: Deliver transparent, plain-language explanations of AI underwriting decisions across multiple channels, ensuring compliance, improving customer trust, and enhancing engagement.
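A plain-language explanation can be generated from the same factors the model actually used. The sketch below is a hypothetical template, not a prescribed format; the decision text and factor names are invented for illustration.

```python
def plain_language_explanation(decision, factors):
    """Build a customer-facing explanation of an AI-assisted decision.

    decision: short outcome string.
    factors: (factor name, effect) pairs, effect in {"raised", "lowered"}.
    """
    lines = [
        f"Decision: {decision}.",
        "How AI was used: an automated model scored your application using "
        "the factors below; a licensed underwriter remains responsible for "
        "the final decision.",
    ]
    for name, effect in factors:
        lines.append(f"- {name} {effect} your risk score.")
    return "\n".join(lines)

text = plain_language_explanation(
    "Approved at standard rate",
    [("Three years without claims", "lowered"),
     ("Property age over 40 years", "raised")])
print(text)
```

Because the explanation is produced programmatically from model inputs, the same text can be embedded in policy documents and surfaced through the website and call-center tooling without drifting out of sync with the model.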

Step 2: Mapping & Accountability

NIST Guideline: Map 

  • AIS Policy Action: Conduct a risk assessment to identify potential sources of bias and map stakeholders impacted by underwriting decisions, including customers and regulators.
  • AIS Policy Implementation Example: Identify potential biases; adjust the model by introducing additional risk factors to ensure fair outcomes.
  • Supporting Technology Stack Must: Enable insurers to identify potential biases, map impacted stakeholders, and adjust underwriting models with additional risk factors.
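The mapping step above can be sketched as a simple disparity scan over historical decisions. The group labels and the sample data are hypothetical; a real assessment would segment by the stakeholder groups identified in the risk mapping.

```python
from collections import defaultdict

def approval_rates_by_group(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def max_disparity(rates):
    """Largest approval-rate gap between any two mapped groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical historical decisions tagged with a stakeholder group label.
decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]
rates = approval_rates_by_group(decisions)
print(rates, max_disparity(rates))
```

A large gap does not by itself prove unfair discrimination, but it tells the team where to look and which model features to re-examine before adjusting the model.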

NAIC Principle: Accountability

  • AIS Policy Action: Establish a clear accountability framework by assigning responsibility for the AI underwriting system to a designated team, including a Chief Risk Officer (CRO) and a data governance lead.
  • AIS Policy Implementation Example: The team documents how data is collected, processed, and used in underwriting decisions, ensuring all features used in the model comply with federal and state laws.
  • Supporting Technology Stack Must: Integrate tooling that implements the accountability framework, documents data and AI processes, and ensures compliance with federal and state laws.

Step 3: Measuring & Fairness

NIST Guideline: Measure

  • AIS Policy Action: Evaluate model performance with fairness and accuracy metrics, such as measuring premium differences across demographic groups to ensure no systemic bias (fairness) and assessing whether risk scores accurately predict claims outcomes (accuracy).
  • AIS Policy Implementation Example: Conduct quarterly evaluations analyzing premium variations across demographic groups to identify and mitigate systemic bias while comparing predicted risk scores against actual claims data to validate accuracy and refine the model.
  • Supporting Technology Stack Must: Continuously monitor, validate, and refine models for fairness and accuracy, fostering transparency and compliance.
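The two metrics named above can be computed directly. This sketch uses a relative mean-premium gap for fairness and a simple hit rate against claims outcomes for accuracy; both metric definitions and the sample figures are illustrative assumptions, and a production program would use actuarially vetted metrics.

```python
def mean(xs):
    return sum(xs) / len(xs)

def premium_gap(premiums_a, premiums_b):
    """Relative difference in mean premium between two groups (fairness)."""
    ma, mb = mean(premiums_a), mean(premiums_b)
    return abs(ma - mb) / max(ma, mb)

def hit_rate(predicted_high_risk, had_claim):
    """Fraction of cases where the high-risk flag matched the claim
    outcome (accuracy)."""
    matches = sum(p == c for p, c in zip(predicted_high_risk, had_claim))
    return matches / len(had_claim)

# Hypothetical quarterly sample: premiums by demographic group, and
# predicted vs. actual claims outcomes.
gap = premium_gap([1000, 1100, 1050], [1020, 1080, 1060])
acc = hit_rate([True, False, True, False], [True, False, False, False])
print(gap, acc)
```

Run quarterly, trend lines on these two numbers show whether a model refresh narrowed or widened group-level premium differences and whether predictive power held up against actual claims.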

NAIC Principle: Fairness and Ethical Use

  • AIS Policy Action: Implement a fairness check within the underwriting model validation process to ensure decisions do not inadvertently favor or disadvantage specific groups.
  • AIS Policy Implementation Example: Review whether location-based data (e.g., zip codes) correlates with prohibited characteristics and adjust or remove such features if a proxy effect is identified.
  • Supporting Technology Stack Must: Provide the ability to detect, analyze, and mitigate proxy effects in features like zip codes, ensuring fairness and compliance in AI underwriting models.
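A basic version of that proxy screen is a correlation check between a location-derived rating factor and membership in a protected class. The threshold of 0.7 and the sample values are assumptions for illustration; real cutoffs need actuarial and legal input, and correlation is only a first-pass signal, not a full proxy analysis.

```python
from math import sqrt

PROXY_THRESHOLD = 0.7  # assumed cutoff, not a regulatory number

def pearson(xs, ys):
    """Pearson correlation coefficient between two numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def is_potential_proxy(feature_values, protected_indicator):
    """protected_indicator: 1.0/0.0 per record; returns (flagged, |r|)."""
    r = abs(pearson(feature_values, protected_indicator))
    return r >= PROXY_THRESHOLD, r

# Hypothetical territory rating factor vs. protected-class membership.
territory_factor = [1.8, 1.7, 1.9, 0.9, 1.0, 0.8]
protected        = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
flagged, r = is_potential_proxy(territory_factor, protected)
print(flagged, round(r, 3))
```

A flagged feature then goes to the fairness review: the team either removes it, replaces it with a less correlated alternative, or documents a legitimate actuarial justification.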

Step 4: Management & Oversight

NIST Guideline: Manage 

  • AIS Policy Action: Deploy a continuous monitoring system designed for P&C data profiles to track model performance; when anomalies, discrepancies, or biases emerge, implement model retraining protocols.
  • AIS Policy Implementation Example: Automated continuous monitoring tracks underwriting model performance against key metrics, such as claims prediction accuracy and fairness across demographic groups; if anomalies or biases are detected, the system triggers alerts and initiates model retraining protocols.
  • Supporting Technology Stack Must: Continuously monitor and proactively detect anomalies, biases, and model performance issues while automating model retraining protocols to ensure reliable, accurate underwriting outcomes.
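The monitoring-and-retrain loop described above reduces to comparing the latest metric readings against baselines with allowed drift tolerances. The baseline values, tolerance bands, and alert names below are hypothetical placeholders.

```python
# Assumed baseline targets and allowed drift; real values would come
# from the model validation team, not from this sketch.
BASELINES = {"claims_accuracy": 0.80, "fairness_gap": 0.05}
TOLERANCE = {"claims_accuracy": -0.05, "fairness_gap": 0.03}

def monitoring_alerts(latest):
    """Return the list of alerts raised by the latest metric snapshot."""
    alerts = []
    if latest["claims_accuracy"] < BASELINES["claims_accuracy"] + TOLERANCE["claims_accuracy"]:
        alerts.append("accuracy_degraded")
    if latest["fairness_gap"] > BASELINES["fairness_gap"] + TOLERANCE["fairness_gap"]:
        alerts.append("fairness_gap_exceeded")
    return alerts

def should_retrain(latest):
    """Any alert triggers the model retraining protocol."""
    return bool(monitoring_alerts(latest))

# Accuracy has slipped below the tolerance band, so retraining fires.
print(should_retrain({"claims_accuracy": 0.72, "fairness_gap": 0.04}))
```

Wiring this check into a scheduled job gives the "continuous" part: every scoring batch is evaluated, and a breach both pages the team and opens a retraining ticket rather than waiting for a periodic review.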

NAIC Principle: Regulatory Oversight

  • AIS Policy Action: Establish a documentation and audit framework for AI underwriting models, ensuring that all decision-making processes are logged and reviewed regularly; if regulatory concerns or compliance gaps are detected, implement corrective measures promptly.
  • AIS Policy Implementation Example: Conduct quarterly reviews of underwriting model logs to verify compliance with anti-discrimination laws, and immediately retrain the model if discrepancies in approval rates for protected groups are identified.
  • Supporting Technology Stack Must: Support a comprehensive audit and documentation framework for underwriting models, automate compliance reviews, and implement corrective measures to ensure fairness and adherence to anti-discrimination laws.
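At the core of such an audit framework is an append-only decision log that captures, for every underwriting decision, which model version ran, what inputs it saw, and what it decided. The field names and sample record below are hypothetical.

```python
import datetime
import json

def log_decision(log, applicant_id, model_version, inputs, decision):
    """Append one auditable underwriting decision record to the log."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "model_version": model_version,
        "inputs": inputs,       # features considered, for traceability
        "decision": decision,   # outcome plus score, for disparity reviews
    }
    log.append(entry)
    return entry

audit_log = []
log_decision(audit_log, "A-1001", "uw-risk-score-2.4",
             {"territory_factor": 1.2, "prior_claims": 0},
             {"approved": True, "risk_score": 0.31})
print(json.dumps(audit_log[0], indent=2))
```

Because every record names the model version and inputs, a quarterly review can replay decisions, compute approval rates for protected groups from the log alone, and hand regulators exactly the documentation the bulletin anticipates.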

Our next blog installment will cover specific action steps and implementation examples for integrating NAIC Model Bulletin principles and NIST AI RMF guidelines into AIS policies for Claims. The final installment will provide a sample technology stack to support an enterprise implementation of NIST/NAIC policy actions across the P&C insurance value chain.  

Contact us today to learn how OverseeAI can help you deploy AI with confidence.
