This installment explores a structured approach for developing AI systems (AIS) policies specifically for Claims, featuring the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF 1.0), the Generative AI Profile (NIST AI 600-1), and the NAIC Model Bulletin on AI Systems.
Consider this brief overview of the NIST core functions and the NAIC Model Bulletin guidance:
NIST AI RMF Core Functions
- Govern: Define oversight roles and responsibilities for AI systems.
- Map: Identify potential risks and stakeholder impacts of AI use in underwriting, claims, and fraud detection.
- Measure: Develop quantifiable metrics to assess fairness, accuracy, and resilience in AI models.
- Manage: Implement mechanisms to mitigate identified risks, ensuring compliance with state and federal guidelines.
NAIC Model Bulletin Guidance
- Transparency: Insurers must disclose the role of AI in decisions affecting consumers.
- Accountability: Entities using AI systems are responsible for ensuring compliance with applicable laws, including prohibitions on unfair discrimination.
- Fairness: AI systems should not rely on variables that act as proxies for prohibited characteristics like race or gender.
- Regulatory oversight: Insurers should maintain documentation detailing AI model development and use, as regulators may request this during audits.
Let’s break this down into specific action steps and implementation examples for integrating NAIC Model Bulletin principles and NIST AI RMF guidelines into AIS policies for Claims:
Step 1: Governance & Transparency
NIST Guideline: Govern
- AIS Policy Action: Establish a dedicated AI Claims Governance Committee to oversee deployments and model updates across the claims management lifecycle. This committee ensures compliance with ethical, legal, and regulatory standards while mitigating risks related to bias, explainability, and data privacy.
- AIS Policy Implementation Example: The Committee conducts quarterly reviews of all AI models used across the claims process lifecycle, covering: Bias Mitigation: test models for fairness across demographic groups and adjust algorithms as needed. Transparency: ensure explainability by documenting how AI decisions (e.g., automated claim approvals) are made. Compliance: validate that AI deployments meet all applicable legal and regulatory standards. Operational Monitoring: implement dashboards for ongoing oversight of AI performance, flagging anomalies or deviations for immediate review.
- Supporting Technology Stack Must: Ensure that AI models are transparent, compliant, and continuously monitored for risks like bias and explainability.
NAIC Principle: Transparency
- AIS Policy Action: Create a customer-facing explanation of how AI is used in claims management, including a plain-language description of the AI’s role in assessing claims, automating processes, and determining decisions like payouts, fraud detection, or settlement recommendations.
- AIS Policy Implementation Example: Embed explanations into claim acknowledgment communications, include them in website FAQs, and make them available through customer service channels. When AI influences a claim decision, provide the claimant directly with a simplified rationale and the data factors considered.
- Supporting Technology Stack Must: Deliver clear, accessible, and customer-friendly explanations of AI-driven claims decisions across multiple communication channels while ensuring accuracy, transparency, and compliance with regulatory standards.
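To make the transparency action concrete, here is a minimal sketch of how a claims platform might assemble a plain-language rationale for an AI-influenced decision. The decision label, factor names, and factor descriptions are illustrative placeholders, not a real model's output.

```python
# Sketch: generate a plain-language rationale for an AI-influenced claim
# decision. Factor names and descriptions are illustrative placeholders.

def explain_claim_decision(decision: str, factors: dict[str, str]) -> str:
    """Return a customer-friendly explanation of an AI-assisted decision."""
    lines = [
        f"Your claim was {decision} with the assistance of an automated system.",
        "The following information was considered:",
    ]
    for name, role in factors.items():
        lines.append(f"  - {name}: {role}")
    lines.append("You may request a human review of this decision at any time.")
    return "\n".join(lines)

explanation = explain_claim_decision(
    "approved",
    {
        "repair estimate": "compared against typical costs for similar claims",
        "policy coverage": "verified against your active policy terms",
        "claim history": "used to confirm eligibility",
    },
)
print(explanation)
```

The same rationale string can then be reused verbatim across acknowledgment letters, the website FAQ, and customer service scripts, keeping all channels consistent.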
Step 2: Mapping & Accountability
NIST Guideline: Map
- AIS Policy Action: Conduct a risk assessment to identify potential sources of bias in AI-driven claims processing, and map stakeholders impacted by claims decisions, including policyholders, claims adjusters, and regulators.
- AIS Policy Implementation Example: Analyze claims models for potential biases in fraud detection, payout recommendations, and settlement timelines; introduce additional factors or adjust algorithms to ensure equitable and consistent outcomes for all claimants.
- Supporting Technology Stack Must: Identify, monitor, and mitigate bias in AI-driven claims models while providing actionable insights and documentation to ensure fair and transparent outcomes.
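One common first-pass screen for the bias analysis described above compares favorable-outcome rates across groups, flagging any group whose rate falls below a chosen fraction of the best-performing group's rate (the "four-fifths" heuristic). The sketch below uses synthetic data and illustrative group labels; the 0.8 threshold is a widely used rule of thumb, not a regulatory requirement.

```python
# Sketch: flag potential bias by comparing favorable-outcome rates across
# groups (the "four-fifths" heuristic). Data and group labels are synthetic.

def favorable_rates(records):
    """records: list of (group, favorable: bool). Returns rate per group."""
    totals, favs = {}, {}
    for group, favorable in records:
        totals[group] = totals.get(group, 0) + 1
        favs[group] = favs.get(group, 0) + (1 if favorable else 0)
    return {g: favs[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate is below threshold x the best group's rate."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

records = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 55 + [("B", False)] * 45
rates = favorable_rates(records)        # A: 0.80, B: 0.55
flags = disparate_impact_flags(rates)   # B's ratio 0.55/0.80 < 0.8 -> flagged
print(rates, flags)
```

In practice the same check would be run per decision type (fraud flags, payout tiers, settlement timelines), since a model can be equitable on one outcome and skewed on another.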
NAIC Principle: Accountability
- AIS Policy Action: Establish a clear accountability framework by assigning responsibility for the AI claims system to a designated team, including a Chief Claims Officer (CCO), a Chief Risk Officer (CRO), and a data governance lead.
- AIS Policy Implementation Example: Document how claims data is collected, processed, and utilized in AI-driven decisions, such as fraud detection, payout recommendations, and settlement timelines, ensuring all data features comply with federal and state regulations while maintaining fairness and transparency.
- Supporting Technology Stack Must: Enable robust tracking, documentation, and oversight of AI claims systems to ensure compliance, accountability, and transparency across all decision-making processes.
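One way to operationalize this accountability framework is a feature registry that records, for every data element the claims model consumes, its source, its approved use, and the accountable role. The sketch below is illustrative; the feature names, sources, and role assignments are hypothetical examples, not prescribed values.

```python
# Sketch: a minimal registry documenting each data feature used by the
# claims model -- its source, approved purpose, and accountable owner.
# Feature names, sources, and roles are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class FeatureRecord:
    name: str
    source: str
    approved_use: str
    owner: str  # accountable role, e.g., data governance lead

REGISTRY = {
    "repair_estimate": FeatureRecord(
        "repair_estimate", "adjuster intake form",
        "payout recommendation", "Chief Claims Officer"),
    "prior_claims_count": FeatureRecord(
        "prior_claims_count", "policy admin system",
        "fraud detection", "Chief Risk Officer"),
}

def audit_feature(name: str) -> FeatureRecord:
    """Fail loudly if a model consumes an undocumented feature."""
    if name not in REGISTRY:
        raise KeyError(f"Feature '{name}' has no accountability record")
    return REGISTRY[name]

print(audit_feature("repair_estimate").owner)
```

Failing loudly on undocumented features means a model cannot quietly start consuming data that no one has signed off on, which is exactly the gap regulators probe during audits.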
Step 3: Measuring & Fairness
NIST Guideline: Measure
- AIS Policy Action: Evaluate AI claims model performance using fairness and accuracy metrics, such as analyzing payout differences and settlement timelines across demographic groups to identify systemic bias (fairness) and assessing whether predicted outcomes align with actual claims results (accuracy).
- AIS Policy Implementation Example: Conduct quarterly evaluations to analyze payout amounts and processing times across demographic groups to detect and address inequities. Simultaneously, compare predicted fraud scores, claim severity, or settlement recommendations against actual claims outcomes to validate accuracy and refine the claims model for improved fairness and performance.
- Supporting Technology Stack Must: Analyze and monitor AI claims models to ensure fairness across demographic groups and validate accuracy against real-world claims outcomes while enabling continuous performance improvements.
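The two metric families named above, payout parity across groups and predicted-versus-actual calibration, can be computed with very little machinery. The sketch below runs on synthetic claims data; real evaluations would add confidence intervals, segment by claim type, and use the insurer's actual demographic categories.

```python
# Sketch: quarterly fairness/accuracy metrics on synthetic claims data.
# Mean actual payout per group is a fairness signal; mean absolute error
# of predicted vs. actual payouts is an accuracy signal.

from statistics import mean

claims = [
    # (group, predicted_payout, actual_payout)
    ("A", 1000, 1100), ("A", 2000, 1900), ("A", 1500, 1500),
    ("B", 1000, 1000), ("B", 2000, 2100), ("B", 1500, 1400),
]

def mean_payout_by_group(rows):
    groups = {}
    for g, _pred, actual in rows:
        groups.setdefault(g, []).append(actual)
    return {g: mean(v) for g, v in groups.items()}

def mean_abs_error(rows):
    return mean(abs(pred - actual) for _g, pred, actual in rows)

print(mean_payout_by_group(claims))  # payout gap across groups
print(mean_abs_error(claims))        # model calibration
```

Tracking both together matters: a model can be well calibrated overall while still producing a payout gap, and closing a gap by degrading accuracy is not a fix.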
NAIC Principle: Fairness and Ethical Use
- AIS Policy Action: Implement a fairness check within the claims model validation process to ensure decisions do not inadvertently favor or disadvantage specific groups.
- AIS Policy Implementation Example: Review whether location-based data (e.g., zip codes) or other features correlate with prohibited characteristics, such as race or income level, and adjust or remove such features if a proxy effect is identified to ensure equitable claims outcomes.
- Supporting Technology Stack Must: Detect, monitor, and mitigate proxy effects in AI claims models, ensuring decisions remain fair, transparent, and compliant with ethical standards.
Step 4: Management & Oversight
NIST Guideline: Manage
- AIS Policy Action: Deploy a continuous monitoring system designed for claims data profiles to track AI model performance; when anomalies, discrepancies, or biases emerge, implement model retraining protocols.
- AIS Policy Implementation Example: Automated continuous monitoring tracks claims model performance against key metrics, such as fraud detection accuracy, settlement time, and fairness across demographic groups. If anomalies, such as inconsistent payouts or biased decisions, are detected, the system triggers alerts and initiates model retraining protocols to ensure accuracy and fairness.
- Supporting Technology Stack Must: Enable automated tracking, detection, and resolution of anomalies or biases in AI claims models, ensuring continuous performance, fairness, and regulatory compliance.
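The monitoring-and-retraining loop described above reduces to a small core: compare each tracked metric against its floor, and escalate on any breach. The metric names, thresholds, and alert wording below are illustrative placeholders; in production the alert would page the governance team and enqueue the retraining pipeline.

```python
# Sketch: a minimal continuous-monitoring check. Metric names, thresholds,
# and the retraining hook are illustrative placeholders.

ALERT_THRESHOLDS = {
    "fraud_detection_accuracy": 0.90,  # alert if accuracy drops below
    "payout_disparity_ratio": 0.80,    # alert if group ratio drops below
}

def check_metrics(metrics: dict) -> list[str]:
    """Return the names of metrics that breached their threshold floor."""
    return [name for name, floor in ALERT_THRESHOLDS.items()
            if metrics.get(name, 0.0) < floor]

def monitor_cycle(metrics: dict) -> str:
    breaches = check_metrics(metrics)
    if breaches:
        # Production version: notify governance team, enqueue retraining.
        return f"ALERT: {', '.join(breaches)} -> initiate retraining protocol"
    return "OK: all metrics within thresholds"

print(monitor_cycle({"fraud_detection_accuracy": 0.93,
                     "payout_disparity_ratio": 0.75}))
```

Keeping fairness ratios in the same threshold table as accuracy metrics ensures that a bias regression triggers the same retraining protocol as a performance regression, rather than being tracked on a slower, separate cadence.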
NAIC Principle: Regulatory Oversight
- AIS Policy Action: Establish a documentation and audit framework for AI claims models, ensuring that all decision-making processes, such as fraud detection, settlement recommendations, and payout determinations, are logged and reviewed regularly; if regulatory concerns or compliance gaps are detected, implement corrective measures promptly.
- AIS Policy Implementation Example: Conduct quarterly reviews of claims model logs to verify compliance with anti-discrimination laws and regulatory requirements. If discrepancies, such as unequal settlement amounts or processing times for protected groups, are identified, retrain the model and adjust processes immediately to ensure compliance and fairness.
- Supporting Technology Stack Must: Provide robust documentation, monitoring, and audit capabilities to ensure AI claims models meet regulatory requirements, maintain transparency, and enable prompt corrective action when compliance gaps are detected.
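For the audit framework above, one useful property is tamper evidence: if each log entry's hash incorporates the previous entry's hash, any after-the-fact edit breaks the chain and is detectable on review. The sketch below illustrates the idea with hypothetical record fields; a production system would also persist entries durably and restrict write access.

```python
# Sketch: an append-only, hash-chained audit log for AI claim decisions,
# so reviewers can verify records were not altered after the fact.
# Record fields are illustrative.

import hashlib
import json

class DecisionLog:
    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"record": record, "hash": digest})

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            payload = json.dumps(e["record"], sort_keys=True)
            if hashlib.sha256((prev + payload).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.append({"claim_id": "C-001", "model": "fraud-v2", "decision": "flagged"})
log.append({"claim_id": "C-002", "model": "payout-v1", "decision": "approved"})
print(log.verify())  # True while untampered
```

A quarterly review then starts by calling `verify()` before analyzing the entries, so findings about settlement disparities rest on records known to be intact.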
Our final blog installment will provide an overview of the supporting technologies designed for enterprise implementation of NIST/NAIC AIS policy actions across Underwriting and Claims. Contact us today to learn how OverseeAI can help you deploy AI with confidence.