

InsurTech 2.0 is consolidating fast — the carriers absorbing distressed startups are inheriting distribution and technology while discarding the loss ratio blindness that caused the collapse. What's emerging is more durable: parametric AI underwriting, embedded insurance via API, and agent-first claims pipelines that handle FNOL through payment without a human adjuster in the loop. The carriers that win this decade will be the ones that treat AI governance as an engineering discipline under the NAIC FACTS framework, not a compliance task to handle after deployment.

Insurance industry
Overview

Insurance is the industry where agentic AI delivers the most immediate, measurable ROI — and where deployment failures carry the heaviest regulatory consequences. The InsurTech 2.0 wave burned billions proving that technology enthusiasm without actuarial discipline produces bad loss ratios. What is left is more interesting: the survivors and the incumbents adopting their technology understand exactly which workflows AI can automate safely and which ones still require a human in the loop.

···

What AI Is Actually Changing

The near-term impact is concentrated in three areas: claims triage, document processing, and fraud detection. Claims triage routes incoming claims to the right adjuster or to straight-through processing based on complexity scoring, coverage type, and fraud risk indicators. Document processing handles the unstructured data problem: policy applications, medical records, repair estimates, and contractor invoices that previously required manual data entry. Fraud detection applies pattern analysis at a scale and speed that human investigators cannot match — and unlike rule-based systems, ML fraud models adapt as fraud patterns evolve.
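A hypothetical sketch of what triage routing looks like in code, with purely illustrative thresholds and field names (the scores are assumed to come from upstream ML models; no carrier's actual rules are shown here):

```python
from dataclasses import dataclass

@dataclass
class Claim:
    coverage_type: str
    estimated_amount: float
    fraud_score: float       # 0.0-1.0, from an upstream fraud model
    complexity_score: float  # 0.0-1.0, from an upstream complexity model

def route_claim(claim: Claim) -> str:
    """Route a claim to straight-through processing (STP),
    a standard adjuster queue, or special investigation (SIU)."""
    if claim.fraud_score >= 0.7:
        return "siu_review"           # fraud indicators trump everything else
    if (claim.complexity_score < 0.3
            and claim.estimated_amount < 5_000
            and claim.coverage_type in {"auto", "property"}):
        return "straight_through"     # low-complexity, low-severity: auto-pay path
    return "adjuster_queue"           # default: human adjudication

# A simple low-value auto claim qualifies for straight-through processing
print(route_claim(Claim("auto", 800.0, fraud_score=0.05, complexity_score=0.1)))
```

The important design property is that every routing rule is explicit and unit-testable, which is also what makes the routing decision auditable later.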

Parametric underwriting is the structural change with the longest tail. Pay-automatically-when-conditions-are-met products eliminate the claims process entirely for qualifying events. The actuarial AI revolution — ML models outperforming traditional actuarial tables on loss prediction — is enabling parametric pricing that was not feasible with manual actuarial approaches. The engineering challenge is the real-time data pipeline, not the model.

Where the Architecture Breaks

The technical problem that most insurance AI projects underestimate is the integration layer between modern inference infrastructure and legacy policy administration systems. AI models run at millisecond scale. Legacy policy administration systems — many built on COBOL mainframes in the 1980s and 1990s — were designed for batch processing. The mismatch between real-time inference and batch-oriented core systems is where most production deployments develop problems.

Common Integration Failure Points
  • Real-time fraud scoring requires features from claims history databases that are batch-updated nightly — stale features produce stale scores
  • Embedded insurance APIs need sub-second response times from policy systems designed for overnight batch processing
  • Agent-first FNOL workflows need bi-directional state management with ClaimCenter or Duck Creek — the APIs exist but the latency assumptions were not designed for agentic loops
  • Parametric trigger pipelines need data freshness guarantees that batch-oriented core systems cannot provide without a real-time facade layer

The solution pattern is consistent: build a real-time data facade in front of the legacy system, replicate the high-velocity features to a low-latency store (Redis, DynamoDB, Snowflake dynamic tables), and let the legacy system remain the system of record for regulatory compliance while the AI layer operates against the replicated data.
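A minimal sketch of that facade pattern, with a plain dict standing in for the low-latency replica (Redis in production) and another for the legacy system of record; all names are illustrative:

```python
import time

class FeatureFacade:
    """Read-through facade in front of a batch-updated system of record.
    A streaming/CDC pipeline calls replicate(); inference calls get()."""

    def __init__(self, system_of_record, max_age_seconds: float):
        self.sor = system_of_record   # legacy batch store, read-only here
        self.cache: dict = {}         # replicated low-latency store
        self.max_age = max_age_seconds

    def replicate(self, key: str, value) -> None:
        """Called by the change-data-capture pipeline on every update."""
        self.cache[key] = (value, time.time())

    def get(self, key: str):
        """Serve the fresh replica; fall back to the system of record
        (and report which source answered) when the replica has aged out."""
        if key in self.cache:
            value, written_at = self.cache[key]
            if time.time() - written_at <= self.max_age:
                return value, "replica"
        return self.sor.get(key), "system_of_record"

sor = {"claims_count_90d:policy-123": 2}   # stale nightly-batch value
facade = FeatureFacade(sor, max_age_seconds=60)
facade.replicate("claims_count_90d:policy-123", 3)  # CDC pushed a newer value
print(facade.get("claims_count_90d:policy-123"))    # fresh replica wins
```

Returning the source alongside the value is deliberate: it lets the decision log record whether a score was computed on fresh or fallback data.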

···

The Regulatory Engineering Problem

The NAIC FACTS framework — fairness, accountability, compliance, transparency, security — reads like a governance checklist but is actually a set of architectural requirements. Transparency means every automated decision must produce a structured audit record that explains the inputs, the model version, and the output reasoning. Accountability means there is an identified human responsible for each AI system in production. Compliance means the model outputs must not disparately impact protected classes under applicable state law.

Requirement — What It Means for Engineering
  • Transparency — every automated decision stores structured justification: inputs, model version, feature values, output reasoning
  • Accountability — model registry with identified human owners, change approval workflows, version pinning in production
  • Fairness — disparate impact testing across protected class proxies, before deployment and on production traffic samples
  • Security — model serving infrastructure isolated from core system write paths, plus adversarial input detection
  • Compliance — adverse action notices generated automatically when coverage is declined or modified
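The fairness requirement above can be made concrete with a simple screen. This sketch uses the four-fifths selection-rate ratio borrowed from employment law as an illustrative threshold; the function name and cutoff are assumptions, not a NAIC-specified test:

```python
def adverse_impact_ratio(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection-rate ratio per group relative to the most-favored group.
    decisions: (group_label, approved) pairs. A ratio below 0.8
    (the illustrative 'four-fifths' screen) flags potential disparate impact."""
    totals: dict[str, int] = {}
    approvals: dict[str, int] = {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Group A approved 80%, group B approved 50% -> B's ratio is 0.625, flagged
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 50 + [("B", False)] * 50)
ratios = adverse_impact_ratio(sample)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, flagged)
```

In production this would run over protected-class proxies before deployment and on sampled production traffic, with results written to the same audit store as decision logs.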

These are not insurmountable requirements — they are design constraints that need to be in the architecture from the start. Retrofitting explainability into a model that has been in production for six months is difficult and expensive. Building it in from the beginning adds modest complexity and pays off immediately at the first market conduct examination.

Building Compliant AI Infrastructure for Insurance

01
Model Registry First

Before any AI system touches underwriting or claims, build the model registry: version tracking, human owner assignment, approval workflows. This is the accountability layer the NAIC requires.

02
Feature Store with Freshness Tracking

Real-time inference requires real-time features. Build a feature store that tracks data freshness and alerts when features exceed acceptable staleness thresholds — especially for fraud detection and parametric triggers.
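A sketch of freshness-aware reads, assuming an in-memory store and per-feature staleness budgets (all names and budgets are illustrative):

```python
import time

class StaleFeatureError(Exception):
    """Raised when a feature exceeds its staleness budget."""

class FreshnessTrackingStore:
    """Feature store wrapper that refuses to serve features older than a
    per-feature staleness budget: seconds for parametric triggers,
    hours for slow-moving policy attributes."""

    def __init__(self):
        self._data: dict[str, tuple[object, float]] = {}
        self._budgets: dict[str, float] = {}   # feature name -> max age (s)

    def define(self, feature: str, max_age_seconds: float) -> None:
        self._budgets[feature] = max_age_seconds

    def write(self, feature: str, key: str, value) -> None:
        self._data[f"{feature}:{key}"] = (value, time.monotonic())

    def read(self, feature: str, key: str):
        value, written = self._data[f"{feature}:{key}"]
        age = time.monotonic() - written
        if age > self._budgets[feature]:
            raise StaleFeatureError(f"{feature} is {age:.0f}s old")
        return value

store = FreshnessTrackingStore()
store.define("wind_speed", 60)                  # parametric trigger: tight budget
store.write("wind_speed", "station-7", 42.0)
print(store.read("wind_speed", "station-7"))
```

Failing loudly on stale reads, rather than silently serving old values, is the design choice that matters: a fraud score or payout trigger computed on stale data should be an alert, not a quiet degradation.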

03
Decision Logging as Infrastructure

Every automated decision writes a structured record to an append-only log: inputs, model ID, output, timestamp. This feeds both regulatory audit requirements and model monitoring.
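A minimal sketch of such a log, with an in-memory list standing in for the append-only store; the hash chaining shown is one optional way to make after-the-fact edits detectable at audit, not something the NAIC prescribes:

```python
import hashlib
import json
from datetime import datetime, timezone

class DecisionLog:
    """Append-only decision log. Each record embeds the previous record's
    hash, so any retroactive edit breaks the chain."""

    def __init__(self):
        self.records: list[dict] = []
        self._prev_hash = "0" * 64

    def append(self, model_id: str, model_version: str,
               inputs: dict, output: str) -> dict:
        record = {
            "model_id": model_id,
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self._prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self._prev_hash = record["hash"]
        self.records.append(record)
        return record

log = DecisionLog()
log.append("triage", "1.4.2", {"fraud_score": 0.05}, "straight_through")
log.append("triage", "1.4.2", {"fraud_score": 0.91}, "siu_review")
```

The same records feed two consumers: the regulatory audit trail and the model monitoring pipeline that watches output distributions for drift.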

04
Human Handoff Protocols

Define the exact conditions that route an AI decision to human review. Document them. Test them. The handoff protocol is where agentic systems most often fail in production.

Domain Challenges
  1. Legacy policy administration systems — many still COBOL-based — have no native API surface. Every integration requires a wrapper layer, and those wrapper layers become the actual long-term maintenance burden when the underlying system can't change.

  2. The NAIC Model Bulletin on AI Systems requires explainable, auditable decisions for underwriting and claims. Black-box gradient boosted models are a regulatory liability that grows with every automated adverse decision you can't explain at examination.

  3. Parametric products need real-time IoT and weather data tied directly to payout triggers. The freshness gap between sensor data update and claims initiation is where disputes originate — and where you lose regulatory trust.

  4. State-by-state rate filings mean a single product change can require 50+ regulatory submissions with different schemas, approval timelines, and data requirements. There's no single abstraction layer that makes this simple.

  5. Catastrophe modeling now feeds real-time reinsurance pricing via climate AI. Latency and data freshness matter at a level that batch actuarial pipelines were never designed to support.

  6. Embedded insurance distribution through fintech and e-commerce APIs requires quote-and-bind latency that traditional carrier systems weren't architected to handle — this is a core system architecture problem, not a UI problem.

Why it’s different with us
  • We build explainability as a first-class output, not a retrofit. Every automated underwriting and claims decision our systems produce includes structured justification output that satisfies NAIC FACTS requirements — fairness, accountability, compliance, transparency, security — at inference time.

  • We've worked directly with Guidewire, Duck Creek, and Verisk data pipelines. We know where the API surfaces exist, where they don't, and what the wrapper strategy looks like when they don't.

  • We design for incremental modernization using strangler-fig patterns over legacy policy administration systems. Carrier core system rip-and-replace projects routinely fail at the five-year mark. We don't propose them.

  • Our parametric pipeline work treats the IoT and weather data ingestion layer as the risk control layer — with the same audit trail requirements you'd apply to an underwriting model, because regulators will treat it that way.

  • We understand the multi-channel distribution reality: captive agents, independent agents, MGAs, and embedded APIs each have different data access controls and integration requirements. We've built for all of them.

Domain Insights
01 InsurTech 2.0 Collapse Is a Signal, Not a Setback

Lemonade, Hippo, Root — the InsurTech darlings of 2019–2022 — failed on capital allocation and underpriced risk, not because AI doesn't belong in insurance. They proved that distribution and UX don't override loss ratios. The incumbents absorbing these businesses are taking the technology and the policyholder base while quietly discarding the 'move fast' posture that ignored actuarial fundamentals. A Guidewire deployment with a well-governed AI layer is more defensible than a greenfield insurtech with better design — and that's the actual lesson the industry has internalized.

02 Parametric Products Change the Engineering Problem Entirely

Traditional claims require an adjuster to assess a loss. Parametric products pay automatically when a defined condition is met — a wind speed threshold, a rainfall measurement, an earthquake magnitude — with no claims process at all. The engineering problem shifts from AI-assisted adjudication to real-time data pipeline integrity: if the IoT or weather feed that triggers payment is stale, incorrect, or manipulated, you pay incorrectly. Building parametric products requires treating the data ingestion layer as the risk control layer, with the same auditability you'd apply to an underwriting model, because disputes won't be about coverage — they'll be about data.
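The trigger logic described above can be sketched as follows, with illustrative thresholds; the key design decision is that freshness is checked before the threshold, so a stale feed can never fire a payout:

```python
from datetime import datetime, timedelta, timezone

def evaluate_wind_trigger(readings, threshold_kph: float,
                          max_staleness: timedelta,
                          now: datetime) -> tuple[bool, str]:
    """Fire a parametric payout only when the triggering reading is both
    above threshold AND fresh enough to be trusted.
    readings: list of (timestamp, wind_kph) tuples."""
    fresh = [(ts, v) for ts, v in readings if now - ts <= max_staleness]
    if not fresh:
        return False, "no_fresh_data"      # never pay (or decline) on stale sensors
    peak = max(v for _, v in fresh)
    if peak >= threshold_kph:
        return True, f"triggered_at_{peak}kph"
    return False, "below_threshold"

now = datetime(2026, 3, 1, 12, 0, tzinfo=timezone.utc)
readings = [(now - timedelta(minutes=2), 128.0),   # fresh, above threshold
            (now - timedelta(hours=6), 190.0)]     # stale, must be ignored
print(evaluate_wind_trigger(readings, 120.0, timedelta(minutes=15), now))
```

Returning a reason string alongside the boolean is the audit-trail hook: the record of why a payout did or did not fire is exactly what a disputed-payout examination will ask for.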

03 Agent-First FNOL Is Where the Adjuster Workforce Transition Begins

AI agents handling first notice of loss — receiving the claim, collecting documentation, running fraud screening, determining coverage, and initiating payment — can process straightforward property and auto claims end-to-end without human intervention. This is in production at several carriers today. The human adjuster role shifts to exception handling, complex coverage disputes, and litigation oversight — but the carriers that aren't designing structured escalation protocols are creating gaps. When an edge case hits an agentic pipeline without a clean human handoff path, the litigation exposure increases, not decreases.

Industry Trends

Agent-first FNOL: end-to-end automated processing for straightforward property and auto claims, with human adjusters handling only exceptions, disputes, and litigation — in production now at multiple carriers

Parametric products expanding from catastrophe coverage into agriculture, travel, and SMB business interruption, all requiring real-time IoT and weather data pipeline infrastructure at the core system level

NAIC AI governance adoption accelerating — majority of states expected to adopt the Model Bulletin by late 2026, making FACTS-compliant AI governance a baseline carrier requirement, not a differentiator

Embedded insurance distribution growing through fintech, e-commerce, and gig economy API partnerships — coverage sold at point-of-need requires real-time quote-and-bind capability that most carrier systems don't have natively

Climate AI feeding catastrophe models: satellite imagery, IoT sensors, and ML-based exposure modeling are replacing annual property surveys, with reinsurance pricing increasingly driven by real-time climate data feeds

InsurTech consolidation continuing through 2026 — incumbents acquiring distressed InsurTech 2.0 survivors for distribution and technology assets, accelerating the pace of AI adoption inside traditional carriers

Common Pitfalls
  1. Deploying gradient boosted underwriting models without structured explainability output — every automated adverse decision without a documented justification trail is a NAIC FACTS liability that compounds at examination. Carriers have already faced regulatory action for this.

  2. Building AI fraud detection on batch-updated feature stores with 24–48 hour refresh cycles — the staleness gap between feature update and inference is exactly where sophisticated fraud exploits the system, and it's invisible until the loss materializes.

  3. Treating embedded distribution as a front-end integration — quote-and-bind API partnerships require changes to core system architecture for latency, data access controls, and contract versioning. Treating it as a UI layer creates technical debt that blocks scale.

  4. Proposing full core system replacement over incremental modernization — large carrier policy administration replacements routinely exceed budget and timeline by multiples and frequently fail at go-live. Strangler-fig modernization is slower to plan but actually ships.

  5. Building digital-first carrier experiences that ignore independent agent workflows — in commercial lines especially, independent agents control the majority of distribution volume, and systems that don't support their integration patterns lose the market segment regardless of how good the consumer UX is.

Regulatory Landscape

U.S. insurance is regulated state-by-state under McCarran-Ferguson, coordinated through NAIC model laws. The NAIC Model Bulletin on AI Systems — adopted by 24+ states as of early 2026 — requires carriers to implement AI governance programs covering the FACTS framework and to document model lineage, maintain audit trails for automated decisions, and provide consumer-facing explanations for adverse actions. NAIC Regulatory Notice 24-09 extends these obligations to generative AI use cases, and the NAIC Insurance Data Security Model Law (#668) requires comprehensive cybersecurity programs — both are now standard scope in state market conduct examinations alongside financial filings.

Our Approach

We build insurance systems where explainability and auditability are engineering requirements from day one, not compliance features layered on after the model is in production. For parametric products, we build the full IoT-to-payout pipeline with data freshness monitoring and audit trails that satisfy what regulators will ask for when a disputed payout goes to examination. We use strangler-fig modernization patterns when integrating with legacy policy administration systems — wrapping, not replacing, until the core can be migrated incrementally. The agent-first claims workflows we build include structured human escalation paths for edge cases, because agentic pipelines without clean handoff protocols increase litigation exposure when an exception hits.

Ready to build for Insurance?

We bring domain expertise, not just engineering hours.

Start a Conversation

Free 30-minute scoping call. No obligation.