The tempting idea: “Why don’t we just use ChatGPT/Claude?”

If you manage a portfolio of properties or a book of construction projects, you’ve probably had this thought:

“What if we just upload our COIs and contracts into a general AI tool and ask it to flag issues?”

It’s not a silly idea. Modern models can summarize documents, extract fields, identify patterns, and even write explanations that sound convincing.

But insurance compliance in the built world is where AI looks like it should work… and then quietly fails in ways that matter.

Because compliance isn’t about having a smart answer. It’s about having a defensible decision — one that holds up later, inside a real operating process, with real consequences.


1) Compliance isn’t “document understanding.” It’s risk transfer.

A COI is not the same thing as compliance.

In construction and commercial real estate, the point of compliance is not to create neat files. The point is to ensure risk is properly transferred before an incident occurs.

That means the work isn’t finished when an AI says: “Looks compliant.”

The work is finished when:

  • requirements are interpreted correctly,
  • endorsements and policy terms match what’s required,
  • exceptions are handled consistently,
  • gaps are resolved,
  • and the outcome is tracked and enforced across projects or properties.

Prompting can help with parts of that. It can’t reliably own the full chain.


2) “Looks right” is the most dangerous failure mode

General AI tools are optimized to be helpful and fluent. In compliance, fluency is not the goal — correctness is.

Insurance compliance has edge cases everywhere:

  • certificate says one thing, endorsements say another
  • additional insured language is almost right… but not right
  • named insured mismatches across documents
  • limits are correct, but the form or wording doesn’t meet contractual requirements
  • exclusions that change the meaning of coverage

In this domain, the worst outcome isn’t “AI missed something obvious.”
It’s “AI gave you confidence when it shouldn’t have.”

That’s why prompting alone is risky: it can produce plausible conclusions without the guarantees, validation, and evidence chains compliance requires.
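One of the edge cases above ("limits are correct, but the form or wording doesn't meet contractual requirements") can be made concrete. Here is a minimal sketch in Python; the field names, record shape, and the set of accepted additional-insured form numbers are illustrative assumptions, not any real requirement set:

```python
# Illustration: limits alone don't establish compliance.
# Field names and the accepted form list are made up for this example.

def meets_requirement(coi: dict, requirement: dict) -> bool:
    """A record can pass the limit check and still fail on the form."""
    limit_ok = coi.get("each_occurrence_limit", 0) >= requirement["min_each_occurrence"]
    form_ok = coi.get("additional_insured_form") in requirement["accepted_ai_forms"]
    return limit_ok and form_ok

requirement = {
    "min_each_occurrence": 1_000_000,
    "accepted_ai_forms": {"CG 20 10", "CG 20 38"},  # example ISO form numbers
}
coi = {
    "each_occurrence_limit": 2_000_000,     # limit looks fine...
    "additional_insured_form": "CG 20 33",  # ...but the form doesn't match
}

print(meets_requirement(coi, requirement))  # False: limits pass, form check fails
```

A fluent summary would happily report "limits exceed requirements" here; a rule that checks both dimensions does not.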


3) Compliance requires consistent rules, not clever prompts

If your process relies on prompts, your “logic” is often:

  • undocumented,
  • inconsistently applied,
  • hard to test,
  • hard to audit,
  • and hard to improve systematically.

In the real world, compliance teams need:

  • standard requirements templates (with controlled exceptions),
  • consistent interpretation across teams/projects/properties,
  • repeatable decisions,
  • and visibility into why a decision was made.

A prompt can be rewritten, reinterpreted, or used differently by every user.

That works for brainstorming. It’s a fragile foundation for regulated decisioning.
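The difference can be sketched in code. Rules expressed as versioned, testable logic behave the same way for every user and every run; the rule ids and record fields below are illustrative assumptions:

```python
# Sketch: compliance logic as documented, testable rules instead of ad-hoc prompts.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Rule:
    id: str           # stable identifier, citable in reports and audits
    description: str
    check: Callable[[dict], bool]

RULES = [
    Rule("GL-LIMIT-01", "GL each-occurrence limit at least $1M",
         lambda r: r.get("gl_each_occurrence", 0) >= 1_000_000),
    Rule("AI-ENDT-01", "Additional insured endorsement attached",
         lambda r: r.get("ai_endorsement_attached", False)),
]

def evaluate(record: dict) -> list[str]:
    """Return the ids of every failed rule: the same result, every run."""
    return [rule.id for rule in RULES if not rule.check(record)]

print(evaluate({"gl_each_occurrence": 2_000_000}))  # ['AI-ENDT-01']
```

Each rule has a stable id, a documented intent, and a check that can be unit-tested; a prompt has none of those properties by default.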


4) You need evidence-backed decisions, not just answers

A property manager or risk leader doesn’t just need a verdict. They need:

  • what is missing
  • where it appears in the document
  • what requirement it failed
  • what to do next
  • who owns the next step

This is an “auditability” requirement, not a convenience feature.

Prompting tools often struggle here because:

  • they can’t reliably cite the right source sections,
  • they can’t maintain stable references over time as new docs arrive,
  • and they don’t naturally produce a decision trail that’s easy to defend.

Compliance decisions are operational artifacts. They need provenance.
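What such an artifact could look like as a data structure is easy to sketch. Every field name below is an illustrative assumption, not any particular product's schema:

```python
# Sketch of a decision record that carries its own evidence trail.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Finding:
    requirement_id: str  # which requirement failed
    document: str        # which document the evidence came from
    location: str        # where in that document (page/section)
    detail: str          # what is missing or wrong
    next_step: str       # what to do next
    owner: str           # who owns the next step

@dataclass
class Decision:
    record_id: str
    verdict: str                      # e.g. "non_compliant"
    findings: list[Finding] = field(default_factory=list)
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

decision = Decision(
    record_id="vendor-123",
    verdict="non_compliant",
    findings=[Finding(
        requirement_id="AI-ENDT-01",
        document="acord25_2024.pdf",
        location="Description of Operations, page 1",
        detail="Additional insured endorsement not attached",
        next_step="Request an acceptable endorsement from the broker",
        owner="vendor-123 broker contact",
    )],
)
```

A bare chat answer ("not compliant") carries none of this; the structure above is what makes the decision defensible later.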


5) The hardest part is not analysis — it’s third-party behavior

Insurance compliance breaks down because the people you depend on are outside your organization:

  • vendors
  • tenants
  • subcontractors
  • brokers

Even a perfect AI analysis doesn’t matter if third parties:

  • don’t respond,
  • upload the wrong thing,
  • send partial documents,
  • ask the same questions repeatedly,
  • or get stuck in a confusing process.

Prompting doesn’t solve:

  • collection at scale,
  • renewals and expiration management,
  • follow-ups,
  • gap notifications,
  • or reducing back-and-forth.

Compliance is a behavior-and-workflow challenge as much as it is a document challenge.


6) “DIY AI” doesn’t connect to the systems that actually run the asset

Even if a general AI tool could reliably interpret documents, it still lives outside your operational systems.

That’s a big deal because compliance only becomes real when it’s connected to:

  • vendor onboarding
  • payments and approvals
  • work orders
  • project mobilization
  • tenant improvement workflows
  • property management systems
  • construction management systems

If compliance status is sitting in a chat window or a separate folder, it won’t drive action.

Modern real estate and construction operations are moving toward “Automated Assets,” but automation requires integration and enforcement hooks, not just insight.

This is why platforms that integrate into systems like Procore (construction) and MRI (property management) are so important.


7) Prompting doesn’t scale governance, QA, or accountability

At small scale, “DIY” seems workable:

  • one property
  • one project
  • a manageable list of vendors

At enterprise scale, you need governance:

  • roles and permissions,
  • standardized requirements,
  • QA and review processes,
  • exception handling,
  • reporting,
  • trend analysis,
  • and performance management.

The question becomes less:

“Can AI read this?”

and more:

“Can we trust this across thousands of records, with consistent outcomes, month after month?”

That’s where generic prompting collapses, because it isn’t designed to be an accountable operating layer.


An “AI + Workflow + Trust” system

In the built world, AI creates real value when it’s paired with:

  1. Domain logic (rules that reflect real-world requirements and edge cases)
  2. Evidence-backed verification (traceable decisions with audit trails)
  3. Human review where trust matters (human-in-the-loop for high-stakes decisions)
  4. Third-party workflows (collection, renewals, gap resolution that people actually complete)
  5. Deep integrations (so compliance drives actions in the systems of record)
  6. Operational reporting (portfolio-level insight, trendlines, and accountability)

That’s the difference between “AI that can analyze a document” and “AI that can run a compliance operation.”

A simple rule of thumb

If your compliance process ends with:

“The AI says it looks fine.” — you’re exposed.

If your compliance process ends with:

Decision + Evidence + Next Step + Enforcement — you’re operating.


FAQ

Can AI help with insurance compliance at all?
Yes. AI is powerful for accelerating parts of the workflow, like extracting relevant information, suggesting issues, prioritizing records, and reducing cycle time. But it must be paired with workflows, governance, and auditability to be trusted at scale.

Why can’t we just use OCR + AI extraction?
Because extraction is not verification. Compliance often depends on endorsements, wording nuances, exclusions, and cross-document consistency. “Fields” alone don’t capture the full requirement.

What’s the biggest risk of using prompting tools for compliance?
False confidence. AI can produce plausible explanations that are wrong or incomplete, without strong evidence chains or consistent rule enforcement.

What does “embedded compliance” mean?
It means compliance status and actions live inside the systems teams already use, so compliance can trigger real operational steps (approvals, payments, onboarding) instead of becoming another standalone dashboard.

Prompting tools are great at generating answers. Insurance compliance requires operational truth.

In construction and commercial real estate, the winners won’t be the teams who can summarize documents fastest. They’ll be the teams who can make defensible, enforceable compliance decisions — at scale — so risk transfer is real and operations run smoothly.

If you want to see what that looks like in practice, you can explore how Jones integrates compliance into the workflows teams already use: