Healthcare Operations Intelligence

Know where
to build AI.
Build it smarter.

Amandil's Operational Context Graph maps 1,926 healthcare processes across provider and payer operations, layers on real CMS benchmarks to identify where AI delivers the highest return, and gives your agents the operational knowledge to actually perform.

Operational Context Graph — Live
22 Operational Domains · 194 Subdomains Mapped · 1,926 Processes Scored · 5,000+ Hospitals Benchmarked · 600+ Health Systems Benchmarked
The Problem

Every health system wants to automate.
The benchmarks exist — nobody's mapped them to the processes that matter.

Health systems and payers run thousands of operational processes across revenue cycle, claims adjudication, utilization management, member services, credentialing, and more. AI vendors pitch for the processes they've already built. Advisory engagements start with interviews, not benchmarks. Internal teams automate what's loudest. The result: billions in AI spend potentially aimed at the wrong targets.

22 Domains · Provider & Payer Operations · 194 Subdomains
How It Works

From raw data to
smarter agents

Prioritize where to invest. Then build with the operational context that makes healthcare AI actually work. No interviews, no workshops, no three-month discovery phase.

I
Map Operations
Every healthcare operation is already mapped in the Operational Context Graph — 22 domains, 194 subdomains, 1,926 atomic processes with rules, standards, regulations, metrics, and role assignments. The structure is complete. Your organization maps to it.
→ Complete operational baseline
II
Overlay Benchmarks
For providers: real CMS Medicare data — utilization, quality ratings, financial metrics — mapped to the specific operational processes responsible. For payers: automation scoring and simulation use the graph's full payer operations coverage. Both get process-level root cause analysis.
→ Gap signals by severity
III
Score for Automation
Every process is scored across 6 objective dimensions: labor intensity, rule density, data readiness, orchestration complexity, role concentration, and compliance pressure. No opinions. No vendor surveys. Structure-derived scoring.
→ Ranked automation targets
IV
Simulate & Build
Test changes in a digital twin before committing budget. Then build AI agents grounded in the graph — with the operational context, compliance mapping, standards, and process flows they need to actually perform in production.
→ Smarter agents, defensible ROI
What You Get

This is what an AI
roadmap looks like.

Real CMS data. Real gap signals. Mapped to the operational processes responsible. Here's a sample for an anonymized 12-hospital system.

Midwest Regional Health
12 facilities  ·  OH: 5   IN: 4   MI: 3
7 gap signals across 6 domains · 4 High · 3 Medium
Domain · Metric · System Avg · Natl Avg · Gap · Automation Score
L2.08.02 Concurrent Review · Avg Length of Stay · 6.2 days · 5.0 days · +24% · 78
L2.01.03 Charge Capture · Charge-to-Payment Ratio · 3.8x · 3.1x · +23% · 71
L2.10.01 Quality Measurement · CMS Star Rating · 2.8 · 3.2 · -12% · 54
L2.01.05 Payment Posting · Medicare Pymt per Discharge · $11,240 · $13,890 · -19% · 68
Gap identified. What happens next:
AI Agent Blueprint Full Automation
Concurrent Review Automation Agent
state machine · tool schemas · compliance mapping · HITL spec · role impact matrix · integration checklist
8 Capabilities · 15 Integrations · 6 Metrics · 4-mo Payback
This is a live platform — not a consulting deliverable. Every step runs in software, queryable via API, updated in real-time.
Platform Capabilities

Decide where.
Build smarter.

The platform answers two questions: where should AI go, and how do you make it work? CMS benchmarks and automation scoring identify the targets. The operational graph gives your agents the context, standards, and process logic they need to perform.

Operational Context Graph
Now you can see every healthcare operation mapped, measured, and connected — spanning both provider and payer functions, from revenue cycle and claims adjudication to credentialing and utilization management. Browse 22 domains, drill into 1,926 processes, trace cross-domain dependencies through 9,694 typed relationships.
2,142 nodes · 9,694 edges · 21 relationship types
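As a sketch of what "typed relationships" buy you: with edges labeled by relationship type, tracing a cross-domain dependency chain is a simple graph walk. The node IDs below come from the sample table earlier on this page; the edge types ("feeds", "governed_by") and the specific connections are illustrative assumptions, not the platform's actual 21 relationship types.

```python
from collections import defaultdict

# Illustrative typed edges — (source, relationship_type, target).
# Edge types and links here are assumptions for demonstration only.
edges = [
    ("L2.08.02 Concurrent Review", "feeds", "L2.01.03 Charge Capture"),
    ("L2.01.03 Charge Capture", "feeds", "L2.01.05 Payment Posting"),
    ("L2.08.02 Concurrent Review", "governed_by", "CMS CoPs"),
]

adjacency = defaultdict(list)
for src, rel, dst in edges:
    adjacency[src].append((rel, dst))

def downstream(node, rel_type):
    """Trace all transitive dependencies of one relationship type."""
    out, stack, seen = [], [node], {node}
    while stack:
        for rel, dst in adjacency[stack.pop()]:
            if rel == rel_type and dst not in seen:
                seen.add(dst)
                out.append(dst)
                stack.append(dst)
    return out

print(downstream("L2.08.02 Concurrent Review", "feeds"))
# → ['L2.01.03 Charge Capture', 'L2.01.05 Payment Posting']
```

Filtering the walk by edge type is what separates a context graph from a flat taxonomy: the same nodes answer "what does this process feed?" and "what governs it?" with different traversals.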
CMS Intelligence
5,000+ hospitals and 600+ health systems enriched with CMS Medicare data — benchmarked against national averages at the facility and system level. Not just "your ALOS is high" but "Concurrent Review is the root cause, it scores 78/100 on automation feasibility, and here's the 10-step flow to redesign." Payer benchmarking datasets on the roadmap.
5,000+ hospitals · 600+ health systems · CMS Medicare benchmarks
Automation Scoring
Now you know which processes to automate first — and why. Six objective dimensions (labor intensity, rule density, data readiness, orchestration complexity, role concentration, compliance pressure) produce a composite score. No subjectivity. No vendor influence. Structure-derived ranking.
6 dimensions · 0–100 composite score
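The composite can be pictured as a weighted sum of the six dimensions named above. The weights and dimension values below are hypothetical placeholders for illustration — the platform's actual weighting is not published here.

```python
# Hypothetical integer weights (sum to 100) over the six dimensions
# named above; these are NOT the platform's actual weights.
WEIGHTS = {
    "labor_intensity": 20,
    "rule_density": 20,
    "data_readiness": 20,
    "orchestration_complexity": 15,
    "role_concentration": 15,
    "compliance_pressure": 10,
}

def automation_score(dims: dict) -> int:
    """Weighted composite of 0-100 dimension scores -> 0-100."""
    assert set(dims) == set(WEIGHTS), "all six dimensions required"
    return round(sum(WEIGHTS[k] * dims[k] for k in WEIGHTS) / 100)

# Illustrative dimension values for a concurrent-review process.
concurrent_review = {
    "labor_intensity": 85,
    "rule_density": 90,
    "data_readiness": 70,
    "orchestration_complexity": 60,
    "role_concentration": 80,
    "compliance_pressure": 75,
}
print(automation_score(concurrent_review))  # → 78
```

Because every input is a structural measurement of the process itself, two analysts running the computation get the same ranking — which is the "zero subjectivity" claim in practice.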
Digital Twin Simulation
You can finally test operational changes before committing budget. 19 discrete-event simulation models span provider operations (prior auth, scheduling, denials management) and payer operations (claims adjudication, member enrollment, concurrent review). Monte Carlo simulation projects throughput, cost, and error impact — compare scenarios side by side.
19 models · Monte Carlo · Scenario comparison
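To make the Monte Carlo idea concrete, here is a toy comparison of baseline vs. automated intake cost. Every rate and dollar figure is an invented assumption for illustration — the platform's 19 discrete-event models are far richer than this sketch.

```python
import random

random.seed(7)  # reproducible runs

def mean_daily_cost(n_runs, daily_volume, touch_cost, error_rate, rework_cost):
    """Monte Carlo estimate of mean daily processing cost.

    Each run draws a random error count, then totals handling plus rework.
    All parameters are illustrative assumptions, not platform outputs.
    """
    totals = []
    for _ in range(n_runs):
        errors = sum(random.random() < error_rate for _ in range(daily_volume))
        totals.append(daily_volume * touch_cost + errors * rework_cost)
    return sum(totals) / n_runs

# Hypothetical scenario: 400 prior-auth requests/day.
baseline = mean_daily_cost(2000, 400, touch_cost=14.0, error_rate=0.08, rework_cost=95.0)
automated = mean_daily_cost(2000, 400, touch_cost=3.5, error_rate=0.02, rework_cost=95.0)
print(f"baseline ${baseline:,.0f}/day vs automated ${automated:,.0f}/day")
```

Running thousands of trials like this yields a distribution, not a point estimate — which is what lets scenarios be compared side by side with honest uncertainty bands before any budget is committed.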
AI Agent Blueprints
Go from "we need AI in prior auth" to a complete, graph-grounded agent specification in minutes. The blueprint includes capabilities, integration points, compliance mapping, state machine definitions, tool schemas, and HITL specifications — all derived from the operational knowledge your agent needs to actually work in production.
4-step pipeline · PDF / Markdown / JSON export
Workforce Impact Analysis
Now you know exactly which roles touch which processes, how concentrated their responsibilities are, and what percentage of their work is automatable. 129 canonical roles mapped to SOC codes, with domain coverage heatmaps and automation exposure scoring for workforce transition planning.
129 roles · SOC-mapped · Automation exposure
Real Data, Real Signals

Built on CMS.
Not guesswork.

5,000+
hospitals and 600+ health systems enriched with CMS Medicare data — benchmarked against national averages at the facility and system level.
CMS data.cms.gov · 6 integrated datasets
881K+
enrichment records mapping real performance metrics to the specific operational processes responsible.
Automated gap signal detection · Per-field L2 domain mapping
6,500+
operational metrics with cited industry benchmarks from HFMA, NCQA, MGMA, NAMSS, and Joint Commission.
Curated targets · Not generated estimates
2,493
regulations mapped to the specific operational domains they govern — HIPAA, CMS CoPs, state, NCQA, Joint Commission.
Reverse-indexed · Instant compliance scope
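"Reverse-indexed" means the forward map (regulation → governed domains) is inverted once up front, so "which regulations govern this domain?" becomes a constant-time lookup. A minimal sketch, with a two-regulation sample standing in for the 2,493-regulation index (the mappings shown are illustrative):

```python
from collections import defaultdict

# Forward map: regulation -> domains it governs (illustrative sample).
regulation_to_domains = {
    "HIPAA Privacy Rule": ["L1.04 Member Services", "L1.10 Quality"],
    "CMS CoPs": ["L1.08 Utilization Management", "L1.10 Quality"],
}

# Invert once to build the reverse index: domain -> regulations.
domain_to_regulations = defaultdict(list)
for reg, domains in regulation_to_domains.items():
    for domain in domains:
        domain_to_regulations[domain].append(reg)

print(sorted(domain_to_regulations["L1.10 Quality"]))
# → ['CMS CoPs', 'HIPAA Privacy Rule']
```

The one-time inversion is what makes "instant compliance scope" possible: scoping a domain never requires scanning the full regulation list.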
Who It's For

Two buyers.
One platform.

Health Systems & Payers
"Where should we invest in AI?"
Health systems: see exactly where your facilities underperform vs. CMS national benchmarks, mapped to the processes responsible. Payers: explore your full operational landscape — claims adjudication, prior auth, member services — with automation scoring and simulation modeling across every domain. Know which processes to automate first.
  • CMS benchmarking for provider organizations
  • Full payer operations graph + scoring
  • Automation scoring ranked by ROI
  • Simulation-backed business cases
Consulting Firms
"How do we scale healthcare AI engagements?"
Start engagements with data-driven gap analysis instead of interviews. Build client-facing AI tools grounded in real operational context. White-label the platform as your methodology and scale your healthcare practice without proportionally scaling your domain experts.
  • White-label branding per client
  • Natural-language querying via Claude.ai
  • Engagement-ready gap analysis
  • Branded PDF exports for deliverables
The Knowledge Backbone

The operational
context AI needs.

Healthcare AI agents fail when they lack domain knowledge. The operational graph gives them structured context for any operation — standards, regulations, process flows, metrics, and role assignments — delivered via API or direct AI assistant integration. Whether you're a health system building internal agents or a consulting firm augmenting engagements, the graph is the knowledge layer underneath.

Your team can ask questions in natural language and get operational intelligence back instantly — via direct AI assistant integration. Or embed the graph into any application via API — structured, token-budgeted operational context in one call.

Where should our health system focus automation?

→ get_org_enrichment("Your Health System")
Aggregating facilities across 3 states...

HIGH gap: Average Length of Stay +25% vs national
→ Root cause: L2.08.02 Concurrent Review
→ Automation score: 78/100
→ 4 human roles · 75% automatable rules

→ score_automation_opportunities(domain="L1.08")

Top target: L3.08.01.01 Auth Request Initiation
→ Score: 82/100 · Full automation candidate
Grounded in CMS data + operational graph
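Consuming a response like the one above in application code might look like the sketch below. The response shape and field names are assumptions inferred from the sample transcript, not the documented API schema.

```python
import json

# Hypothetical get_org_enrichment response; field names are inferred
# from the sample above, not taken from a published API schema.
sample_response = json.loads("""
{
  "org": "Your Health System",
  "gap_signals": [
    {"severity": "HIGH", "metric": "Average Length of Stay",
     "gap_pct": 25, "root_cause": "L2.08.02 Concurrent Review",
     "automation_score": 78}
  ]
}
""")

def top_targets(response, min_score=70):
    """Rank gap signals by automation score for roadmap triage."""
    signals = [s for s in response["gap_signals"]
               if s["automation_score"] >= min_score]
    return sorted(signals, key=lambda s: -s["automation_score"])

for s in top_targets(sample_response):
    print(f'{s["severity"]}: {s["root_cause"]} (score {s["automation_score"]})')
```

The same payload that answers a natural-language question also drives downstream tooling — dashboards, blueprint generation, or a CI check that re-ranks targets as fresh CMS data lands.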
Why It Works

Structure over
opinion.

Every signal in the platform is derived from process structure or public data — not subjective assessments, vendor surveys, or consultant intuition. The methodology is reproducible because the inputs are objective.

Graph Depth
Not a Taxonomy. A Context Graph.
Every L3 process includes inputs, outputs, business rules (classified by type: validation, calculation, automation, policy, clinical), standards, regulations, metrics with cited industry sources, role assignments, and handoff sequencing. 2,702 source files. 9,694 typed edges encoding real operational dependencies.
2,702 source files
CMS-to-Process Mapping
Metric → Process → Root Cause
Every CMS field maps to its correct L2 operational domain with per-field overrides for accuracy. "Total Discharges" maps to Concurrent Review, not generic billing. Gap signals include severity classification, directional awareness, and process-level recommendations. This mapping is domain expertise encoded as data.
881K+ enrichment records
Scoring Methodology
6 Dimensions. Zero Subjectivity.
Automation scores are computed from process structure: ratio of automatable rules, count of human roles, standards density, regulatory pressure from CMS gap signals, data signal quality, and orchestration complexity. The weights are transparent. The inputs are verifiable. No black box.
1,926 scored processes
Time to Insight
Days to a Prioritized AI Roadmap. Not Quarters.
Traditional operational discovery takes months of interviews, workshops, and analysis before anyone can recommend where to invest. The graph is already built. CMS data is already mapped. Your organization connects to a complete operational model — gap analysis, automation scores, and simulation-ready scenarios in days, not quarters.
Days not quarters
Who Built This
Experience
25+ Years in Healthcare Technology
Leadership
Former CDO & SVP across Top-10 Health Systems & 140 Hospitals · AWS Healthcare Advisory Council
Scale
Deployed Enterprise AI to 57,000+ Users
See It With Your Data

Where should your
organization deploy AI?

Enter your organization. We'll show you the gaps CMS already sees — and the processes to fix them.

Annual platform licensing · Available via app, API, or AI assistant integration · Live demo with your CMS data