Workflow — HCC Coding Review

HCC chart review at scale — without rebuilding your coding department.

Progress notes, hospital discharge summaries, lab results, and problem lists → conditions documented at the level of specificity required for HCC capture, MEAT-criteria validation, documentation-gap identification, and a provider-feedback report. HCC codes are submitted to CMS RAPS / EDS, with a coder query raised when documentation is insufficient. Replaces certified risk-adjustment coder (CRC) labor and offshore retrospective review at GeBBS, EXL, and Episource at a fraction of the per-chart cost.

$1.50–$6.00 · per chart at GeBBS, EXL, Episource (offshore retrospective)
MEAT · Monitored / Evaluated / Assessed / Treated criteria
60–85% · routine HCC review off the CRC desk after AI cutover
What This Replaces

The Offshore CRC Center Reviewing One Chart at a Time

The work the certified risk-adjustment coder does on every chart — and the cost of leaving it there.

The labor

HCC coding review today moves through certified risk-adjustment coders (CRC credential) at provider organizations and Medicare Advantage plans, plus offshore retrospective-review BPOs at GeBBS Healthcare Solutions, EXL Healthcare, Episource, Cognizant, Optum, AGS Health, and Access Healthcare. Per-chart cost runs $1.50–$6.00 at offshore retrospective-review centers. A mid-size MA plan running retrospective HCC review on hundreds of thousands of charts per year routinely spends seven figures on coding alone.

The cycle time

A standard HCC retrospective review cycle runs months from chart receipt to RAPS / EDS submission, and longer when documentation gaps require a coder query and provider re-documentation. CMS submission deadlines (with the RAPS phase-out and EDS-only operations) compress the cycle, and with 25%+ of MA-plan margin tied to HCC capture, under-review of the chart population is real money left on the table.

The Workflow

Input · Analysis · Output

What goes into HCC review, what we do to it, and what shows up in the EHR / risk-adjustment system.

Input

Clinical chart + member context

  • Progress notes from EHR
  • Hospital discharge summaries
  • Lab results and imaging studies
  • Problem lists and diagnosis catalog
  • Medication lists and care plans
  • Prior-year HCC submissions for the member
  • CMS HCC model and coefficient updates
Analysis

Identify, validate, gap-flag

  • HCC condition identification at MA risk-adjustment level
  • MEAT criteria validation (Monitored / Evaluated / Assessed / Treated)
  • Documentation-gap identification with provider-feedback evidence
  • Specificity check (uncomplicated DM vs DM with complications, etc.)
  • V28 model alignment for current submission year
  • CMS RADV-defensibility scoring
  • Confidence score per finding; exceptions to CRC queue
Output

HCC submission into the system of record

  • CMS RAPS / EDS submission file
  • Inovalon platform update (REST APIs)
  • Cotiviti platform update (REST APIs)
  • Optum HCC platform update (documented integration)
  • Provider-feedback report with documentation-gap detail
  • Coder query for insufficient documentation
  • Per-chart audit trail with MEAT-citation basis
Side by Side

HCC Coding Review Today vs. With Last Rev

The numbers that matter: cycle time, per-chart cost, accuracy, and CMS RADV defensibility.

Dimension · Offshore Retrospective Review · Last Rev HCC Coding

  • Cycle time, chart receipt to RAPS / EDS submission: weeks to months offshore · days per chart batch
  • Per-chart unit cost: $1.50–$6.00 per chart · benchmarked at 25–45% of the BPO unit cost
  • Chart-population coverage: bounded by BPO economics, risk-tier-prioritized sampling · 100% chart-population review at AI cost
  • MEAT criteria consistency: variable, coder judgment with drift across rotations · per-chart MEAT validation with source citations
  • Documentation-gap detection: spotty, depends on coder thoroughness · systematic gap detection with provider-feedback report
  • CMS RADV defensibility: coder notes, no per-condition lineage · source EHR encounter + MEAT citation + V28 model basis per HCC
  • EHR / RA-platform integration: manual data extraction, batch upload to Inovalon / Cotiviti · direct via documented EHR / Inovalon / Cotiviti / Optum APIs
How It Works

From Clinical Chart to RAPS / EDS Submission

Five steps. Every one logged. Every one reversible if your confidence threshold isn't met.

Submission Lands
Progress notes, hospital discharge summaries, lab results, imaging studies, problem lists, medication lists, and care plans from Epic, Cerner / Oracle Health, or athenahealth — paired with prior-year HCC submissions, CMS HCC model coefficient updates, and the V28 model phase-in schedule.
Extraction & Classification
HCC condition identification at MA risk-adjustment level. MEAT criteria validation (Monitored / Evaluated / Assessed / Treated). Documentation-gap identification with provider-feedback evidence. Specificity check (uncomplicated vs complicated). V28 model alignment for current submission year.
Validation Against HCC Bar
Findings validated against CMS HCC model rules and the plan's risk-adjustment playbook. Anything below your confidence threshold per HCC is routed to the certified-risk-adjustment-coder (CRC) review queue — final HCC submission remains with the CRC under standard CMS-required oversight.
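The routing logic in this step is simple to state precisely. Below is a minimal sketch of per-HCC confidence routing; `route_findings` and its dict fields are illustrative names, not our production schema, and the threshold is whatever value your plan configures.

```python
def route_findings(findings, threshold=0.90):
    """Split per-HCC findings into auto-proceed vs the CRC review queue.

    threshold is the plan-configured per-HCC confidence bar; anything
    below it routes to the certified risk-adjustment coder, who retains
    the final submission call.
    """
    auto, crc_queue = [], []
    for finding in findings:
        bucket = auto if finding["confidence"] >= threshold else crc_queue
        bucket.append(finding)
    return auto, crc_queue
```

Raising the threshold shifts volume toward the CRC queue; lowering it shifts volume toward auto-proceed. That trade-off is the single knob the risk-adjustment team tunes during shadow mode.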
Push to System of Record
CMS RAPS / EDS submission file assembled. Inovalon, Cotiviti, or Optum HCC platform updated via the documented integration. Provider-feedback report with documentation-gap detail generated. Coder query routed for insufficient documentation.
Audit Log Persisted
Every HCC condition identification, MEAT-criteria citation, and specificity finding logged with the source EHR encounter, model version, prompt, and confidence score. CMS RADV-audit-ready and yours.
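A sketch of what one such audit entry might look like as a JSON line, with the fields named in the step above; field names here are illustrative, not a committed log format.

```python
import json
from datetime import datetime, timezone

def audit_record(chart_id, hcc_code, meat_citations, model_version,
                 prompt_id, confidence):
    """One per-finding audit entry, serialized as a JSON line."""
    return json.dumps({
        "chart_id": chart_id,
        "hcc_code": hcc_code,
        "meat_citations": meat_citations,  # MEAT element -> encounter/note ID
        "model_version": model_version,    # "V24" or "V28"
        "prompt_id": prompt_id,
        "confidence": confidence,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }, sort_keys=True)
```

Append-only JSON lines keep every finding independently retrievable when a RADV sample request names specific members and HCCs.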
Compliance & Defensibility

Built to Meet the Quality Bar Risk Adjustment Already Runs On

CMS HCC model conformance
CMS HCC model V24 / V28 phase-in (with V28 fully effective for PY2027) tracked. Per-condition coefficient updates applied for the submission year. CMS RAPS / EDS submission-format requirements respected per submission cycle.
MEAT criteria fidelity
Monitored / Evaluated / Assessed / Treated criteria applied per HCC condition. Each MEAT element traces to a specific encounter / note in the EHR. CMS RADV audit defensibility rests on this evidence chain — and the audit log produces it on demand.
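The per-element tracing described above can be expressed as a small check: an evidence map from MEAT element to encounter/note ID, with gaps surfacing as the provider-feedback items. A minimal sketch, with illustrative names and the common convention that at least one MEAT element must be documented:

```python
MEAT_ELEMENTS = ("monitored", "evaluated", "assessed", "treated")

def meat_gaps(evidence):
    """Return the MEAT elements with no supporting encounter citation.

    evidence maps each MEAT element to an encounter/note ID (or None).
    """
    return [e for e in MEAT_ELEMENTS if not evidence.get(e)]

def meat_supported(evidence):
    """A condition stands when at least one MEAT element traces to a
    specific encounter or note in the EHR."""
    return len(meat_gaps(evidence)) < len(MEAT_ELEMENTS)
```

The gap list doubles as the provider-feedback payload: each missing element is a concrete re-documentation request rather than a generic "insufficient documentation" note.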
CMS RADV-audit defensibility
When CMS RADV audits validate HCC submissions post-payment, the audit log produces what was identified, which MEAT criteria supported each capture, and on what basis: a cleaner chain of custody than reconstructing offshore retrospective-review work after the audit letter arrives.
PHI / HIPAA / HITRUST posture
Clinical chart data contains PHI under HIPAA. Deployable in your VPC or our SOC 2 / HITRUST / HIPAA-aware environment. Encryption in transit and at rest; retention policies tied to your CMS and HIPAA recordkeeping requirements.
Common Questions

What MA Plans, ACA Plans & Provider Organizations Ask About HCC Coding

How is this different from Inovalon, Cotiviti, Optum, or other risk-adjustment platforms?
Those are the risk-adjustment platforms where HCC submissions and chart-review status live. The competitor on this page is the certified-risk-adjustment-coder (CRC) labor and the offshore retrospective-review BPO line on your operating budget — typically GeBBS Healthcare Solutions, EXL Healthcare, Episource, Cognizant, Optum, AGS Health, and Access Healthcare charging $1.50–$6.00 per chart. We undercut that labor cost, integrate directly into your existing risk-adjustment platform / EHR, and deliver HCC submissions with MEAT-citation basis into the system of record.
We have an offshore retrospective-review BPO running today. How does this work alongside that?
Most MA plans, ACA plans, and provider organizations keep the BPO arrangement in place during pilot and early production — we route exceptions, complex multi-condition cases, and any chart that genuinely requires senior-CRC judgment to the team you already have. Volume to the BPO drops 60–85% on routine retrospective review once cutover completes. CRC time shifts to higher-leverage work like prospective review, complex documentation gap remediation, or RADV audit response.
What's your accuracy bar versus a senior CRC?
Our pilot success threshold is HCC-identification, MEAT-validation, and specificity accuracy at parity with or above your incumbent CRC process, measured on the same shadow-data sample of historical charts. Anything below your defined confidence threshold per HCC is routed to the CRC review queue — your call which queue, ours or yours.
How do you handle CMS V24 vs V28 model phase-in?
CMS HCC model V24 (legacy) and V28 (current, with phase-in completing for PY2027) are both supported. Per-submission-year model selection is configured during onboarding. The audit log records which model version applied to each chart at the time of submission, so RADV defensibility questions resolve cleanly.
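The per-submission-year selection amounts to a configured lookup. A sketch of the shape, where the blend weights are placeholders entered at onboarding, not CMS figures:

```python
# Payment-year -> model blend, configured during onboarding.
# The weights below are placeholders, not CMS-published figures.
MODEL_BLEND = {
    2025: {"V24": 0.5, "V28": 0.5},
    2026: {"V24": 0.25, "V28": 0.75},
    2027: {"V28": 1.0},
}

def models_for(payment_year):
    """Resolve which HCC model version(s) apply for a submission year."""
    try:
        return MODEL_BLEND[payment_year]
    except KeyError:
        raise ValueError(f"no model blend configured for PY{payment_year}")
```

An unconfigured year fails loudly rather than silently defaulting, which is the behavior you want in a submission pipeline.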
How do you handle MEAT criteria and documentation-gap identification?
MEAT criteria (Monitored / Evaluated / Assessed / Treated) are applied per condition with each element traced to a specific encounter / note in the EHR. Documentation gaps surface as provider-feedback items with the missing element cited so providers can re-document. We don't make the senior CRC call on judgment-laden capture decisions; we surface the basis.
Can you actually integrate with Epic, Cerner, athenahealth, Inovalon, Cotiviti, Optum, and CMS RAPS / EDS?
Yes — through the documented integration surface each platform supports. Epic via App Orchard / FHIR APIs; Cerner / Oracle Health via FHIR APIs; athenahealth via REST APIs; Inovalon, Cotiviti, and Optum HCC platforms via documented integration patterns; CMS RAPS / EDS via the standard submission feed. Your IT, clinical, and risk-adjustment teams review and approve service accounts. We do not require platform-side custom development.
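For the EHR side, the FHIR reads are standard R4 searches. A minimal sketch of building a `DocumentReference` query for a member's clinical notes; the helper name is hypothetical, the base URL is an example, and auth headers (issued by your EHR's API program) are handled by the calling client:

```python
from urllib.parse import urlencode

def document_search_url(fhir_base, patient_id, note_category="clinical-note"):
    """Build a FHIR R4 DocumentReference search URL.

    fhir_base is whatever endpoint your EHR's API program issues
    (an Epic or Oracle Health FHIR base, for example).
    """
    query = urlencode({
        "patient": patient_id,
        "category": note_category,
        "_count": "50",
    })
    return f"{fhir_base.rstrip('/')}/DocumentReference?{query}"
```

Because the search surface is the FHIR standard rather than a vendor extension, the same construction works across the EHRs listed above, which is what keeps platform-side custom development off the table.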
How long until a pilot is running on a live chart-review pipeline?
HCC pilots typically run 6–8 weeks: 1–2 weeks of integration and per-plan / per-region risk-adjustment-rule mapping with the risk-adjustment team, 4 weeks of shadow-mode running on real charts with no CMS-side submissions, 1–2 weeks of supervised cutover on a constrained scope (one plan, one region, one product line). Production rollout is staged after the pilot meets your accuracy and risk-adjustment-management sign-off.
What does pricing look like compared to our current per-chart BPO rate?
We benchmark against your current per-chart cost — typically $1.50–$6.00 at offshore retrospective-review BPOs. Our target is 25–45% of that per-chart cost at higher accuracy and faster cycle time. Pricing structures around volume tiers and outcome SLAs (RADV defensibility), not hourly billable rates.
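The benchmark arithmetic is straightforward. A sketch, using an assumed $3.00/chart BPO rate and an assumed chart volume purely for illustration:

```python
def target_unit_cost(bpo_per_chart, low=0.25, high=0.45):
    """Target per-chart range at 25-45% of the incumbent BPO rate."""
    return round(bpo_per_chart * low, 2), round(bpo_per_chart * high, 2)

def annual_delta(charts_per_year, bpo_per_chart, ai_per_chart):
    """Gross per-year difference at full chart-population volume."""
    return charts_per_year * (bpo_per_chart - ai_per_chart)
```

At $3.00/chart the target range is $0.75–$1.35, and the delta compounds if 100% chart-population coverage replaces risk-tier-prioritized sampling.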

Two Ways to Start

Take the AI assessment for a structured read on HCC coding feasibility. Or talk to us if you already know retrospective HCC review is the largest line on your risk-adjustment operations budget.

Other Workflows

More Healthcare Admin Workflows We Replace

The same approach, applied to the other document-heavy labor lines on your healthcare-admin budget.