Why Your Coders Keep Missing MEAT Criteria (And How to Actually Fix It)

OLIVIA HARTMAN
8 Min Read

Your coders know what the MEAT criteria are. They’ve been through the training. They can recite the components: Monitor, Evaluate, Assess, Treat. They understand that documentation needs to show active management of a condition.

So why do they keep missing it?

I’ve reviewed thousands of charts where experienced coders assigned HCCs based on inadequate documentation. These aren’t rookie mistakes. These are seasoned professionals who genuinely believe they’re doing MEAT criteria coding correctly. The problem isn’t knowledge. It’s recognition.

The Problem List Trap

The most common MEAT criteria coding error happens with problem lists. A patient has diabetes listed in their active problems. The provider mentions “continue metformin.” The coder assigns the diabetes HCC.

That doesn’t meet MEAT criteria. That’s a problem list and a medication continuation. There’s no evidence that the provider actually evaluated the diabetes during this encounter. No symptoms discussed. No exam findings documented. No treatment changes made. Just “continue current management.”

This passes coder review because it feels like documentation. The condition is mentioned. Medication is referenced. But during a RADV (Risk Adjustment Data Validation) audit, this falls apart immediately. CMS wants to see evidence that the condition was actively addressed during the encounter, not just acknowledged as existing.

The fix isn’t more training on what the MEAT criteria are. The fix is teaching coders to ask one specific question: “If I removed the problem list from this chart, would I still know the patient has this condition based on the documentation in today’s note?” If the answer is no, there’s no MEAT.
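If your team uses any tooling at all, the test can even be approximated in software. Here’s a deliberately naive sketch in Python; the function name and the condition-term list are hypothetical, and no keyword check replaces a coder actually reading the note.

```python
# A deliberately naive sketch of the "remove the problem list" test.
# Function name and term lists are hypothetical; no keyword check
# replaces a coder actually reading the note.

def mentions_condition_outside_problem_list(note_body: str,
                                            condition_terms: list[str]) -> bool:
    """True if today's note body (problem list already excluded)
    mentions the condition on its own."""
    body = note_body.lower()
    return any(term.lower() in body for term in condition_terms)

# "Continue metformin" is all that remains once the problem list is gone:
note_body = "Continue metformin."
if not mentions_condition_outside_problem_list(note_body,
                                               ["diabetes", "A1C", "hyperglycemia"]):
    print("No standalone evidence of the condition today -- no MEAT.")
```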

The Vague Documentation Problem

Here’s documentation that passes coder review every day: “Diabetes stable. Continue current medications.”

That’s vague. Stable based on what? What metric was evaluated? What symptoms were assessed? This documentation might reflect good clinical care, but it doesn’t meet MEAT criteria for coding purposes.

Compare that to: “A1C today is 7.2, down from 7.8 three months ago. Patient reports good medication compliance and no hypoglycemic episodes. Will continue metformin 1000mg twice daily.”

That’s clear MEAT. We see evaluation (A1C value), assessment (improved control, no complications), and treatment decision (continue current therapy). If these two notes showed up in a RADV audit, only the second one survives.
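One way to see why only the second note survives is to lay it out against the four components. Below is a hypothetical checklist structure, a sketch of the idea rather than a real tool; the rule baked into it, that only specific, same-encounter evidence gets recorded, is exactly what makes “Diabetes stable” come up empty.

```python
from dataclasses import dataclass, field

@dataclass
class MeatChecklist:
    """Hypothetical checklist: one list per MEAT component, holding only
    specific, same-encounter evidence. Vague phrases like 'stable' don't count."""
    monitor: list[str] = field(default_factory=list)   # symptoms, progression
    evaluate: list[str] = field(default_factory=list)  # test results, exam findings
    assess: list[str] = field(default_factory=list)    # clinical judgment on status
    treat: list[str] = field(default_factory=list)     # medication and plan decisions

    def has_meat(self) -> bool:
        return any([self.monitor, self.evaluate, self.assess, self.treat])

# The strong diabetes note above:
strong = MeatChecklist(
    evaluate=["A1C 7.2, down from 7.8 three months ago"],
    assess=["good compliance, no hypoglycemic episodes"],
    treat=["continue metformin 1000mg twice daily"],
)

# "Diabetes stable. Continue current medications." yields no specific entries:
vague = MeatChecklist()

print(strong.has_meat())  # True
print(vague.has_meat())   # False
```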

The problem is that coders often can’t tell the difference quickly. They’re reviewing dozens of charts per day. “Diabetes stable” feels sufficient in the moment. Teaching coders to recognize the difference between vague and specific documentation requires showing them actual examples from your organization’s charts, not generic training materials.

The Condition-Specific Challenge

MEAT criteria look different for different conditions, and coders don’t always recognize this.

For CHF, you need evidence of volume status. Exam findings like edema or lung sounds. Functional status. Medication management. Just “CHF stable” isn’t enough.

For CKD, you need current kidney function markers. A GFR from two years ago doesn’t count. The documentation needs to show evaluation during this encounter.

For cancer, you need evidence of active disease or active treatment. A history of breast cancer that was treated and cured five years ago doesn’t support an HCC. The cancer needs to be currently affecting the patient’s management.

Most MEAT criteria coding training covers these nuances in theory. But coders don’t internalize them until they see specific examples from actual charts they’ve coded. “Here’s the CKD note you coded last week. It referenced kidney disease but only cited GFR values from 18 months ago. That doesn’t meet MEAT. Here’s what would.”
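Writing these expectations down per condition helps coders internalize them. Here’s one way that lookup might be sketched, using only the examples from this article; the names and wording are illustrative, not an official CMS requirements list.

```python
# Hypothetical per-condition expectations, drawn only from the examples
# in this article -- not an official CMS list.
MEAT_EXPECTATIONS: dict[str, list[str]] = {
    "CHF": [
        "volume status from today's exam (edema, lung sounds)",
        "functional status",
        "medication management",
    ],
    "CKD": [
        "current kidney function markers (a GFR from two years ago doesn't count)",
    ],
    "cancer": [
        "evidence of active disease or active treatment",
        "current impact on the patient's management",
    ],
}

def expected_evidence(condition: str) -> list[str]:
    """What a reviewer should look for before assigning the HCC."""
    return MEAT_EXPECTATIONS.get(
        condition, ["condition-specific evaluation documented in today's note"]
    )

print(expected_evidence("CKD"))
```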

The Query Decision Problem

Coders face dozens of borderline MEAT situations daily. The documentation mentions a condition but the MEAT evidence is thin or ambiguous. Do you code it anyway? Do you query the provider? Do you skip it?

Many coders default to coding it. Their reasoning: “the provider documented it, so it must have been addressed.” This is dangerous. When in doubt, they’d rather create audit risk than leave money on the table.

The better default is query. If you can’t clearly identify specific MEAT criteria in the documentation, send a targeted provider query: “Your note references this patient’s COPD. Can you clarify what symptoms, exam findings, or treatment decisions related to COPD were addressed during this encounter?”

But coders resist querying because query processes are often slow and cumbersome. If it takes two weeks to get a query response and you’re trying to hit submission deadlines, you’re tempted to just code based on what’s there.

Fix this by making queries fast and easy. Simple templates: “MEAT criteria clarification needed for [condition]. Please specify evaluation findings or treatment decisions.” Direct communication channel to providers. 48-hour turnaround expectation. When querying is efficient, coders use it appropriately instead of guessing.
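Put together, the default rule and the query template might look something like the sketch below. The three-way split and the exact wording are assumptions drawn from the recommendations above, not a sanctioned workflow.

```python
from enum import Enum

class Action(Enum):
    CODE = "assign the HCC"
    QUERY = "send a provider query"
    SKIP = "do not code"

def triage(meat_evidence: str) -> Action:
    """Default rule from this article: clear MEAT -> code,
    thin or ambiguous -> query, absent -> skip."""
    if meat_evidence == "clear":
        return Action.CODE
    if meat_evidence in ("thin", "ambiguous"):
        return Action.QUERY
    return Action.SKIP

def query_template(condition: str) -> str:
    # Mirrors the template suggested above; the 48-hour turnaround is this
    # article's recommended expectation, not a regulatory requirement.
    return (
        f"MEAT criteria clarification needed for {condition}. "
        "Please specify evaluation findings or treatment decisions "
        "addressed during this encounter. Expected turnaround: 48 hours."
    )

print(triage("thin"))           # Action.QUERY
print(query_template("COPD"))
```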

The Documentation Improvement Angle

Here’s the uncomfortable truth: most MEAT criteria coding problems are actually documentation problems. Your coders aren’t missing MEAT. Your providers aren’t documenting MEAT.

Providers are delivering excellent care. They’re evaluating conditions appropriately. They’re making good clinical decisions. But they’re not writing it down in ways that satisfy risk adjustment requirements.

The long-term fix isn’t better coding. It’s better documentation. And that requires provider education based on actual examples from their own charts.

Not generic lectures about MEAT criteria. Specific feedback: “Dr. Smith, here are three diabetic patients you saw last month. Your documentation mentioned diabetes but didn’t include A1C values, symptom status, or specific treatment rationale. Here’s what audit-defensible diabetes documentation looks like.”

This kind of targeted, example-based feedback changes behavior. Generic training doesn’t.

What Actually Works

Organizations that excel at MEAT criteria coding do several things consistently.

They show coders actual examples from their own charts, not textbook cases. They build clear decision trees: if the documentation shows X, code it; if it shows Y, query; if it shows Z, skip it. They make querying fast and easy so coders use it appropriately. And they close the loop with providers through targeted documentation education based on real chart examples.

MEAT criteria coding isn’t complicated conceptually. Recognition is what’s hard. Fix that by showing coders exactly what good and bad documentation look like in your actual environment, giving them clear rules for borderline cases, and making it easy to get clarification when needed.

Olivia is a versatile content writer with a flair for storytelling and brand voice creation. She specializes in blog articles, web content, and editorial features across lifestyle, tech, and business niches. With a degree in English Literature, she blends creativity with clarity to engage diverse audiences.