The Data
A health department has $500,000 to spend on diabetes prevention. Two evidence-based programs are available. Each has published effectiveness data and known costs. (Data are simulated for illustration.)
Total QALYs Gained by Program ($500K Budget)

| Option | QALYs Gained with $500K |
| --- | --- |
| Program A: Lifestyle Coaching | 30 |
| Program B: Group Classes | 50 |
| Do Nothing | 0 |
Both programs look good compared to "nothing."
But what happens when we can only afford one program, and choosing one means not choosing the other?
Wrong Comparator: Comparing to Nothing
Clinical trials often compare a treatment to placebo or no treatment. This tells us whether the intervention works. But it doesn't tell us whether it's the best use of limited resources.
Comparing Program A to Nothing
$500,000 ÷ 30 QALYs ≈ $16,667 per QALY
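As a quick check of that arithmetic, here is a minimal Python sketch using the simulated figures from this lab (the variable names are illustrative, not part of the lab's materials):

```python
# Cost per QALY of Program A compared with doing nothing,
# using the simulated figures from this lab.
budget = 500_000          # dollars available
qalys_program_a = 30      # QALYs gained by Program A with the full budget

cost_per_qaly_vs_nothing = budget / qalys_program_a
print(f"Program A vs. nothing: ${cost_per_qaly_vs_nothing:,.0f} per QALY")
# -> Program A vs. nothing: $16,667 per QALY
```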
If we have $500,000, we can fund Program A or Program B, but not both.
What happens when we compare Program A to the alternative we're actually giving up?
Right Comparator: The Next Best Alternative
Economists compare interventions to the next best use of those resources. If you're deciding between Program A and Program B with the same budget, Program B is the relevant comparator for Program A.
Comparing Program A to Program B
Same budget, so the comparison is direct: Program A delivers 30 QALYs, Program B delivers 50. Program A is dominated: it costs the same and produces less health.
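The same comparison can be written out explicitly. This is a minimal sketch under the simulated values above; the dominance check simply flags an option that costs at least as much and delivers no more health, and is strictly worse on at least one dimension:

```python
# Both programs spend the same $500,000 budget; compare health gained.
programs = {
    "Program A: Lifestyle Coaching": {"cost": 500_000, "qalys": 30},
    "Program B: Group Classes":      {"cost": 500_000, "qalys": 50},
}

a = programs["Program A: Lifestyle Coaching"]
b = programs["Program B: Group Classes"]

# A is dominated by B if B costs no more, yields at least as many QALYs,
# and is strictly better on cost or QALYs.
a_dominated = (
    b["cost"] <= a["cost"]
    and b["qalys"] >= a["qalys"]
    and (b["cost"] < a["cost"] or b["qalys"] > a["qalys"])
)
print(f"Program A dominated by Program B: {a_dominated}")  # -> True
```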
The same program, two different conclusions: cost-effective against "nothing," dominated against the real alternative.
What are the broader implications of comparator choice for resource allocation decisions?
Implications for Resource Allocation
The comparator choice isn't just a technical detail. It reflects fundamentally different questions about what we're trying to learn and what decisions we're trying to inform.
When "Nothing" Misleads
- Fixed budget decisions: When you must choose between mutually exclusive options
- Displacement effects: When funding one program means defunding another
- Opportunity cost matters: When resources have competing uses
- Priority setting: When ranking multiple interventions for investment
When "Nothing" Works
- New money: When funding is truly incremental, not reallocated
- Proof of concept: When asking "Does this work at all?"
- No alternatives exist: When there's genuinely no other option
- Clinical efficacy: When testing biological effect, not resource allocation
Opportunity Cost
Opportunity cost is the value of what you give up when you make a choice. In health economics, it's not the dollars spent but the health benefits foregone.
When we spend $500,000 on Program A, we don't just lose $500,000. We give up the 50 QALYs we could have gained from Program B. Since Program A delivers 30 QALYs, the true cost of choosing A over B is the 20 QALYs forgone (50 − 30).
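In code, the opportunity cost is a difference in health gained, not dollars spent. A sketch using the same simulated values:

```python
# Opportunity cost of choosing Program A over Program B, in health terms.
qalys_program_a = 30   # gained if the $500,000 funds Program A
qalys_program_b = 50   # gained if the same $500,000 funds Program B instead

opportunity_cost_qalys = qalys_program_b - qalys_program_a
print(f"Choosing A over B forgoes {opportunity_cost_qalys} QALYs")  # -> 20
```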
Visualizing Opportunity Cost
Economists think in terms of tradeoffs, not just effects.
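One way to see the tradeoff is a bar chart of QALYs gained under each choice, with the forgone 20 QALYs marked explicitly. This is a minimal matplotlib sketch using the simulated values from this lab, not a figure from the original materials:

```python
import matplotlib.pyplot as plt

# QALYs gained with the same $500,000 budget (simulated lab values).
choices = ["Do Nothing", "Program A", "Program B"]
qalys = [0, 30, 50]

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(choices, qalys, color=["grey", "steelblue", "seagreen"])

# Mark the opportunity cost of choosing A over B: the 20 QALYs forgone.
ax.annotate(
    "20 QALYs forgone\nby choosing A over B",
    xy=(1, 30), xytext=(1, 44),
    ha="center", arrowprops={"arrowstyle": "->"},
)

ax.set_ylabel("Total QALYs gained ($500K budget)")
ax.set_title("Opportunity cost: what each choice gives up")
plt.tight_layout()
plt.show()
```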
What does this mean for how we should evaluate and compare health programs?
Key Insight
The questions below help you recognize when comparator choice matters. They won't make the decision for you, but they will reveal whether an analysis is answering the right question.
Concepts Demonstrated in This Lab
Questions to Ask
- What am I actually comparing to?
- Is this comparison relevant to my decision?
- What would happen to these resources if I don't fund this program?
- Am I measuring "Does it work?" or "Is it the best use of money?"
Red Flags
- "Cost-effective compared to no treatment" when alternatives exist
- Fixed budgets analyzed as if money is unlimited
- Ignoring displacement of other programs
- Treating an ICER (incremental cost-effectiveness ratio) calculated against "nothing" as sufficient for resource decisions
Key Takeaway
The comparator determines what cost-effectiveness means. Comparing to "nothing" answers "Does it work?" Comparing to the next best alternative answers "Should we fund it?" These are different questions with different answers. When budgets are fixed and choices are mutually exclusive, the right comparator is the opportunity you're giving up, not a hypothetical world where you spend nothing. This is what economists mean by opportunity cost.