Foundational concepts in causal inference and economic evaluation.
Methods, case studies, and worked examples.
Program Impact
Did the program actually cause the change you're seeing?
The comparison you're actually making when you claim something worked.
Why "it improved after we started" isn't proof of impact.
What makes a comparison group actually comparable.
How self-selection into programs undermines evaluation.
Why extreme cases improve without any intervention.
Ruling out other things that changed at the same time.
Why passing a parallel trends test does not guarantee valid causal claims.
Separating pre-existing trends from program effects.
Full worked example evaluating community health worker impact.
Policy evaluation when states choose differently.
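The Program Impact topics above all turn on building a valid counterfactual for the before/after comparison. A minimal difference-in-differences sketch, using made-up illustrative numbers (not figures from any module), shows the core arithmetic:

```python
# Difference-in-differences with illustrative (made-up) numbers.
# Outcome: e.g., clinic visits per 1,000 residents.
treated_before, treated_after = 40.0, 55.0   # program communities
control_before, control_after = 42.0, 50.0   # comparison communities

naive_change = treated_after - treated_before   # includes any secular trend
trend = control_after - control_before          # change we'd expect absent the program
did_estimate = naive_change - trend             # program effect, IF trends are parallel

print(f"Naive before/after change:       {naive_change}")
print(f"Counterfactual trend (controls): {trend}")
print(f"Difference-in-differences:       {did_estimate}")
```

The naive change (15.0) overstates the effect because the comparison communities also improved (8.0); the remainder (7.0) is the program effect only under the parallel trends assumption the modules above interrogate.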
Evidence Credibility
Can you trust this study enough to act on it?
The gap between observing association and claiming cause.
How identical numbers lead to different policy recommendations.
Would this have happened anyway over time?
What else changed at the same time as your program?
Two different problems requiring different solutions.
Did the ruler change, or the thing being measured?
Which designs can answer which causal questions.
How many analyses did they try before finding significance?
What the coefficients actually tell you about confounding.
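One of the credibility questions above, "how many analyses did they try?", has a simple arithmetic core: under independent null tests at a 0.05 threshold, the chance of at least one spuriously significant result grows fast. A quick sketch:

```python
# Probability of at least one false positive across k independent
# analyses of a true-null effect, each tested at alpha = 0.05.
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:>2} analyses -> P(at least one 'significant' result) = {p_any:.2f}")
```

By 20 analyses, a "significant" finding is more likely than not even when nothing is going on, which is why undisclosed analytic flexibility undermines a study's credibility.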
Value for Money
Is this program worth funding over the alternatives?
Effective doesn't mean worth funding.
Putting different health outcomes on a common scale.
Calculating and interpreting ICERs.
Compared to what alternative use of funds?
What does "worth it" actually mean in practice?
Head-to-head cost-effectiveness evaluation.
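The Value for Money topics above revolve around one ratio: the incremental cost-effectiveness ratio (ICER), incremental cost divided by incremental benefit. A minimal sketch with hypothetical costs and QALYs (assumed numbers, not drawn from the modules):

```python
# ICER with hypothetical numbers: new program vs. standard of care.
cost_standard, qaly_standard = 10_000.0, 2.0
cost_new, qaly_new = 18_000.0, 2.5

icer = (cost_new - cost_standard) / (qaly_new - qaly_standard)  # $ per extra QALY
print(f"ICER = ${icer:,.0f} per QALY gained")

# "Worth it" is relative to a willingness-to-pay threshold, e.g. $50,000/QALY:
threshold = 50_000.0
print("Cost-effective at this threshold:", icer <= threshold)
```

Note the comparator matters: the same program evaluated against a cheaper or more effective alternative yields a different ICER, which is the "compared to what?" question above.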
Causal Pitfalls
What's really driving these numbers?
Untangling bidirectional relationships in health data.
Mapping what could explain the pattern you're seeing.
Which assumptions matter most for your conclusions?
Making decisions when the evidence is incomplete.
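For the sensitivity-analysis question above ("which assumptions matter most?"), one common quantitative tool is the E-value of VanderWeele and Ding: the minimum strength of association an unmeasured confounder would need, with both exposure and outcome, to explain away an observed risk ratio. A sketch with illustrative risk ratios:

```python
import math

# E-value: how strong an unmeasured confounder must be (on the risk-ratio
# scale, with both exposure and outcome) to fully explain an observed RR.
def e_value(rr: float) -> float:
    rr = max(rr, 1 / rr)  # use RR >= 1; invert protective effects
    return rr + math.sqrt(rr * (rr - 1))

for rr in (1.5, 2.0, 3.0):
    print(f"Observed RR {rr}: a confounder RR of ~{e_value(rr):.2f} "
          "with both exposure and outcome could explain it away")
```

A large E-value means the finding is robust to all but implausibly strong hidden confounding; a small one flags the confounding assumption as the one that matters most.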