Two Ways to Think About Confounding
There are two ways to approach confounding assessment, and understanding both reveals why systematic assessment matters. (This tool is for educational purposes.)
The Adjustment Approach
Identify variables that might affect both exposure and outcome based on subject-matter knowledge.
Include confounders in regression models or use stratification to control for their effects.
Evaluate whether the model fits the data well and whether coefficients change when adding covariates.
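The logic of the adjustment approach can be sketched in a short simulation (a hypothetical data-generating process, not any study's actual model): omitting a confounder biases the coefficient on the exposure, and adding the confounder as a covariate moves the estimate back toward the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: U confounds the X -> Y relationship.
u = rng.normal(size=n)                       # confounder
x = 0.8 * u + rng.normal(size=n)             # exposure influenced by U
y = 0.5 * x + 1.0 * u + rng.normal(size=n)   # outcome; true effect of X is 0.5

def ols(y, *cols):
    """Least-squares coefficients for y on an intercept plus the given columns."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols(y, x)[1]        # omits U: biased upward
adjusted = ols(y, x, u)[1]  # includes U: close to the true 0.5
print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

Watching the exposure coefficient move when a covariate enters is exactly the "do coefficients change when adding covariates" check described above.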
Beyond Adjustment
No matter how many confounders you measure, unmeasured factors may remain. Economists ask what those factors would need to look like to matter.
How strong would an unmeasured confounder need to be? How prevalent? If the answer is "implausibly strong," findings are more credible.
Rather than hoping confounding is small, find sources of variation in treatment that are independent of confounders.
What Is Sensitivity Analysis for Unmeasured Confounding?
Sensitivity analysis asks: "How would my conclusions change if there were an unmeasured confounder?" Rather than assuming your adjustment is complete, you explore scenarios where it isn't.
The goal is not to prove confounding is absent. You cannot prove a negative. The goal is to characterize how large and prevalent an unmeasured confounder would need to be to explain away your findings.
- If a tiny unmeasured confounder could flip your conclusion, findings are fragile
- If only an implausibly large confounder would matter, findings are more robust
- This doesn't make observational findings causal, but it calibrates confidence
The key question economists add:
What would unmeasured confounding need to look like to matter? The checklist in the next panel provides a structured way to answer this.
The Confounding Assessment Checklist
This six-part checklist structures your assessment of confounding threats. Work through each section systematically. The goal: characterize both measured and unmeasured confounding, then evaluate what unmeasured confounding would need to look like to change your conclusions.
1 Identify the Treatment and Outcome
What exactly is the exposure or intervention? Be specific about timing, intensity, and measurement.
What is being measured as the result? Over what time horizon? How is it coded?
What is the magnitude of the association? Is it a relative risk, odds ratio, regression coefficient?
2 Map the Selection Process
Was treatment randomized? Self-selected? Assigned by policy? Decided by a provider? Understanding the assignment mechanism is crucial.
List every factor you can think of that affected who got treatment and who didn't.
Any factor that affects both treatment and outcome is a potential confounder.
3 Inventory Measured Confounders
These are variables in your dataset that affect both treatment and outcome.
Are these confounders measured well? Error in confounders leads to residual confounding even after adjustment.
Are you controlling for variables caused by the treatment (mediators or colliders)? Doing so can introduce bias rather than remove it.
4 Identify Unmeasured Confounders
List variables that likely affect both treatment and outcome but aren't in your data.
Using external literature or subject-matter knowledge (these variables are, by definition, not in your data), estimate how strongly each unmeasured confounder is associated with the outcome.
Likewise, estimate how different the prevalence of each confounder is likely to be between treated and untreated groups.
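These two estimates can be combined into a worst-case bound using the published VanderWeele–Ding bounding factor, which gives the maximum amount by which a confounder of a given strength and imbalance could shift an observed risk ratio. A minimal sketch (the input values are illustrative, not from any study):

```python
def bounding_factor(rr_ud: float, rr_eu: float) -> float:
    """Maximum factor by which an unmeasured confounder could shift an
    observed risk ratio, given rr_ud (confounder-outcome risk ratio) and
    rr_eu (ratio of confounder prevalence, treated vs. untreated)."""
    return rr_ud * rr_eu / (rr_ud + rr_eu - 1.0)

# A confounder that doubles outcome risk (rr_ud = 2) and is twice as
# common among the treated (rr_eu = 2) can shift the observed RR by
# at most a factor of 4/3.
b = bounding_factor(2.0, 2.0)
```

Note how weak the bound is unless both inputs are large: that asymmetry is why "implausibly strong" is the relevant standard.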
5 Conduct Sensitivity Analysis
Use established methods (like the E-value or Rosenbaum bounds) to quantify required confounder strength.
Is the required unmeasured confounder stronger than any you've already controlled for?
Would such a strong, prevalent, unmeasured confounder be plausible given what you know about the domain?
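The E-value mentioned above has a simple closed form (VanderWeele & Ding): the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both treatment and outcome to fully explain away an observed risk ratio. A minimal sketch:

```python
import math

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio (point estimate)."""
    rr = max(rr, 1.0 / rr)  # for protective effects (RR < 1), invert first
    return rr + math.sqrt(rr * (rr - 1.0))

# An observed RR of 2.5 would require a confounder associated with both
# treatment and outcome at roughly RR 4.4 to be fully explained away.
e = e_value(2.5)
```

This is the quantity you then compare against the measured confounders in step 3.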
6 Consider Alternative Designs
Can you find a policy change, natural experiment, or instrument that assigns treatment independently of confounders?
Are there "almost-treated" groups that share confounders but didn't receive treatment?
Imagine someone who doesn't believe your finding. What evidence would change their mind?
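Under stated assumptions, the design-based alternative can be sketched as a simple instrumental-variables simulation: a policy indicator Z shifts treatment but is independent of the unmeasured confounder U, so the Wald ratio recovers the true effect while naive OLS does not (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical setup: Z (e.g., a policy change) shifts treatment X
# but is independent of the unmeasured confounder U.
z = rng.binomial(1, 0.5, size=n).astype(float)
u = rng.normal(size=n)
x = 1.0 * z + 0.8 * u + rng.normal(size=n)
y = 0.5 * x + 1.0 * u + rng.normal(size=n)   # true effect of X is 0.5

# Wald / IV estimate: Cov(Z, Y) / Cov(Z, X). Unbiased because Z is
# unrelated to U, even though U is never measured.
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

# Naive OLS slope of Y on X: confounded by U, biased upward.
ols_naive = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
```

The point is not the estimator itself but the source of variation: Z provides treatment variation that confounders cannot contaminate.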
Ready to apply the checklist?
The next panel provides an interactive assessment tool using a realistic scenario.
Interactive Assessment
Apply the checklist to a scenario. Adjust the sliders and selections to see how your assessment of confounding threat changes. This tool demonstrates the logic; real sensitivity analysis requires more formal calculations.
Scenario: Diabetes Prevention Program
A county-level study finds that communities with diabetes prevention programs have 15% lower hospitalization rates. The study adjusts for income, age distribution, and insurance coverage. You need to assess whether unmeasured confounding could explain this finding.
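As a rough illustration, the E-value formula can be applied to the scenario's headline number (a back-of-the-envelope sketch, not a substitute for the formal analysis the panel mentions):

```python
import math

rr_observed = 0.85       # 15% lower hospitalization rates
rr = 1.0 / rr_observed   # protective effect: invert before computing

# E-value: minimum confounder strength (risk-ratio scale, with both
# program adoption and hospitalization) needed to fully explain
# away the finding.
e = rr + math.sqrt(rr * (rr - 1.0))   # about 1.63
```

So an unmeasured county-level factor would need associations of roughly RR 1.6 with both program adoption and hospitalization, above and beyond income, age, and insurance, to fully account for the 15% reduction.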
How strong is the observed effect?
How completely does the study measure selection into treatment?
How strong are measured confounders?
Can you identify plausible unmeasured confounders?
Would unmeasured confounding need to be stronger than what you measured?
What does this all add up to?
The final panel summarizes the key insight that distinguishes a design-based approach to confounding assessment.
Key Insight
The confounding assessment checklist shifts the question from "did we adjust for confounders?" to "what would confounding need to look like to matter?" This reframing is central to how economists evaluate observational evidence.
Adjustment is necessary but not sufficient
Controlling for measured confounders is important, but it does not eliminate the threat of unmeasured confounding. The question is whether the remaining threat is large enough to change conclusions.
Quantification creates accountability
Vague statements like "residual confounding may exist" are unhelpful. Specifying that a confounder would need an association of RR = 2.5 with both treatment and outcome to explain the finding creates a concrete standard others can evaluate.
Comparison to measured confounders is informative
If the required unmeasured confounder is stronger than anything you measured, it may be implausible. If it is weaker than confounders you did measure, the threat is plausible and concern is warranted.
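One way to operationalize this comparison (the measured strengths below are invented for illustration, loosely echoing the covariates in the diabetes-program scenario):

```python
# Hypothetical benchmark: strengths (risk-ratio scale) of the confounders
# a study did measure, vs. the strength an unmeasured confounder would
# need to explain away the finding.
measured_rr = {"income": 1.4, "age distribution": 1.6, "insurance": 1.3}
required_rr = 2.5

strongest_measured = max(measured_rr.values())
# If the requirement exceeds everything measured, the threat is less plausible.
threat_plausible = required_rr <= strongest_measured
```

Here the required strength (2.5) exceeds the strongest measured confounder (1.6), so explaining away the finding would take a confounder unlike anything observed in the data.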
Design matters more than adjustment
Finding exogenous variation (policy changes, natural experiments, discontinuities) is more powerful than ever-more-sophisticated adjustment. This is the economist's preferred solution.
Concepts Demonstrated in This Lab
Key Takeaway
No amount of statistical adjustment can fix a flawed comparison. The economist's contribution to confounding assessment is not just listing more confounders, but systematically characterizing what unmeasured confounding would need to look like to explain the findings. When the required confounder is implausibly strong, findings are more credible. When it is plausible, the solution is not more adjustment but better research designs that provide exogenous variation. This is what economists mean by "identification."