Determining the Causal Effects of Interventions: Alternative Methods for Evaluation

[Image: two paths dividing in a forest.]


E4A seeks to fund research that will support evidence-based decision-making in designing and implementing population health programs and policies. For research to be actionable, it needs to answer questions in ways that establish causality. Specifically: if decision-makers intervene to change a system, policy, or action, and health outcomes or health equity change, did the intervention drive those changes, or is there another factor at play?

Example Research Questions

How does college completion affect adult health and life expectancy? If we intervene to incentivize college completion, would adult health improve? Would people live longer?

Possible Approaches – Controlling for Confounding Variables

While researchers might consider conducting a randomized controlled trial (RCT) to compare outcomes for individuals randomly assigned to receive different levels of schooling, randomization is sometimes infeasible or unethical. For example, it would be illegal and unethical to randomize some elementary-aged children into a group that receives no elementary education.

If we want to understand whether an intervention is causing the desired outcomes, the evaluation must compare what happens to an individual under the intervention with what would have happened to that same individual without it. The challenge is approximating these unobserved counterfactual outcomes.

Consider, for example, determining whether health is better if an individual completes college than if they stop their education at the end of high school. In practice, we only observe the outcome for what the person actually experienced (e.g., only completing high school); we don't know what their outcome would have been under the alternative (e.g., completing college, which they didn't do). Simply comparing the health outcomes of individuals who completed college to those who did not is unlikely to correctly estimate the causal effect of college completion, because those individuals may differ on other characteristics (e.g., family financial capital and childhood health) that influence health. This problem is commonly known as confounding.
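To see how confounding can distort a simple comparison, here is a minimal simulation. The variable names and effect sizes are hypothetical, chosen only for illustration: family wealth raises both the chance of completing college and later health, so the naive college-versus-no-college difference overstates the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder: family financial capital (standardized).
wealth = rng.normal(size=n)

# Wealthier families are more likely to see children complete college.
college = (rng.normal(size=n) + wealth > 0).astype(float)

# Assume the true causal effect of college on a health score is 1.0,
# but wealth also improves health directly (the confounding path).
health = 1.0 * college + 2.0 * wealth + rng.normal(size=n)

# Naive comparison: mean health among completers vs. non-completers.
naive = health[college == 1].mean() - health[college == 0].mean()
print(round(naive, 2))  # substantially larger than the true effect of 1.0
```

The naive contrast mixes the causal effect of college with the advantages that wealthier students would have had anyway.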

There are many alternative quantitative methods for addressing confounding. Different approaches handle confounding in distinct ways, and each requires certain assumptions to estimate causal effects. They also differ in the sample sizes required to draw inferences and in the populations to which the results apply. We go into more detail about two of the most common approaches in our Methods Note.
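As an illustration of one common approach, regression adjustment for a measured confounder, the sketch below simulates confounded data (hypothetical variables and effect sizes, as before) and shows that including the confounder in the model recovers an estimate near the true effect. This works only under strong assumptions, e.g., that all confounders are measured and the model is correctly specified.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: family wealth confounds college completion and health.
wealth = rng.normal(size=n)
college = (rng.normal(size=n) + wealth > 0).astype(float)
health = 1.0 * college + 2.0 * wealth + rng.normal(size=n)

# Ordinary least squares adjusting for the measured confounder:
# columns are intercept, college, wealth.
X = np.column_stack([np.ones(n), college, wealth])
coef, *_ = np.linalg.lstsq(X, health, rcond=None)
print(round(coef[1], 2))  # close to the assumed true effect of 1.0
```

If an important confounder (here, wealth) were unmeasured and omitted from `X`, the estimate would revert toward the biased naive comparison, which is why alternative designs that do not rely on measuring all confounders are often valuable.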

Putting Evidence into Practice

How can high-quality evidence help inform decision-making? Here is an example: we funded a research project estimating the causal effect of college enrollment on health behaviors. The investigators are evaluating the differential opening of new colleges and universities by state over the period 1960-1995. Findings from this project will help determine the extent to which community college attainment affects health behaviors and outcomes, and will inform decisions about policies and funding for community colleges.
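Designs that exploit staggered policy or institutional changes across states are often analyzed with a difference-in-differences comparison. The toy calculation below uses invented numbers, not project data, to show the basic logic: the change in an outcome in states that gained a college, minus the change in comparison states, nets out trends common to both groups.

```python
# Hypothetical mean enrollment rates before and after a new college opens,
# in states that gained a college vs. states that did not (invented numbers).
gained_before, gained_after = 0.20, 0.35
other_before, other_after = 0.22, 0.27

# Difference-in-differences: change in "gained" states minus the change
# in comparison states, removing shared secular trends.
did = (gained_after - gained_before) - (other_after - other_before)
print(round(did, 2))  # 0.1
```

The key assumption is that, absent the new colleges, both groups of states would have followed parallel trends; the credibility of that assumption is what the actual study design must defend.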

Tools & Resources

Understanding the key distinctions among approaches for determining the causal effects of interventions, and knowing which may be advantageous for evaluating different programs, policies, and practices, helps researchers develop the highest-quality evidence. We've developed a short Methods Note to help researchers understand the diverse causal inference tools available to them and select the approach that will best address their questions about the effectiveness and impacts of various interventions. Download the inaugural E4A Methods Note.

We welcome your comments and feedback on these ideas, as well as recommendations for future blog topics and content.


About the author(s)

Ellicott Matthay, PhD, is a social epidemiologist and postdoctoral scholar with E4A. She conducts methodological investigations to improve the way that research in her substantive areas is done, because she believes that improving the methodological rigor of applied studies is one of the most important steps to identifying effective prevention strategies.

Maria Glymour, ScD, MS, is a social epidemiologist and Associate Professor in the Department of Epidemiology and Biostatistics at the University of California, San Francisco. She has dedicated much of her career to overcoming methodological problems encountered in observational epidemiology, in particular analyses of social determinants of health and dementia risk.
