Research Design Challenges and Moving Results to Action: An Interview with Thomas Cook

March 2, 2020
Interview

Ellicott Matthay, our E4A Postdoctoral Scholar, sat down with E4A National Advisory Committee member Dr. Thomas Cook to discuss some of the methods challenges facing population health researchers, possible approaches to overcoming those challenges, and how these issues impact real-world decision-making. Dr. Cook is widely considered an expert on causal inference design approaches, having written extensively on the topic and been awarded numerous accolades for his work. He is a Research Professor at George Washington University and a Professor Emeritus of Sociology, Psychology and Education at Northwestern University.

EM: I listened to an interesting talk earlier this week from a prominent social scientist, and he said that academic research is not having as much impact on policy as it should, in part because academic researchers are spending too much time on methodological research and debates and not enough time on applied, policy-relevant research. I am curious whether you agree.

TC: We all go into the population health field because we want to do good, right? So we want to influence policy, but we want to do so with the closest version of the truth. The issue is how complete the knowledge has to be in order to act, and that varies depending on the urgency of the problem being confronted. The people saying “methodology first” want to ensure the accuracy of research results for the next generation. The people saying we have to do more applied research today do so because they want the research to be relevant today. And there is a tension. There should be a tension. I hope there always will be a tension.

Some people come out and say we should do X, and I hope people in the background say, “Well that would be reasonable. Let’s consider the conditions under which we should do X today but let’s also think about Y.” This is a creative tension. But you have to remember also that public health research exists in a political arena, not a vacuum, so one has to be careful about charging in with well-meaning remedies that don’t work. So it’s a tension we should all live with and appreciate. Some people will be in the vacuum of academia, some people will be in it as thinkers. We need both. 

EM: What do you think are the most important method challenges in population health research? 

TC: The main issue for me is how to justify a set of causal methods for population health, other than randomized assignment, that have some grounding in both statistical theory and empirical experience for generating causal estimates that we can trust. The key question is how to come up with a theory and better practices for generating methods you can trust: observational studies that reproduce the same results as randomized experiments when done with the same intervention, same measurements, and so on.

EM: It sounds like the emphasis for you is really on internal validity - that the study reflects the true causal impact for the participants. Is that right?

TC: There are a variety of kinds of validity, and all are important. I actually recently wrote a paper called “The 26 Assumptions You Have to Make to Trust the Results of a Single Randomized Experiment.” I would say internal validity is the most important. In fields where RCTs are common, we don’t have to worry so much about internal validity; then the concern is more about other types of validity (for example, concerns about measurement). But there is still a big fight about the interpretation of those studies. So even if we solve the issue of internal validity in population health research, that does not mean there aren’t lots of issues that would still need to be resolved, especially around which components of the intervention led to the measured outcomes and what other outcomes might have been affected by it. Given the outcomes it did affect, what is the effect of those outcomes on later sequelae in a longer causal chain? So while internal validity is the most important, it’s not the only thing that’s important.

EM: Do you think there are particularly promising solutions or areas of inquiry that are being pursued or could be pursued to address some of the most fundamental methods challenges?

TC: If the most serious issue methodologically is how to justify causal inference from observational studies, then I would say there are no new designs being evolved. There are a lot of new data analytics being evolved that do a great job of prediction, but not a much better job of causal inference. The best thing going on in causal inference is to design experiments in which you try to find out which other designs consistently yield similar results to randomized experiments, because if the findings are the same, then they are empirically reliable.

There are now about 80 studies examining when observational studies produce the same results as randomized experiments, and they demonstrate that, for specific applications, other designs do produce the same results. I think we have a chance to come up with a set of observational study methods that could, in certain circumstances, be shown to routinely produce results very similar to those of a randomized controlled trial (RCT). This is what’s needed for choosing the observational studies worth funding and worth trusting with respect to the results.

EM: Of course, this only applies to the subset of questions that can be answered with a randomized trial or with a similarly rigorous design. Not all questions lend themselves to those designs.

TC: Not every intervention lends itself to an RCT, for ethical or other reasons. For example, the determinants of a successful, high-quality screening program that gets a lot of people in for screening and produces very few false positives or negatives can and should be evaluated with randomized experiments. But the question of the consequences of screening children does not lend itself to randomized experiments, because it is ethically inappropriate to withhold screening from kids. So it’s very hard to do randomized experiments on the consequences of pediatric screening, though it’s easy to do them on the determinants of good-quality screening.

I am never sure that researchers or decision-makers know about the best methods they could have used for a causal design and analysis. Nor do I see them running through their heads which methods they could use and which are the best ones. Nor do I even see them having a lot of flexibility in their thinking about causal method use.

EM: That sounds like a fundamental methods issue for population health research as well.

TC: I think so. 

Additional Resources

Interview Transcript

About the Author

Ellicott Matthay, PhD, is a social epidemiologist and postdoctoral scholar with E4A. She conducts methodological investigations to improve the way that research in her substantive areas is done, because she believes that improving the methodological rigor of applied studies is one of the most important steps to identifying effective prevention strategies.