Blog

Checking our bias in “unbiased” research instruments: applying a diversity, equity and inclusion lens to instrument design

April 29, 2021

Introduction

We think of a diversity, equity, and inclusion (DEI) lens as the commitment to incorporate DEI principles and practices into the environments in which we work and the knowledge that we produce. In this blog post, we discuss how these principles can be applied in the design or evaluation of research instruments, such as survey questionnaires, cognitive or psychosocial assessments, tests, or checklists (we refer to any such research instruments simply as instruments throughout the blog). Research instruments can promote DEI principles through the explicit inclusion and participation of historically underrepresented and marginalized individuals throughout the instrument design process. Instruments carefully designed in this way can better promote DEI in research and contribute to quantitative data that validly measure a construct of interest for all individuals.

Example Research Questions

When attempting to evaluate the impact of a program, policy, or practice, we must decide what the relevant outcomes are and how to measure them. Several E4A grants specify the construct of “well-being” as an outcome of interest, and many instruments for measuring it are available: a recent research article identified 99 instruments encompassing 196 dimensions of well-being (e.g., ‘emotional,’ ‘spiritual,’ or ‘subjective’ well-being). A fundamental challenge of measuring well-being is the extent of disagreement over its definition and theoretical basis [1].

To apply a DEI perspective when choosing among instruments to measure well-being, we might consider evaluating existing instruments based on the extent to which each stage of the instruments’ design process promotes principles of DEI. For example, we could evaluate instruments based on questions related to different stages of the instrument design process:

  • Theory generation and scoping stage* 
    • Is the instrument targeting a definition of well-being that adequately reflects the meaning of this concept to diverse groups? 
      • Does the instrument capture dimensions of well-being that are representative of diverse groups' values or beliefs about what well-being means? 
  • Content generation stage
    • Did diverse groups participate in item writing?
    • Were diverse groups consulted to evaluate or judge the clarity of the content coverage (e.g., is there a clear concordance between the items and the dimensions they purport to measure across diverse groups)?
    • Was a diverse population sampled in an initial pilot study of the item pool?
  • Instrument evaluation stage
    • Was a diverse population sampled to evaluate the final version of the instrument?
    • Do the instrument items have statistical evidence of validity and reliability across diverse groups? 

*Please see the supplemental method note for further definition of these stages.

These and other questions contribute evidence for the use of an instrument from a DEI perspective; we might refer to them as DEI validity arguments, which should be considered throughout the instrument design process.
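To make the checklist above concrete, one could record yes/no answers per design stage and tally them. The sketch below is purely illustrative: the scoring scheme, function names, and data structure are our own simplifying assumptions, not part of any published standard.

```python
# Illustrative sketch: tallying an instrument's answers to the
# DEI design-stage questions listed above. The binary yes/no scoring
# is an assumption for demonstration, not a published scheme.

DEI_CHECKLIST = {
    "Theory generation and scoping": [
        "Definition reflects the concept's meaning to diverse groups",
        "Dimensions represent diverse groups' values and beliefs",
    ],
    "Content generation": [
        "Diverse groups participated in item writing",
        "Diverse groups judged clarity of content coverage",
        "Initial pilot study sampled a diverse population",
    ],
    "Instrument evaluation": [
        "Final evaluation sampled a diverse population",
        "Statistical validity/reliability evidence across diverse groups",
    ],
}

def summarize(answers):
    """answers maps each stage to a list of yes/no flags, one per question."""
    return {stage: f"{sum(flags)}/{len(DEI_CHECKLIST[stage])} criteria met"
            for stage, flags in answers.items()}

# Hypothetical evaluation of a single instrument:
answers = {
    "Theory generation and scoping": [True, False],
    "Content generation": [True, True, False],
    "Instrument evaluation": [True, True],
}
print(summarize(answers))
```

Such a tally is only a starting point; as the next section discusses, validity evidence is better treated as an argument to be weighed than as a score to be computed.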

Possible Approaches

The Matrix of Evidence for Validity Argumentation (MEVA) [2] offers a tool for organizing and developing validity arguments at each stage of the instrument design process to incorporate a DEI lens. Using the notion of validity as argumentation, the MEVA allows one to systematically collect, contrast, and integrate pieces of evidence that confirm or disconfirm the validity of interpretations and use of scores from an instrument for diverse populations. In a supplementary methods note, we provide a demonstration of how one might construct DEI validity arguments using the MEVA to help ensure that an instrument adequately addresses or captures issues of DEI.
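One way to picture this kind of evidence matrix is as rows of design stages, each holding confirming and disconfirming evidence. The sketch below is our own minimal illustration, not the MEVA's actual layout; the class, field names, and evidence examples are all hypothetical.

```python
from dataclasses import dataclass, field

# Minimal sketch of a validity-evidence matrix: one row per design stage,
# each collecting confirming and disconfirming evidence. Names and example
# evidence are hypothetical; see the MEVA reference [2] for its actual format.

@dataclass
class StageEvidence:
    stage: str
    confirming: list = field(default_factory=list)
    disconfirming: list = field(default_factory=list)

    def balance(self):
        """Net count of confirming minus disconfirming evidence items."""
        return len(self.confirming) - len(self.disconfirming)

matrix = [
    StageEvidence(
        "Theory generation and scoping",
        confirming=["Focus groups with diverse communities informed the construct"],
        disconfirming=["Definition drawn from a single cultural tradition"],
    ),
    StageEvidence(
        "Instrument evaluation",
        confirming=["Measurement invariance held across demographic groups"],
    ),
]

# Flag the stage whose evidence base is weakest for closer review.
weakest = min(matrix, key=lambda s: s.balance())
```

Contrasting confirming and disconfirming evidence stage by stage, rather than averaging everything into one number, keeps the "validity as argumentation" framing: a single strong disconfirming piece of evidence at one stage can matter more than several confirmations elsewhere.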

Putting Evidence into Practice

The E4A study, Identifying Shared Values to Support an Inclusive Culture of Health Around Firearms, illustrates incorporating a DEI lens in practice. The study aims to identify various gun subcultures and their related values, beliefs, and behaviors and to measure the underlying values that are shared across these different stakeholder groups (or subcultures). The ultimate goal is to develop message-framing strategies that contribute to an inclusive culture of health around firearms. It is a good example of employing a DEI lens because it purposefully engages stakeholders with different perspectives on firearm violence prevention to promote the validity and fairness of its findings and conclusions.

Applying a DEI lens for the design or evaluation of an instrument aims to ensure the relevance of the content, construct, and use of an instrument for all people. The MEVA can offer insight into how valid an instrument may be for rendering an inclusive and representative picture of diverse individuals, including people and communities who are underrepresented in research. The MEVA accomplishes this by facilitating the organization and accumulation of DEI validity arguments that attest to the strength of the instrument when used among diverse populations. We hope that this blog and supplemental methods note help evaluators and researchers incorporate a DEI lens in their quantitative data collection practices.

Tools and Resources

In the supplemental methods note, we offer a snapshot of the instrument design process and discuss using the Matrix of Evidence for Validity Argumentation for constructing DEI validity arguments. Here, we offer a few resources that may facilitate the use of a DEI lens in assessment and evaluation more broadly.

About the Authors

Dakota Cintron, PhD, EdM, MS, is a postdoctoral scholar for the E4A Methods Laboratory. 

Erin Hagan, PhD, MBA, is the deputy director of Evidence for Action.