Full Interview with Dr. Melody Goodman

Dakota Cintron (DC): The original version of the quantitative measure for evaluating community engagement was developed and used in 2013 by the Program for the Elimination of Cancer Disparities.

Can you tell us a bit about the motivation for developing the measure at this time? Why was the idea of evaluating the community engagement process identified as important?

Melody Goodman (MG): Absolutely. To be clear, we created the measure because we couldn't find one. Originally we thought we would just go find a measure. PECaD, that's the Program for the Elimination of Cancer Disparities, was a community network partnership grant from the National Cancer Institute.

Just by its mandate, it had to have a randomized controlled trial that used community-based participatory research, which seems odd for those of us who do CBPR. But it had to have that. And then because it was a center, it had a lot of different types of community engagement.

It had community partnerships around different types of cancer. So there was the Breast Cancer Partnership, the Prostate Cancer Partnership, lung cancer, colorectal, et cetera. And then there was outreach at the library. So there were just a lot of different types of engagement going on.

Part of what we did in PECaD was to do a sort of annual evaluation where anyone who was touched by PECaD would take a survey and we'd get some information from them. So when we were trying to evaluate PECaD, one of the questions was, how does community engagement impact the work that this center is doing? And how does that relate to all of the great work that was coming out of the center?

Like I said, we thought we would go to the literature and find a measure, stick it on this survey that we had been doing annually, and go on and do something else. But we didn't find anything in the literature that really captured what we wanted because, as I explained, we had so many different types of community engagement going on in this center. So we developed our own measure.

So that's really why it was developed. We were trying to evaluate our own work. We couldn't find a measure that we thought was right for what we were trying to do. And so that's why we developed this measure to try to evaluate the community engagement that was being done by PECaD.

DC: How long has PECaD been doing community-engaged research? Is that something that they had been doing for years leading up to that?

MG: When I was there, they were in years five through ten of this center. I left there five years ago and they’re still going. So PECaD had been doing lots of community engagement, we just hadn't been evaluating it, if that makes sense.

Or I shouldn't say that. We had been evaluating specific aspects of engagement, but we didn't have a structured way to evaluate engagement systematically across all the different types of engagement that we were doing.

DC: That's awesome. It's really good background to have, and it's great to hear about the community engagement work that they're doing. When you were initially starting out to develop this measure, who were the community partners represented in thinking that through? Were community members involved in that scoping process, and how did that look?

MG: Yes, the nice thing about PECaD is that it is a community-academic partnership. PECaD has an advisory board that we call DEAC, the Disparities Elimination Advisory Committee, and that committee has a bunch of community representatives and also academics. So it was this advisory board, this community-academic advisory board, that guided this work. A subset of us from that board were really interested in evaluation, including the community co-chair of the DEAC, who joined this little evaluation committee.

And then everything we did in that small committee, we took back to the DEAC. So we were trying to develop this measure, iteratively, going back and forth between our small group and the larger group.

The DEAC is really interesting because it has different types of community members; it also has community practitioners, physicians who practice in the community rather than at our academic medical center; and it has member organizations. So there was a representative from the Native American tribes in Missouri, lots of representation from African-American communities, and the like.

So we developed this measure from this community-academic partnership, going back and forth between a subset of us and the bigger group, and then even once we got the grant, the DEAC became the advisory board for that grant.

DC: It's really cool that you had that system in place to get that off the ground and work with them right from the beginning. When this process of evaluating the engagement was coming up and you were thinking about it, was there thought around how this helps achieve equity and maybe eliminate disparities? Was that a conversation that was going on at that time?

MG: It was. So, obviously, PECaD is the Program for the Elimination of Cancer Disparities, so those of us in PECaD were focused on cancer disparities. I think equity is embedded in our work. I think we were really conscious to align the measure, which we call the REST, the Research Engagement Survey Tool, with engagement principles that we found in the literature.

So we did a systematic literature review of the community engagement literature, the community-based participatory research literature, literature on community-academic partnerships, and literature on patient-centered outcomes research. And we tried to figure out, what were these engagement principles?

So during this iterative process of going back and forth between our small group and the DEAC, we were trying to figure out the key engagement principles that one could measure, because a principle had to be measurable if you were trying to evaluate engagement in a partnership. So even though this work comes out of disparities work, we tried to keep the measure really focused on measuring engagement, if that makes sense.

DC: That's great. The establishment of those principles seems like a really important part of the measurement development process. Thinking about measurement and latent traits, the principles are sort of like the latent goals that you have to find.

So I thought that was really interesting and kind of segues into the next question, which is thinking about the Delphi process that you all used to perform a content validation study of that quantitative measure for community engagement after its initial development. What were the major findings of this content validation study? In what ways did the quantitative measure for evaluating community engagement evolve from its original form? And I know one of those things was that the engagement principles went from I think it was eleven to eight.

MG: Yes, that's probably the biggest change. We went from eleven engagement principles to eight through the Delphi process.

The Delphi panel started with the original eleven that we had developed. They modified seven of them, dropped four, and added one in, so none of the final eight principles were in their original form.

The one that they added in was around trust, which they felt was important. It was one of the things that we felt was inherent in some of the other principles, but in the Delphi process it came out that trust needed to be its own principle and we needed to measure it as such. The other thing, and this was part of the charge of the Delphi panel, was to reduce the number of items, so we went from 48 items to 32 items.

It's still too long, but they did a good job of combining engagement principles that they felt overlapped a bit and making the measure a little tighter.

DC: Mm-Hmm. In the process of setting up this Delphi panel, how was community representation thought about there as well?

MG: Yes, so the Delphi panel had “experts.” We had researchers who do community-engaged research, and we also had community members who have participated as partners in community-engaged research. Per the funder's request, we actually had more people who would identify with the community stakeholder group than with the academic researcher group on the Delphi panel, because that was the voice that we were really trying to get.

I will say, I know you didn't ask me this, but the thing that I got most out of the Delphi process, as someone who's been really interested in measurement and measure development, is that we often don't include the people that the measure is designed for in the development process, right?

If you were developing a measure for asthma, you should probably include asthmatics in the development, but we often don't do that. I think that needs to change, because one of the things that became really clear in the Delphi process was how much definitions matter, how people thought about things and how people define things.

It was really important for us to make sure that when people took our measure, they were answering the questions that we were posing and not interpreting them in some different sort of way.

DC: Mm-hmm. Yes, I love that. I think about that a lot. The items are crucial, and the cognitive processes we bring to answering them can vary a lot depending on whether you're a researcher who has had the education and thought about a lot of this, versus someone walking in off the street who looks at it and thinks, well, this makes no sense to me. So if you have that mismatch between who this is trying to measure and who's actually represented when answering the items or doing the content validation, I think that's a big misstep as well. So it sounds like you all had a decent amount of community representation.

Did you feel that you could have still had more representation in this regard, or did your content validation study take a good step in that direction?

MG: I think it was good. The Delphi study definitely leaned more heavily towards community. Then when we did the analysis, we stratified it by whether the respondent was community or academic.

So we could really lift up the community voice in our analysis, even if they were in the minority, in terms of some of their feedback. I think you could always do better, but it depends on how big you want your Delphi panel to be.

One of the things, because it was such an iterative process, is that we wanted to be able to have a high level of retention of our Delphi panel. So we kept it small. We started with 19 people. We had one person drop out after the first round, but then everyone else stayed for the rest of the five rounds.

Because we were working through this process in stages, refining the engagement principles, matching the definitions, and finding the items, it would have been hard to have a lot of turnover. I think we were working to not have so much turnover, so we'd have the same voices coming back and tweaking this measure. So for that reason, we kept it small.

The other thing we did was, while most of our Delphi process was done on the web using web surveys, we did an in-person meeting for round four. I think that was so important for a lot of reasons, and we did it this way on purpose. One reason is that we weren't trying to force consensus, but if there wasn't consensus, we wanted to make sure we had a clear understanding of why, a clear understanding of what the disagreement was about.

One of the smartest things I've done, and I don't do really super smart things that often, is that I invited an editor to the in-person meeting. That was really smart because I think we reached consensus even though we weren't trying to force it. I think we reached consensus because she was so good with words, and where there was disagreement, she was able to find a word that we could get common ground on. As a biostatistician, my mastery of words is not that great.

Her ability to hear people disagreeing and then find a word that people could agree on, I think is really why we were able to reach consensus. I think having that in-person process, being able to hear what the disagreement was, was really important because I think that's how we found that right language.

DC: Totally, because you may have a case where you think there isn't consensus, but it's actually a misalignment of language or conceptual understanding. So having that mediator there, able to identify that, seems really crucial. It also seems like a special role: you have to be good with words but also have the ability to relate to both sides of the conversation.

MG: I think the nice thing is that the editor wasn't part of our team, so she had no stake in the game, so to speak, other than just trying to help us. There was no sense that it had to go one way or the other, or that we needed to reach consensus. So it was actually really nice because she was an unbiased person just trying to find language that could potentially help.

DC: It sounds like the Delphi process was really good for the work that you all did and that you’d probably use it again if you were going to go through a similar endeavor.

MG: Sure, I've actually recommended it for other projects around other things. I think it's a really great way to get feedback. And now that we can do web surveys and get feedback more easily, I think you'll find it more often in the literature. I will also say it was really important for us in this measure development because there was no gold standard measure for the thing that we're trying to measure, right?

So we're trying to validate something without a gold standard. So we had content validation, and we're going through multiple validation processes, really because there is no gold standard measure that we can just compare it to.

DC: Right, you can't get convergent validity or something similar. It's really interesting. I'd known a little bit about the Delphi process, but hadn't really thought too much about using it in practice and thinking about the way it can be used for instrument development. So that's neat. I'm sure there's probably something written on that, but I’ll have to look into it a little bit more.

So the next question I had was, what have you found out about the use of the quantitative measure for evaluating community engagement in practice? Are there any good examples of the measure being used in practice that you would recommend, for example?

MG: Yes, so, you know, our measure was just published. I don't know if people who have used it have published yet, but there were a bunch of people who were early adopters. I will say, we were using it as we were developing it, and we were using different iterations of it. So there's a team at Drexel, the P.I. is Amy Carroll-Scott, and when I met with them last year, or maybe even the beginning of last year, they had used it and had some good validity around the types of activities that people said they had done, such as recruitment.

They looked through all of these things and then compared them to the level of engagement. So they had some good data around that. There's a team out in California that is actually using it to evaluate community engagement grants. They're an evaluator for a bunch of grants, and they're using it as part of their evaluation process and their validation statistics were actually really good. But I think it's one of these “time will tell” things, because people are just starting to use the measure.

I think to me, what's most exciting about it is that even the people that used it in our pilot study came back and said it was a great way to have a conversation with their community partners about the partnership. So for me, it was exciting to see people use it in a real way, like, let's now really talk about our partnership. Is this where we are? Is this where we want to be? Where do we want to be?

People said it was a really good conversation starter, and maybe it provided a forum for people to talk about something they often don't talk about, which is, how is the partnership working? Whereas when you meet with your project team, you're usually talking about, are we on time? Have we accomplished the task? But you often don't talk about how you are functioning as a partnership. Let's reflect on that; let's think about what we can do better.

DC: Yes, it sounds like a kind of norming process for the research team and the community. That's really cool. It sounds like it's still a little early for having evidence of how it's working in practice, and that makes total sense, since it was developed in the last decade and this research takes some time to get picked up. But it sounds like there's already some work going on out there.

MG: So it is nice, because I think even though I can't point you to a bunch of papers, I do know a bunch of project teams are using it right now. With how long it takes for something to get published, it might be a while before we start to see papers on it. But no one's called me complaining.

DC: No, and you have found out that the use of it has led to this norming process, which is probably something maybe you didn't expect right away. But when you see it, you're like, “Oh yeah, that makes complete sense.” It's like a place to come, sit at the table and talk.

The other question I had, related to this and how you use this community engagement measure in practice, is at what stage of the research would you say the measure should be used? For example, at the end of a community-engaged research project? Longitudinally throughout the life of the project, for example, from the theory and scoping stage to the implementation of the research to the dissemination? That's one thing that was on my mind: when might be the best time to use this as you're engaging in a project?

MG: We have two versions: the comprehensive version, which is 32 items, and the condensed version, which is nine items. We've told project teams to use it as they see fit. But we've been suggesting to people, if you're starting a new partnership or at the beginning of a study, you may want to start with the comprehensive version and see how that works out. Then you may want to do the condensed version a few times, depending on how long your study is.

So, if it's a five-year study, you may want to do the condensed version annually and then the comprehensive version at the end of the project, right? So you give it pre/post.

But what I know one team is doing, and I think is a really great way to use it: they did the comprehensive version at their baseline and found two or three engagement principles where they felt like they weren't performing well. And so they're only tracking those, right? They're saying, “we're doing fine on these other ones, so we're only going to track these three.” So they're just using the items related to those principles and tracking them over time, which I think is a really nice way to think about it. Like, well, if we're doing great on all this other stuff, let's focus on the things we're not doing so well at.

DC: Right, it's like a performance metric that you can use as the process goes on to see whether you're changing and adapting to what the community wants. I think that's a real benefit of using it throughout the process of your project versus just at the end. When you're done and you ask, how did we do during this project?, you can't really change anything at that point. You just found out, well, we didn't do well on this principle. So what? What can you do with it now? At least this way, it gives you a chance to adapt and grow during the research process.

MG: And we've told people, no one likes to take a test and do badly, right? But it's not a test. At baseline, it's a new partnership; you shouldn't have these high levels of engagement, that wouldn't make sense, right? You're just getting to know someone. It's not going to be great right away.

It takes some time to build a good partnership. So the goal should be growth, right? The goal shouldn't be to be excellent at baseline. The goal should be that your partnership is developing over time and most people who do engagement work are committed to doing it, across multiple projects. So they're not just trying to look at the life of one project. Evaluating it post hoc at the end of a project doesn't really give them what they want to know.

I think the real scientific question, the question that I'm hoping we get to address, is how does community engagement impact the scientific process and scientific discovery? That's the real question, right?

Because anecdotally, I know that I'm a better researcher because of my engagement work. The questions I ask are better; the way of implementing things is more realistic and sustainable. But this is all anecdotal from my years of doing this.

So part of the development of this measure was asking, can we really think about the science of engagement, and can we quantify it in some way? I know it's not going to be perfect, but it's about really showing people that engagement in science is imperative, that it makes the science better.

DC: Mm-Hmm. Mm-Hmm. So I think that segues into my final question: what work on the quantitative measure are you currently doing? Is there anything we should be on the lookout for? For example, are you involved in any projects where you're thinking about testing whether the community engagement process matters for distal outcomes? I think it's an interesting question; I guess it would involve some kind of quasi-experiment, and you'd want to have a couple of groups to compare.

MG: We literally just finished this one, but I think, for me, it's more about refining this measure. We purposely created a generic measure, so it's not population specific, it's not disease specific, just because there was nothing out there. But I do think it may be worth starting to think about tailoring some of this to certain diseases; our measure is showing good fidelity.

Someone has used it with Alzheimer's, I believe. And they switched some words out and put Alzheimer's in, and it works really well. So I think some of that work needs to be done. One of the questions you asked me in the beginning was about a focus on the equity piece. So, are we doing engagement equitably? I think our measure doesn't do that yet. There needs to be some tweaking of that.

I also think we need to tweak our two scoring methods a little bit. For the most part in our pilot study, there was agreement. And when there was disagreement, the P.I.s said, I disagree, but I knew that they were going to say that. I knew my community partner was unhappy with something that had just happened. So the score makes sense. But I think we need to think through… Sometimes you can have a functioning partnership, but one or two partners are still unhappy, right?

We use averages, which really wash things out when there's disagreement, and I think we need to think through what that means. I want to make sure that we don't mute the minority voice, right? Sometimes there's only a few people in the minority saying something different from what the majority of the group is saying, and when you use averages, that gets muted. But in community engagement, it's often important to hear the minority perspective. Whether or not that's going to change things is a different question, but I don't think our measure does a good job of that.

So there are some things that I think could use some tweaking. The other major, major limitation, and I'm definitely not the person to address it, is that the measure is only available in English. There are so many other languages that this tool could be useful for. To have a measure that's only available in English, granted that we just developed it, means it’s not as useful as it could be. My whole thing was that I needed to get it right in English first.

The challenge with other languages is that I am actually firmly against direct translation of this measure. Because language was such an important piece of getting it right in English, I don't think a direct translation is the way to get it right in another language.

The reason why I say I'm not the right person to do it is that I barely speak English, and the only other languages I speak are SAS, R, and Stata. So for someone who is fluent and who is interested in using the measure, I don't think they have to do everything we did, but I do think they're going to have to replicate some of what we did in another language. As I said, finding the right word was so important, and I know that direct translation often loses some of that nuance, so I'm just nervous about people directly translating it.

That being said, I couldn't do anything in another language, but I do think that's a major limitation, and I think that's sort of where the measure needs to go.

DC: Right, you want to make sure that it's responsive to cultural backgrounds and understandings, in addition to language, which almost raises the question of whether you would need to go through the entire process again with a new group of people. That's probably one way to do it, and maybe the most optimal way, but there's probably a way to build on what you've done so that you have a head start. Going through the entire process may be costly.

MG: Yes, I don't know that they would have to go through all of it, but I think I would definitely use a Delphi process in translating it because I would want to make sure that there was consensus around the translation because things don't always translate directly.

DC: So do you have any plans to think about how to evaluate whether community engagement matters? Do you have any work coming up around that?

MG: I don't have anything coming up that's funded, but I have really been thinking about it. We have measures of how engaged one person feels as a partner. But I've really been thinking about how you measure the partnership, if that makes sense, in all of its… [gestures to mean in its entirety].

DC: Yeah, I can imagine a situation where we had two projects going on at the same time, both doing community engagement, but we don't really know what's going on in each. You could approach it by saying we're going to give each of them a community engagement strength index, and then see whether that index is meaningful, assuming they're both looking at the same outcome. Did the higher community engagement strength index score for the one team actually translate to better health outcomes?

MG: But to do that, and I've even talked to PCORI, who funded me to do this work, you need access to a lot of partnerships. Just think of it from a statistical point of view: you want a lot of variability, so you would need access to a lot of partnerships. In our pilot study, I think we had 20 or so partnerships, which was nice for testing the measure. But to do the things that I'm thinking about, you need access to hundreds of partnerships. I do think that's the next step. I'm just not there yet; I'm still trying to finish this current stuff.

DC: No, I hear you. You all are still taking one step forward at a time and getting the work done. I definitely appreciate the work that you're doing. It's really interesting. And now we can have this conversation about how we might evaluate whether the community engagement process actually works, which is really great.

Well, that's all I have. Is there anything else that you're thinking about or wanted to mention?

MG: No, just that I hope people use it. I think you have our website, but if not, I'll give you a link to the website where people can find the measure and all other kinds of stuff. We encourage people to use it and let us know how it works.

DC: All right. Thank you.

MG: Thank you for taking the time and reaching out.

Interviewer: Dakota Cintron
Date of Interview: 2/1/22

Method of Recording: Zoom

Transcription: Steph Chernitskiy using Adobe Premiere Pro version 22.2, edited for clarity and formatting.

Background: When Dr. Cintron sat down with Dr. Goodman, he proposed the following questions for discussion.

Interview questions

  1. The original version of the quantitative measure for evaluating community engagement was developed and used in 2013 by the Program for the Elimination of Cancer Disparities (PECaD). Can you tell us a bit about the motivation for developing the measure at this time? Why was the idea of evaluating the community-engagement process identified as important?
    • Who were the community partners represented in the process of initially developing the measure?
    • How important was the idea of evaluating the community engagement process for achieving equity? Eliminating disparities?
  2. A Delphi process was used to perform a content validation study of the quantitative measure for community engagement after its initial development. What were the major findings from this content validation study? In what ways did the quantitative measure for evaluating community engagement evolve from its original form?
    • Who were the community partners represented in this Delphi study?
  3. What have you found out about the use of the quantitative measure for evaluating community engagement in practice?
    • Are there any good examples of the measure being used in practice that you would recommend to our readers?
    • At what stage of research would you say that the measure should be used? For example, at the end of a community-engaged research project? Longitudinally throughout the life of a community-engaged research project (e.g., from theory and scoping, to implementation, to dissemination)?
  4. What work on the quantitative measure are you currently doing and what should we be on the lookout for?
