
Addressing courtesy bias


This is a post i recently penned for IDinsight’s internal blog but thought was worth pushing out onto the interwebs, in hopes that people will weigh in with tactics they have tried!

Please share in the comments your experiences with courtesy bias and the tactics you have used (seemingly successful and unsuccessful), including the ones here, which are not foolproof.

Courtesy bias is one of the response biases we may encounter when we ask (quantitatively or qualitatively) what participants and other stakeholders think about a program. i point out that this could be quantitative or qualitative because we can ask people about program feedback and satisfaction in both ways, even though courtesy bias is often pegged as a challenge particular to qualitative work.


Courtesy bias is the tendency to understate dissatisfaction or challenges with a program, often driven by not wanting to offend or drive away the organization delivering the program or policy (that is, the tendency to portray the ‘benefactor’ organization in a positive light); it may also partially stem from not having permission to give, or practice at giving, constructive criticism. Note that courtesy bias is different from social desirability bias, which is about portraying the self in a positive, norm-abiding light.


Courtesy bias can be a real challenge, given our role as impartial, independent researchers and trusted advisors who want to deliver a truthful picture to our clients: information that may help them decide whether to continue a program, or how to modify it, in ways consistent with achieving desired outcomes and avoiding undesired ones. Unfortunately, not all of the tools developed for addressing social desirability bias (such as list randomization strategies or the Marlowe-Crowne scale) automatically apply to courtesy bias, though this could be an area for future research (for example, Feedback Labs suggests a Net Promoter analysis that we might explore and that may end up overlapping with Marlowe-Crowne).
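
To make those quantitative tools a bit more concrete, here is a minimal Python sketch of how a list randomization estimate and a standard Net Promoter Score are computed. This is my own illustration with made-up numbers, not real survey data, and the promoter/detractor cutoffs are just the conventional ones rather than anything specific to the Feedback Labs proposal.

    from statistics import mean

    # List randomization (item count technique): a control group reports
    # how many of 4 innocuous statements apply to them; a treatment group
    # gets the same 4 plus a sensitive statement. The difference in mean
    # counts estimates the share of respondents for whom the sensitive
    # statement applies, without any individual admitting it directly.
    control_counts = [2, 1, 3, 2, 2, 1, 3, 2]    # counts out of 4 items
    treatment_counts = [3, 1, 3, 2, 2, 2, 3, 2]  # counts out of 5 items
    prevalence = mean(treatment_counts) - mean(control_counts)
    print(f"Estimated prevalence of sensitive item: {prevalence:.0%}")

    # Net Promoter Score from 0-10 "would you recommend this program?"
    # ratings: 9-10 are promoters, 0-6 are detractors, and 7-8 are
    # passives, who drop out of the score.
    ratings = [10, 9, 8, 7, 9, 6, 10, 8, 5, 9]
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    nps = 100 * (promoters - detractors) / len(ratings)
    print(f"NPS: {nps:+.0f}")  # ranges from -100 to +100

Neither calculation replaces the in-interview tactics below; they are just ways of getting an aggregate read that is less exposed to any one respondent’s courtesy.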


In the meantime, there are a few tactics you can try during interviews / survey administration, as well as before and after.

During interviews

  • Trying to address concerns and power imbalances: As is typical practice during the consent process, we should stress our independence, the anonymity of responses and protection of identities, and the respondent’s right to forgo any question. We may want to repeat this information at various points during the interview or administration of the questionnaire, rather than saying it only at the beginning. We may also want to highlight that we’d rather they ask to skip a question than provide an untruthful or overly rosy answer. In addition, we can portray our client as eager to learn (if this is true!) and explain that the responses will not result in the removal of the program (if this is also true!) or other negative repercussions. Finally, we may want to point out that the client views constructive feedback as an important part of its partnership with participants (yet again, if this is true!).


  • Indirect questioning: Sometimes it can help to ask about the views, experiences, and actions of others, rather than starting with the respondent. This opens another door for expressing how one feels, potentially building rapport and trust so that the respondent eventually comes around to talking about their own views (though we still learn something even if they do not). When i did my very first fieldwork in Nepal, it was amazing how everyone reported that everyone else in the village, besides themselves, used a traditional healer; then sometimes we’d move into the fact that they actually did go but didn’t believe in it; then sometimes it would turn out that they believed in it in certain cases… We can ask questions that start, “It is sometimes the case that teachers find it [challenging to participate in the program]. Have you heard anything about that?” Of course, we need to be very transparent about the way we framed the question when we report the results (including the full question or questionnaire item in a footnote should be standard practice).


  • ‘Ball rolling’ / social norming questions (note that i made up these terms): Sometimes it can be helpful to set a social norm that it is ok to say something negative, to get a conversation going. This could be hypothetical: “i wonder how i would find time to do those exercises or chat with those teachers; do you find this challenging?” Or it can be, “some teachers have been telling us that it is difficult or uncomfortable to do x… have you ever experienced that?” Depending on how far along you are in data collection, these ball-rolling questions can reflect real data or, slightly deceptively, expected data; each team will have to decide what is comfortable for them. Note that while this may seem leading, it is not trying to elicit or incentivize a particular direction of response, just to give the respondent permission to tell their truth. As with indirect questioning, we must be absolutely transparent about the question(s) we used to elicit the data we present in our results and recommendations, including the question in a footnote. (If you are doing qualitative interviews and update this question over the course of the research, you should report the varieties of ways you asked it.)

As a side note, it may also be that you need to do repeated rounds of interviewing with the same person during a data collection wave to build up the necessary rapport to get to the truth. 


  • Reinforce that we and the client value honesty. Remind the respondent not just that we value their time, but also that we chose them for their insights and honesty and really value these; that we consider them the experts on the experience of the program; and that we have no better way of learning what might be improved than their telling us.


Before &/or after interviews

  • Introduction from the client. This is a tricky suggestion because it seems at odds with presenting ourselves as independent from our clients; your team should consider carefully whether it makes sense in your context. Still, if our client is trusted by the participants and signals to our intended respondents that it wants us to hear the truth, this could be helpful.


  • Do your interview preparation. As much as possible, learn what you can about a respondent before meeting with them (or after, to contextualize their responses and decide if a follow-up interview would be helpful). Have they attended all the meetings this academic year? Do we know if they have made use of new information presented in an agricultural training? Do program staff have anything to say about the respondent’s level of engagement, or about the morale of other program staff? Gathering what information we can from administrative data, program staff, and other stakeholders can help us probe more effectively about what may not be working well for our respondent, so that we can try to push beyond “the program is good” in our interviews (if, of course, the respondent has more to say; we aren’t trying to push them to make up complaints that don’t exist!).


