This post explores how IDinsight addressed the tendency of program staff to want their most successful participants surveyed.
IDinsight surveyor Zewelanji Phiri conducts a post-harvest endline survey in Zambia’s Southern Province. ©IDinsight/Kondwani-yobe Mumba
Identifying survey respondents can be a huge challenge in sparsely populated regions with limited information about where people live. This is especially true when participants are recruited outside the home: for example at a clinic, a school, or, as we’ll cover in this post, an agricultural extension training. For a process evaluation in Zambia, we needed to find hundreds of randomly sampled farmers, but all we had was a list of people who attended training. Many of them were impossible to track down without help from someone in the program we were evaluating. Once we started working with program staff, however, they were eager to take us to their strongest participants. This post shares how we overcame the tendency of program staff to want only their most successful trainers or farmers surveyed, or to otherwise positively influence the survey’s findings.
Our two takeaways: time-efficiency losses are worth it when the potential costs of bias could ruin your survey, and independent evaluators need to take implementers’ conflicting incentives into account when locating survey respondents.
We wanted to understand whether prominent volunteer farmers in a given community (lead farmers) could and would attend a centralized training and then, in turn, effectively deliver that training to farmers in their respective communities, sharing recommended post-harvest crop management practices. Would these newly trained farmers, then, be willing and able to change their practices? (The short answer is yes, yes, and no.) We interviewed 347 farmers and 53 lead farmers. We focused only on farmers who were recorded as having received the training because we sought to understand whether the program logic held. (Check out the addendum for more on the context, sampling strategy, and results.)
Once we selected farmers from the training attendance lists, we had to find them. The list of trainees contained, at most, the name, sex, age, village/district name, and phone number for each trained farmer, but this information came with challenges. Most villages are unmarked. The registers generally recorded farmers’ government names, which may differ from the names they commonly use. We were not provided with most farmers’ phone numbers, and of the farmers whose numbers we did receive, few had both a working phone and network coverage.
The low population density of Zambia’s Southern and Central provinces made this work even more difficult: villages can be sprawling and widely dispersed, spread up to hundreds of kilometres away from district centres.
The combination of all these factors meant that a specific person’s household and farm were nearly impossible to find on our own, on an efficient timeline, without either a working mobile number or a local interlocutor.
Given this situation, once we randomly chose farmers from these lists, the implementer’s permanent field staff helped us physically locate the farms of the selected lead and trained farmers during piloting. Implementing field staff were easy to contact, met regularly with the lead farmers, and knew where the villages were located. When we located a farm, we would approach the farmer, with the implementer in tow, and announce that we would be asking questions about the training.
However, this seemingly efficient pilot strategy made it harder to get an accurate picture of what happened after training.
Since the implementing organization mandated each lead farmer to train 200 farmers, there was a risk that lead farmers had inflated the number of farmers they trained by adding names to their lists. This turned out to be true: by the end of surveying, we found that only 70% of the reportedly trained farmers we surveyed had actually been trained.
We worried about bias from two possible sources: lead farmers and implementing field staff. Both may have faced incentives to guide us to farmers who had been trained and could demonstrate their training — lead farmers may want to look good in front of implementing staff, while implementing staff may want the evaluation data to look good for their employers. This proved to be the case during our pilot and survey.
Some implementing staff and lead farmers asked why we had selected farmers and lead farmers they considered poor performers and left out known top performers.
Additionally, during piloting, lead farmers would bring us only to the subset of selected farmers who had been trained (telling us that selected farmers who had not been trained were unavailable or had moved), or would attempt to train other farmers on the spot.
Though we did not directly observe it, we also hypothesized that implementing field staff were calling lead farmers in advance of our visits, providing lead farmers time to do additional trainings. While any kind of information dissemination about good post-harvest practices could be beneficial to farmers, we wanted to talk to farmers who had really attended the planned trainings to see if the model was scalable without the attention (threat?) of an evaluator.
We realized that in our effort to locate farmers as efficiently as possible, and despite random selection, our sample was being biased in favour of ‘good’ farmers.
We realized the survey would also have to serve an auditing function: we wanted to see whether the farmers reported as trained had actually been trained.
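As a rough illustration of what this audit amounts to, here is a minimal sketch, assuming the training register and the endline responses have been consolidated into flat files. The file names, column names, and the confirmed_trained indicator are hypothetical, not the actual project data structures.

```python
import pandas as pd

# Hypothetical inputs: the lead farmers' training register and the endline
# survey responses. Column names are illustrative assumptions.
register = pd.read_csv("training_register.csv")  # farmer_id, lead_farmer_id, ...
survey = pd.read_csv("endline_survey.csv")       # farmer_id, confirmed_trained (0/1)

# Match surveyed farmers back to the register, then compute the share who
# confirmed they actually attended a training.
audit = register.merge(survey, on="farmer_id", how="inner")
print(f"Verified as trained: {audit['confirmed_trained'].mean():.0%}")

# The same merge can flag lead farmers whose attendance lists look inflated.
by_lead = audit.groupby("lead_farmer_id")["confirmed_trained"].mean()
print(by_lead.sort_values().head())
```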
During piloting, we quickly addressed this in two ways: changing the way we found farmers as well as the way we introduced our purpose and questions.
To locate farmers for our endline survey, we opted to bypass the implementing organization. Instead, we connected more directly with lead farmers and farmers through one of three possible means, presented here in descending order of preference:
We made additional adjustments to how we introduced ourselves to farmers and who was present during the interviews, to mitigate pro-program, courtesy, and social desirability biases. When speaking with farmers, we returned to a plan we had considered pre-pilot: we framed the purpose of our survey as being about agriculture in general and did not mention the training specifically until the end, when we asked questions about the training itself. This way, farmers were forthcoming if they had not received or did not remember the training.
This approach still presented some challenges, including:
Time efficiency losses are worth it when the potential costs of bias could ruin your survey. Independent evaluators need to take implementers’ conflicting incentives into account when locating survey respondents.
This survey took place in rural Zambia, in the context of a program encouraging improved post-harvest crop management practices. Farmers in Sub-Saharan Africa lose 4–18% of their grain crop (varying by country and crop) every year during post-harvest processes, such as drying and storing. This directly leads to reduced income and lower food security for farmers. Improved post-harvest management techniques can mitigate these agricultural losses. As an example, many farmers in Zambia use open-weave grain bags to store harvested crops. Open-weave grain bags leave grain exposed to insects and moisture, leading to crop loss and rot. Hermetically sealed grain bags and silos, by contrast, store grain more securely and protect against spoilage, thereby benefitting nutrition and income.
Most farmers in Zambia have not been trained in post-harvest management: prior to training, fewer than 5% of farmers and only 10% of lead farmers were aware of hermetic storage. This is partly because agricultural extension is challenging in low-density contexts; in-person training is costly because of the long distances extension workers must travel from one farmer to another within their assigned regions. Our process evaluation focused on an alternative, cascading approach to delivering agricultural information in Zambia’s Southern and Central Provinces.
The program sought to trim extension costs by using a cascade model of training on post-harvest crop management. Our approach to evaluating it included observing trainings, administering pre- and post-training tests, and conducting quantitative surveys and qualitative interviews with lead farmers and farmers.
We selected a stratified random sample of lead farmers and farmers to survey, stratified by district and sex, from a sampling frame based on the lists of trained farmers. These lists were generated by each lead farmer to register who attended their training; lead farmers provided the lists to the implementer, who then shared them with us.
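For readers curious about the mechanics, here is a minimal sketch of how such a stratified draw could be implemented, assuming the attendance lists have been consolidated into a single table; the file name, column names, sampling fraction, and seed are hypothetical.

```python
import pandas as pd

# Hypothetical consolidated sampling frame built from the lead farmers'
# attendance lists; column names are illustrative assumptions.
frame = pd.read_csv("trained_farmers.csv")  # farmer_id, district, sex, village, ...

# Draw the same fraction from every district-by-sex stratum so the sample
# mirrors the composition of the frame; the seed keeps the draw reproducible.
SAMPLE_FRACTION = 0.10
sample = frame.groupby(["district", "sex"]).sample(
    frac=SAMPLE_FRACTION, random_state=2019
)

# Check the realized sample size in each stratum.
print(sample.groupby(["district", "sex"]).size())
```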
We conducted rapid pre- and post-training assessments of lead farmers’ and farmers’ knowledge of key post-harvest practices and followed up with an endline survey about these practices and other outcomes of interest six months after the trainings. We found the trainings were, overall, successful in transferring knowledge to both lead farmers and farmers, but this knowledge transfer mostly did not translate into changes in practice within six months.