Feedback from enumerators who switched from in-person to phone surveys during COVID-19 offers valuable insights for strengthening phone surveys.
A phone-charging shop in Kiryandongo owned by a baseline enumerator. ©IDinsight/Heather E Lanthorn, February 2020.
When IDinsight was hired by GiveDirectly to assess how cash transfers impacted South Sudanese refugees in Uganda, the team did not anticipate COVID-19 disrupting the project halfway through. But in March 2020, everything switched gears. Although the endline for the impact evaluation was months off, we wanted to understand more immediately how the pandemic was affecting households and whether cash transfers disbursed prior to the pandemic had made a difference in people’s lives. With support from the Elrha Research for Health in Humanitarian Crises (R2HC) funding program, IDinsight conducted a quantitative phone survey, consisting of 1,900 interviews over three rounds, exploring the effects of cash transfers on health and household welfare outcomes during the COVID-19 pandemic.
We were able to quickly mobilize a subset of the team who had conducted an in-person baseline from September to November 2019. But we needed to shift several tactics to make a phone-based survey work for a team used to face-to-face interviews. In this post, with heavy input from our enumeration team, we reflect on how to set projects up for success.
The setting: Kiryandongo refugee settlement, Uganda
Uganda is among the five countries hosting the most refugees in the world. In our work, we focus on households registered in the Kiryandongo refugee settlement, initially established in the 1970s and located in Uganda’s Western Region. This ~10,000-household settlement is situated on formerly cleared ranch land, adjacent to Kiryandongo District’s commercial center, Bweyale. Most refugees registered in Kiryandongo are from South Sudan. They are predominantly ethnically Nuer, Dinka, or Acholi/Luo, though over ten languages are spoken; our team officially worked in six languages, and team members occasionally had to spot-translate for others.
Houses in the settlement, where our research was focused, are not connected to grid electricity, which makes phone surveys difficult. At baseline (September to November 2019), about half of interviewed households had at least one solar panel. Many rely on local charging stations, which have generators, to power up their mobile phones. Rain often disrupts the available power, creating several challenges for reaching people by phone during the rainy season. When it rains, power options may not work for a day or more, leaving respondents and enumerators without any means to charge their phones and power banks.
During data collection, some respondents were also hesitant to pick up calls while it was raining due to a myth that mobile phones attract lightning. Even without the rain challenges, most households also experience poor network connectivity since the most popular mobile network provider has not built a network tower in the settlement.
Given that our enumeration team are themselves mostly refugees, these were the conditions in which they were living and working as well.
For our earlier baseline, enumerators attended in-person training in September 2019. For the phone-based surveys during COVID-19, by contrast, we could only work with enumerators who had a smartphone, headphones or earphones, and access to a phone-charging facility. This allowed them to call respondents, tune into remote training via Google Meet, and engage in group conversation on WhatsApp. Enumerators also administered and captured the surveys using SurveyCTO, which they downloaded onto their phones.
We worked with a team of 12 extremely talented enumerators, eight of whom are South Sudanese refugees, with most registered in and living in or near Kiryandongo settlement. Nine are male, and all are between 23 and 35 years of age. All enumerators had completed their secondary school education, and a few had diploma qualifications or undergraduate degrees. Everyone we invited to apply was immediately available to work for us, which speaks to the limited job opportunities in Kiryandongo, during the pandemic certainly, but also more generally.
While our goal was to have the enumeration team “work from home,” in reality, this wasn’t entirely feasible. Six of the enumerators did not have electricity in their homes and had to leave their phones to charge overnight at a shopping centre in the settlement. Most enumerators also preferred to make their calls outside their houses, away from distractions and where network connectivity was better. A few had to make calls from the settlement’s main centre, where the network is stable and it is easier to find a place to charge a phone.
Typically, enumerators worked eight hours per day, five days a week, starting at 7:00 am and taking breaks during the day as needed. At respondents’ request, they would also schedule calls as late as 9:00 pm and on weekends.
As part of the questionnaire, we provided detailed lead-in scripts and probes to help enumerators introduce themselves and the survey and to build rapport during the interviews. Additionally, respondents were familiar with our team of enumerators, having interacted with them during baseline. Nevertheless, enumerators found it easier to build confidence and trust with respondents during face-to-face interviews: unlike in in-person surveys, they could not observe respondents’ visual non-verbal cues over the phone.
In this study, we had the advantage of having previously met all of our respondents and could reference this earlier meeting (date, enumerator name, and our logo and vests) when we called. Even with this in-person point of connection, cold calls are tough. We suggest incorporating conversation-builder scripts at the start of surveys to build trust with respondents, and allowing time during training to practice the script until it feels natural. For in-person interviews, we often focus training on visual non-verbal cues; for phone surveys, the focus needs to be on auditory non-verbals, such as long pauses, sighs, or hesitation. During training, develop and practice techniques to respond to each.
At times, respondents had to take calls around other people, making them less comfortable providing detailed responses. There were also distractions affecting the calls. One enumerator commented, “Some respondents lose concentration because of the gap created by communicating over the phone. Some tend to pay more attention to other things that they were already doing before the phone calls, such as cooking, while talking over the phone. This compromises concentration on the side of the respondents.”
To address social desirability bias, we framed all questions to have neutral answer options and well-elaborated scales. We also scripted detailed introductions to sensitive questions, reassuring respondents of the anonymity and confidentiality of their responses.
To help increase respondent attentiveness, teams can employ reflective listening by repeating back responses, offering verbal ‘nods’ such as “ahs,” showing empathy and friendliness, and alerting the respondent to the time left to complete the survey if attention drifts. Enumerators can also ask respondents if they need a moment to complete a task so they can focus.
We tried to reach all respondents who had provided their phone numbers during baseline. However, enumerators faced many challenges in reaching them: some respondents kept their phones off for long periods, including during trips back to South Sudan, while others had their phones off for shorter periods when it was not possible to charge them. Some were reachable but failed to honour appointments or rescheduled several times, while others lived in poorly connected locations. To improve our response rates, we implemented a callback protocol: respondents were called at least seven times, on different days and in different weeks, to reach those who kept their phones off over long periods. Respondents could schedule calls at times convenient to them, and enumerators recorded why households did not answer and set up appointments for callbacks.*
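For teams formalizing a similar callback protocol, the sketch below shows one way to track attempts and decide who is due for a callback. It is a minimal illustration in Python; the field names and outcome codes are hypothetical, not taken from our instruments, and the rules (retry on a different day, a minimum of seven attempts, respondent-requested appointments) follow the description above.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional, Tuple

MIN_ATTEMPTS = 7  # call each unreached respondent at least seven times


@dataclass
class CallRecord:
    """Call-attempt log for one respondent under a callback protocol."""
    respondent_id: str
    # Each attempt: (day called, outcome code, reason the call failed, if any)
    attempts: List[Tuple[date, str, str]] = field(default_factory=list)
    appointment: Optional[date] = None  # a callback time the respondent requested
    completed: bool = False

    def record_attempt(self, day: date, outcome: str, reason: str = "") -> None:
        """Log an attempt, noting why the household did not answer."""
        self.attempts.append((day, outcome, reason))
        if outcome == "completed":
            self.completed = True

    def due_for_callback(self, today: date) -> bool:
        """Retry on a different day until the minimum attempts are exhausted,
        unless a respondent-requested appointment is due."""
        if self.completed:
            return False
        if self.appointment is not None:
            return self.appointment <= today
        tried_today = any(day == today for day, _, _ in self.attempts)
        return not tried_today and len(self.attempts) < MIN_ATTEMPTS
```

In practice, a day’s call list is simply the records for which `due_for_callback(date.today())` is true, sorted so that scheduled appointments come first.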
In Table 1, we show our response rates across each of our three survey rounds, including how many people we reached consistently. In each survey round, we called all 1,060 respondents for whom we had phone numbers.
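As a toy illustration of the arithmetic behind such a table, the snippet below computes per-round response rates and the share of respondents reached in all three rounds. The records are invented for the example; in the study, all 1,060 respondents with phone numbers were called in every round.

```python
# Map respondent ID -> set of survey rounds (1-3) in which they completed an
# interview. These records are invented, purely for illustration.
completed_rounds = {
    "R001": {1, 2, 3},
    "R002": {1, 3},
    "R003": set(),
    "R004": {1, 2, 3},
}

total_called = len(completed_rounds)  # everyone is called in every round

for rnd in (1, 2, 3):
    reached = sum(1 for done in completed_rounds.values() if rnd in done)
    print(f"Round {rnd}: {reached}/{total_called} = {reached / total_called:.0%}")

# "Reached consistently" = completed the interview in all three rounds.
consistent = sum(1 for done in completed_rounds.values() if done == {1, 2, 3})
print(f"All rounds: {consistent}/{total_called} = {consistent / total_called:.0%}")
```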
Fortunately, at baseline, we had asked respondents for alternate numbers at which they could be reached; this helped when primary numbers had been changed or disconnected. If we had known at baseline that we would be tracking people down by phone rather than visiting in person, we might have asked for different or additional ways of contacting them. This is a lesson going forward: set a study up to succeed under different, and unexpected, means of follow-up.
When providing feedback, two enumerators pointed out that making phone calls over long periods was often tiring. One said, “It is hard to talk to many respondents over the phone every day for a protracted number of days.” Another noted that “Phone calls irritate the ears… There are some days, that I experience headache, itchy ears.”
To support enumerators, we allowed them flexibility on the time of day they would make calls, so they could take breaks as needed and call when most respondents were available. We also provided sick days and health insurance in case any team member fell ill. To keep the team on track and provide feedback, we held group meetings and one-on-one check-ins by phone. We also had a WhatsApp group where we shared updates.
Teams need to ensure that they are keeping track of their staff’s wellbeing. As part of the training, teams can incorporate guidance on time management, handling stress, and taking breaks. Enumerators suggested that having shorter surveys (ours averaged between 30 and 38 minutes) or spacing out surveys during the day may help ensure that they remain motivated and in good health.
We checked in with our team regularly, including frequent contact through WhatsApp (chat and video messages, to overcome network challenges), daily team calls, and weekly one-on-one check-ins. In each of these, we held space for, and actively solicited, feedback. Still, by also having enumerators complete an end-of-surveying feedback survey, we learned even more. In the future, we will strive for more structured (and written) reflection and feedback during and after data collection. These feedback mechanisms are a powerful tool for ensuring that enumerators have a chance to reflect on the work, and they contribute to more sustainable data collection approaches in the future. Reflecting on the data collection experience, one enumerator said, “I am happy when I see the total number of respondents we have done as few as we are, we were able to achieve a lot.”
Despite the shortcomings highlighted above, there was also feedback that phone surveys save time and make respondents easier to trace. Over three rounds of surveys, we were able to reach over 80 per cent of our sample. Three enumerators noted that phone-based surveys were better because it was easier to trace respondents who had travelled, including back to South Sudan on occasion, and because phone surveys saved the time required to travel to respondents’ locations.
One enumerator noted, “Phone interviews can be done at any time, even when the respondent is away, as long as the respondent is free and willing. Even at night if requested by the respondent whereas in-person interview can not be done beyond 7 pm.”
We also found that phone surveys allowed for cost-effective data collection, since travel costs and field logistics were largely eliminated.
Overall, we had a good experience with our enumerator team, and we hope to work with them again during the endline later this year.
* In this context of low literacy (54 per cent at baseline), we did not experiment with sending advance text messages to foreshadow our calls or confirm appointments, but in other contexts, this approach may further increase response rates.
We would like to express gratitude to our enumerators (Aaron Mabior, Abel Alier, Andrew Magong, Charity Muyuo, Christian Opio, David Riak, Isaac Franco, Jimmy Okello, Mario Malualdeng, Patrick Gatkuoth, Stephen Sokiri, and Susan Acen) for their patience and persistence. This survey’s success is theirs; they conducted more than 1,900 interviews at an average pace of seven surveys per day.
The study would not have been possible without input and collaboration from other IDinsight team members, Daniel Stein and Rico Bergemann; Emmanuel Rukundo of Apata Insights; the GiveDirectly team in Uganda; and contributions from many others, including the Uganda National Council for Science & Technology, the Office of the Prime Minister, and Mildmay Uganda Research and Ethics Committee (MUREC). For preparing this blog post, special thanks to Emily Coppel of IDinsight and Cordelia Lonsdale of Elrha.
This research is funded by Elrha’s Research for Health in Humanitarian Crises (R2HC) program. R2HC is funded by the UK Foreign, Commonwealth and Development Office (FCDO), Wellcome, and the UK National Institute for Health Research (NIHR). Visit elrha.org for more information about Elrha’s work to improve humanitarian outcomes through research, innovation, and partnership.