
3 steps to designing an effective phone survey that reaches respondents

Mitali Roy Mathur 3 April 2020

This post is aimed at research practitioners experienced at administering in-person surveys and interested in beginning phone surveys.

Let’s say that you are in charge of a phone survey. You get a list of phone numbers, sit down in a comfortable chair, and begin making phone calls. Sounds simple enough.

It is unfortunately not that easy. Running surveys over the phone presents challenges that might not initially come to mind, from figuring out why respondents cannot be reached to managing frequent callbacks over time.

During phase I of our experiments, we have focused on the operational challenges of reaching respondents. Our goal is to learn the dos and don'ts of running phone surveys through a series of ongoing rapid pilots, and to understand and address these challenges so we can build a set of efficient methods for remote data collection.

During our first two pilots, we simply sent surveyors a list of phone numbers and asked them to call each number and try to reach respondents. Unfortunately, we were only able to reach around 26 per cent of respondents in the first pilot and 36 per cent in the second. After experimenting further with different calling protocols over the last three months, we now consistently reach 80 per cent of our sample through phone surveys.

In this blog post, you will learn more about the steps, experiments, and findings that have helped us increase our respondent reach. We focus on key factors of phone surveys: the optimal timing, frequency, and duration of calls. It is important to note: we have only been experimenting in India. However, we hope that these steps might help other teams understand best practices in their respective contexts.

Step 1: List the different reasons why phone calls are failing to reach respondents

If respondent reach is a concern, it is important to understand the specific reasons why phone calls are failing. In our first two pilots, in which we gave phone numbers to field officers with limited guidance, we asked surveyors to note the reason a respondent was not reached or why the survey was only partially completed. With this input, we were able to create a list of the possible outcomes of a phone call.

This was a helpful exercise because we learned more about the different geographies and circumstances in which respondents were not reached. From our first two pilots, we learned that many of the phone numbers we called were invalid, many were unreachable because they had not been "topped up", and some respondents were simply not picking up. Since most numbers were valid but we had not been able to reach them, we realized we could address the problem by carefully cleaning our list of phone numbers and calling each respondent multiple times at different points in the day to increase the likelihood of someone answering.

In other contexts, you might find that a high percentage of initially unreachable numbers are never topped up, never have network coverage, do not exist, or are entered incorrectly (if drawing from a third-party list). In those situations, the next steps and solutions will differ from ours, which illustrates the importance of tracking call outcomes.

Step 2: Create a protocol for how and when to call back

Our next natural step to increase reach was figuring out what to do when a number was not reached. It was clear that we needed to call respondents multiple times, but we did not know when or how frequently to call them to maximize respondent availability.

We reviewed the literature on best practices for reaching households and decided to test different protocols for our context. We narrowed our research down to three protocols that differed in the number of calls to be made and the timing of those calls (how many times per day, and at what times of day). In our third phone survey pilot, we randomly assigned our sample to each protocol and asked our field officers to follow the assigned protocol whenever a number was not reached.

After debriefing with our field team and analyzing the survey data, we were able to determine which protocol yielded the highest proportion of respondents reached.

In our most successful protocol (depicted above), we split the day into three slots: morning, afternoon, and evening. If a household was not reached in one slot, the field officer called the respondent back during the subsequent slots, for up to seven attempts in total. This could mean up to three phone calls to the same household per day. Every day, we assigned field officers to a different two-hour calling window within each slot, ensuring field officers were truly attempting to reach the household at different times. If a household was reached but could not talk at that moment, we recorded a date and time when they could be reached again and treated that appointment as the next attempt.
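To make the callback logic concrete, here is a minimal Python sketch of the protocol described above. The three slots, the seven-attempt cap, and the appointment handling come from the post; the function and field names (such as `CallAttempt` and `next_step`) are illustrative, not our actual implementation.

```python
# Minimal sketch of the callback protocol, under the assumptions stated above.
from dataclasses import dataclass
from typing import Optional

SLOTS = ["morning", "afternoon", "evening"]  # three calling slots per day
MAX_ATTEMPTS = 7                             # stop after seven unsuccessful attempts

@dataclass
class CallAttempt:
    attempt_number: int                 # 1-based count of attempts so far
    slot: str                           # slot of the most recent attempt
    appointment: Optional[str] = None   # "YYYY-MM-DD HH:MM" if the respondent set one

def next_step(last: CallAttempt) -> str:
    """Return the next action for a household that has not yet completed the survey."""
    if last.appointment:
        # Respondent was reached but busy: honour the appointment first.
        return f"call at appointment: {last.appointment}"
    if last.attempt_number >= MAX_ATTEMPTS:
        return "exhausted: no further attempts"
    # Otherwise, move to the next slot (wrapping to the next day's morning).
    next_slot = SLOTS[(SLOTS.index(last.slot) + 1) % len(SLOTS)]
    day = "same day" if next_slot != "morning" else "next day"
    return f"attempt {last.attempt_number + 1}: call in the {next_slot} slot ({day})"

print(next_step(CallAttempt(attempt_number=2, slot="evening")))
# -> "attempt 3: call in the morning slot (next day)"
```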

Response rates across the protocols were statistically indistinguishable, but we saw a large jump in the overall response rate once we continued piloting with our preferred protocol. We cannot estimate the direct impact of the callback protocols because this pilot was conducted in a different geography than our first two non-callback pilots: the regions might differ in their underlying response rates. In an upcoming pilot, we will estimate how much clear callback procedures increase (or do not increase) respondent reach by testing our successful protocols in the same geography where we conducted the first two pilots.

Step 3: Build a call tracker

After finalizing our protocol, it became clear that managing calling assignments posed a significant challenge. In a typical in-person survey, surveyors are given an assignment sheet and tasked with finding and surveying households at their own pace, but in phone surveys, surveyors may call the same household multiple times during the same day. Managing multiple households across days, attempts, and time slots is difficult, even when surveyors are well trained on the protocols.

We developed a tracker in Google Sheets that automatically records the attempt, date, and time of each call and outputs the next step dictated by our protocol. This way, instead of scanning a full assignment sheet to figure out which households need to be called back and when (which invites errors in following the protocols), surveyors can filter by date and time of day to see all of the phone numbers they should call at a particular time. For larger teams, or teams whose surveyors do not have access to Google Sheets, project managers can filter the tracker and email out assignments. Surveyors can look at the tracker and immediately know which households still need to be reached, which are completed, which have appointments, and for which respondents all attempts have been exhausted.
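As a rough illustration of that filtering workflow, the snippet below shows how a surveyor's open assignments for a given date and slot might be pulled from a tabular tracker. The column names (`surveyor`, `next_call_date`, `next_call_slot`, `status`) are assumptions standing in for our actual headers, and our production tracker lives in Google Sheets rather than pandas.

```python
# Hypothetical tracker filtering, assuming the column names shown here.
import pandas as pd

tracker = pd.DataFrame([
    {"household_id": 101, "surveyor": "Asha", "next_call_date": "2020-04-03",
     "next_call_slot": "morning", "status": "pending"},
    {"household_id": 102, "surveyor": "Asha", "next_call_date": "2020-04-03",
     "next_call_slot": "evening", "status": "appointment"},
    {"household_id": 103, "surveyor": "Ravi", "next_call_date": "2020-04-03",
     "next_call_slot": "morning", "status": "completed"},
])

# A surveyor (or a project manager preparing emailed assignments) pulls
# only the open households for their name, date, and slot.
todays_calls = tracker[
    (tracker["surveyor"] == "Asha")
    & (tracker["next_call_date"] == "2020-04-03")
    & (tracker["next_call_slot"] == "morning")
    & (tracker["status"] != "completed")
]
print(todays_calls[["household_id", "status"]])
```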

In the early phase of our tracker, surveyors directly entered the date, time, and status of each call into a spreadsheet, which recorded each attempt and returned the next-attempt information. This approach was not scalable because it required a working knowledge of Google Sheets, and it was difficult to ensure that surveyors entered accurate information.

By the sixth pilot, and after many iterations, we had a fully automated tracker. Our trackers take in metadata from SurveyCTO and output the next attempt prescribed by the protocol. They also flag mismatches between the SurveyCTO data and those next steps to ensure that enumerators are abiding by the protocols.

A sample of our survey tracker, condensed

This sample tracker is a simplified version of the tracker we use. Using variables such as the start time and identifying geographic information, we code formulae that fill the tracker's cells in real time as forms are submitted on SurveyCTO. We have carefully automated this process so that the only surveyor inputs pulled from the form into the tracker are "call status" and "survey status." Additional tabs in the tracker summarize survey progress and mismatches between submitted form times and survey protocols. Surveyors can filter the next-steps column by their name to narrow down which households to call and when.
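For readers curious how those formulae behave, here is a hedged sketch of the two derived quantities: the next step implied by the surveyor's "call status" and "survey status" inputs, and a mismatch flag comparing the SurveyCTO submission time against the assigned slot. The status codes and slot boundaries are illustrative assumptions, not the exact values in our tracker.

```python
# Illustrative sketch only; status codes and slot hours are assumed, not ours verbatim.
from datetime import datetime

SLOT_HOURS = {"morning": (9, 12), "afternoon": (12, 16), "evening": (16, 20)}

def derive_next_step(call_status: str, survey_status: str,
                     attempt: int, max_attempts: int = 7) -> str:
    """Translate the two surveyor inputs into the protocol's next step."""
    if survey_status == "completed":
        return "done"
    if call_status == "appointment":
        return "call back at the recorded appointment time"
    if attempt >= max_attempts:
        return "exhausted"
    return "call back in the next slot"

def slot_mismatch(submitted_at: datetime, assigned_slot: str) -> bool:
    """Flag submissions whose timestamp falls outside the assigned slot."""
    start, end = SLOT_HOURS[assigned_slot]
    return not (start <= submitted_at.hour < end)

print(derive_next_step("no_answer", "not_started", attempt=3))  # -> call back in the next slot
print(slot_mismatch(datetime(2020, 4, 3, 14, 30), "morning"))   # -> True (afternoon call, morning assignment)
```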

Due to the scale of our phone surveys, we have been duplicating tracker workbooks such that each district has its own tracker. We can therefore filter the assignment and SurveyCTO data accordingly. Doing so ensures that the workbooks aren’t slowed down by the calculations and can update smoothly.

We provide surveyors without access to (or knowledge of) Google Sheets with a much simpler paper version of the tracker, which they can fill out manually. We have included an endnote in our survey form that tells surveyors what information to record in the tracker and what the next steps are. In these cases, field managers still use the Google Sheets tracker to filter assignments and email each surveyor their assignments for each time slot. Surveyors can then compare their paper tracker against their emailed assignments and follow the appropriate callback protocols.

A well-designed tracker is extremely useful in helping organize and distribute assignments to teams to ensure that call protocols are followed.

Following these three steps helped us significantly increase our respondent reach. We were able to evaluate the context we were working in, determine the best calling protocol for increasing reach, and manage assignments via an automated tracker. In part II of this post, we will focus on operational questions about survey duration and optimal call times.

If you’re experimenting with phone surveys or have feedback about our processes, please comment below or on social media, or reach out to our team, abhilash.biswas@idinsight.org and mitali.mathur@idinsight.org, with your thoughts and suggestions.