IDinsight Manager Alec Lim during data collection in the Philippines ©IDinsight/Jilson Tiu
Short messaging service (SMS, or text-based) and interactive voice response (IVR, or voice-based) surveys promise faster, more affordable data collection than traditional face-to-face methods. But in countries like the Philippines—and in many other low- and middle-income settings—they come with specific challenges. Geographic targeting is difficult, especially when conducting random digit dialing (RDD), which cannot rely on regional area codes to narrow the sampling frame. In addition, mobile signal reliability varies widely across areas. Cost savings also come with design constraints: short message formats, no interviewer support, and limited control over who responds. In our recent study on childhood vaccination barriers for children aged 0–2, we found that success relied less on any one technical innovation and more on thoughtful design grounded in behavior, context, and practical trade-offs. This blog shares what worked, what didn't, and what we'd do differently next time, along with reflections on the trade-offs between cost, quality, and ease of implementation across these survey modalities. This analysis is part of our ongoing effort to find the most cost-effective, right-fit evidence for partners based on the decisions they need to make.
SMS/IVR surveys are phone-based tools for gathering information remotely. They are especially useful in contexts where in-person data collection is expensive, time-sensitive, unsafe, or logistically difficult. Mobile surveys are also quicker and cheaper: in our case, SMS/IVR interviews cost about one-fourth of what in-person surveys cost per completed response. That said, while remote modes can offer a cost-effective and scalable way to gather certain types of data, they are not a full substitute for in-person surveys. Choosing this method requires awareness of its limitations in three main areas: who you can reach, what you can trust in the data, and how effort should be prioritized to run the survey well.
The literature shows that SMS/IVR response rates tend to be lower and skewed toward younger, more educated caregivers. This method also limits the ability to reach a representative sample of specific subpopulations without a verified list or sampling frame that contains phone numbers. For in-person surveys, it is possible to create a sampling frame through household listing or by using existing administrative lists. In our case, the only reason we were able to reach caregivers of children aged 0–2 was that we worked with community health workers, who maintain updated master lists of children by age group. Even then, response rates were much lower than for in-person surveys: 42% for SMS/IVR compared to 75% for face-to-face interviews. These limitations make appropriate weighting essential to ensure that findings reflect the intended population.
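To illustrate what that weighting can look like in practice, here is a minimal post-stratification sketch in Python. The column names, cells, and population shares are hypothetical placeholders, not our actual weighting scheme:

```python
import pandas as pd

# Hypothetical SMS/IVR responses, skewed toward younger, more educated caregivers
responses = pd.DataFrame({
    "caregiver_age_group": ["18-29", "18-29", "30-44", "18-29", "30-44", "45+"],
    "education": ["secondary+", "secondary+", "secondary+", "primary", "primary", "primary"],
})

# Hypothetical population shares for the same cells (e.g., from census data or BHW master lists)
population_shares = {
    ("18-29", "secondary+"): 0.20,
    ("18-29", "primary"):    0.15,
    ("30-44", "secondary+"): 0.25,
    ("30-44", "primary"):    0.25,
    ("45+",   "primary"):    0.15,
}

# Post-stratification: cell weight = population share / sample share,
# so over-represented groups (younger, more educated) count for less in estimates
sample_shares = responses.groupby(["caregiver_age_group", "education"]).size() / len(responses)
responses["weight"] = responses.apply(
    lambda r: population_shares[(r["caregiver_age_group"], r["education"])]
    / sample_shares[(r["caregiver_age_group"], r["education"])],
    axis=1,
)
print(responses)
```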
In-person surveys allow for vaccination card checks, probing, and real-time correction of errors. Without these supports, SMS/IVR results were more prone to misreporting, especially when incentives were offered. There were fewer built-in validation checks in the system: no live enumerator to clarify questions and no way to confirm identity, particularly in areas where mobile phone sharing is common. This makes mobile surveys less reliable for sensitive or complex indicators like confirmed zero-dose prevalence or true vaccination intent, which benefit from in-person validation.
SMS/IVR surveys require more thoughtful survey design. Limited character counts, a high chance of respondent dropout, and potential fraud demand careful planning, real-time monitoring, and post-survey cleaning to ensure data quality.
The lessons that follow are about easing this third consideration—how to prioritize thoughtful design—for future SMS/IVR surveys. These tools skew toward certain demographics and are not appropriate for every indicator. But with the right behavioral design, we can make SMS/IVR surveys work better for the data they can collect well.
To reach caregivers of young children via mobile surveys, we partnered with Barangay Health Workers (BHWs), local health volunteers who serve as trusted links between the health system and their communities. Their involvement was essential in identifying eligible households, sharing survey instructions, and encouraging participation.
The process followed a simple flow, with each step intentionally designed to address common SMS/IVR challenges such as limited engagement, dropout risk, and misreporting.
This approach helped us reach nearly 3,000 caregivers across two regions, with over 2,100 valid responses. Clear communication, ongoing support, and the trust BHWs already had in their communities were key to this process.
Behavioral insights shaped how we approached survey design, from message clarity to incentive timing. The lessons below reflect both what we learned in the field and the human-centered, behavioral principles that informed our decisions from study design to survey execution.
During piloting, we tested different incentive timings to assess their impact on BHW motivation and performance. In one group, BHWs received the full incentive upfront as part of the initial briefing. In the other, we split the incentive—half was given during the acknowledgment process and the remainder provided as mobile load after recruitment. We found that BHWs were more engaged and proactive in reaching out to households when they received the full incentive at the outset. In contrast, in barangays where incentives were delayed or given in tranches, both BHW engagement and respondent opt-in rates declined noticeably.
SMS surveys impose hard limits: 160 characters per message, no follow-up probing, and no enumerator to explain things. In early pilots, only 17 of 26 respondents (65%) who consented completed the survey before any manual intervention. Manual intervention involved resubscribing a respondent if they had either dropped off in the middle of the survey or been disqualified by the system for having too many invalid responses. We saw this pattern particularly with text-based answers that were prone to errors. After simplifying the flow and shifting to numeric responses (e.g., 1 = Yes, 2 = No), completion improved significantly: 84% (2,500 of 2,987) during full rollout, and 72% (2,153 of 2,987) after data cleaning.
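As a rough illustration of the difference (hypothetical code, not the survey platform's actual logic): free-text answers need fragile string matching, while numeric codes resolve with a single lookup.

```python
# Free-text design: fragile matching of typed answers (illustrative, not our actual parser)
def parse_free_text(reply):
    normalized = reply.strip().lower()
    if normalized in {"yes", "oo", "y"}:      # "oo" = yes in Filipino
        return "Yes"
    if normalized in {"no", "hindi", "n"}:    # "hindi" = no in Filipino
        return "No"
    return None  # typos and unexpected phrasings land here and count as invalid responses

# Numeric design: a single keypress maps unambiguously to an answer
NUMERIC_CODES = {"1": "Yes", "2": "No"}

def parse_numeric(reply):
    return NUMERIC_CODES.get(reply.strip())
```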
We embedded tailored prompts into the survey to help respondents get back on track after making errors. Instead of generic error messages (e.g., “This is an invalid response, please select from the options provided”), we used custom phrases for each question like “Please press 1 for None, 2 for Some, 3 for All” after an invalid response. These coaching nudges helped reduce repeat mistakes and contributed to the improvement in completion rates mentioned above.
This was especially important during the opt-in and consent stages, where confusion caused the largest drop-offs. For future surveys, well-designed prompts can help guide users through confusion and keep them from dropping off, especially when no live support is available.
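Below is a sketch of how such question-specific coaching prompts can be wired up. Only the “Please press 1 for None, 2 for Some, 3 for All” wording comes from our survey; the question IDs and other prompts are illustrative.

```python
# Question-specific coaching prompts shown after an invalid reply.
# Only the "vaccine_doses" wording comes from our survey; the IDs and other prompts are illustrative.
COACHING_PROMPTS = {
    "consent":       "Please press 1 to agree to take part, or 2 to decline.",
    "child_age":     "Please press 1 if the child is under 1 year old, or 2 if 1 to 2 years old.",
    "vaccine_doses": "Please press 1 for None, 2 for Some, 3 for All.",
}
GENERIC_PROMPT = "This is an invalid response, please select from the options provided."

def reprompt(question_id):
    """Return the coaching prompt for a question, falling back to the generic message."""
    return COACHING_PROMPTS.get(question_id, GENERIC_PROMPT)
```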
The nature of SMS/IVR surveys limits how much we can validate respondent eligibility in real time. Because this mode relies on self-administered responses, we had to depend on respondents to accurately report whether they were adult caregivers of children aged 0–2. During our routine data quality checks, we began noticing discrepancies in patterns of responses. To investigate further, we implemented callback interviews. This step, while not common in SMS/IVR studies, allowed us to validate a portion of the sample. Across two rounds, we conducted nearly 400 callback interviews to assess self-reported eligibility and removed around 16% of responses based on these checks.
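The sketch below shows, with hypothetical column names and thresholds, the kind of logic we mean: flag suspicious responses for a callback interview, then drop the ones confirmed ineligible.

```python
import pandas as pd

# Hypothetical response data; column names and values are illustrative, not our actual schema
responses = pd.DataFrame({
    "respondent_id": [101, 102, 103, 104],
    "reported_child_age_months": [14, 3, 40, 20],
    "completion_seconds": [310, 45, 280, 500],
})

# Flag suspicious patterns for a callback interview: implausibly fast completion,
# or a reported child age outside an illustrative 0-35 month window for "children aged 0-2"
flags = (responses["completion_seconds"] < 60) | (responses["reported_child_age_months"] > 35)
callback_sample = responses.loc[flags, "respondent_id"].tolist()

# IDs confirmed ineligible during the callback interviews (hypothetical outcome)
confirmed_ineligible = {103}

clean = responses[~responses["respondent_id"].isin(confirmed_ineligible)]
print(f"Removed {len(responses) - len(clean)} of {len(responses)} responses after validation")
```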
The findings confirmed that a significant number of participants either did not have a child in the target age range or were not adults themselves. These misreports were likely driven by the promise of an incentive and the fact that self-administered mobile surveys offer no real-time identity verification. While such follow-up is rare for remote survey modalities, it gave us a more accurate view of our sample and improved the integrity of our dataset.
Survey responses uncovered real barriers to vaccination, like supply issues and costs of getting to the facility. SMS/IVR helped surface these insights in a way that was faster and less resource-heavy than traditional surveys. But it also required more attention to flow, clarity, and incentives than we expected.
Summarizing our key lessons from above: pay BHW incentives in full upfront rather than in tranches, keep questions short and numeric, build question-specific coaching prompts into the survey flow, and validate eligibility with callback interviews when response patterns look suspicious.
The broader takeaway? We should not treat SMS/IVR surveys as a plug-and-play solution. The smallest design tweaks—from how an incentive is timed, to how a question is phrased—can shape not just participation, but data quality and whether the experience truly respects the people we are trying to reach.
We hope these lessons help others planning SMS/IVR surveys to anticipate the pitfalls and make smarter, more human-centered design decisions.