Introduction
IDinsight is currently working with NITI Aayog, the Government of India’s embedded development policy think-tank, to monitor a range of maternal and child health (MCH) outcomes in the country’s 27 highest-need districts. We are measuring these outcomes as part of a larger, household-level study. To identify our survey sample we used voter rolls as a frame, but the subset of households in our sample with a child is not large enough to give precise indicator estimates. We needed to find a way to increase the sample of mothers and children without introducing bias. Simply asking enumerators to wander the village until they found enough pregnant and lactating women (PLWs) to fulfil our sampling quotas wasn’t an option. This blog post shares how we addressed this challenge and provides insight to others interested in using public datasets to rigorously evaluate outcomes — even when these datasets fall short.
Fortunately, in India, frontline community health workers called ASHAs keep registers of all the PLWs in localized areas, or catchments, that are, on average, about the size of a polling station or a village. Because ASHA registries are tied to valuable service provision and access to government benefits, they tend to be quite comprehensive. By cross-referencing them against the voter rolls in places where we had membership data for both, we estimated that they cover about 75 per cent of the population. However, inclusion on these lists is likely correlated with health outcomes, so the excluded 25 per cent may have very different outcomes. For example, a key health outcome indicator is whether a pregnant woman is registered for antenatal care. In most cases, the ASHA is responsible for doing these registrations, so mothers on the list are more likely to be registered.
This approach, known as multiple-frame sampling (we used two frames, but it is possible to use more), allowed us to use these lists while correcting for any bias under minimal assumptions. By determining whether every PLW we sampled was on an ASHA list, a voter roll, or both, we could define the probability that each individual was included in the sample. The next post in this series will explain how we did this to measure MCH outcomes. In this post, we illustrate how to do dual-frame sampling effectively using the hypothetical example of a phone survey that calls both landlines and mobiles.
Note that each observation is drawn from one and only one frame. Dual-frame surveys then require two key assumptions: first, that for every sampled unit we can determine whether it also belongs to the other frame; and second, that the sample drawn from each frame is a probability sample.
The sampling frame membership assumption can be challenging because it means each respondent has to be cross-referenced against the other frame. In our case, we had to determine whether every mother we sampled from the voter frame was on an ASHA list and whether each PLW drawn from the ASHA frame also appeared on a voter roll. This was feasible because ASHA lists are kept locally, but cross-referencing against other kinds of administrative records may be difficult or impossible.
The probability sample assumption is also helpful to remember since it rules out combining certain types of samples. You can’t magically make a non-probability sample such as a snowball sample rigorous by combining it with an area frame survey.
The classic example of a dual-frame survey is a phone survey that combines a cheap-to-call landline sample with an expensive-to-call mobile phone sample.¹
Assume we are interested in measuring the percentage of households with phones where at least one household member takes at least one selfie daily. Random digit dialling (RDD) of landline numbers is much cheaper than calling mobile numbers, but only a subset of the population has a landline. By applying an appropriate weighting scheme, we can combine the biased landline frame with a mobile frame to achieve an unbiased estimate for the whole population of mobile and landline phone numbers. Then, it is simply a matter of sampling more from the cheaper landline frame (analogous to the ASHA list in our application) and less from the expensive mobile frame.
Here’s how it works: Assume that each household with a mobile has a 50 per cent probability of having a member that takes selfies daily, and that no household with only a landline has a member that takes selfies.² Assume that in a population of 1000 households, 400 have only mobiles, 400 have both mobiles and landlines, and 200 have only landlines (Table 1).
While a sample drawn from either the mobile or the landline frame alone would exclude a different portion of the population, we will show that a dual-frame design can produce an unbiased estimate across the population of people with either kind of phone while reducing the number of mobile numbers that need to be dialled.
To build unbiased estimates we need to quantify the size of the overlap between frames. Luckily, it is easy to ask a household you reach via landline whether or not they have a mobile, and vice versa. We can use this information to define four subpopulations: mobile-only (MO), mobile sample with a landline (ML), landline sample with a mobile (LM), and landline-only (LO) (Figure 1).
Figure 1: Subpopulations
Since we assume all households with mobiles have a 50 per cent probability of taking selfies and households without mobiles take no selfies, we expect a selfie prevalence of 50 per cent (400/800) in the mobile sample and 33 per cent (200/600) in the landline sample. The true selfie prevalence in our population is 40 per cent (400/1000). To estimate the population prevalence, we need to re-weight those households with both landlines and mobiles to reflect that they appear twice in our data, once from the landline frame and once from the mobile frame. If we ignore the duplication and naively calculate selfie prevalence from all 1400 observations, we get roughly 43 per cent (600/1400), which is wrong. Without weighting, our estimate is based on too many observations from households with mobiles: 1200/1400 = 86 per cent of the sample, compared with the actual 800/1000 = 80 per cent in the population.
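To make the arithmetic concrete, here is a minimal sketch (in Python) of the calculations above. The household counts and selfie probabilities are the hypothetical values from Table 1, not survey data.

```python
# Hypothetical population from Table 1: 400 mobile-only, 400 with both, 200 landline-only.
mobile_only, both, landline_only = 400, 400, 200
p_selfie_with_mobile = 0.5   # households with a mobile take selfies with probability 0.5;
                             # households with only a landline take none

mobile_frame = mobile_only + both        # 800 households reachable by mobile
landline_frame = landline_only + both    # 600 households reachable by landline

# Expected number of selfie-taking households reachable through each frame
selfie_in_mobile_frame = p_selfie_with_mobile * mobile_frame   # 400
selfie_in_landline_frame = p_selfie_with_mobile * both         # 200 (landline-only take none)

prev_mobile = selfie_in_mobile_frame / mobile_frame            # 0.50
prev_landline = selfie_in_landline_frame / landline_frame      # ~0.33
prev_true = selfie_in_mobile_frame / 1000                      # 0.40

# Naive pooled estimate that double-counts the 400 dual-phone households
naive = (selfie_in_mobile_frame + selfie_in_landline_frame) / (mobile_frame + landline_frame)

print(round(prev_mobile, 2), round(prev_landline, 2), prev_true, round(naive, 2))
# 0.5 0.33 0.4 0.43
```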
From a dual-frame perspective, households belonging to one frame but not both are unweighted; in a normal sample survey, each frame's sampling weights are then multiplied on top of the dual-frame weights. For households in the overlapping frames, we apply dual-frame weights that sum to one so that these households are not overcounted. The adjustment applied to observations present in both frames is defined on the [0,1] interval and, by convention, is denoted 𝜃: observations in the ML subpopulation take the weight 𝜃, while observations in the LM subpopulation take the weight (1 − 𝜃). For example, if we gave landline households with mobiles a weight of zero and mobile households with landlines a weight of one (i.e. 𝜃 = 1), the weighted sample would have 800 observations from households with mobiles (all from the mobile frame) and 200 landline-only households, just like the population. If we instead gave a weight of one-half to all households with both a landline and a mobile (i.e. 𝜃 = 0.5) and a weight of one to the remaining households, the weighted sample would similarly have 800 observations from households with a mobile (600 from the mobile frame and 200 from the landline frame) and 200 landline-only households, again matching the population. As long as 𝜃 lies between 0 and 1, its value does not affect the point estimate: although frame membership (e.g. landline or mobile) correlates with outcomes, households in both frames have the same outcomes regardless of which frame they were drawn from (Table 2).
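A small sketch makes the invariance easy to check: using the hypothetical subpopulation counts and expected selfie rates from above, the dual-frame weighted estimate comes out to 40 per cent for any value of 𝜃 between 0 and 1.

```python
# Dual-frame weighted estimate using the hypothetical expected values above.
# MO and LO households keep a weight of 1; ML households take theta, LM households (1 - theta).
def dual_frame_estimate(theta):
    counts = {"MO": 400, "ML": 400, "LM": 400, "LO": 200}       # observations per subpopulation
    selfie_rate = {"MO": 0.5, "ML": 0.5, "LM": 0.5, "LO": 0.0}  # expected prevalence in each
    weight = {"MO": 1.0, "ML": theta, "LM": 1.0 - theta, "LO": 1.0}

    weighted_n = sum(counts[g] * weight[g] for g in counts)                        # always 1000
    weighted_selfies = sum(counts[g] * weight[g] * selfie_rate[g] for g in counts)
    return weighted_selfies / weighted_n

for theta in (0.0, 0.25, 0.5, 1.0):
    print(theta, dual_frame_estimate(theta))   # 0.4 for every value of theta
```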
However, the choice of 𝜃 does affect the variance of the estimate. Since the landline and mobile samples are independent, the variance of the combined estimate is simply the sum of the variances of the contributions from each sampling frame. 𝜃 should reflect the relative precision of the estimators derived from the two sampling frames (Wolter et al. 2015). If we assume the variance is identical for dual-frame respondents regardless of frame, the optimal 𝜃 corresponds to the relative sample sizes. Thus, variance is minimized at 𝜃 = 0.5 if we use all 1400 observations in the example, and at 𝜃 = 1/3 if we sample only 50 per cent of mobile respondents (Table 2). Note that the dual-frame weights allow us to take a smaller sample from the expensive mobile frame and still produce an unbiased estimate. If we relax the assumption of equal variances, there is a large literature on optimizing 𝜃 (Lohr 2009). Given the relatively small effect of 𝜃 on variance at large sample sizes, however, researchers often simply fix 𝜃 = 0.5.
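To see why the optimal 𝜃 tracks relative sample sizes, note that the overlap-domain estimate is a weighted average of the ML and LM subsample means, with weights 𝜃 and (1 − 𝜃). Because these subsamples are independent and, by assumption, equally variable per observation, the variance of that average is minimized when 𝜃 equals the ML share of the combined overlap sample, n_ML / (n_ML + n_LM). The short sketch below checks this numerically for the two scenarios in the example.

```python
# Variance of the overlap-domain estimate theta * ybar_ML + (1 - theta) * ybar_LM,
# assuming independent frame samples with the same per-observation variance sigma^2.
def overlap_variance(theta, n_ml, n_lm, sigma2=1.0):
    return theta**2 * sigma2 / n_ml + (1 - theta)**2 * sigma2 / n_lm

for n_ml, n_lm in [(400, 400), (200, 400)]:   # full samples; then half of the mobile frame
    grid = [t / 1000 for t in range(1001)]
    best = min(grid, key=lambda t: overlap_variance(t, n_ml, n_lm))
    print(n_ml, n_lm, round(best, 3), round(n_ml / (n_ml + n_lm), 3))
# 400 400 0.5 0.5
# 200 400 0.333 0.333
```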
As noted above, dual-frame adjustments can be applied on top of existing sampling weights. When sampling, we can think of the dual-frame adjustment as reflecting the fact that an observation could have been counted twice, rather than that it actually was included twice. Note that without the dual-frame adjustment, the sum of all observations' sampling weights would equal the total population covered by the two frames plus the overlapping population counted a second time (in the phone example, the weights would sum to 1400 rather than 1000). In practice, these properties are useful for gut-checking the accuracy of a weighting scheme. Using different sampling fractions and weights for each sampling frame also allows the survey designer to optimize the sampling fraction in each frame with respect to the cost of data collection. As a rough rule, each frame's sample size should be proportional to the size of its underlying population and inversely proportional to the square root of its per-unit data-collection cost.
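For illustration only, here is one way such a rule of thumb could be operationalized. The cost figures, the allocate helper, and the assumption of roughly equal variances across frames are all hypothetical rather than taken from the study.

```python
import math

def allocate(total_n, frames):
    """Rough allocation: sample each frame in proportion to its population size
    divided by the square root of its per-interview cost (assumes equal variances).
    frames: dict of name -> (population_size, cost_per_interview)."""
    score = {f: pop / math.sqrt(cost) for f, (pop, cost) in frames.items()}
    total = sum(score.values())
    return {f: round(total_n * s / total) for f, s in score.items()}

# Hypothetical costs: a landline call costs 1 unit, a mobile call costs 4 units.
print(allocate(700, {"landline": (600, 1.0), "mobile": (800, 4.0)}))
# -> {'landline': 420, 'mobile': 280}
```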
Dual-frame designs can help organizations collect more data at a lower cost. A particularly exciting application for dual-frame surveys could be combining phone and in-person sampling frames. For example, the recent National Survey of America’s Families, which focuses on the lives of low-income Americans, combined random digit dialling with an area frame. While random digit dialling may not be feasible in many low- and middle-income country contexts, organizations conducting impact evaluations can leverage phone numbers collected by implementers or by surveyors during a previous data collection round to build a phone frame.
The next post in this series outlines the Data on Demand team’s experience with dual-frame surveys in the latest score-card round.