
Save a life or receive cash? Which do recipients want?

2 December 2019

New research conducted by IDinsight, funded by GiveWell, explores the preferences and values of individuals and communities in Ghana and Kenya to inform funding allocations

An enumerator interviews a survey participant of the beneficiary preferences project supported by GiveWell.

IDinsight has worked with GiveWell to measure the values and preferences of individuals and communities in order to better allocate development funding. This is one of a series of blog posts on our study. We are interested in discussing these results and are open to collaboration on similar future studies.

Is it better to support a program that increases household income or one that directly saves lives? Decision-makers in international development, from foundations to governments, frequently weigh complex trade-offs like these between different types of “good” outcomes. Governments in low- and middle-income countries must decide how to allocate funding across different ministries, and funders must decide which sectors, organisations, and interventions to prioritise. The preferences of individuals from low-income communities — those on the receiving end of the hundreds of billions of dollars of development aid ($153 billion from the OECD alone)1 — are rarely brought into these decisions.

Recently, there has been some movement by international development actors towards more participatory methods and programs that aim to capture program recipients’ preferences for specific policies or initiatives.2 However, we have seen only a few attempts to capture preferences for the different “good” outcomes and associated moral views that are required to inform higher-level trade-offs (for example, deciding between programs that save lives and programs that increase income).

Over the past two years, IDinsight has partnered with GiveWell to identify methods that elicit people’s preferences in low-income communities and to collect data using the best of these methods. This is directly relevant to GiveWell’s decision-making process, which compares the cost-effectiveness (or amount of “good” done per dollar spent) of charities working in different areas (e.g. health interventions that save lives and cash transfers that increase income). Prior to this research, GiveWell staff considered a variety of factors when making these trade-offs, but no data existed on how recipients themselves would make them. This blog post shares findings from our recent research collecting data on individual and community preferences in Ghana and Kenya, and details our methodology and the challenges with some of these approaches.

During 2018 we conducted extensive piloting in rural Kenya, testing more than 10 different approaches to capturing preferences. The aim was to identify and develop methods and questions that respondents could understand and that would capture reliable data of the highest relevance to GiveWell’s decision-making.3 In 2019 we scaled the most reliable methods, interviewing ~2,000 respondents from low-income households in Kenya (Migori and Kilifi Counties) and Ghana (Jirapa and Karaga Districts).

Study Results

In synthesising the results, we place the greatest weight on findings that held across multiple methods (listed in the following section). There are a number of reasons, discussed in the limitations section, why the central estimates from this study warrant further exploration. However, we did find that the direction and range of our main findings were relatively consistent across all approaches, which gives us some confidence that we have meaningfully captured preferences. We found that:

  • Individuals place a higher value on life than is predicted by extrapolations from high-income countries, and than was reflected in GiveWell’s previous moral weights. Across our primary and secondary methods, we have three distinct estimates of the value of averting the death of a child under five years old. The results of all three fall in the range of $38k–$115k.4 Wherever a ‘true’ estimate might fall, this range is consistently higher than GiveWell’s previous median moral weight of $14k, suggesting that placing more value on averting deaths would better reflect local perspectives. Following recent recommendations on how to extrapolate the value of a statistical life (VSL) from data available in high-income countries suggests a range of $3k–$69k for this population, placing our estimates at the upper end of this range5 (a simple sketch of this style of extrapolation follows this list).
  • Individuals consistently place a higher value on young children relative to older children and adults. The exact ratio of the value of averting the death of a child under 5 versus a child over 5 is relatively imprecise (1.1–1.9 based on the primary and secondary methods across this study). However, the pattern of placing a high value on children under 5 has been consistent — we have seen this result across every method applied in 2019 and in the 2018 piloting. This is in contrast to GiveWell’s previous moral weights, which placed more value on individuals over 5 (in part due to the greater economic contribution made by this age group). This suggests that to better reflect the preferences of recipients, we must place more value on interventions that avert deaths of children under 5.
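To make the extrapolation comparison above concrete, here is a minimal sketch of the standard income-elasticity approach (in the spirit of the Robinson et al. 2019 guidelines cited in the footnotes): a benchmark VSL from a high-income country is scaled by the ratio of incomes raised to an assumed elasticity. All of the input values below are illustrative placeholders, not the study's actual inputs; the study's own extrapolation is described in the full report.

```python
# Illustrative income-based VSL extrapolation. All inputs are hypothetical
# placeholders chosen only to show the mechanics of the calculation.

def extrapolate_vsl(vsl_base, income_base, income_target, elasticity):
    """Scale a benchmark VSL by the income ratio raised to an elasticity."""
    return vsl_base * (income_target / income_base) ** elasticity

vsl_benchmark = 9_400_000   # assumed high-income benchmark VSL, in USD
income_benchmark = 60_000   # assumed benchmark income per capita, in USD
income_target = 1_500       # assumed survey-population income, in USD

for elasticity in (1.0, 1.5):
    vsl = extrapolate_vsl(vsl_benchmark, income_benchmark, income_target, elasticity)
    print(f"elasticity {elasticity}: extrapolated VSL of roughly ${vsl:,.0f}")
```

Because incomes in the survey population are far below the benchmark, the assumed elasticity drives the result heavily, which is one reason extrapolated ranges such as the $3k–$69k cited above can be so wide.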

Qualitative interviews with respondents give us further confidence in these findings and have been crucial to contextualise preferences and validate results. We integrated qualitative questions into every stage of this research, including: 1) brief qualitative questions in the main survey to validate quantitative answers, 2) focus groups to discuss and debate the quantitative questions with a number of community members, and 3) longer individual qualitative interviews exploring moral reasoning and cognitive processes in more depth. Some of the most important findings:

  • In the majority of interviews, we found respondents who critically and rationally weighed up the presented choices and answered them as intended (much to the credit of some extremely talented enumerators). There were still, unsurprisingly, some misunderstandings of our questions6. But crucially, we do not think the misunderstandings identified were prevalent enough to unduly bias or discredit the results7.
  • Most respondents used clear ethical frameworks to justify their decisions. These were often different to the frameworks GiveWell staff members apply to approach this problem8. Recipients provided clear arguments for why they support placing higher value on averting death, particularly of young children (a few of which are summarised in this accompanying post).

Why is this important?

The findings of this study played a role in GiveWell changing its moral weights for its 2019 top charities decisions. In its recent update, GiveWell placed substantially more value on programs that save lives (relative to programs that reduce poverty). Additionally, where previously higher weight was placed on averting the deaths of individuals over 5, it has updated to place equal weight on both age groups (under 5 and over 5).

For us, this update by GiveWell provides a proof of principle: it is possible to quantitatively capture preferences over outcomes in a way that can meaningfully inform decision-making. However, this was just one study of a specific target population, in a specific context, where noisy estimates and substantial variation from one region to the next are likely. Practitioners need more studies in different contexts, applying different approaches and further testing these methods, to reduce the uncertainty of these results.

We also recognise that preferences are just one approach to answering these questions. Some argue that decision-makers should instead aim to maximise people’s subjective wellbeing, which sometimes conflicts with their stated preferences and values. Others may believe we should focus on programs that try to maximise benefit to communities based on more objective data about program recipients’ economic and social contributions. As a global development community, we need to do more exploration to establish the roles and relative importance of these different data sources.

The preferences of populations targeted by aid interventions have not widely influenced funding decisions, in part due to the difficulty of reliably capturing them. We hope this study provides the beginning of a way forward to more readily incorporating their views into these difficult and important trade-offs.

Methodology

Method 1: Value of Statistical Life (VSL)

The value of a statistical life, or VSL, is a measure often used by economists to estimate how much people are willing to pay to reduce their risk of dying. While this may seem morbid, the calculation can be helpful for weighing and prioritising different policy agendas using cost-benefit analyses, informing where policymakers should invest.

Our first method captured VSL by asking respondents about their stated preferences. Respondents were asked about their willingness to pay for a vaccine or medicine (one of the two was randomly selected for each respondent) for themselves or their child that reduces the risk of dying from a hypothetical disease by a small amount (5 in 1,000 or 10 in 1,000) over the next ten years. We randomly selected which child under the respondent’s care to ask about, as well as whether we asked about the respondent or their child first. Prior to the scenario, respondents completed a series of questions testing for, and training, their understanding of small probabilities using visual aids.
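As a simple illustration of how these stated-preference responses convert into a VSL, the willingness to pay is divided by the size of the risk reduction it purchases. The WTP figure below is purely hypothetical; only the risk reductions match the survey scenarios.

```python
# Minimal sketch: implied VSL = stated willingness to pay / risk reduction.
# The WTP value is hypothetical; the risk reductions mirror the survey
# scenarios (5 in 1,000 or 10 in 1,000 over ten years).

def vsl_from_wtp(wtp, risk_reduction):
    """Implied value of a statistical life from a stated WTP."""
    return wtp / risk_reduction

wtp = 200.0  # hypothetical stated WTP in USD
for risk_reduction in (5 / 1_000, 10 / 1_000):
    print(f"risk reduction of {risk_reduction:.3f}: "
          f"implied VSL of about ${vsl_from_wtp(wtp, risk_reduction):,.0f}")
```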

Method 2: Choice experiment

We conducted two choice experiments, aiming to understand people’s moral views, or their perspectives on how resources should be allocated to achieve different outcomes at the community level.

The first choice experiment presents trade-offs between saving lives and increasing income in the community:

“Program A saves the lives of 6 children aged 0–5 years AND gives $1,000 cash transfers to 5 extremely poor families in your community. Program B saves the lives of 5 children aged 0–5 years AND gives $1,000 cash transfers to [X] extremely poor families in your community. Which one would you choose?”

We varied the value of X, both within and across respondents, to capture how respondents across the population trade off between giving cash transfers to poor families and saving the life of an extra child under 5. Before presenting the scenarios, we prompted respondents to think about the impact of both types of programs.

The second choice experiment presents trade-offs between saving those in different age groups:

“Program A saves [100/200/300/400/500] lives of people aged [under 5/5–18/19–40/over 40], Program B saves [100/200/300/400/500] lives of people aged [under 5/5–18/19–40/over 40]. Which one would you choose?”

We use the choices made by respondents to estimate their relative value of saving a life in terms of the number of cash transfers, i.e. a monetary value of life from the community perspective, as well as their relative values placed on different age groups.
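As a rough illustration of how these choices can be read (this is a simplified indifference-point interpretation, not the study's full estimation procedure, which pools choices within and across respondents), the value of X at which a respondent switches from Program A to Program B in the first experiment implies that one averted under-5 death is worth roughly (X - 5) transfers of $1,000. The switch points below are hypothetical.

```python
# Back-of-the-envelope reading of the first choice experiment, assuming a
# simple indifference-point interpretation. Switch points are hypothetical.

TRANSFER_SIZE = 1_000       # USD per cash transfer, as in the survey scenario
TRANSFERS_IN_PROGRAM_A = 5  # Program A: 6 lives saved + 5 transfers
# Program B: 5 lives saved + X transfers. Indifference at X implies that one
# extra under-5 life is worth roughly (X - 5) transfers.

def implied_value_of_life(switch_point_x):
    """Monetary value of one averted under-5 death implied by a switch point."""
    return (switch_point_x - TRANSFERS_IN_PROGRAM_A) * TRANSFER_SIZE

for x in (25, 50, 100):  # hypothetical switch points
    print(f"switch at X = {x}: implied value of about ${implied_value_of_life(x):,.0f}")
```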

Study limitations

Our study, especially the VSL component, used state-of-the-art techniques from the current literature and adapted them to local contexts. We conducted extensive piloting to maximise respondent understanding of the questions. We believe it is important to recognise the value of the study — namely, that it is one of the first to systematically estimate these preferences among low-income individuals in low-income countries — while accounting for its limitations when applying the results. The two biggest technical limitations of our approach are:

1. While the findings of this study are relatively consistent, the exact estimates vary depending on the framing of the questions asked and the analytical approach.

  • Our questions take two different perspectives (individual and community) each with its own set of strengths and weaknesses and leading to different results.
  • For both methods, our main analysis closely follows standards in the economics literature. We did find some extreme valuations in some of our choice experiments (such as a very low value being placed on individuals over 40, and a very high value being placed on averting the deaths of individuals under 5, by a large proportion of respondents). The results of our analysis vary depending on how much weight is given to these valuations. We’ve made our best technical judgement about how to adjust for this in practice.

2. Our methods rely on respondents understanding complex questions, and may be prone to a number of biases.

  • All of our questions require respondents to think about concepts and problems that they do not typically face. VSL, in particular, relies on understanding and conceptualising small probabilities (respondents are asked for their willingness to pay for a vaccine that will reduce their risk by 5 in 1,000 over 10 years). To address this, a substantial proportion of the survey was spent helping respondents understand these concepts and testing their ability to understand different probabilities.
  • All our methods are also inherently vulnerable to common surveying biases like “social desirability bias” (responding as you think you should, as opposed to how you actually would) and “hypothetical bias” (responding in a way that does not reflect how you would actually behave). While we put in place many measures to mitigate these biases, it is likely that they still affect the results.
  • These limitations could potentially have been alleviated by taking a revealed preference approach (where inferences about individuals’ preferences are drawn from their behaviour, rather than from survey responses). However, these approaches can also give unreliable results if people do not understand the risks of their choices9. After much consideration and debate during piloting, we decided there was not a promising opportunity to conduct revealed preference analysis in our context.
  1. The true amount of development aid is almost impossible to estimate accurately. The OECD figure does not take into account aid from foundations (overseas grants by US foundations alone contributed an additional $9 billion in 2015 and were on an upward trajectory) or private donors, nor the contributions of non-OECD countries such as China.
  2. For example, Shapiro et al. 2019 explored how respondents in rural Kenya trade off between different types of interventions, Khemani et al. 2019 (linked) explored how respondents in India trade off between cash transfers and public services, and others have explored how respondents rank or choose between different policy priorities (such as Tortora et al. 2009). Shapiro, Jeremy. “Exploring Recipient Preferences and Allocation Mechanisms in the Distribution of Development Aid.” The World Bank Economic Review (2019); Tortora, Robert. “Sub-Saharan Africans rank the Millennium Development Goals (MDGs).” Washington, DC: Gallup World Poll (2009).
  3. A blog post with additional detail on this piloting process will follow.
  4. Our two primary methods produced estimates of $40,763 (individual perspective) and $91,049 (community perspective), while the results of our secondary methods fell between $38,000 and $115,000 (community perspective, alternate framing).
  5. Extrapolation based on the work of Robinson et al. 2019 and the income levels of our sample population. More detail on this extrapolation is presented in the full report. Robinson, Lisa A., et al. “Reference Case Guidelines for Benefit-Cost Analysis in Global Health and Development.” (2019); Robinson, Lisa A., et al. “Valuing Children’s Fatality Risk Reductions.” Journal of Benefit-Cost Analysis 10.2 (2019): 156–177.
  6. For example, for the VSL questions, we found some respondents who were clearly anchoring the value of children’s medication to its market value rather than their own WTP to avoid risk: “Children’s medications are always cheaper than adults’ so I will not pay more.”
  7. The example described in the footnote above was the most common misconception. To find out how widespread this view was, we asked a subsample of respondents to further explain the difference in their WTP for themselves versus their child. 72 of the 675 respondents asked (10%) made any reference to market value, and this was only seen among respondents with a lower WTP for children than for adults. This may mean that the ratio of child to adult VSL should be higher; however, when we drop these respondents, the implied ratio is still well within the range seen across all other methods.
  8. For example, a number of respondents placed an extremely high value on averting death relative to increasing income, using deontological arguments that the two simply cannot be morally compared, which is in direct contrast to GiveWell’s utilitarian approach.
  9. Studies most commonly use job market data (inferring VSL from the additional wages paid for riskier jobs). Where these datasets are available for low-income settings, they are heavily prone to selection bias, as they rarely contain data on informal employment and so can miss information from the poorest households. For instance, León and Miguel (2017) used data from Sierra Leone on travel decisions, but anyone without the means to travel is not included in that sample. Kremer et al. (2011) adapted the approach by looking at willingness to travel to use improved water sources in rural Kenya. While the context of this study is relevant to GiveWell, it is unclear whether respondents had enough information on risk levels to make an informed decision, resulting in a low estimate of VSL. León, Gianmarco, and Edward Miguel. “Risky transportation choices and the value of a statistical life.” American Economic Journal: Applied Economics 9.1 (2017): 202–28; Kremer, Michael, et al. “Spring cleaning: Rural water impacts, valuation, and property rights institutions.” The Quarterly Journal of Economics 126.1 (2011): 145–205.