Comparing apples and oranges: how to make tough funding decisions on a timeline

Imagine you are part of a philanthropy that funds a diverse portfolio of programs — in education, health, and agriculture, for example — or maybe you are considering programs in those areas but have not yet committed funding. You have limited resources and cannot fund everything, so how do you ensure that your spending does the most good?

In early 2020, IDinsight partnered with the Coppel Family philanthropy to help answer this question. The Coppel Family supports various community-based antipoverty programs in rural Malawi, which collectively benefit over 250,000 people. Comparing the value of those different programs is challenging — for example, how should the Coppel Family decide whether to prioritize a program supporting early childhood nutrition or secondary school education?

This blog describes how we answered this question. Our methodology was informed by GiveWell, one of IDinsight’s long-term partners. GiveWell has popularized the use of cross-intervention comparisons using cost-effectiveness analysis, including applying impact estimates from the literature to programs implemented by charities.1

The Coppel Family wanted to understand how to maximize the impact of its program portfolio as soon as possible. Impact evaluations of each program — in education, health, agriculture, and more — could take years to provide results.2 How could we assess which programs work best, without waiting to see evaluation results?

Our goal was to provide evidence the Coppel Family could use on a short timeline — in a matter of months — to begin thinking about how to reallocate funding to the most cost-effective programs. To do so, we combed through existing research literature to find high-quality evidence on the impact of programs similar to those funded by the Coppel Family, and from contexts similar to rural Malawi.

In some cases, a pre-existing meta-analysis already compared several relevant programs, e.g. lessons for new mothers on the importance of exclusively breastfeeding infants during their first six months. Where meta-analyses did not exist or did not incorporate all relevant studies we found in the literature, we needed to aggregate impact estimates from different sources ourselves. However, not all pieces of evidence should be given equal weight. The standard meta-analysis approach is to weight estimates by their precision,3 but we also needed to factor in which programs in the literature are most similar to those funded by the Coppel Family.4 To address both aspects, we created a combined weight incorporating both precision and similarity, and used it to calculate impact estimates as weighted averages, as illustrated below.
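
To make the weighting concrete, here is a minimal sketch of such a combined weighted average, assuming the two weights are combined multiplicatively. The function name and example values are hypothetical; footnote 4 describes the weights we actually used (a precision weight equal to the estimate divided by its standard error, and a graded similarity weight).

```python
# Illustrative sketch of a combined-weight average of impact estimates.
# Multiplying the precision and similarity weights together is an
# assumption for this sketch; see footnote 4 for the weights used.

def combined_weighted_average(estimates, standard_errors, similarity_scores):
    """Average impact estimates, weighting by precision and similarity.

    estimates:         impact estimates from the literature
    standard_errors:   standard errors of those estimates
    similarity_scores: graded similarity of each studied program to the
                       funded program (e.g. on a 0-1 scale)
    """
    weights = [
        (est / se) * sim  # precision weight (estimate / SE) times similarity
        for est, se, sim in zip(estimates, standard_errors, similarity_scores)
    ]
    weighted_sum = sum(w * est for w, est in zip(weights, estimates))
    return weighted_sum / sum(weights)

# Example: three hypothetical studies of a similar program
print(combined_weighted_average(
    estimates=[0.12, 0.08, 0.15],
    standard_errors=[0.03, 0.02, 0.06],
    similarity_scores=[0.9, 0.6, 0.8],
))
```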

At this point, we had about 10 program impact estimates across several sectors, expressed in different units such as changes in school enrollment, the prevalence of disease, and crop yields. Next, we needed a way to compare across sectors.

Comparing the estimated impact of these 10 programs would be relatively straightforward if every program aimed to achieve the same objective, such as improving school enrollment. But how could we determine which programs have the most impact when they aim to achieve such different goals?

To compare programs across different sectors, we needed to convert school enrollment, disease prevalence, crop yields, and other outcomes into a common unit: dollar values. For programs that reduce the burden of disease or save lives, we calculated the program’s dollar-value benefit to an individual over his or her lifetime, using disability-adjusted life-year values5 and Value of Statistical Life (VSL) data. For more information on VSL and incorporating people’s preferences in how funding is allocated between programs, please see our Measure People’s Preferences project, in partnership with GiveWell.
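
As a rough illustration of monetizing health outcomes, the sketch below multiplies DALYs averted per person by an assumed dollar value per DALY. Deriving that per-DALY value by dividing a VSL figure by remaining life expectancy is a simplifying assumption for this sketch, not necessarily the exact conversion used in the analysis, and all numbers are made up.

```python
# Hedged sketch: converting averted disease burden into dollar values.
# Deriving a per-DALY value as VSL / remaining life expectancy is a
# simplifying assumption for illustration only.

def monetized_health_benefit(dalys_averted, vsl, remaining_life_years):
    """Dollar-value benefit of a health program for one individual."""
    value_per_daly = vsl / remaining_life_years  # assumed conversion
    return dalys_averted * value_per_daly

# Example with made-up numbers: 0.4 DALYs averted per participant,
# a $100,000 VSL, and 50 remaining life-years
print(monetized_health_benefit(0.4, 100_000, 50))  # -> 800.0
```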

For all other programs, we drew on existing evidence to determine how they impacted income-earning potential over an individual’s lifetime, and calculated lifetime benefits using the net present value of future gains in income. That required us to use existing evidence to make assumptions about how and when a program begins to impact income, and how long that impact lasts. Some programs (e.g. agriculture) target participants in mid-adulthood and increase income for their remaining income-earning years, whereas others (e.g. education, child health) target infant or child participants and impact their income-earning potential when the children become adults.
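
A minimal sketch of that net-present-value calculation follows. The annual income gain, the start and end years, and the discount rate are all illustrative placeholders; the logic simply discounts each year’s income gain back to today.

```python
# Sketch: net present value of a program's lifetime income gains.
# All parameter values below are illustrative assumptions.

def npv_of_income_gains(annual_gain, start_year, end_year, discount_rate):
    """Discount a stream of annual income gains back to today.

    annual_gain:   extra income earned per year once the effect begins
    start_year:    years from now until the gain begins (e.g. when a
                   child participant reaches working age)
    end_year:      years from now until the gain ends
    discount_rate: annual discount rate (e.g. 0.05)
    """
    return sum(
        annual_gain / (1 + discount_rate) ** t
        for t in range(start_year, end_year + 1)
    )

# An education program reaching children: gains begin in 10 years and
# last until year 40. An agriculture program reaching adults: gains
# begin next year and last 20 years.
print(npv_of_income_gains(100, start_year=10, end_year=40, discount_rate=0.05))
print(npv_of_income_gains(100, start_year=1, end_year=20, discount_rate=0.05))
```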

We then used the costs of the Coppel Family programs to calculate estimated cost-effectiveness, as shown in the example figure below. Our cost-effectiveness estimates rely on a set of assumptions about how to apply monetary values to diverse outcomes. While guided by previous studies, this approach has limitations.6

[Example figure: cost-effectiveness estimates by program, with each program color-coded by lifetime benefit per $1 spent]7
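
For illustration, the sketch below computes a benefit-per-dollar ratio and applies the red/yellow/green thresholds described in footnote 7. The program names and dollar figures are hypothetical.

```python
# Sketch: grading cost-effectiveness using the thresholds in footnote 7.
# Program names and dollar figures below are hypothetical.

def cost_effectiveness_grade(lifetime_benefit, program_cost):
    """Color-grade a program by lifetime benefit per $1 spent."""
    ratio = lifetime_benefit / program_cost
    if ratio < 1:
        return ratio, "red"     # not cost-effective
    if ratio <= 3:
        return ratio, "yellow"  # moderately cost-effective
    return ratio, "green"       # highly cost-effective

for name, benefit, cost in [
    ("Program A", 50_000, 10_000),  # $5.00 per $1 -> green
    ("Program B", 20_000, 10_000),  # $2.00 per $1 -> yellow
    ("Program C", 8_000, 10_000),   # $0.80 per $1 -> red
]:
    ratio, grade = cost_effectiveness_grade(benefit, cost)
    print(f"{name}: ${ratio:.2f} per $1 spent ({grade})")
```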

As a last step, we sought to give our partner a sense of how confident we were in each cost-effectiveness estimate, to further inform any decisions based on our analysis. We therefore graded our confidence in each cost-effectiveness estimate as high, medium, or low. A high grade indicates that 1) the evidence literature is high quality; 2) the studied interventions likely heavily overlap with Coppel Family programs; 3) the impact pathway is transferable across contexts; and 4) we have no major doubts about the lifetime benefits calculations.

We fully acknowledge that the method we describe here has its limitations. Nevertheless, cost-effectiveness estimates are important guideposts to determine the probable value of different programs and how they could be prioritized. In this case, our approach enabled us to make the following recommendations to the Coppel Family, to help maximize the impact of their giving:

  • Scale up investments in child deworming treatment, Vitamin A supplementation, and promotion of exclusive breastfeeding. High-quality studies of these programs consistently showed high impact relative to implementation cost, making them highly cost-effective.
  • Continue support for school feeding programs and volunteer-run preschool centers. We found these programs moderately cost-effective, and future research could help identify opportunities to improve implementation and reduce costs.
  • Reduce support for lead farmer programs (programs that teach new agricultural practices to ‘lead farmers’ who then train other farmers in their communities). We found a lack of robust evidence that the average training program increases farmer incomes. Because this is a relatively large program for the Coppel Family, we acknowledge they may want to rigorously evaluate their program and use the results to update our estimates before changing their funding allocations.

The Coppel Family is now using these recommendations to explore changes to their funding portfolio — changes that can bring additional benefits to the 250,000 individuals supported by their programs.

Cost-effectiveness analysis can be a useful approach for other philanthropic actors as well, especially when there are time or resource constraints. It is an essential tool for turning a familiar challenge, comparing apples and oranges, into a powerful opportunity: using the limited resources available to do the most good possible.

  1. GiveWell is uniquely transparent about its work, including its approach to cost-effectiveness analysis, which allows others to apply its learnings.
  2. In many cases, impact evaluations would be a worthwhile investment: you would learn with confidence whether the programs improve lives, and potentially how to make the programs work better. But in this case, given our partner’s total portfolio size, the cost of conducting an impact evaluation of each of their programs was not justified. Instead, the Coppel Family was looking for an initial signal that certain types of programs typically get better results, which would help determine which programs they should probably continue funding, which they could consider discontinuing, and for which they need more rigorous evidence (perhaps an impact evaluation) before deciding.
  3. Michael Borenstein, Larry Hedges, and Hannah Rothstein, “Introduction to Meta-Analysis,” July 1, 2007.
  4. We combined two sets of weights: 1) a statistical confidence weight equal to the impact estimate divided by the standard error, where the largest weight is given to the most precise estimate, and 2) a weight we derived by grading each program in the literature according to a set of criteria, where the largest weight is given to studies of programs highly similar to those funded by the philanthropy, in similar contexts, and using the most rigorous methods.
  5. Taken from the latest Global Burden of Disease study.
  6. Limitations of this approach include: 1) Some programs cannot be rigorously evaluated using quantitative methods (e.g. RCTs), so comparing them using this method may not be feasible. 2) The evidence literature may be sparse for particular programs or outcome indicators of interest, and more likely to include statistically significant findings due to publication bias. 3) Some estimates of program impact are more transferable across contexts than others; for example, because education programs and their benefits are highly contextual, we had low confidence that impacts from one program could be expected for a similar program in a different location, compared to child health interventions where the impact is largely biological.
  7. Red indicates a program is not cost-effective (less than $1 lifetime benefit per $1 spent on the program); yellow indicates moderate cost-effectiveness ($1–3 in lifetime benefits per $1 spent); and green indicates a program is highly cost-effective (over $3 in lifetime benefits per $1 spent).