
Six reflections on doing cost-effectiveness analysis

Jeffery McManus 14 October 2025


Cost-effectiveness analysis (CEA) is a key tool in the evidence toolbox, helping decision-makers weigh the costs of a program against its impacts. Most of my experience with CEA has involved running back-of-the-envelope calculations after an RCT to give a rough idea of how a program’s cost-effectiveness (CE) stacks up against other programs in the same sector. But recently, as part of IDinsight’s renewed focus on CE following global aid reductions, I’ve supported a few of our partners in conducting CEA that goes beyond the typical rough post-RCT analysis, bringing more rigor and detail to CE estimates. These experiences with in-depth CEA have prompted a few reflections about how the development sector approaches (and should approach) CEA:

1. There are great frameworks and free resources that should form the backbone of any organization’s CEA.

Some of my favorites include:

  • J-PAL’s CEA guidelines and templates (link). This is the industry standard for CEA in international development.
  • Livelihood Impact Fund ROI (link). For CEA of economic outcomes, LIF’s ROI approach is simple and intuitive.
  • GiveWell’s approach to CEA (link, more detailed guide here). Few think about CEA as comprehensively as GiveWell. It is one of the few organizations that has tried to rigorously equate different outcomes to facilitate cross-sector CE comparisons. 
  • IDinsight’s technical bootcamp on CEA (link). Largely based on J-PAL’s guidance, this bootcamp lesson includes worked examples and quizzes.
  • International Rescue Committee and One Acre Fund’s approaches to CEA (link and link). These are some of the most thoughtful implementing organizations when it comes to CEA. 
  • What Works Hub for Global Education blogpost (link). This is a great primer on CEA for education programs, with several worked examples.
  • Brookings Childhood Cost Calculator (link). Useful guidance on how to capture costs in a systematic way across a range of child and youth interventions.

2. Despite the availability of great free resources, there is huge variation in how organizations report cost-effectiveness, which makes transparency absolutely critical.

There is no consensus about which impact metrics to use, even within sectors. Education organizations may report impact in terms of learning outcomes, attendance, or years of schooling. Livelihoods organizations may report income, consumption, or employment, measured at the individual level or the household level. Beyond metrics, there is wide variation in how CE is calculated: different organizations use different time horizons and different discount rates; some include spillover, indirect, or general equilibrium effects and some do not; some include the opportunity cost of program participants’ time and some do not; and some attempt to combine multiple outcomes while others focus on a single main outcome.

With such variation, the only way to fairly compare CE across programs is for organizations to publish the formula they use to calculate CE and share details on how they estimated the inputs for that formula, so that readers can reproduce the results and make adjustments to facilitate comparisons with CE estimates of other programs.
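To make this concrete, here is a minimal sketch (in Python, with made-up numbers and variable names) of what publishing the formula could look like in practice: every input is named, its source noted, and the final calculation is visible, so a reader can reproduce the number or substitute their own assumptions.

    # A minimal sketch of a transparently reported CE calculation.
    # All numbers and names are illustrative, not from any real program.

    inputs = {
        "effect_per_participant": 0.25,  # learning gain in SD, from the program's RCT
        "persistence_years": 3,          # assumed duration of the effect
        "participants": 10_000,          # number of people reached
        "total_cost_usd": 150_000,       # all delivery costs, incl. partner staff time
    }

    total_impact = (
        inputs["effect_per_participant"]
        * inputs["persistence_years"]
        * inputs["participants"]
    )
    cost_per_unit_of_impact = inputs["total_cost_usd"] / total_impact

    print(f"Cost per SD-year of learning: ${cost_per_unit_of_impact:.2f}")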

3. Funders and other audiences rarely care if a program’s return on investment (ROI) is 2.2x vs 2.3x; they care if it’s 2x vs 5x vs 10x.

Organizations should focus their attention on getting accurate estimates of the main components that drive CE, including:

  • Impact causally attributed to the program 
  • How long effects are sustained and whether they grow, shrink, or stay the same over time
  • Whether there are likely to be meaningful spillovers or general equilibrium effects (positive or negative). If so, getting a reliable estimate of such effects can be tricky but important.
  • Direct costs and opportunity costs of program implementation
  • Fixed (up-front) program costs vs recurring costs

With these inputs in hand, an organization can produce a CE estimate that is probably within a reasonable range of the program’s true CE. Readers can then apply their own adjustments or subjective inputs (e.g. discount rates or moral weights) to facilitate comparisons with other programs.
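As a rough illustration (not a recommendation of any particular model), here is a sketch of how those components might fit together, with a discount-rate knob left at zero by default so that readers can apply their own adjustment; all names and numbers are hypothetical.

    # A rough sketch of a lifetime-ROI model built from the components above.
    # All numbers are illustrative.

    def lifetime_roi(
        annual_benefit_per_person,   # impact causally attributed to the program, $ per year
        persistence_years,           # how long effects are assumed to last
        spillover_multiplier,        # >1 for positive spillovers, <1 for negative, 1 for none
        direct_cost_per_person,      # recurring delivery cost
        fixed_cost_per_person,       # up-front costs spread across participants
        opportunity_cost_per_person, # value of participants' time
        discount_rate=0.0,           # left at 0 by default; readers can apply their own
    ):
        benefits = sum(
            annual_benefit_per_person * spillover_multiplier / (1 + discount_rate) ** t
            for t in range(1, persistence_years + 1)
        )
        costs = direct_cost_per_person + fixed_cost_per_person + opportunity_cost_per_person
        return benefits / costs

    # The organization reports the undiscounted figure...
    print(lifetime_roi(100, 5, 1.0, 150, 30, 20))                      # 2.5x
    # ...and a reader who prefers a 10% discount rate can recompute it.
    print(lifetime_roi(100, 5, 1.0, 150, 30, 20, discount_rate=0.10))  # ~1.9x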

4. CE models that estimate impact from internal monitoring data and/or assumptions about counterfactuals are more likely to report unrealistically high CE than models whose impact estimates come from a rigorous evaluation.

During a recent desk review of peer programs for a livelihoods CEA, time and again I encountered eye-popping CE estimates, only to find that they were based on ‘reasonable assumptions’ about what outcomes would have been without the program. There are some true CE outliers backed by rigorous evidence; two examples that come to mind are VisionSpring’s 40-50x ROI of eyeglasses on income gains (study) and Youth Impact’s phone tutoring achieving 4 learning-adjusted school years per $100 (study). But most off-the-charts CE estimates rest on modeling assumptions, and I don’t buy them unless they have been subjected to rigorous external evaluation.

5. Discount rates are unknowable and highly context-specific; I’d argue it’s better to leave them out and let readers apply their own adjustments.

OK, I might get in trouble for this one. Many programs have benefits and costs that accrue over time, and generally, people prefer to realize benefits sooner and incur costs later. For this reason, J-PAL and other CE gurus advise ‘discounting’ future benefits and costs when calculating a program’s lifetime ROI; for instance, one might assume that a $100 benefit one year from now is worth the equivalent of about $91 today (implying a discount rate of 10%).
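To spell out the arithmetic behind that example: at a 10% annual discount rate, the present value of $100 received one year from now is $100 / (1 + 0.10) ≈ $90.91, and more generally a benefit received t years from now is divided by (1 + 0.10)^t.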

The problem is that discount rates vary across contexts, across individuals within the same context, and even over time for the same individual. The data needed to estimate a person’s discount rate at any given point in time would be so challenging to collect that discount rates are effectively unknowable. Discount rates will be higher in an economically or politically unstable region than in a stable one, so programs that deliver immediate benefits (like humanitarian aid) will naturally have a higher present value there than programs whose benefits accrue over the long run (like enterprise promotion programs). And even within a more stable region, idiosyncratic shocks to households (like the death of a family member or a windfall gain) will give households hugely different time preferences over the benefits that an outside program might bring.

J-PAL’s CE templates apply a 10% annual discount rate across the board, though this rate is based on a 2007 survey of the discount rates used by 14 country governments (only 4 of which would be classified as developing countries). To me it feels misleading to apply the same discount rate to situations or individuals that would obviously have higher or lower time preferences. At the same time, without hard data for guidance, any assumption about discount rates is highly subjective and impossible to defend.

Instead of imposing arbitrary time preferences on program participants, I would prefer for CE estimates to exclude discount rates. Calculations should be laid out transparently (so that the reader can apply their own adjustments if desired), and the ROI of the program should be reported over different time horizons. For instance, how do impacts compare to costs after 1 year, 3 years, 5 years, or the reasonable life span of the program? With this information, a policymaker or funder can consider whether the program delivers sufficient benefits soon enough in a particular environment to justify the investment.
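For example, here is a small sketch (with illustrative numbers: a $200 cost per person and a $100-per-year benefit) of reporting undiscounted ROI at several horizons:

    # A small sketch of reporting undiscounted ROI over several horizons.
    # Illustrative numbers: $200 cost per person, $100 per year in benefits.

    cost_per_person = 200
    annual_benefit_per_person = 100

    for years in (1, 3, 5):
        roi = (annual_benefit_per_person * years) / cost_per_person
        print(f"ROI after {years} year(s): {roi:.1f}x")
    # -> 0.5x after 1 year, 1.5x after 3 years, 2.5x after 5 years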

6. For CEA of economic outcomes (i.e. cost-efficiency analysis), ROI and IRR are not the same.

This one is a little wonky, but I noticed enough CE reports confusing the two concepts that I had to say something.

ROI is straightforward and intuitive: it measures how the benefits of a program compare to its cost over a given time horizon. Suppose that participating in a job training program causes participants’ incomes to increase on average by $100 per year, and the effects last for about five years. If the program spends $200 to train each person, then the annual ROI of the program is 50% ($100/$200), and the lifetime ROI is 2.5x ($500/$200). 

Internal rate of return (IRR) is the discount rate that would make the present value of a program’s future benefits equal to the present value of its costs, i.e. its net present value (NPV) equal to 0. In the job training example, IRR is found by solving the following equation:

$200 = $100/(1 + IRR) + $100/(1 + IRR)^2 + $100/(1 + IRR)^3 + $100/(1 + IRR)^4 + $100/(1 + IRR)^5

In this case, the IRR is 41%. The advantage of IRR is that it bakes in the duration of impacts, is flexible if benefits or costs accrue at a non-uniform rate over time, and doesn’t involve subjective assumptions about discount rates. These features allow IRRs to be compared across different types of programs: a program with a higher IRR is more cost-effective than a program with a lower IRR, whereas a program with a higher ROI may or may not be more cost-effective than a program with a lower ROI, depending on when and for how long benefits and costs accrue.
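For readers who want to verify that figure, here is a small Python sketch that recovers the IRR by bisection, using the same illustrative cash flows ($200 spent today, $100 of benefits in each of years 1 through 5):

    # Recover the IRR of the job training example by bisection.
    # cashflows[t] is the net cashflow in year t (year 0 = today).

    def npv(rate, cashflows):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    cashflows = [-200, 100, 100, 100, 100, 100]

    lo, hi = 0.0, 1.0                # search between 0% and 100%
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid                 # rate too low: NPV still positive
        else:
            hi = mid

    print(f"IRR ≈ {mid:.0%}")        # ≈ 41%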

The downside of IRR is that it is far less intuitive than ROI. You can calculate ROI with some mental math, whereas IRR requires solving a polynomial that most people can’t do in their heads, and it refers to a hypothetical discount rate, which makes its meaning harder to grasp and convey. Personally, I prefer the more intuitive ROI, even if it requires extra caveats about time horizons. Either way, make sure that the estimate you report matches the correct formula.

–––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––

Even with the excellent efforts by J-PAL and so many others to educate the sector about doing CEA and to align us around a set of common standards, I’m doubtful that we will ever agree on the ‘right’ way to do CEA. Information needs, and preferences about how information is conveyed, will always vary across individuals and organizations. But by being completely transparent about how estimates are calculated, and by presenting a range of estimates under different credible assumptions about program impacts and costs, organizations can address virtually all questions about their cost-effectiveness.