IDinsight’s new Value of Information work

IDinsight is working to estimate the value that evidence creation brings to development partners to ensure their limited resources are allocated efficiently and effectively.

An enumerator collects data on sanitation from a respondent.

Picture a decision-maker who is deciding whether to increase the scale of a program. In order to make this decision, they’d like to know how much impact the program is having, so they ask IDinsight to conduct an evaluation. You might think this is great news for us: a client wants us to do what we do best, is willing to pay us for it, and already has an impactful decision in mind that we would inform. This is also great for society: another large-scale evidence-based program! However, since the funds spent on the study could otherwise be put toward the program itself, we want to be confident that the evaluation will improve decisions enough to justify the cost.

We often assume that getting new information is worth it because it’s cheap compared to program implementation costs, or because we feel uncertain about which decision is best and know that studies reduce uncertainty. We suspect that overreliance on this assumption sometimes leads to spending too much on new information. On the flip side, we sometimes rule out conducting new studies based on the high cost of a gold-standard design or because we feel that we already know what works. In these cases, we may miss opportunities to improve a high-stakes decision that is sensitive to even small changes in how effective we think a program option is. We will continue making these mistakes if we do not answer the question: how much can we expect social impact to improve based on new information?

Through our Value of Information (VoI) work, we’re trying to find ways to thoughtfully answer this question and improve the way the development sector’s limited resources are allocated to evidence creation. Our aim is to ensure that development actors miss fewer high-value studies, that decision-makers redirect fewer funds from program implementation to low-value studies, and that the studies that do get funded are better tailored to the decisions they seek to inform.

Calculating the value of information

We’re still experimenting with methodology, but have summarized it in a few key steps.1 First, we estimate how likely it is that a decision-maker is choosing an incorrect option in a world where we don’t conduct a study. For example, based on their current information, they choose to scale up the program even though, in reality (and with more information they would know this), the program is not impactful enough to justify it. Or, alternatively, they choose to discontinue the program even though, in truth, it is cost-effective. We think of this potential for making the incorrect choice as the client’s ‘decision risk’. To quantify decision risk, we would work with the client to understand how their decision depends on the program’s impact and what they currently believe that impact to be, accounting for their uncertainty.

We would love it if decision-makers came to us saying something like: “I am 80% confident my program causes between a 0.2 and 0.35 standard deviation improvement in outcomes, hence without further evidence I would scale it up”. Realistically, few decision-makers define their ‘priors’ so precisely. Thankfully, there is a growing literature on how to get groups to voice their existing beliefs precisely using a series of simpler questions and steps that are intuitive to a general audience.2 These efforts are still relatively new, especially in the development sector, so we’re currently designing ways to test how close these elicited beliefs come to “the truth” in contexts like ours. 
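
To make this concrete, here is a minimal sketch of how an elicited belief like the one quoted above could be turned into a decision risk estimate. The normal prior, the cost-effectiveness threshold, and the scale-up-or-not decision rule are illustrative assumptions for this example, not a description of any client’s actual situation or of our final methodology.

```python
# Illustrative sketch only: quantify 'decision risk' from an elicited prior belief.
# Assumptions: the decision-maker scales up whenever the program's true effect clears
# a cost-effectiveness threshold, and their belief is summarized as a normal prior
# matching the quote above ("80% confident of a 0.2-0.35 standard deviation effect").
from scipy import stats

low, high, interval_mass = 0.20, 0.35, 0.80   # elicited 80% credible interval (in SDs)
threshold = 0.20                              # effect size needed to justify scaling up

# Translate the stated interval into a normal prior over the true effect.
prior_mean = (low + high) / 2
z = stats.norm.ppf(0.5 + interval_mass / 2)   # ~1.28 for a central 80% interval
prior_sd = (high - low) / (2 * z)

# Under this prior the default action is to scale up, so decision risk is the prior
# probability that this action is wrong, i.e. that the true effect falls below the threshold.
decision_risk = stats.norm.cdf(threshold, loc=prior_mean, scale=prior_sd)
print(f"Decision risk without a new study: {decision_risk:.1%}")   # ~10% in this example
```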

Once we have calculated the ‘decision risk’ linked to not conducting a new study, we estimate how the new information provided by a specific study design (with a given quality of evidence) would reduce that risk. We do this through the magic of simulation. In each of our simulations, we assume a different value for the true program impact and simulate the information a given study design might provide about it. We also incorporate what the decision-maker is likely to do based on that information.

At that point, we ask the questions: Does the simulated study lead the decision-maker to change their scale decision? Does it move the scale decision closer to the one they would choose with full information about the truth? Answers to these questions, averaged over thousands of simulations, tell us how much the study reduces decision risk. They also tell us how much money a decision-maker would, on average, correctly reallocate after investing in a particular study. This figure can then be compared to the cost of carrying out a study to help decide whether it is worth funding. This process can also be repeated for various study designs, which will differ in terms of risk of misallocation and cost.
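
Below is a hedged Monte Carlo sketch of this step, continuing the illustrative prior and threshold from the snippet above. The study is modelled as an unbiased but noisy estimate of the true effect, the decision-maker is assumed to follow a simple posterior-mean rule, and the standard error and budget figures are invented for the example; none of this is a specific client’s model.

```python
# Illustrative Monte Carlo sketch: how much would a given study design reduce decision risk?
import numpy as np

rng = np.random.default_rng(0)
n_sims = 100_000

prior_mean, prior_sd = 0.275, 0.0585   # illustrative prior from the earlier sketch
threshold = 0.20                       # effect size needed to justify scaling up
study_se = 0.05                        # precision implied by a candidate study design
budget_at_stake = 1_000_000            # funds (re)allocated by the scale-up decision

# 1. In each simulated world, draw the 'true' impact from the elicited prior.
true_effect = rng.normal(prior_mean, prior_sd, n_sims)

# 2. Simulate the estimate the study would report in that world.
study_estimate = rng.normal(true_effect, study_se)

# 3. Combine prior and study estimate into a posterior mean (normal-normal update).
posterior_precision = 1 / prior_sd**2 + 1 / study_se**2
posterior_mean = (prior_mean / prior_sd**2 + study_estimate / study_se**2) / posterior_precision

# 4. Compare decisions with and without the study to the full-information decision.
decision_without_study = prior_mean > threshold        # the same in every simulated world
decision_with_study = posterior_mean > threshold
correct_decision = true_effect > threshold

risk_without = np.mean(decision_without_study != correct_decision)
risk_with = np.mean(decision_with_study != correct_decision)

# A simple proxy for the study's value: funds correctly reallocated, on average.
value_of_information = (risk_without - risk_with) * budget_at_stake
print(f"Decision risk: {risk_without:.1%} without the study, {risk_with:.1%} with it")
print(f"Expected value of this study design: ${value_of_information:,.0f}")
```

Re-running the same sketch with different values of study_se (a stand-in for design and evidence quality) shows how a more precise design buys a larger reduction in decision risk, which can then be weighed against its higher cost.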

Why we’re excited about this work

We see this as another way IDinsight can prioritize our core value of social impact. So much development funding is spent on activities that don’t reach their intended goals or aren’t sufficiently impactful given the cost. We think of the VoI work as a way to reduce the extent to which that is true for evidence creation. 

This approach is very much still in beta – for now, we hope to pilot it on a few projects and see if/how we think it could scale. We may still conclude that the formal VoI process is not feasible for some (or even any) IDinsight projects. But we think even asking this question will be useful:

  • Helping clients voice what they believe about a program’s effectiveness might reveal situations where current decisions are not aligned with current beliefs, providing an opportunity to improve decisions before gathering new evidence. For example, groups of decision-makers may be implementing or funding a longstanding program based on the assumption that they all agree it is the most effective option. Once asked to estimate its cost-effectiveness, however, they might find that most of them are already confident that an alternative is better.
  • Having to clarify, somewhat formally, how they expect to make future decisions and on the basis of what evidence can help our clients avoid paying for work that is interesting and exciting, but not decision-relevant. For example, if political or administrative constraints mean that a decision-maker has only one real choice (implementing the program as is), it may not be that helpful to measure its impact precisely, even if they really want to know.
  • The work we do in developing a VoI approach may have value in and of itself. Take our work assessing the accuracy of decision-makers’ prior beliefs. If we were to identify the conditions under which carefully elicited beliefs are good predictors of true impact, we may identify situations in which decision-makers may not need an evaluation at all – at least not right away.

What’s next

We’re at the start of our work. We will continue to experiment and refine our methodology. In the meantime, we’d love to connect with others who have ideas. If we’ve piqued your interest and you’d like to find out more or give us feedback, please leave your thoughts in the comments and/or reach out to zack.devlinfoltz@idinsight.org, mallika.sobti@idinsight.org, claire.ricard@idinsight.org or eloisa.avila@idinsight.org.

References

The concepts and techniques we propose to use are not new, and many are more common in other fields. The VoI team has drawn inspiration and techniques from the sources cited in the notes below, among a variety of others from economics, decision theory, business, psychology, and related fields.

  1. Our colleagues Torben Fischer, Doug Johnson, and Dan Stein describe similar approaches to some of these steps in more technical detail in this blog post and the associated paper.
  2. Goldstein and Rothschild (2014); DellaVigna et al. (2020); Leeman et al. (2021); and Rhys Bernard (ongoing).