Photo: A tree in Mukobela Chiefdom, Southern Province, Zambia, taken as part of the IS Nano pilot. © IDinsight/Natasha Siyumbwa; graphics by Torben Fischer.
Informing specific decisions with rigorous evidence
Impact evaluations of development interventions have increased dramatically over the past 20 years,[1] expanding from a research tool of academics, recently recognized with a Nobel Prize,[2] to a decision-making tool for policy-makers. Despite this expansion in use cases, the methodological approach to designing and analyzing impact evaluations has remained largely constant. The standard approach tests whether a program works, i.e. whether its effect is different from zero. Conclusions from this test implicitly assume that the consumers of the research are an academic audience interested in generalizable knowledge and skeptical of any evaluation result. The standard approach therefore requires a relatively high level of certainty to convince the reader that results are “true.”
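As a concrete illustration of this standard approach, the sketch below runs a conventional two-sided test of a zero effect at the usual 5% significance level. This is our illustration, not code from the paper; the data are simulated and the variable names are placeholders.

```python
# Illustrative sketch (not from the paper): the "standard" frequentist test
# of whether a program's effect is different from zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
# Hypothetical outcomes for 500 treatment and 500 control units.
treatment = rng.normal(loc=0.15, scale=1.0, size=500)
control = rng.normal(loc=0.0, scale=1.0, size=500)

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Under the default convention, the program "works" only if p < 0.05,
# i.e. the evidence must be strong enough to persuade a skeptical reader.
```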
We argue that in cases where the purpose of the evaluation is to inform a specific decision, researchers should consider alternative approaches to designing and analyzing impact evaluations. The unifying feature of the alternatives we discuss is that they explicitly consider the specific decision-maker’s circumstances and decision framework. While this approach is not necessarily new, we hope to provide practitioners with an accessible and practical treatment of the subject.
Specifically, we outline two approaches. In the first, we retain the standard frequentist statistical approach to impact evaluations but show how certain “default” parameters can be modified to take a specific decision framework into account. For instance, a decision-maker may be willing to implement a policy even with relatively high uncertainty about its effectiveness. In the second, we show how Bayesian analysis can more directly account for a decision-maker’s beliefs and preferences. We give an overview of a Bayesian approach to evaluation and illustrate how to implement it in practice, including hands-on guidance on sample size calculations, analysis, and interpretation of results. Finally, we discuss how an evaluator can choose between the frequentist and Bayesian approaches.
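To make the first approach concrete, the sketch below (again our illustration, not the paper’s code) shows how relaxing the conventional significance level shrinks the required sample size when a decision-maker is willing to accept more uncertainty; the effect size and alpha values are assumed for the example.

```python
# Illustrative sketch: how relaxing the "default" significance level changes
# the required sample size. Effect size and alpha values are assumptions.
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()

# Academic defaults: alpha = 0.05, power = 0.80, standardized effect = 0.2.
n_default = power_calc.solve_power(effect_size=0.2, alpha=0.05, power=0.80)

# A decision-maker tolerant of more uncertainty might accept alpha = 0.20.
n_relaxed = power_calc.solve_power(effect_size=0.2, alpha=0.20, power=0.80)

print(f"n per arm at alpha=0.05: {n_default:.0f}")  # roughly 394
print(f"n per arm at alpha=0.20: {n_relaxed:.0f}")  # roughly 225
```

For the second approach, a minimal Bayesian sketch under assumed numbers is a conjugate normal-normal update: it combines a prior belief about the effect with an estimated treatment effect to yield the posterior probability that the effect is positive, which is often the quantity a decision-maker actually needs.

```python
# Illustrative Bayesian sketch: conjugate normal prior + normal likelihood.
# Prior mean/SD and the estimated effect (beta_hat, se) are made-up numbers.
import numpy as np
from scipy import stats

mu0, tau0 = 0.0, 0.10       # prior: effect centered at zero, SD of 0.10
beta_hat, se = 0.08, 0.05   # hypothetical estimate and its standard error

# Standard conjugate update: precision-weighted average of prior and data.
post_var = 1.0 / (1.0 / tau0**2 + 1.0 / se**2)
post_mean = post_var * (mu0 / tau0**2 + beta_hat / se**2)

# Posterior probability that the effect is positive.
p_positive = 1.0 - stats.norm.cdf(0.0, loc=post_mean, scale=np.sqrt(post_var))
print(f"posterior mean = {post_mean:.3f}, P(effect > 0) = {p_positive:.2f}")
```

A decision-maker could then act whenever P(effect > 0) clears a threshold implied by the costs and benefits of the decision, rather than defaulting to p < 0.05.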
The paper closes with a set of high-level takeaways for practitioners.
This document is intended as a guide for those designing evaluations for decision-makers, enabling more directed evaluations that maximize policy impact.