Working paper

Decision-focused impact evaluation as a practical policymaking tool

To more effectively inform development action, impact evaluations must be adapted to serve as context-specific tools for decision-making.

22 August 2018

This paper was commissioned by the William and Flora Hewlett Foundation to provide a perspective on the future of impact evaluation, based on IDinsight’s work using rigorous impact evaluation as a practical decision-making tool for policymakers and non-governmental organizations in developing countries. The goal of this paper is to articulate how impact evaluations can achieve their full potential to improve social programs in the developing world.


Executive Summary

Impact evaluations have enhanced international development discourse and thinking
over the past 15 years, but the full promise of impact evaluations to improve lives has
yet to be realized.

Rigorous impact evaluations to date have largely focused on contributing to a global
learning agenda. These ‘knowledge-focused evaluations’ (KFEs) – i.e. those
primarily designed to build global knowledge about development interventions and
theory – have catalyzed a more sophisticated dialogue around results and refined
important development theories. Evidence created by KFEs has also led to
the international scale-up of several interventions.

However, two challenges have limited the extent to which KFEs have informed policy
and programmatic decisions. First, evaluator incentives are often misaligned with
implementer needs. Second, many impact evaluation results do not generalize
across contexts.

To more effectively inform development action, impact evaluations must be adapted
to serve as context-specific tools for decision-making that feed into local solution-finding systems. Towards this end, a new kind of impact evaluation has recently
emerged, one that prioritizes the implementer’s decision-making needs over potential
contributions to global knowledge. These ‘decision-focused evaluations’ (DFEs) are
driven by implementer demand, tailored to implementer needs and constraints, and
embedded within implementer structures. Importantly, DFEs mitigate generalizability
limitations by testing interventions under conditions very similar to those in which the
interventions could be scaled. By reframing the primary evaluation objective, they
allow implementers to generate and use rigorous evidence more quickly, more
affordably, and more effectively than ever before.

Acknowledging that the distinction between KFEs and DFEs is not binary – any
evaluation will exhibit varying KFE and DFE characteristics – we have developed
these terms because they help elucidate the objectives of a given evaluation and
offer a useful conceptual frame to represent two axes of rigorous impact evaluation.
We argue that the future of impact evaluation should see continued use of KFEs,
significantly expanded use of DFEs, and a clear strategy on when to use each type of
evaluation. Where the primary need is for rigorous evidence to directly inform a
particular development program or policy, DFEs will usually be the more
appropriate tool. KFEs, in turn, should be employed when the primary objective is to
advance development theory or when we expect, ex ante, high external validity.
This recalibration will require expanding the use of DFEs and greater
targeting in the use of KFEs.

To promote the use of rigorous evidence to inform at-scale action, we identify two
strategies to increase demand and four strategies to increase the supply of DFEs. To
stimulate demand for DFEs:

  • Funders should establish ‘impact first’ incentive structures that tie scale-up
    funding to demonstration of impact over a long time horizon; and
  • Funders should allocate funds to support DFEs across their portfolios.

To build supply for DFEs:

  • Universities and funders should build and strengthen professional tertiary
    education programs that train non-academic impact evaluation specialists;
  • Funders should subsidize start-up funds to seed decision-focused impact
    evaluation providers and international evaluation organizations should
    support these organizations with rapid external quality reviews;
  • Evaluation registries should publish the cost and length of impact evaluations
    and the actions they influence; and
  • Funders, evaluators, and implementers should collaborate to establish ‘build,
    operate, transfer’ evaluation cells to strengthen evaluation capacity in
    implementing organizations.

Overall, to maximize the social impact of impact evaluations, all involved
stakeholders (implementer, funder, and evaluator) should have clarity on each
evaluation’s primary objective and select the appropriate evaluation type (DFE or
KFE) accordingly. We subsequently envision DFEs supporting a robust innovation
‘churn’ whereby intervention variations are generated and rigorously assessed in
rapid cycles. Both KFEs and DFEs – along with enhanced monitoring systems, big
data, and other emerging measurement developments – will play a critical role in
tightening the link between evidence and action.