
Key insights from panelists at our Skoll World Forum side event

Lorreen Ajiambo 10 April 2025

Harnessing impact evaluations to drive transformation and scale

[L-R] Pascale de la Frégonnière (Strategic Advisor to the Board, Cartier Philanthropy), Alisha Myers (Director of Strategic Information and Innovation, World Bicycle Relief), Winnie Auma (Chief Operating Officer, Village Enterprise), Karan Nagpal (India Regional Director, IDinsight), Jeff McManus (Senior Economist, IDinsight), Safeena Husain (Founder and Board Chair, Educate Girls), Caitlin Baron (CEO, Luminos Fund), and Rebecca Sharp (CEO, IDinsight) ©Lorreen Ajiambo/IDinsight

Last week, on the sidelines of the Skoll World Forum, IDinsight and Cartier Philanthropy hosted an enlightening panel discussion on using impact evaluations to drive organizational transformation and scale. The event brought together leaders from diverse organizations, all at different stages of their evidence journeys but united by a deep commitment to data-driven decision-making. 

We opened the event with compelling keynote addresses by Pascale de la Frégonnière (Strategic Advisor to the Board, Cartier Philanthropy) and Rebecca Sharp (CEO, IDinsight). Pascale emphasized that evaluations are more than just a box to check for funders—they form the cornerstone of informed decision-making and foster genuine trust-based relationships with grantees. Rebecca underscored that the most valuable impact evaluations transcend merely validating interventions—they actively enhance programs, driving both efficiency and effectiveness.

The panel included Safeena Husain (Founder and Board Chair of Educate Girls), Alisha Myers (Director of Strategic Information and Innovation at World Bicycle Relief), Caitlin Baron (CEO of Luminos Fund), Winnie Auma (Chief Operating Officer of Village Enterprise), and Jeff McManus (Senior Economist at IDinsight). 

Moderated by Karan Nagpal, IDinsight’s India Regional Director, the panel explored how evidence can inform strategic decisions, strengthen program design, and help organizations grow despite the changing funding landscape.

When is an impact evaluation appropriate?

Jeff McManus kicked off the discussion by emphasizing that impact evaluations aren't the right tool for every evidence need. While they excel at answering whether a program works and quantifying its effects, they're less suited to understanding why programs work or determining who needs them.

According to Jeff, organizations should consider impact evaluations when:

  • Implementation kinks have been worked out
  • The organization is large enough to support a rigorous study
  • There’s potential for growth and funder interest
  • The evaluation fills a genuine knowledge gap and answers specific learning questions

The other panellists shared their evidence journeys, emphasizing a stepwise approach in which organizations begin by refining their programs using internal monitoring data and smaller-scale assessments before embarking on rigorous impact evaluations.

Using evaluations to strengthen program implementation

Panelists described external evaluations as complementary to their internal monitoring systems and gave concrete examples of how they used evaluation results to refine their programs.

Safeena Husain highlighted the importance of treating evaluators as thought partners who can help improve programs. When Educate Girls' RCT midline results fell short of expectations, the team worked closely with their evaluator to refine the program and achieved stronger outcomes by the endline assessment.

Winnie Auma referenced a Village Enterprise RCT showing that households with diversified income streams were more likely to sustain income gains; in response, Village Enterprise revised its training curriculum to emphasize business diversification.

An RCT of the Luminos Fund's program showed promising results but also unearthed a critical skills gap. This prompted Caitlin Baron and her team to overhaul the early literacy component of their program, resulting in a further 33% gain in literacy results the following year.

Evaluations for growth and scale

For World Bicycle Relief, external validation was crucial to demonstrate the impact of bicycles on livelihoods. Alisha Myers explained that while internal studies suggested positive effects, they needed an independent, credible assessment with a control group to convince potential funders.

Panelists noted that governments value different types of evidence than other funders do.

Caitlin Baron found that governments are often less interested in impact evaluation data and more focused on monitoring data specific to particular geographies and programs. The format matters, too: governments may prefer presentations and in-person meetings over formal reports. Safeena Husain emphasized the importance of choosing the right government partners and positioning programs as complementary to government service delivery rather than as a critique of its shortcomings.

Navigating the changing funding landscape

The panel addressed recent bilateral and multilateral funding contractions that have affected the development sector, acknowledging that the traditional scaling pathway from foundation to bilateral funding has fundamentally changed.

Some of the strategic shifts organizations are considering and implementing include:

  1. Prioritizing high-impact interventions based on their evaluations.
  2. Exploring cost-efficient digital solutions.
  3. Positioning themselves as credible authorities able to fill emerging sector-wide evidence gaps.
  4. Championing results-based financing approaches.
  5. Exploring market-based approaches, including financial products, that improve the affordability of and access to their programs, reducing dependence on donor funding.

Key takeaways

The discussion highlighted several valuable lessons for organizations considering their evidence journey:

  1. Match the evaluation method to your specific learning questions.
  2. Use evaluations as opportunities to strengthen internal monitoring systems.
  3. View evaluators as thought partners who can help improve programs.
  4. Package evidence differently for different stakeholders, especially governments.
  5. Use evidence to demonstrate cost-effectiveness and prioritize high-impact interventions.

As the development sector faces unprecedented funding challenges, the insights from these organizations demonstrate how evidence can be a powerful tool not just for proving impact but for improving programs and adapting to changing realities.