
Developing and implementing a monitoring and evaluation strategy for Google.org’s AI Collaboratives

Decision-maker’s challenge

Google.org recently launched AI Collaboratives: a new funding approach that aims to amplify the impact of AI solutions to global challenges by facilitating knowledge sharing and cooperation among cutting-edge researchers and implementers. The first two AI Collaboratives, focused on wildfires and food security, bring together researchers, nonprofits, public agencies, and funders working to create change in these priority areas. As Google.org continues to invest in AI solutions to the world’s most difficult challenges, it seeks to understand whether the AI Collaborative model actually leads to improved outcomes for participating communities.

Impact opportunity

The AI Collaboratives are tackling significant global challenges. Wildfires claim more than 300,000 lives annually through smoke exposure, and food insecurity affects nearly a third of the world’s population. The Collaboratives are funding and convening some of the most innovative and influential leaders in each space, providing tens of millions of dollars of funding and invaluable expertise to enable their success. IDinsight’s evidence generation will enable Google.org to optimize its contribution to these efforts.

The monitoring and evaluation approach that IDinsight is developing – and the evidence it generates – will help accelerate progress on the cutting edge of both funding paradigms and technological approaches in the development space. The AI Collaboratives offer a new approach to funding centered on collective learning, shared infrastructure, and cross-actor coordination, and our research will inform optimizations of this innovative model. At the same time, AI is a rapidly growing and dynamic space, and our work will help ensure these technologies are deployed effectively, equitably, and in ways that maximize their real-world impact, while establishing best practices for evaluating AI.

Our approach

IDinsight is serving as the learning and evaluation partner for the AI Collaboratives, developing a multi-year strategy to evaluate both the individual collaboratives and the collaborative model itself. The strategy seeks to understand: a) how the collaboratives amplify the impact of their members by facilitating knowledge sharing and cooperation; and b) how the collaboratives drive measurable improvements in food security and wildfire resilience through the development and use of AI solutions by their member organizations. We are working closely with Google.org’s grantees, who are deploying innovative, cross-cutting AI-driven tools – from precision agriculture platforms to wildfire detection systems – to better understand how these technologies are being used in practice, what outcomes they are achieving, and under what conditions they are most effective. This understanding builds a foundation for continuous learning across the AI Collaboratives.

In Phase 1 (November 2024 – March 2025), in close collaboration with Google.org and the grantees, we co-developed Theories of Change for each domain collaborative, crafted collaborative- and grantee-level research questions, and built an evaluation roadmap to guide measurement across ecosystem levels: the collaborative model, domain-specific pillars, and individual grantees.

In Phase 2 (July 2025 – October 2026), we are putting that roadmap into action through two parallel efforts. First, we are setting up a collaborative-level monitoring and evaluation system that not only tracks progress towards the collaboratives’ objectives but also provides decision-relevant evidence to optimize implementation. In parallel, we are conducting light-touch M&E engagements with each grantee to integrate their existing M&E efforts into the collaborative-level system and to identify opportunities for improved impact as well as for further evaluation. These efforts lay the groundwork for rigorous evaluations in the next phase, ensuring our approach is tailored to each grantee’s intervention type, stage, and learning goals.

Looking ahead, Phase 3 aims to include process evaluations of grantee initiatives; process tracing or impact evaluations for selected AI initiatives; deeper assessments of collaborative-level impact amplification; and targeted support to translate evidence into policy and practice.

Our work will continue to evolve alongside the AI Collaboratives, with a focus on supporting learning, reducing duplication of effort, and identifying gaps that may hinder the effective use of AI tools and the collaborative model in driving meaningful outcomes.

The results

As part of Phase 1, IDinsight developed collaborative-level Theories of Change and research questions in close collaboration with Google.org and the grantees. These formed the foundation for an overarching, multi-tiered measurement framework and a multi-year roadmap to guide evaluation activities in subsequent phases.

Results from Phase 2 and Phase 3 will be shared as the project progresses and as monitoring and evaluation activities are rolled out across grantees and domains.