Capacity building: What is it good for?

Marc Shotland 17 January 2020

This is part 2 of a three-part blog series on capacity building. The first post, “Building the case for capacity building,” can be found here.

IDinsight India team facilitates a workshop on data use for health and nutrition officials in Ranchi, Jharkhand. ©IDinsight/Aditi Gupta

We see capacity building as an intervention, just like a health or education program, and we should therefore approach it with specific objectives. In my last post, I highlighted three possible objectives: learning objectives are the knowledge, skills, and mindsets we expect our target audience to attain; relationship objectives are mostly about marketing and advocating for evidence generation, as well as customer relations; and impact objectives are the real, measurable increases in the audience’s evidence generation and use. At IDinsight, our capacity building aims to achieve all three. In this post, I describe how these objectives might be achieved.

The most obvious objective of training is to facilitate learning. What is learning? Learning can be a change in someone’s knowledge, mindset, and skills. Examples of questions we can use to define learning objectives are:

  • Do we want participants to approach program design and evaluation with more structure (i.e. develop a theory of change)? Then we’re trying to change their mindset and build skills; for example, learning why a theory of change is important and how to produce one.
  • Do we want them to focus their evaluations on outcomes rather than inputs and outputs? (mindset)
  • Do we want to explain the idea of rigorous evidence? (mindset)
  • Do we want them to be able to distinguish good evidence from bad (e.g. understand selection bias, precision, and power)? Then we’re trying to increase their evaluation-specific knowledge.
  • Do we want them to come away with a decent understanding of what a rigorous evaluation design should look like? (knowledge)
  • Do we want them to come away with the knowledge to both design and potentially implement rigorous evaluation? (knowledge and skills) Typically, in this case, their baseline levels are already extremely high (i.e. they’ve already been conducting good, rigorous research, perhaps making just a few common mistakes along the way, and this training is to help them hone their skills). Otherwise, this is too high an expectation for even intensive capacity-building engagements.

My simplest advice is to start with an explicit assumption about a client’s baseline knowledge, mindset, and skills. Then set yourself targets, and think about how to measure whether those targets have been achieved. It’s important to remember, however, that learning is just an intermediate objective. If all we’ve done is change what’s inside participants’ heads, we haven’t yet had an external impact. Ultimately, impact objectives should be driving our learning objectives.

It is through our client-partners that IDinsight achieves social impact. We do not make policy decisions directly, nor do we implement policies. Instead, we try to influence those decisions and policies by injecting evidence into the process. Capacity building is a great way to develop and cultivate partnerships.

Initially, we use strategic communications to engage clients, get their buy-in for our partnership, and align on larger goals. For example, we need to spread the message about the promise of evidence-informed decision-making, about our expertise, and about why working with us will help them achieve more social impact. We use different types of content to achieve these objectives: we write blog posts and op-eds, produce polished reports, and present at conferences. To deliver a message that sticks with a specific audience, we also offer targeted training, in which we have a captive audience and an advantageous teacher-student dynamic.

Other times our objective is to cultivate a strategic partnership. Capacity building can be seen as a genuinely selfless service. “Wow, those kind people at IDinsight are willing to invest in our people and train us to do the same things they get paid to do.” It builds goodwill for the engagement as a whole, and for us as an organization. It takes a pointed effort to establish these partnerships and maintain them as a project unfolds.

Similarly, we sometimes need to secure buy-in and set expectations for a planned evaluation on a specific project. In a sense, training is one method of walking clients through a work plan in excruciating detail and justifying each part, especially the difficult parts where we’ll likely require their support. For example, if we are conducting an impact evaluation, our clients might be tempted to intervene in the control group, and we need them to resist the temptation. Being told not to do so by a bunch of external evaluators may not be compelling. It will be a lot easier to align if they’ve internalized why we randomized in the first place, and are familiar with how (for example) selection bias and the threat of crossovers could hurt their impact estimates.
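To make that concrete, here is a minimal simulation sketch (mine, not a piece of IDinsight tooling; every number in it is made up for illustration) of how crossovers dilute an impact estimate: if part of the control group ends up receiving the program, the treatment-control difference shrinks even though the program works exactly as intended.

```python
# Illustrative sketch only: how crossovers in the control group attenuate
# a measured impact estimate. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000              # participants per arm (hypothetical)
true_effect = 0.5       # true program effect on the outcome

treatment = rng.normal(loc=true_effect, size=n)   # treatment arm outcomes
control = rng.normal(loc=0.0, size=n)             # clean control arm outcomes

# Suppose well-meaning staff quietly deliver the program to 30% of the control group.
crossover = rng.random(n) < 0.30
control_contaminated = control + crossover * true_effect

print("Impact estimate, clean control: ", round(treatment.mean() - control.mean(), 2))
print("Impact estimate, 30% crossovers:", round(treatment.mean() - control_contaminated.mean(), 2))
# The second estimate shrinks toward ~0.35, understating what the program really did.
```

A partner who has worked through an exercise like this usually needs much less persuading to leave the control group alone than one who has simply been told “don’t touch it.”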

Like all of IDinsight’s services, capacity building should lead to real decisions and actions, not just learning. What are the likely decisions and actions we expect from capacity building? What is our impact objective? It depends on the audience. But put simply, our goal is to help our clients become self-reliant in producing, using, and advocating for evidence. Our theory is that changing mindsets, increasing knowledge, and expanding skillsets will help clients take on the evidence-informed decision-making activities that we typically offer as a service, leading to an overall increase in the production and use of evidence. And we believe that more and improved evidence leads to informed decisions and policies. In turn, informed decisions and policies lead to better social outcomes.

In the last post, “Building the case for capacity building,” I claimed that our audience falls into three categories: consumers of evidence, facilitators of evidence, and producers of evidence. Here I argue that each group requires a higher-level skill set than the last to effectively participate in advancing our impact objectives. Obviously, this is a generalization and simplification. But see whether you agree.

If we want consumers such as policymakers to integrate more high-quality evidence into their decision-making, we need to displace some of the other inputs they’re currently relying upon — false priors, misinformation, and bias. That requires a mindset change — believing evidence can lead to better decision-making, and then allowing evidence to update their priors. It also requires knowledge to understand the appropriate methods to answer their questions, and to distinguish good information from bad. And it probably requires the skill of producing a theory of change (which is definitely attainable for policymakers if they don’t already do this).

Facilitators (“commissioners” and “advocates”) can incentivize evidence use through program funding decisions and advocacy. For this, they must have at least the mindset, knowledge, and skillset of other evidence consumers. Like consumers, they should have an evidence-informed decision-making mindset, be able to identify the right method for the research question at hand, and distinguish good evidence from bad. Beyond that, commissioners also need to recognize evidence gaps and direct their efforts and funding toward new research when needed. Funding new research requires understanding the trade-offs between design choices and grasping the finer details of research operations, well enough to know how much a high-quality evaluation should cost to pull off. So commissioners should be aware of how sample sizes are calculated, what processes ensure data collection quality, and so on. They also need to understand how things can go wrong in the field, and how to manage those risks. This knowledge also helps them advocate effectively for these practices: if they receive push-back from knowledgeable sceptics, they could lose credibility if they don’t know the details.
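As a rough illustration of the arithmetic behind that last point, the sketch below applies the standard two-arm sample-size formula for comparing means; the effect size, outcome variance, significance level, and power are assumed purely for illustration and ignore clustering and attrition.

```python
# Illustrative sketch of a back-of-the-envelope sample-size calculation.
# All inputs are hypothetical; real designs also adjust for clustering, attrition, etc.
from scipy.stats import norm

def n_per_arm(effect, sd, alpha=0.05, power=0.80):
    """Sample size per arm for a two-sided test of a difference in means."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) * sd / effect) ** 2

# e.g. to detect a 0.1 standard-deviation improvement in an outcome:
print(round(n_per_arm(effect=0.1, sd=1.0)))   # roughly 1,570 respondents per arm
```

Because the required sample scales with one over the square of the detectable effect, halving the effect a commissioner insists on detecting roughly quadruples the sample, which is often the single biggest driver of an evaluation’s cost.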

Producers of evidence are those who design and conduct evaluations. They likely need to understand the above, but also measurement, statistical theory, and software. In other words, they may need to know as much as research practitioners.

You may be tempted to ask how we can fulfil our capacity-building impact objectives if all we offer is training. That’s a sensible question. Fortunately, the answer is simple. It’s the same answer I’d give to the question: “how do you build a house if all you have is a hammer?” You don’t. You go get more tools along with some raw materials.

In our theory of change for capacity building, many pieces need to fall into place before we see any movement on a client’s independent use and production of evidence. So we need to think: what are the prerequisites? Are there any complementary and follow-up activities we need to pursue? My response to the first question (what prerequisites?) is the “4 Cs”.¹ Clients need:

1. A clear vision of how they hope to “experience” evidence-based decision-making in the future.

2. A commitment at all levels (from all necessary stakeholders) to see this through — particularly the use of evidence in decision-making.

3. Capacity: staff whose job it is to see this through. And on top of the human resources, some financial resources, especially if they plan to produce evidence.

4. Capability: staff with the necessary technical skills on the hardware, software, and analysis side (e.g. statistics, econometrics, and data visualization), as well as the communication skills to reach key internal and external decision-makers.

Each of the “4 Cs” interacts with the theory of change of our capacity-building intervention.

In theory, our trainings may have some influence on vision and commitment. But we probably shouldn’t bother starting unless we already see strong signals of both, organization-wide. The primary mechanism we have to achieve our objectives is moving the needle on participants’ capability. Again, we need to see a strong indication of baseline (or potential) capability before we start. And we should be very conservative in our theory of how far we can truly move that needle. Ironically, in any “capacity building” engagement, it is solely our client’s responsibility to actually build the capacity (the human and financial resources) necessary for evidence production.² This is often the first-layer barrier, and the easiest to observe. If the organization has no plans to mobilize resources to actually “build capacity,” then we shouldn’t break a sweat trying to do so through the other three Cs.

In my next post, I will discuss what to teach, how to teach, and some complementary activities that can help us achieve impact.

  1. Thank you to Aparna Krishnan for discussing this with me many years ago when developing a capacity building plan for the Tamil Nadu government (India).
  2. Richard Kohl often made this point: capacity building should in most cases be termed “Capability Building”.