
Three lessons from iterative experiments in fisheries

Fishers carry fishing nets and other supplies to a set of floating cages in preparation for a round of experimentation. ©IDinsight/ Rob Sampson

A promising landscape

India has 2.36 million hectares of ponds and tanks and 2.907 million hectares of reservoirs. However, much of this area is under-utilised for fish cultivation. Most waterbodies with existing aquaculture rely on traditional and low-intensity forms of fish farming. Tata Trusts, a national philanthropic organisation, believes there are considerable income gains that could be realised through the promotion of more intensive forms of community-led fish farming. It runs a program called Open Source Fisheries that promotes low-cost, higher-efficiency fish-rearing techniques to help farmers increase their income.

Rearing fish in netted cages placed in a reservoir, also known as cage culture, seemed like a promising way to increase fish production across reservoirs in Andhra Pradesh. While its fixed costs can be high, subsidies for capital inputs are available, and cages are inexpensive to maintain. Furthermore, tightly controlled lab tests have shown it is a viable cultivation strategy. Tata Trusts commissioned IDinsight to test their hypothesis that variations of cage culture could work in field conditions as well as in the lab. They aimed to use the findings for two broad purposes. First, they wanted to better understand the relationship between inputs, effort levels, and outputs, so they could improve their extension services. Second, they wanted to generate evidence on the effectiveness of this strategy that could be used to influence government policy on fisheries.

An unexpected turn

Organisations often design the best program they can given the available research, expertise, and evidence. But unknown variables in a new environment make it impossible to be certain of the program’s effectiveness at the outset.

The research IDinsight and Tata Trusts had reviewed, and the ichthyologists we had consulted, suggested that this type of cage culture could be a viable option for fish farmers. While we knew results in the field would differ from those documented in lab experiments, we were confident we could find at least one model that would be profitable for fishers.

However, during the time we worked with the Tata Trusts team, we were not able to identify a strategy that was profitable for fishers. The fish were not eating enough feed to grow to a marketable size. The water was more stagnant than expected, which could have impeded fish growth. In each experiment, growth was far below target.

While the results weren’t what they expected, Tata Trusts was still able to use this information to improve their programs and increase their impact. They are still experimenting with new techniques, perhaps including the use of cages, that could raise the incomes of fishers. The following are three important lessons we took away from this experience.

1. Evidence that an idea isn’t working as expected can improve the social impact of a program.

Many implementers are naturally disappointed to learn that promising ideas aren’t as effective as hoped. However, evidence on what doesn’t work can be just as important as, or more important than, evidence on what does. Such evidence can lead organisations to shift resources away from ineffective strategies, either towards more effective aspects of their program or towards innovation to develop new models. This was the approach Tata Trusts took.

Tata Trusts had the resources available to scale up the cage culture models they originally tested. However, the results from the experiments showed that, in its current form, this model wouldn’t work well for local fishers. Instead of pushing the model to program participants without good evidence on how well it worked, they were able to channel resources into testing other models that could better benefit program participants.

2. An iterative approach can optimise over many design parameters better than a single test of effectiveness.

Designing an intervention usually involves selecting various design parameters for how the intervention should run. To grow fish in cages, one must select many design parameters. These include the size of the cage, the type of net, the type of fish, the amount the fish are fed, the frequency the fish are fed, the reservoir, and more. The interaction of these parameters with each other and with the context determines the outcome[footnote]See Pritchett, Samji, and Hammer’s discussion of the advantages of such an approach in It’s All About MeE: Using Structured Experiential Learning (‘e’) to Crawl the Design Space[/footnote].

Running an experiment on fish growth shows a researcher how much the fish grow under a single combination of design parameters. However, it says little about how each parameter contributed to that growth, or how far from optimal the chosen combination was.
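To make the combinatorial problem concrete, the sketch below enumerates a hypothetical design space and counts how many combinations a single experiment leaves untested. The parameter names and values are illustrative assumptions, not the actual options from these experiments:

```python
from itertools import product

# Hypothetical design parameters for a cage culture trial.
# The values below are illustrative, not the options Tata Trusts tested.
design_space = {
    "cage_size_m3": [27, 64, 125],
    "net_type": ["nylon", "HDPE"],
    "species": ["pangasius", "tilapia", "rohu"],
    "feed_grams_per_fish_per_day": [5, 10, 15],
    "feedings_per_day": [1, 2],
    "stocking_density_per_m3": [20, 50, 100],
}

combinations = list(product(*design_space.values()))
print(f"Total combinations: {len(combinations)}")  # 3 * 2 * 3 * 3 * 2 * 3 = 324

# A single experiment tests exactly one of these 324 points, so it says
# little about how each parameter contributed to the result or whether a
# neighbouring combination would have performed better.
```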

Experimenting iteratively allowed us to learn more than a single test would have. For example, in one iteration we compared two different feeding methods; one clearly outperformed the other, so we used it in the subsequent experiments. We then tested raising fish in small, densely packed cages, but a storm overturned all of those cages and we abandoned that approach. In the next experiment we focused on stocking densities, and learned that reducing the stocking density by 60% was correlated with no substantial change in survival, while reducing it by 80% was correlated with a large increase in survival.
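Schematically, this carry-forward logic looks something like the sketch below. The arm names and outcome scores are entirely hypothetical; the point is only that each round fixes the previous winner and varies one new parameter:

```python
def best_arm(outcomes):
    """Return the arm with the highest outcome score from one round."""
    return max(outcomes, key=outcomes.get)

# Round 1: compare two feeding methods (scores are made-up placeholders).
round_1 = {"feeding_method_a": 0.42, "feeding_method_b": 0.61}
winning_feed = best_arm(round_1)

# Round 2: hold the winning feeding method fixed and vary stocking density.
round_2 = {"density_cut_60pct": 0.55, "density_cut_80pct": 0.78}
winning_density = best_arm(round_2)

print(winning_feed, winning_density)  # feeding_method_b density_cut_80pct
```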

3. Flexibility with the research approach can save time and money.

Different research approaches have different advantages and disadvantages. Our initial approach was designed to generate precise estimates of how much local fish would grow under cage culture conditions and an ichthyologist-recommended feeding schedule. We therefore designed a tightly controlled, small-scale experiment in which we carefully measured conditions at baseline, over the course of the experiment, and at endline.

However, before starting the second experiment, we jointly determined that the most pressing research question had evolved: could these types of cage culture models be profitable for farmers in these communities? With this more precise research question, we opted for cheaper and quicker data collection techniques. These were less onerous for Tata Trusts’ field staff but still gave the team enough information to decide how to proceed based on the results.