A surveyor mapping plot boundaries ©IDinsight
In The Open Society and Its Enemies (1945), Karl Popper introduces “piecemeal social engineering,” his framework for building up social institutions incrementally, informed by experimentation and evidence. This stands in contrast to the more prevalent “utopian social engineering” of his time, which he criticized for lofty, abstract ideals that largely ignored practicality; indeed, today we might regard such methods as colonial and paternalistic. For Popper, the “piecemeal engineer knows, like Socrates, how little he knows. He knows that we can learn only from our mistakes.” 1
In that spirit, I want to begin with lessons we have learned in trying to apply engineering principles and methods to help our partners increase their social impact. I hope that our learnings can be helpful for others in the sector.
One of the biggest constraints we have experienced as data professionals working in the development sector is the additional degrees of separation between the work we do and its ultimate beneficiaries. By its nature, data already abstracts away and reduces the richness of full contextual information. In a model-building context, a good grasp of this contextual information is critical to building a useful model.
Originally, much of our work took a top-down approach guided by the outputs of analysis, frequently the most common framework for data science work: we would get some data, perform data analysis, then move on to feature engineering, model selection, validation, and finally production. Over time, we realized that this approach tended to create outputs that were misaligned with the theory of change employed by our partner organizations.
This has led us to a more “product-minded” approach to data science. Today, we start all projects by working closely with our partners to understand their theories of change, generate user stories, and then perform analysis relative to those user stories.
For example, we worked with a political advocacy organization to help them match more people with social welfare benefits. Our initial focus was on improving a model we had developed for them in an earlier phase of the project. Through a series of workshops, however, we came to understand that the more pressing need for their organization, and a much larger opportunity for impact, was actually in making their data systems more efficient. This led us to pivot away from our initial work in order to pursue work that was better aligned with our partner’s strategic vision and theories of change.
Computing resources for data management and data science can be expensive. For an extreme example, estimates of the cost of training OpenAI’s GPT-3 range from $4.6 million to $12 million per training cycle. This creates a power asymmetry (even within high-income countries) when it comes to equitable access to the latest machine learning methods, as the costs of training and hosting (putting aside other associated costs for now) grow astronomically.
Unsurprisingly, many of our partners have expressed concern that substantial compute costs might overwhelm any benefits coming from our work. Empirically, we have found that the costs associated with our deliverables usually make up a slim proportion of an organization’s ongoing costs; even so, reducing ongoing and development costs makes our work more cost-effective and thus increases our contribution to social impact.2
This means implementing best-in-class data science practices under a variety of resource constraints. For instance, it was standard practice in my previous job to spin up large virtual machines and other resources on demand to do everything from experimentation to running entire model pipelines to automatically training hundreds of models for selection. That practice would not be acceptable for our work at IDinsight. We have found the following set of practices helpful:
Writing efficient code is a common software engineering practice, though generally not done with cost reduction in mind. We have found substantial cost savings in creative ways to minimize memory usage or reduce code complexity, which can allow smaller compute resources without a reduction in performance. For instance, in one project in which we used word embeddings as part of a message-matching application, we found that normalizing the saved word vectors within the embedding ahead of time saved substantial computation time, obviating the need to size up our EC2 instance.
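To make the idea concrete, here is a minimal NumPy sketch, not our production code: the embedding matrix is a synthetic stand-in and the function name is illustrative. It shows how pre-normalizing the stored vectors lets cosine similarity at query time reduce to a single matrix-vector product.

```python
import numpy as np

# Synthetic stand-in for a pre-trained embedding matrix (one vector per word).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(50_000, 300))

# Normalize each vector to unit length once, offline, and persist the result.
normalized = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

def most_similar(query_vec, top_k=5):
    """Return indices of the top_k closest vectors by cosine similarity.

    Because the stored vectors are pre-normalized, only the query vector
    needs normalizing at request time; the rest is one matrix-vector product.
    """
    q = query_vec / np.linalg.norm(query_vec)
    scores = normalized @ q
    return np.argsort(scores)[::-1][:top_k]

print(most_similar(rng.normal(size=300)))
```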
It is well known that the main cloud vendors provide fairly generous free tiers. These are often limited to specific resources, a fixed amount of time, or a number of GB per month, but they can reduce development costs considerably. For example, on AWS, running a SageMaker notebook on a free-tier t3.medium instead of an r4.xlarge might mean no additional development costs, provided there is no need for a very powerful instance or for running the instance constantly.
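As a sketch of what provisioning the smaller notebook instance looks like with the boto3 SDK (the notebook name and IAM role ARN below are placeholders, SageMaker instance types carry an “ml.” prefix, and whether the instance falls under your free tier depends on your account’s current terms):

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Placeholder name and role ARN for illustration only.
sagemaker.create_notebook_instance(
    NotebookInstanceName="dev-notebook",
    InstanceType="ml.t3.medium",  # smaller instance class instead of a large one
    RoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    VolumeSizeInGB=5,
)
```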
In this regard, it is especially important to ensure that resources are appropriately sized for the task at hand. A very large and powerful database is simply not required to store a fixed few thousand rows of data; in that case, it probably makes sense to go serverless.
Suppose we want to deploy an API to AWS for a mobile application used by a few hundred people, each making no more than 50 calls a day between the hours of 9am and 6pm on weekdays. We could host everything on a conventional EC2 instance, but this would be fairly costly since we would be paying for server time even when no one is accessing the API. Provided the application does not need very low latency, we could consider a serverless solution such as a Lambda function, which would meet the requirements and cost substantially less.
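As a rough sketch of what such a serverless endpoint might look like (the route and response are illustrative, and the exact event shape depends on how API Gateway is configured):

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler behind an API Gateway proxy integration.

    The function only runs, and is only billed, while handling a request,
    unlike an EC2 instance that accrues charges around the clock.
    """
    path = event.get("path", "/")  # "rawPath" for HTTP API (v2) events
    if path == "/health":
        body = {"status": "ok"}
    else:
        body = {"message": "Hello from a serverless endpoint"}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```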
In addition to minimizing costs, we have found it important to break down and model potential cost scenarios with our partners in order to arrive at a solution architecture that makes sense for their budget and capacity. This exercise is especially useful because many cloud vendors are not immediately transparent with their pricing, and some of our partners have had negative experiences with surprise bills in the past.
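One way to do this is a back-of-the-envelope script that partners can adjust themselves. The sketch below compares an always-on EC2 instance against Lambda for the usage pattern described above; the rates are placeholders, not current AWS pricing.

```python
HOURS_PER_MONTH = 730

# Scenario A: a small EC2 instance running around the clock.
ec2_hourly_rate = 0.0416  # placeholder on-demand rate, USD/hour
ec2_monthly = ec2_hourly_rate * HOURS_PER_MONTH

# Scenario B: Lambda, billed per request and per GB-second of compute.
users, calls_per_user_per_day, weekdays = 300, 50, 22
requests = users * calls_per_user_per_day * weekdays
avg_duration_s, memory_gb = 0.2, 0.5
per_request_rate = 0.20 / 1_000_000  # placeholder USD per request
per_gb_second_rate = 0.0000167       # placeholder USD per GB-second
lambda_monthly = requests * (per_request_rate + avg_duration_s * memory_gb * per_gb_second_rate)

print(f"EC2 (always on): ~${ec2_monthly:,.2f} per month")
print(f"Lambda:          ~${lambda_monthly:,.2f} per month")
```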
This lesson is specific to data scientists. For most of our careers outside the development sector, one of the most important goals of our modeling work has been to minimize some error function to train the most performant model. In our experience, however, the most accurate model is usually not the best for social impact. When considering model performance, we have found that other context-dependent factors are just as important as minimal error.
We might consider two models indistinguishable provided both perform above a certain error threshold and both are useful for the task at hand, even if one nominally outperforms the other drastically. As practitioners, we have found that model performance by itself does not always translate to better decision-making or better outcomes for beneficiaries. In particular, if one model is more transparent, more explainable, and more interpretable,3 we ought to prefer it even at the expense of greater error. Here are a couple of reasons why we believe this to be the case.
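One way to operationalize this is a selection rule that prefers the most interpretable candidate whose cross-validated error is within some tolerance of the best performer. The sketch below uses scikit-learn on synthetic data, with an arbitrary 10% tolerance and illustrative candidate names:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

# Candidates ordered from most to least interpretable.
candidates = [
    ("linear", LinearRegression()),
    ("boosted_trees", GradientBoostingRegressor(random_state=0)),
]

# Cross-validated mean absolute error for each candidate.
errors = {
    name: -cross_val_score(model, X, y, scoring="neg_mean_absolute_error", cv=5).mean()
    for name, model in candidates
}

tolerance = 1.10  # accept up to 10% more error than the best model
best_error = min(errors.values())
chosen = next(name for name, _ in candidates if errors[name] <= tolerance * best_error)
print(errors, "->", chosen)
```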
Model accuracy does not necessarily lead to better uptake by our partners. A big part of our work in advising our partners lies in building trust in our methods and ensuring that those involved understand the model well enough for its outputs to be useful. This means that some classes of models may be preferable to others depending on the context of our client organization.
For example, we have found that most tree-based models are fairly intuitive for people with a somewhat technical background (e.g. those who understand linear regression), especially when paired with interpretation methods such as SHAP. When we have invested time in making sure our partners understand our modeling work and are comfortable interpreting the outputs themselves, we have noticed more trust in our work overall.
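For illustration, pairing a tree ensemble with SHAP might look like the sketch below; the data are synthetic, and the summary plot is the kind of global view we walk through with partners.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view of which features drive the model's predictions.
shap.summary_plot(shap_values, X)
```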
Another reason to focus on interpretability and explainability is that we are frequently data-constrained in our work. Training data is often expensive (especially survey data), limited, outdated, and riddled with quality issues; it can be very easy to overfit. Consequently, issues such as covariate shift become serious concerns for our work.
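A lightweight check in this spirit is to compare each covariate’s training distribution against what the model sees later, for instance with a two-sample Kolmogorov-Smirnov test. The column names, simulated drift, and significance threshold below are all illustrative:

```python
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train = pd.DataFrame({"enrollment_rate": rng.normal(0.60, 0.10, 1000),
                      "household_size": rng.normal(5.0, 1.5, 1000)})
# Simulated later data in which one covariate has drifted.
recent = pd.DataFrame({"enrollment_rate": rng.normal(0.45, 0.10, 500),
                       "household_size": rng.normal(5.0, 1.5, 500)})

for col in train.columns:
    stat, p_value = ks_2samp(train[col], recent[col])
    flag = "possible shift" if p_value < 0.01 else "ok"
    print(f"{col}: KS statistic={stat:.3f}, p={p_value:.4f} -> {flag}")
```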
As such, an easily interpretable model can be more suitable because it is easier to figure out what might have gone wrong. For example, we worked with an organization to predict the number of out-of-school girls in different regions of India. We found substantial variation in performance across districts and states, suggesting that the model did not generalize well. However, the model was a legible ensemble of sub-models, each representing a different aspect of the theories of change for why girls might be out of school. This suggested to us that we were likely missing covariates, which led to a round of model improvements.
Finally, it may be worth investing in interpretation tools built specifically with users at our partner organizations in mind. For the prediction project above, we created a small application that mapped recommended villages to visit topographically. For another project involving matching messages to a database of topics, we created a tool that allowed less technical users to experiment with how our model would assign potential messages. These tools have made our partners more confident in our work and have greatly aided our collaboration; for instance, partners have picked up potential covariate drift issues for us.