
2024 Dignity Report

3 December 2024

Chapter 2: Methods of measurement

 

Photo credit: Paula Bronstein/Getty Images/Images of Empowerment. Illustrations by Magda Castría.

How do we know if development programs truly respect and uphold people’s dignity? While research shows that dignified treatment enhances cooperation, service satisfaction, well-being, and societal participation, robust ways to measure dignity have been lacking.

In the “What to do” blog chapter, we explored the pressing need for dignity-focused interventions and identified several promising approaches for pilot testing. But how do we know if these interventions genuinely uphold dignity? This blog introduces the tools and methodologies to answer this critical question.

Choosing the Right Measurement Tool

Recently, the interest in dignity research has spurred the creation of new measurement tools, such as the “Project Respect for Participant Dignity” measure (Perrin, Hembling & Castleman, 2024) by Catholic Relief Services (CRS) and our “Felt Respect for Dignity” scale (Wein, Khatry & Bhimani, 2022).

Most of the tools use surveys to gather data, either through subjective questions about personal experiences or through objective observations of specific behaviors. We’ve prioritized and validated the subjective approach, as it allows individuals to apply their expectations and interpretations of respectful treatment––making it adaptable across different cultures and contexts.

For our research, we’re primarily using the CRS “Project Respect for Participant Dignity” scale, because it offers greater sensitivity for academic research and has been extensively validated by CRS across five diverse populations in India, Niger, the Philippines, and Zambia. This tool also provides more detailed insight into the nuances of dignity experiences. However, we still recommend the shorter “Felt Respect for Dignity” measure for routine monitoring and evaluation, especially when organizations need to identify potential shortfalls quickly.

Refining the Tool 

To ensure our measurements are accurate and meaningful across varying cultural contexts and intervention types, we have considered several adjustments to address both technical and contextual challenges arising from the subjective nature of dignity. One such challenge is adaptive preferences (Sen, 1999; Dold, 2024): individuals’ expectations of respectful treatment may be shaped by past experiences, especially if they have endured mistreatment. This could artificially lower expectations in some groups and lead us to underestimate the need for dignity-affirming interventions. To mitigate this, we plan to collect baseline data on participants’ expectations of respect and dignity, including how they would expect a “typical person” to be treated. This establishes a comparison point that reflects not only personal expectations but also societal norms.

We are also considering adjustments to the Likert scale format: expanding from a 5-point to a 7-point scale (Harzing et al., 2009) and clearly labeling each scale point. This should improve the granularity of responses and help avoid “ceiling effects,” where scores cluster at the high end, as observed in the CRS pilot of the measure. The expanded scale allows respondents to express nuanced levels of agreement or disagreement without defaulting to extreme responses or midpoints. Additionally, to counter potential over-saturation of scores, we are considering a follow-up question that asks participants to restate their score as a percentage (i.e., transferring it to a 0–100 scale), allowing upward or downward adjustments that better reflect the true impact of the intervention.
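To make the rescaling step concrete, here is a minimal sketch (in Python, with hypothetical helper names; the mapping rule is our illustrative assumption, not the study’s instrument) of how a 7-point Likert response could be placed on a 0–100 scale and compared with the percentage a participant states in the follow-up question. A downward revision on a top-of-scale score is exactly the kind of signal a ceiling-prone Likert item cannot capture on its own.

```python
def likert_to_percent(score: int, points: int = 7) -> int:
    """Map a Likert response (1..points) onto a 0-100 scale."""
    if not 1 <= score <= points:
        raise ValueError(f"score must be between 1 and {points}")
    return round((score - 1) / (points - 1) * 100)

def follow_up_adjustment(score: int, stated_percent: int,
                         points: int = 7) -> int:
    """Difference between the percentage a participant states in the
    follow-up question and the percentage implied by their Likert
    score. Negative values indicate a downward revision."""
    return stated_percent - likert_to_percent(score, points)

# A respondent who gave the top score (7) but restates it as 85%
# has revised downward -- room the Likert item alone could not show.
print(follow_up_adjustment(7, 85))  # -15
```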

Additional data collection beyond surveys

In addition to baseline and endline survey data, we plan to integrate complementary data collection methods to capture deeper insights into how participants experience and perceive changes in dignity over time. A primary focus will be understanding the lived experience behind shifts in dignity scores. 

We will conduct cognitive interviews during the piloting phase, a series of qualitative interviews throughout the lifecycle of the program, and direct observations. This will help us understand more deeply how specific points on the dignity scale relate to personal experiences, how different parts of the intervention take effect, and how different outcomes connect.

Looking at broader impacts

Treating people with dignity may improve various aspects of their lives beyond the impacts we know of. Prior research highlights impacts not only on individual wellbeing (Wein et al., 2022) but also on health (Beach et al., 2005), program service engagement (Wherry et al., 2019; Thomas et al., 2020), and broader social and societal dynamics (De Cremer, 2002; Lalljee et al., 2013). We will add to the literature by examining impacts across a range of secondary measures that may include agency, subjective wellbeing, social status, intended and actual service uptake, trust in the NGO, willingness to participate in the future, willingness to recommend services, altruism to others, conflict resolution, cooperation, and community engagement.

We are also interested in exploring how programmatic outcomes relate to dignity outcomes. This could help us better understand the relationship between respect for dignity and sectoral outcomes and evaluate whether improvements in dignity correlate with improved program outcomes.

To fully understand these impacts, we rely on rigorous measurement tools and insights from other organizations already grappling with dignity in practice. These insights provide perspective on what works and help identify practical challenges we must address in our interventions. In the Chapter 3 blog, we learn from our partners’ real-world experiences, exploring how they bring dignity to life in their work, and draw lessons for our randomized controlled trial.

Sample Size

To calculate sample sizes, we ran simulations based on previous validations of the CRS measure, which suggested a minimum of 1,540 respondents across 154 clusters. However, to increase reliability and account for potential subgroup variations under the same assumptions, we aim for a larger sample size of 2,400, spread across 240 clusters. This will allow us to capture detailed insights into how dignity-related experiences differ across various groups, including marginalized communities.
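We cannot reproduce the simulations here, but the intuition behind cluster-adjusted sample sizes can be sketched analytically. The snippet below (Python; the effect size, intracluster correlation, and cluster size are illustrative assumptions, not figures from the study) inflates a standard two-arm sample-size formula by the design effect DEFF = 1 + (m − 1) × ICC, where m is the cluster size.

```python
import math
from statistics import NormalDist

def cluster_sample_size(effect_size: float, icc: float,
                        cluster_size: int,
                        alpha: float = 0.05,
                        power: float = 0.80) -> tuple[int, int]:
    """Total respondents and clusters for a two-arm cluster-randomized
    comparison of means, using the design effect
    DEFF = 1 + (cluster_size - 1) * ICC."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)
    z_beta = z(power)
    # n per arm under individual randomization (normal approximation)
    n_per_arm = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    deff = 1 + (cluster_size - 1) * icc
    # round up to whole clusters per arm
    clusters_per_arm = math.ceil(n_per_arm * deff / cluster_size)
    total_clusters = 2 * clusters_per_arm
    return total_clusters * cluster_size, total_clusters

# Illustrative inputs: standardized effect 0.2, ICC 0.1, 10 per cluster.
total_n, clusters = cluster_sample_size(0.2, 0.1, 10)
print(total_n, clusters)  # 1500 150
```

With these placeholder inputs, the closed-form design effect lands in the same neighborhood as the reported minimum of 1,540 respondents across 154 clusters, though the study’s own simulations account for subgroup variation and other design features that a simple formula does not.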

Ethical Considerations

It would be irresponsible (and ironic) to study dignity while conducting research that feels impersonal or disrespectful. That’s why we’re learning from mistakes of the past and making the research process itself a model of dignified interaction. We’re using participant-preferred methods for feedback, implementing improved consent processes, and incorporating community advice at every stage. 

This approach isn’t just more ethical; it can produce better data. Studies have shown that respectful treatment may positively affect participants’ willingness to engage, which in turn may contribute to higher-quality data and more authentic responses (Wein et al., 2023; Wein & Lamberton, 2024). Given that, we will adopt practices that both improve participant experience and potentially enhance the quality and reliability of the data. We will pilot methods of feedback, consent, and community advice that accord with participants’ preferences for improved research (Singh, 2023, Toward More Inclusive Behavioral Science).

The road ahead

Measuring dignity is challenging yet essential. By refining these tools and methods, we’re working to create more accurate measurements and better understand what works. These efforts will contribute to building more dignified, inclusive, and respectful development programs, keeping dignity at the heart of all our work.

Stay tuned for updates as we refine these measures and share insights from our work in the field!

 

The Dignity Initiative's 2024 Report: What Works?

Exploring what works in dignity-centered development