Adaptive Evaluation for Innovation and Scaling
Oct 10th 2023
The scaling of innovations often involves system change. Adaptive Evaluation offers…
Note: This is the fifth blog of a six-part blog series
This issue of IEI takes on several tenets in international development and their impact on the sector and the people they seek to serve, from community-driven development to data collection and impact measurement.
Community Participation in Development. There’s been dynamic dialogue around the role and impact of Community Driven Development in recent weeks. Community Driven Development, defined as programs in which community members participate in the identification, implementation and maintenance of externally funded development projects, has been a part of the development lexicon for decades and is well-integrated into the language around the Sustainable Development Goals. However, the impact of these programs is both uncertain and difficult to quantify.
A recent brief by the International Initiative for Impact Evaluation (3ie) found that while CDD programs substantially improved the quantity of small-scale infrastructure projects, they had little to no impact on social cohesion or governance. As the authors note, “Evidence suggests people may have participated in making bricks, not decisions.” In his analysis and facilitated debate about the paper and its findings, Oxfam’s Duncan Green observes that some of the negative results may also “reflect the dangers of treating ‘the community’ as a single entity, rather than understanding it as a complex system of power in which some groups dominate others (men v women, less poor v more poor, able bodied v disabled).”
In a recent World Bank Working Paper, Scott Guggenheim and Susan Wong examine the impact, myths, and realities of CDD programs through the lens of government implementers. Across their many findings, their conclusions are both hopeful and critical.
Examining the issue through a different lens, Nonprofit Quarterly published “Community Development as if People Mattered,” a case study of the role of NGOs in Nigeria and the level of community participation in their programs. The article notes that while the theory behind Community Driven Development is well established, translating that theory into practice and implementation remains a work in progress, which affects the effectiveness of nonprofit interventions. The author writes, “Participation, to be sustainable, needs to aim at horizontal integration, which includes the collaboration of various spheres of the local government, individuals, associations, and groups, irrespective of gender or social status.”
To measure, or not to measure? An article in this summer’s issue of the Stanford Social Innovation Review by the authors of The Goldilocks Challenge: Right-Fit Evidence for the Social Sector, asks the important question of when it is relevant and reasonable to engage in impact evaluation of a program. They highlight the impact study conducted by Living Goods that influenced policy makers and the replication of their community health-based social enterprise program. However, the authors caution that evaluation results are seldom that clear. Instead, developing an evidence base is akin to “building a mosaic,” in which the whole picture can’t be understood by a single piece, but instead emerges as more and more pieces come together.
The growing trend toward impact evaluation, however, can pull resources and attention away from collecting data that could actually improve a program’s performance, redirecting scarce resources toward measurement for measurement’s sake.
“Much of such waste in pursuit of impact comes from the overuse of the word impact. Impact is more than a buzzword. Impact implies causality; it tells us how a program or organization has changed the world around it.”
The authors offer up 10 fairly specific reasons and scenarios in which measuring impact is not the right step to take for an organization or program – a useful yardstick in the face of increasing demands from across the sector to better understand “impact.”
Instead, they argue for a more diverse approach to data collection.
“The challenge for organizations is to build and use data collection strategies and systems that accurately report impact when possible, demonstrate accountability, and provide decision makers with timely and actionable operational data. The challenge for funders and other nonprofit stakeholders is to ask organizations to be accountable for developing these right-fit evidence systems and to demand impact evaluation only when the time is right.”
Amending data inequalities. While on the topic of data collection, The Web Foundation recently put out an article highlighting the sometimes extractive nature of data for development.
To shift these data inequalities requires a shift in mindset in our approach to collecting data.
“To push back against data inequality, data initiatives need to be designed to work with people (as agents of change), not for them (as beneficiaries).”
IMAGO’s partner, The Poverty Stoplight, is highlighted as an example of what data empowerment can look like. The participatory poverty assessment methodology, “lets people conduct an evaluation of their living conditions and helps them develop strategies to escape poverty. While the data created can be aggregated centrally, it also stays with individual citizens who can use it to inform their decision-making.”
As data collection becomes ever more integral to the design and implementation of development programs, such a shift in mindset about the uses and ownership of data is essential. It ensures that programs are designed from the ground up with the people who generate the data as its most important consumers, and that they have access to the tools necessary to use that information to change their lives.