
When the Evidence Isn’t Evident: Why Are Some Kinds of Impact So Hard to Measure?

August 29, 2019

A few months ago, I proposed that a lot of work in the democracy sector, and social change in general, can be captured in six distinct “impact models.” At Democracy Fund, these models have lent new nuance to a perpetual question: how do we measure the impact of democracy work? We understand that there’s a big difference between impact and no impact, and that we shouldn’t hide behind “impact is hard to measure” to avoid admitting when we’re simply not achieving it. But while I wish there were a methodological silver bullet for measuring democratic change, the truth is that it can be hard to measure some impacts using specific evidence within a specific period of time. In other words, for some types of impact, the evidence is less, well, evident.

Looking back on evaluations I’ve done, I can think of a number of instances where there was clear, objective evidence of impact from transformative models: a new law passed, voter turnout increased. But I’ve struggled to find evidence of impact from preventative models: government overreach that was constrained, or civil rights abuses that were prevented.

I think the reason for this is actually pretty simple: what differentiates the impact models from one another also affects how likely they are to result in “evident” impact – that is, impact that can be measured with specific evidence and in a specified time period. When we decide how to intervene in a system, we make two basic choices. The first is whether we’re looking for short-term or long-term change: does the intervention address specific, emergent threats or opportunities, or are those threats and opportunities longer-term and evolving? The second is whether the strategy is intended to disrupt the system or to make it more resilient: is the intervention responding to a deficiency or inefficiency in the system that needs to be changed, or is it seeking to protect the system from threats or decline?

These choices also have implications for how “evident” the resulting impact will be. Disruptive interventions are more likely to yield evidence of impact because it’s easier to pinpoint how and why things change than how and why they remain stable. And because they address time-bound threats or opportunities, short-term interventions are more likely to yield evidence of impact within a specific timeframe. It follows that short-term disruptive models would be the most likely to yield evident impact, long-term resilient models the least likely, and short-term resilient and long-term disruptive models would fall somewhere in the middle.

In the framework below, I have attempted to map the impact models across these two dimensions: type of change (disruption versus resilience) on one axis, and timeframe (short-term versus long-term) on the other. Based on where they are located on the map, I’d offer the following conclusions:

  1. Transformative and proactive models that leverage sudden openings to disrupt systems are most likely to yield evident impact.
  2. Incremental transformative, palliative, and preventative models that focus on the long-term resilience of systems are least likely to do so.
  3. Stabilizing and preventative models that defend against threats by focusing on short-term resilience may yield some evident impact, but the full scope of that impact (including threats that were contained or thwarted) may be less evident.
  4. Opportunistic models that invest in long-term disruption to achieve systems change may produce some evident impact, but that depends on the timeframe for a breakthrough.

I realize that doesn’t really answer the question of how to measure the impact of these models, particularly when the models are on the less evident end of the spectrum. But I think it prompts a different, and perhaps more important, question: if we accept the premise that some models of democracy work can have impact even if that impact isn’t evident, can we still make sound, evidence-based decisions about them?

Navigating complex systems is rife with uncertainty, and collecting relevant and meaningful evidence is part of how we mitigate the risk that uncertainty creates. So pursuing an impact model that will leave us flying blind for lack of evidence might seem unacceptably risky. For example, if we know that we’re working toward palliative or preventative impact through long-term resilience, how do we mitigate the risk of a “boiling frog” scenario, in which the system’s lack of progress and/or slow decline eventually becomes untenable? How do we know whether we’re confusing the “strategic patience” required for a long-term, disruptive intervention with a “sunk cost bias” that makes us hold on to a losing proposition? And even if we’re able to observe the impact of a short-term, disruptive intervention, how do we make sure we’re also capturing evidence of unanticipated, negative results?

But if we stick with the “safer” models – those that promise clear evidence of impact in a defined period of time – we may be left with a false sense of certainty about whether we’re pursuing the most effective and relevant solutions, or we may avoid tackling the thornier, longer-term challenges altogether. So lately I’ve been focused less on “how can we measure the impact of democracy work?” and more on “what evidence do we need to be confident in our strategic choices?” Because now more than ever, democracy work requires courage and creativity, and I want to build an evidence-based evaluation and learning practice here at Democracy Fund that recognizes that. Of course there’s a big difference between impact and no impact, and of course we shouldn’t hide behind “impact is hard to measure” to avoid admitting when we’re simply not achieving it. But we also need to acknowledge that there’s a big difference between the easy wins and the risky plays, and we can’t hide behind “the impact will be hard to measure” to avoid tackling the big challenges. Our current political moment demands no less.
