Some brief thoughts on failure in governance interventions

It’s hard to address failure in foreign aid.  Yet how can we learn without a continuous exploration and honest conversation about what isn’t working?  Interventions addressing governance challenges are especially prone to failure given the complex and political nature of this sector, but are we acknowledging these failures and learning from them?  I would suggest we aren’t, or at least not nearly as much as we should be.  I want to briefly highlight three factors at the heart of this issue: incentives, measurement and the spectre of ‘political will’.  But first, a brief disclaimer: these are complex challenges and I don’t claim to have the answers.  I’m just throwing out a few thoughts.  That’s what blogs are for, right?

First, on incentives.  Most aid organizations, from bilateral donors through charitable NGOs to local CSOs, have incentives to downplay and ignore failure.  These organizations receive funding from a variety of sources (taxpayers, charitable individuals, regranting NGOs, etc.), and it's in their interest to portray their use of these resources as successful (part of the problem is that many development organizations simplify very complex challenges like poverty and governance so as to coax donors into giving them money to do something about them).  Often their organizational lives seem to depend on it.  Openly admitting failed initiatives could lead to the funding drying up, as donors shift to other organizations claiming more success.

Yet development and governance interventions fail quite frequently, and monitoring and evaluation is becoming increasingly robust, so what is happening here?

This leads me to my second point: measurement.  Governance is often challenging to measure.  There are numerous national-level indicators, such as the Corruption Perceptions Index (which is not universally accepted) and the Worldwide Governance Indicators, among others.  These may give an idea of the relative character and scale of governance challenges in any country, but what about measuring something like government responsiveness?  Or citizen participation?  These can be more challenging to quantify.  Some of the indicators that are used are not helpful, like counting the number of committees organized or the number of women who attended a meeting (I wrote a paper touching on this issue in the context of USAID programs in Guatemala and Bolivia).  So if we don't have good indicators, we likely won't know whether we're failing (or succeeding, for that matter).  Evaluation tools have gotten better at identifying failure, but there are challenges to using some of the more 'rigorous' tools (such as randomized controlled trials) on governance interventions (they often work with small sample sizes, the intervention may change as political conditions dictate, etc.).  But even when evaluations do find evidence of failure, it may come only at the conclusion of a 4-5 year project costing millions of dollars (not that this is the right timeline for governance interventions, or that more money equals better results).  The imperative is to 'fail fast', so that project implementers can adapt their approach.

Third, and finally, when governance interventions do fail, all involved often throw up their hands and claim it was a lack of political will.  This is problematic, because (a) political will is an unhelpful term that disguises complex dynamics around incentives, and (b) if political will were a critical factor for success, why is it so often listed as a mere assumption?  External interventions addressing governance challenges need to tackle political will head on, with analytical tools that allow political dynamics to be untangled, and through politically informed interventions that push on different leverage points: incentives, power dynamics, windows of opportunity, etc.  To the extent that political will is only included as an assumption, failure becomes a much likelier scenario.

Now, to get off my soapbox and get to the original point of writing this post: some people are starting to engage with failure more openly in development interventions…and the world has not come to an end!

See the following two (short) articles for thoughtful discussion on addressing failure:

http://www.theguardian.com/global-development-professionals-network/2012/dec/07/fail-faire-how-to-talk-about-failure

http://www.scidev.net/global/communication/opinion/let-s-make-a-success-of-failures.html?goback=%2Egde_788017_member_264615511#%21

Also, apparently there was a Failure Festival in Washington, D.C. recently.

Now, the challenge lies in turning failure into learning (and into better work in the future).


2 thoughts on “Some brief thoughts on failure in governance interventions”

  1. Dear Brendan,

    Thank you for what is really a well-expressed sentiment here. As you already know, I am myself increasingly preoccupied with both the nature of governance indicators, as well as the reliability of indices and metrics, especially after being aggregated into national level data, which I think obscures the myriad micro- and meso-level dynamics that are difficult to capture without adequate samples from each community (which is difficult and costly, but which might reveal a patchwork of results and complications that confound some of the more basic concepts of development). In any case, you bring up an excellent point here, as well as some of the obstacles that make overcoming it a challenge. I look forward to more posts.

    • Matt,
      As I was writing this post, I was thinking back to my (relatively brief) time at USAID, and the challenges we faced in finding good, results-oriented, quantitative indicators. There just wasn’t that much available data, and what there was data on wasn’t necessarily the most important stuff. So almost all of USAID’s justice sector projects looked like relative successes based on their indicators, yet obviously Guatemala’s justice sector is still abysmal. Was this just a matter of us being successful in some very isolated work that was not having a positive effect on the whole system? Or were our indicators not telling us the real story? Or something else? Same thing with many of our other programs. And believe me, we did spend time thinking about impact-oriented indicators, but I still worry that the need to quantify everything led to a real loss of learning about what is actually going on.
