Foreign Policy’s Democracy Lab website recently published an article on US democracy promotion and strengthening efforts by Tomas Bridle. Bridle is largely responding to an earlier piece by Melinda Haring, also published by Foreign Policy. Both authors make interesting points, which I will briefly summarize, before adding some thoughts of my own.
Haring contends that the US government, principally through USAID, has spent millions of dollars on democracy promotion projects in authoritarian/semi-authoritarian countries (those classified as “not free” by Freedom House’s Freedom in the World index) with little or no results. By way of example she points to the $55 million spent by USAID on democracy efforts in Azerbaijan, which continues to be ruled by a strongman with only the trappings of democratic governance. Haring suggests that an alternative is the small-grants approach of the National Endowment for Democracy (NED), citing as advantages the lower overhead costs and more contextually suitable approaches. Finally, Haring calls for better evaluation of the impacts of democracy promotion efforts, even as she acknowledges the complexity of analyzing the outcomes of such interventions.
Bridle writes in response to Haring’s article. He states that the two authors are in agreement regarding the bureaucratic challenges involved in much of USAID’s democracy work as well as the need to move beyond “cookie cutter” approaches to strengthening democratic governance. Yet Bridle argues that Haring is applying just such a cookie cutter philosophy by stating that under authoritarian conditions the US government should exclusively use NED-style small grants to promote democracy. He states that Haring has forgotten to ask what kinds of approaches have been more successful in promoting democracy in the past. Bridle notes that many of the 35 countries that progressed from “not free” to “partly free” in the Freedom House ranking between 1993 and 2012 had substantial and diverse US democracy assistance programs in place before and during those transitions. Although he quickly points out that we do not know which of those initiatives (if any) had a greater impact on the democratization process, he offers no further evidence to support his claim that the US should continue with its pluralistic approach to democracy promotion, through USAID, NED and other channels. Bridle argues there is also no evidence to suggest that NED grant programs have been or would be more effective than the efforts that were carried out. Finally, and importantly, Bridle admits to having been involved in one of the democracy programs in Azerbaijan that Haring pointed to as a failure, but doesn’t directly address her criticism of that intervention, simply stating that there are “countless inefficiencies and problems” in such efforts, but “throwing out all but one type of assistance won’t fix those problems and won’t move those countries any closer to democracy and good governance.”
Both authors acknowledge that democracy promotion and strengthening is a difficult and complicated endeavor, but an important one. Bridle’s argument, however, has glaring holes in it: he only addresses one of Haring’s claims rather than her overarching point about democracy promotion, and even more problematically, he makes claims about the relative efficacy of different kinds of programs on the flimsiest of evidence: the US implemented diverse forms of democracy promotion in countries that saw democratic advances, ergo, the US should continue with this approach.
Thus, both authors, directly or indirectly, highlight the need for improved analysis, learning and evaluation of democracy strengthening interventions. Haring calls for this directly, while Bridle demonstrates it inadvertently through his flawed use of “evidence”.
Indeed, to avoid the cookie cutter solutions both authors oppose, there is a need for stronger political analysis of the contexts of democracy strengthening interventions. Yet the increasing use of political and political-economy analyses doesn’t necessarily lead to improved programs. Broad studies of political dynamics may be difficult for democracy practitioners to operationalize in specific interventions. Analyses might instead focus on potential leverage points in specific institutions. However, improved political analysis must then be translated into politically-informed interventions rather than the solely technical assistance approach that often dominates efforts to strengthen democratic governance (see my report from the “Making Politics Practical” workshop). Programs may break the cookie cutter mold, but if they rely only on technical approaches rather than recognizing the inherently political nature of democracy efforts, they are unlikely to contribute to meaningful change. Haring’s suggestion of grants to civil society organizations may well be a better way to achieve democratic gains in the long run, by strengthening local citizen engagement around advocacy and monitoring (Pierre Landell-Mills argues for citizen-based approaches in the case of anti-corruption work).
Finally, as mentioned, the question of evaluating the impacts of democracy programs is a thorny one. The first lesson is the need to go beyond the dominant randomized experimental design approaches to impact evaluation (which may be a bubble, anyway). Rather, given the political nature of efforts to strengthen democratic governance, evaluation methodologies need to capture how actors think and work politically, most likely employing mixed-methods designs. One promising approach is “embedded learning”, such as has been undertaken by the International Budget Partnership in some of its programs with civil society. But here Bridle’s point is well taken: a diversity of methods for learning about democracy strengthening programs and their impacts is likely better than relying solely on any one approach.