Earlier this week, I was at a day of the Experiments in Governance and Politics (EGAP) conference in London, with participants from DFID, the World Bank, USAID and several highly respected universities. While the focus was on the emerging experimental evidence in this field, I found myself not so much admiring the quality of the research (which was good) as pondering its limitations. Several discussions caught my attention…
Firstly, there was much discussion on the impact of Community Driven Development and Reconstruction (CDD/R), and specifically the experiments conducted in Aceh, Liberia and DR Congo. This has been recently summarised in the work of Dr. Elisabeth King. The overall conclusions seem to be that the impacts are mixed, and in fact, rather disappointing. What struck me most was the “so what?” question raised by one of the discussants: What is it that policymakers and programme staff should now do differently, particularly where impact evaluation is essentially showing ‘no impact’? There is a real challenge here for evaluators: to better bridge the divide between evaluation and policy, to become more involved in programme design and implementation, and perhaps to engage in more activism.
Secondly, the common thread throughout the day was that we need to do more to accumulate and synthesize knowledge. I particularly liked the perspective of Sheree Bennett from the International Rescue Committee who said that we need to be deliberately accumulative – not just based on RCTs (and more RCT-based questions) but by pulling together all types of research (using mixed methods) to incrementally build knowledge. This was refreshing to hear at a conference focused on experiments.
Thirdly, mention was made of the ‘medical model’ of evidence, and its appropriateness (or otherwise) for international development. Certainly, clinical trials have done much to revolutionise medicine for the better, but the point was made that Randomised Controlled Trials (RCTs) actually come quite late in the development process – indeed, much research (genetic or otherwise) happens prior to a clinical trial. Furthermore, the question was raised as to whether evidence in the governance field should really be viewed as similar to medical science. Nutrition was offered as an alternative – so while in medicine it may be necessary to isolate particular variables, in nutrition it is said that understanding the complex interaction between different variables within a system – the human body in this case – is often more important. I can see many parallels to the complexity of development in the real world.
And lastly, I gave a brief presentation of recent work in Malawi, where we have reviewed some 50 concept notes and only four (8%) of these were deemed to have ‘some potential’ for experimentation (based on criteria of technical feasibility and policy relevance). Eventually, even these four fell away for a variety of other reasons. To me, this highlights the limitations of RCT-type approaches for assessing development interventions: they can be appropriate in some cases, but perhaps not for the vast majority. If this is so, then there is more to be done to innovate and improve the robustness of the alternatives – something we are aiming to do through the new Centre for Development Impact.