While the last two posts in the ‘making results systems work’ series looked at results systems in a broad sense, today we’ll focus specifically on evaluation.
In a special edition of the Journal of Development Effectiveness we contributed an article on how Norad’s Evaluation Department manages evaluations and whether they do this in a way that best supports quality. Last year we also conducted work for the Australian Department of Foreign Affairs and Trade (DFAT), reviewing the quality of 87 operational evaluations completed in 2012 and exploring the underlying drivers of, and barriers to, quality.
Given the clear linkages between these two pieces of work, we pulled together the learning from both, along with a brief review of recent literature on evaluation quality, into a CDI practice paper called ‘Improving quality: current evidence on what affects quality of commissioned evaluations’.
We thought this would be a useful exercise given the increasing resources that organisations are dedicating to evaluation and the growing number of commissioners now looking at how to ensure the studies they commission are of sufficient quality. We felt that while a plethora of evaluation quality standards exist that identify the factors shaping quality, most are experiential rather than based on research evidence. Particularly in the context of commissioning and implementing evaluations within bilateral donors, there has been limited empirical research identifying the factors that underlie evaluation quality. We wanted to start to address this gap.
Here are some highlights from the paper:
The importance of team skills to quality: “While all of the studies point to the importance of having a generally strong team that has a balance of skills and experience between individual members, the presence of evaluation skills within the team is particularly important. The USAID (2013) study found that USAID evaluations with an evaluation specialist as part of the team were of statistically significantly higher quality.”
Clarity of purpose as a key underlying factor of quality: “The extent to which an evaluation has a clear purpose is strongly correlated with evaluation quality…When there is clarity around why an evaluation has been commissioned and how it is going to be used, quality is higher. Evaluations that have a clear purpose to inform management decisions, in particular on future programming, are correlated with higher quality.”
The effects of institutional factors on quality: “interviews with both evaluation managers and evaluators at DFAT suggested that the evaluation policy had perverse effects on quality. They argued that the drive for increasing the number of evaluations risked promoting a compliance-driven approach in evaluations. If evaluation managers are pressured to commission more evaluations than they have time to manage, evaluations are likely to become a ‘tick-box’ exercise. In this type of environment, quality inevitably dips. Conversely, in the case of USAID, there was a clear correlation between evaluation quality and whether an evaluation was commissioned before or after the introduction of new evaluation policies and systems. The changes included the strengthening of M&E as one of the topline performance indicators of reform efforts, the roll-out of evaluation training courses to 1,200 USAID staff members and other stakeholders, and the introduction of a target for high quality evaluations through the Forward Initiative”.
Main conclusion from the practice paper: “the current debate on evaluation quality has become fixated on the issue of methodology to the neglect of other equally important issues. While methodological rigour is important, a singular focus on this issue is unwise. Considerations of quality need to permeate all stages of the evaluation process and evaluation quality needs to be recognised as a product of the capacities of the evaluation commissioner and evaluation team, the relationship between them, and the wider institutional environment in which the evaluation is being conducted”.
Rob Lloyd, April 2015
Read Part 1 – Comparing the Results Measurement Systems of Development Agencies
Read Part 2 – A Framework for thinking about Leadership on Results