The scope of this assignment presents an excellent opportunity to conduct a comprehensive study of evaluation quality. The literature in this field recognises a gap in past meta-evaluations, which have concentrated on only selected determinants of quality. Many review the quality of evaluation products alone, paying limited attention to the processes, institutional context, capacities and supporting systems that may also play an important role in determining quality. This assignment gives us the opportunity to examine a wider set of quality determinants.
Objectives
The purpose of this assignment is to provide DFATD with: i) an assessment of the quality of decentralised evaluations; ii) a set of recommended actions for the Evaluation Division to improve the quality of DFATD tools, guidance and planning processes, and so increase the credibility, reliability, validity and use of evaluations; and iii) a set of opportunities to improve the management information systems that support the storing and sharing of evaluation knowledge.
Our approach
Itad’s team will engage with three main stakeholder groups: the commissioners of decentralised evaluations, their executors and their users. Our approach to the meta-evaluation has six key components:
- Desk-based quality review of DFATD-commissioned decentralised evaluation reports and associated documents (ToRs, inception reports, management responses), using a standardised quality template, to identify key strengths and weaknesses in evaluation quality.
- Review of the systems, services and capacities of the Development Evaluation Division (DED) and Evaluation Service Unit (ESU) for supporting the planning of decentralised evaluations.
- Review of DFATD’s management information system (MIS) for storing and sharing evaluation knowledge, and a comparative review of three other agencies’ MIS to identify good practices.
- In-depth case studies of a sample of DFATD evaluation processes, to explore the barriers and enablers of good evaluation planning and to identify how and why decentralised evaluations are used / not used.
- Staff survey to gather wider perceptions of the planning and use of decentralised evaluations within DFATD and the utility of DED/ESU support, to test findings emerging from components 1-4 among a wider sample of staff, and to produce findings that are more generalisable.
- Review of a previous meta-evaluation, conducted in 2008, to identify lessons and the extent to which its recommendations were followed up.
Contact Rob Lloyd (rob.lloyd@itad.com) if you would like to discuss this project.
Image: ‘Canadian Flag, Roundhouse, Whistler’ © David Baron (CC BY-SA 2.0)