Getting to grips with the DFID Macro Evaluations

We were absolutely elated when, in February 2014, DFID awarded Itad, on behalf of the ePact consortium, the contract to conduct macro evaluations of their gender policy – the Strategic Vision for Girls and Women – and their Empowerment and Accountability policy.

Our initial reaction

Both macro evaluations are high-profile, challenging pieces of work, but for different reasons. The Strategic Vision is a flagship DFID policy, which has enjoyed top ministerial support. We knew that evaluating its implementation would be a sensitive undertaking, but also that it offered considerable opportunities to influence DFID’s future approach to promoting girls’ and women’s empowerment. For the Empowerment and Accountability macro evaluation, our task is to draw out from DFID’s project portfolio evidence on what works, for whom, in what contexts and why. This calls for an innovative methodology that facilitates rigorous comparison of similar types of projects working in diverse situations. We’re conscious that empowerment and accountability interventions are highly context specific. Understanding how projects interact with their environment, and which elements of that environment promote or inhibit change, is one of many challenges we face on the evaluation.

Whilst we were looking forward to the prospect of working on the two macro evaluations, we were also quite daunted. DFID has no ready means of identifying its Empowerment and Accountability and Strategic Vision project portfolios, so we only had a rough idea of how many projects we might be working with. From our existing experience of working with DFID, we knew that whilst project data would be plentiful, data that would help us unpick what works in different contexts and why would be patchy. What we didn’t know was just how patchy! Our planned methodology was testing new ground, and there were many unknowns about how to apply it, especially given the available data.

Digging deeper

During our 11-month inception phase, we made progress in understanding the scope of the work and the quality of available data. We reviewed all 2,379 projects DFID had approved since the two policies came into effect in 2011 to identify those relevant to the two evaluations. We identified 529 relevant projects and analysed each of these in more detail, coding basic features and loading key documentation into our evaluation database. The resulting database, which provides access to all DFID Empowerment and Accountability and Strategic Vision projects active in the 2011–2014 period, is a major achievement (see Emily Richardson’s blog on that whole process – coming soon!). By providing a consistent set of documentation for each project, the database represents another milestone in DFID’s drive for greater transparency in the work that it supports.

Our analysis of the available data confirmed our concerns about unpacking what works in different contexts. Approximately 40% of the projects had data we felt we could work with, and only 12% had an evaluation report, a key source of the kind of information we were looking for on the drivers and inhibitors of change. Unsurprisingly, then, availability of quality data is the main criterion for selecting projects to include in the Empowerment and Accountability macro evaluation.

Testing our evaluation approach

In the first part of 2015, we had the opportunity to test out our methodology for the Empowerment and Accountability macro evaluation. This was an incredibly valuable process through which we’ve learnt a huge amount. Two main lessons stand out:

1. We need to analyse a larger set of projects to draw well-evidenced conclusions: To keep things manageable in the pilot, we worked with a small number of projects. However, we learnt that, given the nature of the projects and the need to test specific hypotheses that generate findings which can be acted on in the real world, we need to work with a much larger set of projects.

2. We need to apply our two methods slightly differently: The macro evaluation uses two methods, Qualitative Comparative Analysis (QCA) and narrative analysis. From the pilot, we learnt that we needed to dedicate more time to interpreting preliminary QCA findings and to undertake multiple rounds of QCA analysis to pin down the important drivers of change. We also learnt that, to maximise the potential of QCA, we needed to approach the selection of projects for in-depth narrative analysis more systematically. In future, for each causal chain that QCA analysis identifies as important to achieving an outcome, we will analyse at least three projects representing the following cases: i) projects which exemplify the causal chain under analysis and achieve the intended outcome; ii) projects which exemplify the causal chain but do not achieve the intended outcome; and iii) projects which do not exemplify the causal chain, but which achieve the intended outcome.
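This case-selection rule amounts to classifying each project on two attributes: whether it exemplifies the causal chain under analysis, and whether it achieves the intended outcome. As a rough illustration only (the project names and attribute values below are invented, not drawn from the evaluation), the logic could be sketched as:

```python
# Hypothetical sketch of the case-selection rule for in-depth narrative
# analysis. All project data here are illustrative placeholders.

def select_cases(projects):
    """Group projects into the three case types used to probe a causal chain.

    Each project is a dict with boolean fields 'exemplifies_chain' and
    'achieves_outcome', plus an 'id'.
    """
    cases = {
        "exemplifies_and_achieves": [],   # case (i)
        "exemplifies_not_achieves": [],   # case (ii)
        "achieves_without_chain": [],     # case (iii)
    }
    for p in projects:
        if p["exemplifies_chain"] and p["achieves_outcome"]:
            cases["exemplifies_and_achieves"].append(p["id"])
        elif p["exemplifies_chain"]:
            cases["exemplifies_not_achieves"].append(p["id"])
        elif p["achieves_outcome"]:
            cases["achieves_without_chain"].append(p["id"])
    return cases

# Invented example portfolio: project D matches no case and is not selected.
projects = [
    {"id": "A", "exemplifies_chain": True,  "achieves_outcome": True},
    {"id": "B", "exemplifies_chain": True,  "achieves_outcome": False},
    {"id": "C", "exemplifies_chain": False, "achieves_outcome": True},
    {"id": "D", "exemplifies_chain": False, "achieves_outcome": False},
]
print(select_cases(projects))
```

Comparing the three groups is what gives the design its analytical leverage: cases (ii) and (iii) act as contrasts that test whether the causal chain really drives the outcome.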

We’ve now refined our methodology for the Empowerment and Accountability macro evaluation and are applying it to our first complete set of projects, focussed on social accountability interventions. Watch this space for our reflections on the evaluation process!

Looking ahead

So, with this experience under our belt, how are we now feeling about the macro evaluations? We are both excited and energised. Our work to date has shown us that we can navigate complex problems to deliver high quality pieces of work, which can and do inform policy and practice. We are cautiously optimistic that our willingness to test, learn and adapt will enable us to overcome the challenges ahead. It is then with some anticipation that we move into the next phase of the macro evaluations and the empowerment and accountability project analysis specifically. We look forward to sharing our experiences and findings with you here.

Claire Hughes, December 2015


Visit our Macro Evaluations Blog page for more reflections from the team.