Itad’s methodology to understand what works in Empowerment and Accountability

The main focus of the Empowerment and Accountability macro evaluation is to analyse the portfolio of DFID empowerment and accountability projects to understand what works, for whom, in what contexts and why. To do this, the Itad team has developed an innovative methodology, which combines Qualitative Comparative Analysis with narrative analysis, to analyse a set of projects with a common objective.

Below is a summary of the methodology tested in an early pilot. Read a detailed description of the methodology.

The evaluation team has learnt enormously from an early pilot and has refined the methodology to apply to our first full round of project set analysis. A technical note explaining our revised approach is available here.

You can find some of our reflections on the pilot in our blog.

The Pilot Methodology

The macro evaluation methodology (illustrated below) is based on a ‘realist’ approach that synthesises a wide range of evidence from the projects under analysis to identify the causal mechanisms contributing to change and to explore how they work in different contexts and settings. The synthesis process uses working hypotheses about contribution to change, which are then tested and interpreted against a set of projects.

The evaluation team used portfolio data held in the evaluation database to select the projects to be included in the pilot project set.

We identified projects that had an intended outcome of ‘improved service delivery’ and then screened them to include only those with evaluative data of sufficient quality. From the resulting 48 projects, we purposively selected 15 that had the highest-quality evaluative data and that captured the range of social accountability ‘operational models’ and the different contexts observed.
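
To make the screen-and-select step more concrete, here is a minimal sketch of how such a filter could be run over a portfolio extract. The file name, column names (intended_outcome, evidence_quality, operational_model) and the quality cut-off are illustrative assumptions, not the evaluation database’s actual fields.

```python
import pandas as pd

# Hypothetical export of the portfolio data; the schema below is assumed for
# illustration and is not the actual evaluation database structure.
projects = pd.read_csv("portfolio.csv")

# Step 1: keep projects whose intended outcome is improved service delivery.
service_delivery = projects[projects["intended_outcome"] == "improved service delivery"]

# Step 2: screen out projects without evaluative data of sufficient quality
# (here, an assumed 1-5 evidence quality rating with a cut-off of 3).
screened = service_delivery[service_delivery["evidence_quality"] >= 3]

# Step 3: purposive selection - rank by evidence quality, take the top 15, and
# check that the shortlist still covers the range of operational models.
shortlist = screened.sort_values("evidence_quality", ascending=False).head(15)
print(shortlist["operational_model"].value_counts())
```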

Figure 1: Macro Evaluation Methodology


The evaluation team then employed two methods to synthesise and interpret the evaluative data for the set of 15 projects. Using the qualitative comparative analysis (QCA) method, we first systematised the range of ‘conditions’ (comprising contexts, mechanisms and outcomes) that emerged from our review of the project set’s reporting and evaluative data, and applied a binary score (1 = largely present; 0 = largely absent) to each condition for each project in the set. Analysis of the resulting data sheet allowed us to identify the patterns of conditions that give rise to the outcome of interest.
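
As a rough illustration of how the binary scoring feeds the pattern analysis, the sketch below builds a small condition-by-project table and groups projects by their configuration of conditions. The condition names and scores are invented for illustration; a full QCA would draw on the project set’s actual evaluative data and use formal minimisation of the truth table rather than simple grouping.

```python
import pandas as pd

# Invented binary scores (1 = largely present, 0 = largely absent) for a few
# illustrative conditions; real condition names and scores would come from the
# project set's evaluative data.
data = pd.DataFrame(
    {
        "citizen_monitoring":   [1, 1, 0, 1, 0],
        "state_responsiveness": [1, 0, 1, 1, 0],
        "enabling_context":     [1, 1, 0, 0, 1],
        "improved_services":    [1, 1, 0, 1, 0],  # the outcome of interest
    },
    index=["P1", "P2", "P3", "P4", "P5"],
)

conditions = ["citizen_monitoring", "state_responsiveness", "enabling_context"]

# Group projects by their configuration of conditions (a simple truth table)
# and count how often the outcome is present under each configuration.
truth_table = (
    data.groupby(conditions)["improved_services"]
    .agg(projects="count", outcome_present="sum")
    .reset_index()
)
print(truth_table)
```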

Taking the findings from the QCA, we then sought to interpret and illustrate these patterns using narrative analysis: a deeper comparative qualitative analysis of the available evaluative material. The narrative analysis consisted of identifying themes in the project documentation relevant to the working hypotheses, systematically coding the relevant narrative passages using the database software, and then analysing the resulting evidence to flesh out the QCA findings and draw conclusions against the working hypotheses.
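
The coding step can be thought of as tagging passages with the themes or hypotheses they speak to, so that the evidence for each theme can be pulled together later. The snippet below is a minimal, keyword-based sketch of that idea with invented themes and passages; in practice the team used qualitative database software and analyst judgement rather than string matching.

```python
from collections import defaultdict

# Hypothetical themes and narrative passages, purely for illustration.
themes = {
    "citizen_voice": ["community scorecard", "citizen feedback"],
    "state_response": ["service provider responded", "budget reallocated"],
}

passages = [
    "After the community scorecard meetings, the service provider responded with new staff.",
    "Citizen feedback was collected but no budget reallocated at district level.",
]

# Tag each passage with every theme whose keywords appear in it.
coded = defaultdict(list)
for passage in passages:
    for theme, keywords in themes.items():
        if any(k in passage.lower() for k in keywords):
            coded[theme].append(passage)

# The coded evidence per theme can then be read side by side with the QCA patterns.
for theme, evidence in coded.items():
    print(theme, "->", len(evidence), "passage(s)")
```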

For more information about our methodology, have a look at a presentation the team gave at the UK Evaluation Society 2016 Conference.

