Attribution is the establishment of a causal link between (parts of) an observed change and a specific intervention. Measuring it is a core part of monitoring and results measurement (MRM) and evaluation on programmes using market systems development approaches: it tells us whether and how what we do is making a difference, and whether it is the difference we intended. For MRM and evaluation practitioners, however, measuring attribution is one of the biggest challenges, if not the biggest. Much has been written about how tough the task is, considering, for example, the large potential number of causal factors (Barbara Befani), constantly adapting interventions, and the unpredictable nature of market systems development (Ben Fowler & Elizabeth Dunn; Tim Ruffer & Elise Wach).
To help practitioners overcome this challenge, at BEAM we have written a new paper that sheds light on current academic and practitioner understanding of attribution and causality. The aim is to assist professionals involved in commissioning and implementing evaluations, as well as MRM professionals developing attribution strategies for programmes using market systems approaches.
As part of the research for the paper, we interviewed 10 key informants, including practitioners and academics. The conversations were dynamic and full of useful insights, and several themes stood out:
- Not everyone thinks it is possible to measure attribution on programmes using a market systems approach: some argue that the complexity of market systems makes attribution impossible, while others argue that experimental methods can control for that complexity. This debate, however, is not a dichotomy of two extremes: there is a wide range of views between these poles, which we outline in the paper.
- There are political constraints to measuring attribution: it became clear that, whatever view practitioners hold on attribution, the political economy surrounding market systems approaches has a large effect on how programmes tackle the challenge of measuring it.
We then looked into the theoretical literature on causality and attribution, which differentiates between three types of causal explanation (Pawson, 2007, unpublished work):
- Successionist causality: Causality as statistical regularity, with independent variables affecting dependent variables.
- Configurational causality: Causality as a combination of factors working together in a cumulative way.
- Generative causality: Causality as part of the whole system that is irreducible to the parts.
The key informant interviews revealed that the successionist paradigm strongly influences current thinking on attribution in monitoring and evaluation. Counterfactual methods (which rest on a successionist view of causality) are, implicitly or explicitly, seen as the only credible way to establish attribution. Key informants reported continuous pressure to use successionist causal logic and the related counterfactual models, particularly experimental methods.
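To make that counterfactual logic concrete, here is a minimal sketch (ours, not from the paper) of the reasoning behind experimental methods: under random assignment, the control group's average outcome stands in for what would have happened to the treated group without the intervention. All data and numbers below are simulated and purely hypothetical.

```python
# Illustration of successionist/counterfactual logic: estimate an
# average treatment effect by comparing randomly assigned treatment
# and control groups. Simulated data; all values are hypothetical.
import random

random.seed(42)

TRUE_EFFECT = 150  # hypothetical income gain from the intervention

# Simulate household incomes for a control group and a treated group
control = [random.gauss(1000, 200) for _ in range(500)]
treated = [random.gauss(1000 + TRUE_EFFECT, 200) for _ in range(500)]

# With random assignment, the control mean approximates the
# counterfactual outcome for the treated group, so the difference
# in means estimates the intervention's average effect.
ate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Estimated average treatment effect: {ate:.1f}")
```

The critique reported by our key informants is not that this logic is wrong, but that it is treated as the only credible route to attribution, even in complex market systems where its assumptions may not hold.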
The key informants also proposed a number of reasons why this might be the case. The preference could be a “hangover” from when private sector development focused more on direct delivery approaches (challenge funds, cash-for-work schemes), where successionist causality is often more appropriate. Or perhaps the development sector has an “inferiority complex”, created by those who claim that randomised designs are the “gold standard”: in setting the standards for methodological rigour, some argue, randomistas and social scientists have unfairly condemned other methods to an inferior second tier (see also this paper by the Lab at the ILO, 2015).
To move things forward, the paper suggests a typology of approaches based on different contexts and perspectives on change. The typology allows practitioners to move beyond the successionist paradigm and choose methods that are most appropriate to their context and intervention design, and that are grounded in what the evaluation literature currently recognises as “good practice”.
The paper, available here, will be useful for professionals involved in commissioning and implementing evaluations, for MRM professionals developing attribution strategies for programmes using market systems approaches, and for anyone interested in causality theory more generally. Thoughts and feedback are very much welcome!
This blog was originally posted on beamexchange.org