Assessing the value of evaluations

What’s an appropriate amount of money to spend on an evaluation?  It’s a difficult question to answer, because it depends.

It depends on how big the programme is that you want to evaluate. It depends on the magnitude of the decision the evaluation is contributing to.  It depends on whether the evaluation might generate benefits beyond the programme being evaluated. But just because the question is difficult doesn’t mean it shouldn’t be answered.  Organisations are investing significant sums of money in evaluation as a way of surfacing what works (and what does not), so understanding where, and how much, scarce resources should be invested in evidence generation is an important question, and one that many evaluation commissioners are grappling with.

As a donor that has been at the forefront of supporting the generation of evidence about what works in international development, DFID has been grappling with this issue for some time.  To help structure its thinking and approach, Itad, through its Organisational Effectiveness theme, worked with the DFID Evaluation Department to produce a working paper on this issue:  The Value of Evaluation: Tools for Budgeting and Valuing Evaluations.

In the paper we argue that the value proposition for evaluations is context-specific, but that it is closely linked to the use of the evaluation and the benefits that come from the use of the evidence.  Although it may not always be possible to quantify and monetise this value, it should always be possible to identify and articulate it. In the simplest terms, the cost of an evaluation should be proportionate to the value that an evaluation is expected to generate.

This means that it is important to be clear about the rationale, purpose, and intended use of an evaluation before investing in one. To provide accountability for evaluation activity, decision makers also want to know whether an evaluation was ‘worth it’ after it has been completed: did the investment in the evaluation generate information that is more valuable and useful than putting the funds to another purpose?

In the paper we review a wide range of valuation techniques, such as Value of Information analysis, Cost Benefit Analysis, and case studies, and examine their challenges and potential usefulness for valuing evaluations. We find that most of the techniques are relatively time-consuming to use, require a specific set of skills to apply, and can only be applied where high-quality data are abundant. While some of the techniques can generate detailed and specific estimates of value, they often do so at the expense of wider utility. We argue that most ex-ante techniques may be too time-consuming for evaluation commissioners, including DFID, to use routinely.

More complicated and time-consuming valuation techniques may be justified where the benefits are likely to be large, for example where the information generated by an evaluation has the potential to be scaled up massively and used across countries and/or agencies. In contrast, some of the ex-post techniques are suitable for further adaptation and use.

Drawing on this analysis, the paper presents a framework that evaluation commissioners can use when planning an evaluation to articulate and estimate its potential benefits, and to weigh these against the resources available for conducting it.  We hope the framework will help commissioners think through these issues more systematically, and support organisations in allocating resources for evaluations in a way that delivers the most value.

Rob Lloyd, August 2016