Evidence and evaluation: What’s the Use?

Guest blogger Teresa Hanley asks 'what's the use of evidence and evaluation?' after her session at EES2016.

20/10/2016

As evaluators, we worry that all our hard work sometimes goes no further than the ‘Evaluation Report Shelf’.

At the recent European Evaluation Society 2016 conference, I presented some of the findings from the evaluation I lead of DFID’s Humanitarian Innovation and Evidence Programme (HIEP), a programme with a strong focus on the use of evidence, and I reflected on how the evaluation’s own findings have been used to date.

Our evaluation follows the programme over five years, from 2013 to 2018, and has just completed its third stage. At this point, we can begin to see early signs of discussion, debate and, in some cases, uptake of the emerging findings from the large range of evidence and innovation projects supported by DFID through HIEP. Factors helping this include partnerships between academic and operational organisations, which provided access to data and communities for research and lent credibility to findings in the eyes of other operational decision-makers. These partnerships are not always easy, as Duncan Green articulates in his recent blog, but based on the evidence we have gathered to date they are well worth the effort.

In the inception phase, the evaluation team worked with DFID to develop the theory of change underlying the programme design, which proved to be another worthwhile process. This resulted in a much more detailed articulation of how humanitarian organisations would take up the research and innovation products and feed them into policy and practice. Key steps included:

  • The role of intermediary organisations to promote discussion on findings
  • Endorsement by influential figures in the sector to champion findings
  • Use by DFID in its own role as donor, user of evidence and influential actor in the sector

Taking these steps at an early stage is already beginning to bear fruit in terms of interest in the programme’s findings.

So far, the evaluation has found good progress in many projects in engaging with the sector, but we also found a consistent underestimation of the investment needed in communication, in terms of both funding and time. It takes a significant investment of time to build communication links and then to engage with different networks and opportunities, internationally and nationally, to translate evidence and innovation into routine practice. Resourcing organisations to monitor uptake after the research report is produced is also important. We recommended greater investment in a range of products and processes for international and national stakeholders to build learning based on the programmes. We have fed these findings back to DFID for consideration in the next stage of the programme.

Reflecting on the evaluation itself, some key factors have supported its use in DFID so far:

  • The programme is running for five years, so we can feed in recommendations while parts of the programme are still being developed
  • A consistent evaluation team helps maintain relationships with the wide range of individuals involved in the programme from DFID and its partners, and builds the evaluation team’s collective knowledge
  • The role of an independent evaluation advisor in DFID, with support from an external steering committee overseeing the evaluation, ensures both the quality of the evaluation and the engagement of the programme team
  • DFID’s obligation to respond formally to evaluation recommendations at the summative phases. This formal process takes time, and we have learned that it is important to feed back informally much earlier so that recommendations reach DFID and partners while they are still timely.

In presentations at the EES conference, there was plenty of evidence of increasing attention to the use of evaluations. Participants discussed using videos, webinars, and graphics to present findings in more interesting ways than what can seem like a dry report, and to involve more people. They also shared experiences of efforts to change organisational culture to be more evaluation-friendly, which included very practical challenges of getting online platforms up and running, and finding the start-up resources for building skills and changing attitudes.

Researchers and evaluators grapple with similar issues in making sure our work is relevant and used. Amid discussions about the plethora of options now available for communicating findings, one surprising piece of evidence came from a UN agency, which found that potential users of evaluation remain interested in, and do read, full evaluation reports, and that they value face-to-face interaction as a way to learn from them. These reports remain the foundation of evidence and learning, and with so many more ways to make them work harder, they no longer need fear life on the virtual shelf.