The irony of evaluation – thoughts from the annual UKES conference

My initial reaction to my first UKES conference was that it seemed slightly ironic: a group of dedicated evaluation professionals gathered for two days to discuss why the products they spend most of their professional lives toiling over are not always as useful as they could be. Sighs of self-pity aside, this is, unfortunately, the case in many instances. So, what are some of the factors behind this?

The theme of this year’s conference was ‘the use and usability of evaluation’. This is not a new debate (having been enshrined in ethical standards[1] and specific approaches[2]) but evidence suggests that evaluations being commissioned and produced are not necessarily used systematically[3].

I presented some of the factors that underlie the use of evaluations, identified through a series of three quality assessments Itad conducted for the Australian, Canadian and Norwegian aid administrations, looking at both the quality and use of the evaluations commissioned. Four factors stood out from the evidence gathered on the use of evaluations:

  1. Creating ownership and working in a consultative way: Evidence from all three quality assessments found that ownership of the process, and a sense of responsibility for the findings, was an important factor contributing to use. Interestingly, one aid administration highlighted the quality of relationships between evaluation teams and commissioning aid managers as a key factor behind both the quality and the utility of evaluations, which suggests that evaluators need strong interpersonal skills and the ability to build solid working relationships if their evaluations are to be useful. Another aid administration was more pragmatic, speaking of the need to deliver evaluations in a consultative way and to collaborate closely at all stages of the review process. Obviously, the independence of the evaluators needs to be upheld at all times.
  2. Having a clear purpose for the evaluation: This was defined as the commissioner and/or evaluator having a clear understanding of which decision-making processes the evaluation is going to feed into. How will the findings be used? This is not about defining the objective of the evaluation, i.e. what the evaluation has set out to assess, but about giving more credence to how exactly the information that emerges from the evaluation will be used. In other words, the evaluation needs a clear line of sight between what it is seeking to assess and what exactly it will be used for.
  3. Getting the timing right: All three quality assessments found that timing was key. In some instances, evaluations took place after decisions on the future of the programme had already been made, or the findings were not shared with the appropriate departments/teams in time. This seems basic, but the evidence we gathered suggests that evaluations were being commissioned and produced at the wrong times.
  4. Having appropriate organisational capacity and systems in place: The evidence identified that in some instances more evaluations were being commissioned than could reasonably be used for performance management, given the capacity of the aid administration. This, in turn, promoted a culture in which evaluations were treated as a ‘tick box’ exercise rather than a source of learning. Across all the quality assessments, the strength of the aid administration’s knowledge management systems played a significant role in the extent to which findings could be shared across the organisation, and therefore in the extent to which other teams/departments could reflect on the evaluations commissioned. The final finding to note was that senior management buy-in for the use of evaluation findings is important for promoting a culture of use within an aid administration.

As evaluators, what can we take away from this? Well, as independents/third-party contractors we are often removed from decision-making processes and, arguably, have little influence over institutional uptake pathways. That said, perhaps the most interesting point to consider is that we, as evaluators, should define with commissioners up front who is going to use the evaluation, when, for what, and in what format(s) they prefer to engage with information. Should we be moving away from bulky reports to nimbler products that can be more easily digested by aid administrations with limited capacity? Should we be moving towards more real-time processes to ensure data feeds into the relevant decision-making processes at the right time? Should we be more forthcoming about defining purpose, giving more consideration to how the evaluation will be used and whether we could support that use? I don’t have the answers, but the evidence suggests that if we are going to influence use, we should give the products we produce, and how we interact with clients, a closer look.

Greg Gleed, June 2017


[1] For example, see UNEG (2008). UNEG Ethical Standards. UNEG. http://www.unevaluation.org/document/detail/102

[2] Patton, M.Q. (2008). Utilization-focused evaluation, 4th edition. Thousand Oaks, CA: Sage.

[3] NAO (2013). Evaluation in government. National Audit Office, London.

  1. Peter O'Flynn

    Excellent post – I think one of the under-utilised mechanisms for these evaluations is, while fostering ownership in the design phase, creating buy-in (or, at the extreme end, contractual arrangements) to ensure action is taken on the results of the evaluation. Naturally, unintended consequences cannot be part of that arrangement, but where gaps have been identified in the theory of change or the like, or perverse incentives, a stronger (again, more contract-like) arrangement that sets out what change must come from the evaluation (if any) would be advised.

    Too often the nature of the work tends to revert to the norm – back to the mean of the daily arrangements, the same heuristics, the same mistakes. As such, greater binding reinforces greater buy-in.

  2. Greg Gleed

    Interesting point. I definitely think there is scope for giving more thought up front to how evaluations will be used. Perhaps better defining/mapping institutional or programme-level evidence uptake pathways and key decision-making opportunities as part of an agreement (contractual or not) would support use. This would also help articulate more precise recommendations and provide a clear ‘use’ road map for the commissioner.

    Glad you enjoyed the blog.

  3. Nick Chapman

    A good post that builds on a set of substantial studies for different clients. An issue perhaps missed is the separation between the commissioning unit (evaluation departments) and the users in the main departments. The competing agendas, and even tensions, are important in the dynamics of evaluation use. Maybe this didn’t come out in the written studies, but from involvement in them this was an underlying issue.
