To respond to the complexity, scale and urgency of today’s global challenges, organisations are increasingly embracing a systems perspective in their work. Adopting such an approach creates a different set of challenges for generating and responding to evidence and learning than more traditional approaches do.
Many tools and approaches that have been foundational to monitoring, evaluation and learning (MEL) – rigid or static indicators, for example – aren’t necessarily the ones that will enable MEL professionals to add value to systems-change work. MEL for systems change requires new tools, frameworks, and competencies to generate evidence and apply lessons.
Exploring new technical innovations in MEL for systemic change
In March, we convened an online event to stimulate discussion on how best to undertake strategic learning and evaluation within a systems-change approach, and what new technical innovations we could apply in support of this. Our esteemed panel comprised philanthropic organisations at different stages in their journey to learn and evaluate with a systems-change lens, and evaluators who are using machine learning in innovative ways. Each is advancing MEL practice through innovative approaches, tools, and technologies.
During the discussion, panellists talked about their work and offered advice on the competencies and perspectives required to deploy these innovations. Their examples were wide-ranging and included:
- grounding MEL approaches in relational and wellbeing outcomes
- hypothesis-based portfolio design and learning
- deploying large language models to analyse large qualitative data sets (a minimal sketch follows this list), and
- using AI to reduce monotonous manual research tasks.
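To give a flavour of what this kind of LLM-assisted analysis can look like, here is a minimal sketch in Python. It is not a description of any panellist’s actual pipeline: it assumes the OpenAI Python client, and the model name, themes, prompt and example responses are illustrative placeholders.

```python
# Minimal sketch: using a large language model to tag open-ended
# responses with themes. Assumes the OpenAI Python client
# (pip install openai) and an OPENAI_API_KEY in the environment.
# Model name, themes and example data are illustrative only.
from openai import OpenAI

client = OpenAI()

THEMES = ["civic space", "digital public sphere", "funding", "other"]

def tag_response(text: str) -> str:
    """Ask the model to pick the single best-fitting theme for an excerpt."""
    prompt = (
        "Classify the following interview excerpt into exactly one of "
        f"these themes: {', '.join(THEMES)}. "
        "Reply with the theme name only.\n\n"
        f"Excerpt: {text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep codings as repeatable as possible
    )
    return resp.choices[0].message.content.strip()

responses = [
    "New registration rules have made it harder for local groups to operate.",
    "Platform moderation policies shaped what our members could say online.",
]

# LLM codings are a first pass, not ground truth: a human analyst
# should still review a sample of the tags before using them.
for r in responses:
    print(tag_response(r), "<-", r)
```

The design point worth noting is the review step at the end: the model offers a fast first pass over a large data set, while analysts retain ownership of the coding frame and the final judgements.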
The discussion illustrated how important innovation is in ensuring MEL evolves to meet the challenges of systems change. But it also returned to themes familiar to many evaluators: confronting bias and power in data collection and analysis, leading institutional culture change, and focusing on evidence use and users.
Three standout lessons from our discussion
Embracing the messiness of systems-change work requires us to let go of rigid indicators and overly prescriptive MEL frameworks.
Systems-change work is inherently complex, requiring a good degree of experimentation, learning and adaptation. Where more traditional MEL has centred on results chains, standardised indicators and data-collection protocols, MEL for systems change requires synthesising a range of evidence sources, sensemaking with stakeholders, and iterative, responsive approaches to data collection.
Luminate’s new approach to its foundation-wide MEL, which focuses on testing and updating hypotheses, illustrates this shift. Luminate’s Kecia Bertermann spoke of how, faced with complex portfolios of work across issues such as promoting a healthier digital public sphere and protecting civic space, Luminate has replaced indicators with learning questions linked to specific hypotheses it wants to test. Crucial to this new approach is the deliberate collection of evidence that refutes a portfolio’s hypotheses, which many who joined the online discussion saw as an exciting departure from typical MEL practice.
The digital toolkit available to evaluators is expanding rapidly with the advent of large language models (such as ChatGPT), bringing great opportunities and serious risks.
These tools will enable us to identify trends and patterns in huge data sets and quantities of online material like never before. We need to be prepared to harness them to our advantage (as panellist Dimitri Stoelinga did in discussing Laterite’s work to streamline monotonous research tasks), but we also need to be cognisant of where these digital tools source their information, and of their biases and limitations (see, for example, Kerry Bruce’s recent LinkedIn post, which gives an example of ChatGPT inventing a citation that doesn’t exist). A recent study found that surveyors and survey research are highly susceptible to the influence of large language models, and warns against a false sense of security and certainty in their outputs.
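One practical guardrail against the hallucination risk above is to check mechanically that any passage a model claims to quote actually appears in the source material before it enters an evidence base. The sketch below is a hypothetical Python illustration; the document IDs and quotes are invented.

```python
# Minimal sketch of a hallucination guardrail: check that every
# passage an LLM claims to quote actually appears in the cited
# source document. All example data below is invented.

def normalise(text: str) -> str:
    """Lower-case and collapse whitespace so trivial formatting
    differences don't trigger false alarms."""
    return " ".join(text.lower().split())

def verify_quotes(sources: dict[str, str], claimed: dict[str, str]) -> list[str]:
    """Return IDs of claimed quotes that cannot be found in their cited source."""
    failures = []
    for source_id, quote in claimed.items():
        source_text = normalise(sources.get(source_id, ""))
        if normalise(quote) not in source_text:
            failures.append(source_id)
    return failures

# One genuine source, plus a claimed quote whose cited source does
# not exist in the evidence base at all.
sources = {
    "interview_03": "Funders told us the reporting burden had grown each year.",
}
claimed = {
    "interview_03": "the reporting burden had grown each year",
    "interview_07": "indicators were abandoned entirely",
}

for source_id in verify_quotes(sources, claimed):
    print(f"WARNING: quote attributed to {source_id} could not be verified")
```

Running this flags `interview_07` only: a crude check, but enough to catch an invented citation before it is treated as evidence.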
Questions of power remain central to understanding and applying appropriate MEL approaches and tools in systems change work.
If complexity-informed MEL places greater emphasis on bringing stakeholders together to engage with and make sense of evidence, to spot patterns and extract meaning, then ensuring the right voices are present when this happens is crucial. As panellist Marcus Jenal of Fondation Botnar shared, we need epistemological justice: approaches to evidence generation that recognise and give value to the voices and perspectives of affected populations.
At Itad, we have seen over the years how clients are taking on bigger and more complex challenges. This requires an evolution in MEL methods, tools, approaches and mindsets. The stories and examples we heard from our panellists, along with the innovations we are aware of within Itad, give us great encouragement that the MEL profession is stepping up to the challenge.
Our hope is that through these innovations, and by sharing the experiences from them widely, evidence-informed learning and adaptation will play a central role in systems-change efforts – and enable our clients and others to achieve the positive impact on people and planet they aspire to.
Watch the full recording of our event below or on YouTube.