Understanding outcomes – for example, the number of people using improved water and sanitation facilities, or practising desired hygiene behaviour – is the first step in understanding whether programming for water, sanitation and hygiene is effective and whether the WASH sector as a whole is making progress towards achieving the SDGs. The experience of the WASH Results Programme has illustrated that regular monitoring can be effective in understanding progress, or lack thereof, towards outcomes and hence represents a valuable tool for quality programming.
Because it typically relies on representative survey data, measuring progress towards outcomes is more difficult and more expensive than reporting outputs. While reporting of outcomes has not yet taken hold across programming in the WASH sector, there is a growing interest and ambition to do so. The DFID-funded WASH Results Programme, where assessing outcomes was incentivised by payment-linked outcome targets, is one example amongst several initiatives introducing sustainability checks, clauses and compacts in recent years.
The outcomes focus of the WASH Results Programme
The £112 million WASH Results Programme aimed to support poor people in 11 countries to access improved water and sanitation, and to practise improved hygiene. Three consortia (‘suppliers’) of non-governmental organisations (NGOs) were contracted by DFID in 2014 to undertake large-scale delivery of WASH in advance of the conclusion of the Millennium Development Goals. This ambitious delivery goal was coupled with payment for outcomes – measured up to two years later – to encourage the continued use of water supply, latrines and handwashing at critical times.
In the WASH Results Programme, outcomes were measured through surveys that assessed access to water services and latrines, and handwashing at critical times. One supplier carried out five surveys, while the other two each conducted two, at one and two years into their implementation phases. Payment for outcome targets was contingent on an independent verification of the survey design, implementation and data analysis.
In our learning brief on outcome achievements in the WASH Results Programme, we shared data and related lessons around measuring sanitation and hygiene outcomes. Here, we focus on two aspects of outcome monitoring in WASH Results: 1) how regular and verified monitoring of outcomes contributed to more reliable data on programme achievements and 2) how this data helped make programming more effective.
Regular (and verified) outcome monitoring produces more reliable data
With payment contingent on achievement in the WASH Results Programme, it was vital that outcome data was reliable and verifiable: suppliers who could not demonstrate robust data risked losing out on payment. This meant they had to think carefully through their methods for measuring outcomes, and the evidence required to demonstrate results. Providing this wasn’t always straightforward; handwashing with soap is notoriously difficult to measure, for example, as discussed in this blog.
All three suppliers reported that the programme’s emphasis on and scrutiny of outcome data led them to scrutinise their own outcome monitoring systems and processes. For example, both the SWIFT and SSH4A consortia reviewed their survey design and sampling approaches, and introduced closer enumerator training and supervision, based on verification feedback. Specific problems identified in verification reports were further discussed in joint supplier-verifier after-action reviews, and improvements were checked through systems appraisals. One supplier also repeated household surveys in two cases when their results were not sufficiently robust.
How more frequent outcome monitoring makes programmes more effective
The evaluation of the WASH Results Programme examined the link between outcome-level monitoring and the programme’s performance and found that regular outcome monitoring had a positive effect on the outcomes achieved. The biggest benefits were observed where suppliers introduced periodic outcome monitoring right at the beginning of the programme.
When outcomes are monitored regularly and data is reliable, suppliers can use it to understand what approaches are (not) working and where. This gives them the tools to use resources and capacity effectively. If a challenge arises, the programme’s course can be quickly corrected.
- All three suppliers found that up-to-date outcome monitoring data was vital for adapting programme delivery, and for deciding which approaches should be scaled up. In the Democratic Republic of Congo, for example, the SWIFT consortium ran an outcome survey to identify lower-performing villages so that they could dedicate more resources there and reconsider their approach. In Uganda, the SSH4A programme re-prioritised sanitation programming activities based on survey data. For example, when upgrading of latrines was slow, the programme expanded and intensified messaging in these areas. In Mozambique, where survey data revealed that behaviour change messages had not translated into an increased presence of handwashing facilities, the programme overhauled its behaviour change communication approach.
- Regular monitoring at outcome-level can reveal important lessons and patterns that would not be captured otherwise. Surveys conducted by SNV and SAWRP found that use of handwashing facilities would often drop after 1-2 years. Both suppliers learned that households in some communities did not find tippy taps to be durable and easy to use. Based on this information, they were able to re-examine what handwashing solutions they promoted in those contexts. In addition, SNV found that in areas where sanitation coverage was low to start with, there was often a period of rapid progress that then stagnated quickly without further input. This trend had not been identified before.
- Disaggregated outcome monitoring for specific groups (e.g. women, people living with disabilities) helps implementers understand who is and is not being effectively reached by the programme. There are challenges in collecting disaggregated data, but suppliers report that the insights strengthened their programming. Further discussion on this can be found in the WASH Results Programme Learning Brief #3: Reaching the vulnerable and those in fragile contexts with WASH services.
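As a simple illustration of what disaggregated analysis involves, the sketch below computes a coverage rate per respondent group from household survey records. The field names, group labels and data are hypothetical, invented for illustration; they are not taken from the programme's actual survey instruments.

```python
from collections import defaultdict

# Hypothetical household survey records: each response notes the respondent
# group and whether a functional handwashing facility was observed.
records = [
    {"group": "female-headed", "handwashing_facility": True},
    {"group": "female-headed", "handwashing_facility": False},
    {"group": "male-headed", "handwashing_facility": True},
    {"group": "male-headed", "handwashing_facility": True},
    {"group": "person-with-disability", "handwashing_facility": False},
]

def disaggregated_coverage(records):
    """Return the share of households with a facility, per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [with_facility, total]
    for r in records:
        counts[r["group"]][1] += 1
        if r["handwashing_facility"]:
            counts[r["group"]][0] += 1
    return {g: with_f / total for g, (with_f, total) in counts.items()}

print(disaggregated_coverage(records))
```

Aggregate coverage here looks reasonable (3 of 5 households), but the per-group breakdown shows that some groups are not being reached at all, which is exactly the kind of pattern disaggregation is meant to surface.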
Fully understanding sustainability requires assessment 2-3 years after implementation
Ongoing outcome monitoring is extremely valuable. However, it may not be possible to identify within the lifetime of a programme whether the outcomes achieved will be sustained into the longer term.
In the case of the WASH Results Programme, which generally measured outcomes only one and two years after implementation, we cannot fully assess from the data whether the efforts to put the necessary support systems in place have been successful. For example, a rule of thumb suggests that most toilets last around two years before collapsing when not properly maintained, so slippage due to insufficient maintenance may not be captured by outcome monitoring within the lifetime of the programme.
To fully understand sustainability, and to understand how effective proxy indicators for sustainability are at predicting longer-term outcomes, it would be valuable to revisit programme locations 2-3 years after programmes have finished, undertaking post-implementation studies. SNV have undertaken a study of this type and plan to undertake more:
Post-implementation impact studies could provide valuable additional learning that could help the sector progress towards its goal of achieving sustainable access for all. As such they represent a valuable investment for donors committed to achieving the SDGs.
This blog post is based on discussions at the WASH Results Learning event held virtually in May 2020 and attended by representatives of programme suppliers, DFID (now FCDO) and the Monitoring and Verification Team.
It was originally published on the WASH Results blog.