Monitoring and evaluation of climate change adaptation: an introduction
Introduction
Monitoring and evaluation (M&E) is crucial to ensure that adaptation actions proceed as planned and that lessons are drawn to improve them. Because the evaluation of how an adaptation intervention is performing rests on the criteria being monitored, both are important for understanding what makes adaptation actions effective.
However, there is no consensus on what successful adaptation to climate change looks like. Adaptation situations are complex because they must consider human-environment interactions in the context of multiple threats and sources of vulnerability. Moreover, the range of potential adaptation interventions is broad, depending on the scale, scope and sector considered. Some of the main conceptual and operational challenges to the monitoring and evaluation of adaptation are:
- developing suitable metrics for adaptation due to the diversity and complexity of potential climate impacts and adaptation responses;
- comparing progress to a counterfactual – what would have happened in the absence of the intervention;
- adaptation takes place against shifting climatic, social and environmental baselines, producing changing risk contexts that can act as confounding factors in the assessment;
- the results of an adaptation intervention are sometimes only visible in the long-term;
- there are multiple reporting requirements and a lack of M&E capacities in many lower- and middle-income countries.
Designing a relevant and informative M&E system for an adaptation intervention is therefore a challenge.
This article focuses on some of the most common methods used to monitor and evaluate adaptation. Each method addresses one or several of the challenges of M&E in a climate change adaptation (CCA) context. The table below summarizes the existing approaches. This article does not cover them all, but focuses on some of the most widely used overall approaches, iterative methods and participatory methods.
Some definitions
Monitoring and Evaluation
Mechanisms put in place to respectively monitor and evaluate efforts to reduce greenhouse gas emissions and/or adapt to the impacts of climate change with the aim of systematically identifying, characterising and assessing progress over time.
Source: IPCC Glossary (2022) In: Climate Change 2022: Impacts, Adaptation and Vulnerability.
Attribution
Attribution is defined as the process of evaluating the relative contributions of multiple causal factors to a change or event with an assessment of confidence.
Source: IPCC Glossary (2022) In: Climate Change 2022: Impacts, Adaptation and Vulnerability.
Baseline
A baseline is a description of the initial condition/situation before an intervention takes place. Some relevant baseline information may be available from other ongoing initiatives in the project region, or national statistical systems.
Source: GIZ et al (2020) Guidebook for Monitoring and Evaluating Ecosystem-based Adaptation
A dive into iterative methods: Result-Based Monitoring
What is result-based monitoring?
Result-Based Monitoring (RBM) is a management strategy focusing on performance and achievement of outputs, outcomes and impacts. The Logical Framework Approach (logframe or LFA) is the most commonly used tool to implement RBM.
The logframe is an analytical and organizational matrix that summarizes core project components: its inputs, activities, outputs, outcomes and objectives (see Figure 2: rows). Each component must be verifiable through indicators defined before the implementation phase. The logframe also specifies the sources and methods used to obtain the measures that constitute the set of indicators selected, and the underlying assumptions about conditions that need to exist for the project to be successful.
Baseline measures are taken and targets are defined before the implementation of the project. Indicators are monitored throughout the project and in the post-implementation phase, which is why RBM is an iterative method. The evaluation aggregates all the measures, including post-implementation ones, and compares their evolution over the course of the project to draw conclusions about the overall achievement of the intervention's targets and goals.
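To make these mechanics concrete, below is a minimal Python sketch of a logframe-style indicator: a baseline and a target fixed before implementation, a declared means of verification, and measurements recorded iteratively. The indicator, figures and data source are hypothetical, chosen purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """A verifiable logframe indicator with its means of verification."""
    name: str
    baseline: float     # measured before implementation
    target: float       # agreed with stakeholders before implementation
    source: str         # means of verification (survey, registry, ...)
    measurements: list = field(default_factory=list)  # (period, value) pairs

    def record(self, period: str, value: float) -> None:
        self.measurements.append((period, value))

    def achievement(self) -> float:
        """Share of the baseline-to-target distance covered by the latest measure."""
        latest = self.measurements[-1][1]
        return (latest - self.baseline) / (self.target - self.baseline)

# Hypothetical output-level indicator for a flood-adaptation project
ews_coverage = Indicator(
    name="households covered by an early-warning system",
    baseline=120, target=1000, source="district registry",
)
ews_coverage.record("year 1", 430)
ews_coverage.record("year 2", 910)
print(f"{ews_coverage.achievement():.0%} of target reached")  # 90% of target reached
```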
In which context is RBM to be used?
RBM is the most common monitoring and evaluation approach used by development cooperation agencies and funders. It is, for example, a requirement in project implementation by the UK Foreign, Commonwealth and Development Office (FCDO; see the example below). It is used to ensure transparency between the different stakeholders engaged in an intervention, to monitor progress in its implementation, and to verify the achievement of the predetermined targets and goals of the intervention.
Example of a framework using RBM
The FCDO is one of the main public donors for overseas Climate Change Adaptation and Development. Project teams are asked to report on their activities using RBM, and more specifically the LFA, for monitoring the projects the FCDO funds. The Logical Framework is designed at an early stage of the project to set baselines; it can be used as a tool to assess options during implementation, and to monitor and evaluate progress against these baselines. All Logical Frameworks used by the FCDO follow a similar structure to allow for comparisons between projects. However, the generic indicators that allow comparability of results across projects are often inadequate for reporting the singularities of an intervention, so the reporting tables are limited in capturing the complexity of the on-the-ground reality. The framework also includes reporting on gender equality improvement, recommending disaggregation of results by gender.
Limits of the approach
As illustrated by the example above, RBM and LFA are typically used by development cooperation agencies for their development activities and are not specifically tailored to CCA (for further information, see Climate-eval's study on indicator development). A number of practical challenges to RBM and LFA reflect the complexity of monitoring and evaluating CCA:
- The attribution gap: RBM is a rather linear method that tends to simplify and overlook the complexity of CCA. Long-term impacts are unlikely to result from a single CCA intervention; they are commonly the result of a series of interventions. RBM is therefore limited in its ability to attribute an outcome or impact to a specific intervention (causality). Impact evaluations are used for that purpose (see the section below).
- Use of inappropriate indicators: targets and indicators are often designed based on a funder's requirements and might ignore certain aspects of an intervention, especially the experience and needs of the beneficiaries. The use of participatory methods and approaches is recommended to avoid this pitfall.
- Limitations in long-term monitoring and evaluation: results and impacts of a CCA intervention are sometimes only observable in the long term, and M&E strategies sometimes fail to take that into account because of a lack of planning or resources. Similarly, a lack of planning and resources can result in baselines not being measured before the implementation of the project. Baseline reconstruction is a method that helps address this issue.
- Variability and scenario evolution: climate risk is constantly changing as greenhouse gas emissions are not being adequately mitigated. Interventions must be designed to address vulnerability under a range of climate scenarios, and their monitoring must use rolling baselines and targets (a minimal illustration follows this list).
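As a minimal illustration of rolling baselines, the Python sketch below recomputes the baseline for each year from a moving window of earlier observations, so that progress is judged against the shifting risk context rather than a fixed pre-project snapshot. The climate variable, trend and window size are synthetic assumptions made for the example.

```python
import statistics

# Synthetic annual values of a climate variable: a gradual warming
# trend of 0.03 degrees C per year from 1990 (illustrative numbers only).
years = list(range(1990, 2025))
values = [14.0 + 0.03 * (y - 1990) for y in years]

def rolling_baselines(vals, window=10):
    """Baseline for year i: mean of up to `window` observations before it
    (None for the first year, which has no history)."""
    return [statistics.mean(vals[max(0, i - window):i]) if i else None
            for i in range(len(vals))]

fixed = values[0]                        # static baseline set in 1990
rolling = rolling_baselines(values)      # recomputed every year
print(f"2024 anomaly vs fixed baseline:   {values[-1] - fixed:+.2f} C")
print(f"2024 anomaly vs rolling baseline: {values[-1] - rolling[-1]:+.2f} C")
```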
Recognition of the limitations of Result-Based Monitoring has led to the search for more refined ways to inform M&E of adaptation, and to the increasing inclusion of other research and scientific methods in the management of adaptation activities, such as impact evaluations and other experimental approaches.
Proving attribution: the challenge of impact evaluations
What are impact evaluations?
As explained previously, methods such as RBM are good at determining whether certain predetermined outcomes have been reached within the timeframe of an intervention, but they fall short of establishing a causal relationship between the intervention (or part of it) and the outcomes. To establish attribution, experts use impact evaluations. According to Stern et al. (2012), impact evaluations should:
- evaluate positive and negative, intended and unintended long-term effects on beneficiaries that result from a development intervention;
- assess the direct and indirect causal ‘contribution claims’ of the intervention (i.e., is the intervention responsible for the effects observed, and to what extent?);
- explain how the intervention leads to an effect so that lessons can be learned.
Impact evaluations rely primarily on quantitative methods – based on experimental designs (scientific approach to data collection and measurement) – which should allow analysts to determine with great precision the relationship between an intervention and an outcome.
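As a minimal sketch of that logic, the example below estimates an intervention's effect with a simple difference-in-differences comparison, a common quantitative impact-evaluation design. The villages, yields and treatment assignment are entirely made up.

```python
from statistics import mean

# Made-up yields (t/ha) before and after a hypothetical drought-adaptation
# intervention, in treated villages and comparison (control) villages.
before = {"treated": [2.1, 1.9, 2.3, 2.0], "control": [2.0, 2.2, 1.8, 2.1]}
after  = {"treated": [2.6, 2.4, 2.8, 2.5], "control": [2.1, 2.3, 1.9, 2.2]}

# Difference-in-differences: the change in the treated group minus the
# change in the control group. The control group's change stands in for
# the counterfactual -- what would have happened without the intervention.
change_treated = mean(after["treated"]) - mean(before["treated"])  # +0.50
change_control = mean(after["control"]) - mean(before["control"])  # +0.10
impact = change_treated - change_control
print(f"estimated impact: {impact:+.2f} t/ha")  # estimated impact: +0.40 t/ha
```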
In which context are impact evaluations used?
Impact evaluations based on randomized experimental designs originated in biomedical research, but have since been adapted for use in environmental and development studies to evaluate the soundness of interventions, and they have become a cornerstone of science-based decision-making in these fields. Impact evaluations are therefore used by development and environmental agencies to study a wide range of interventions. Their use in the field of CCA is still emerging, and efforts are underway to increase the number of studies (for example, through the International Initiative for Impact Evaluation and the Campbell Collaboration).
Example of a framework programme using impact evaluations
“Building Resilience and Adaptation to Climate Extremes and Disasters” (BRACED) was a multi-year multi-stakeholder programme funded by DfID from 2015 to 2019 in South and Southeast Asia and in the African Sahel and its neighbouring countries. BRACED sought to improve the integration of disaster risk reduction and climate adaptation methods into development approaches, by influencing policies and practices at the local, national and international level.
BRACED activities included a strong Monitoring, Evaluation and Learning (MEL) component led by ODI and ITAD. In particular, the SUR1M project in Niger (Scaling-Up Resilience to Climate Extremes for over 1 Million People in the Niger River Basin) was subject to a thorough impact evaluation. The BRACED website also includes guidance, case studies and learning papers on the challenges of MEL in the Climate Change Adaptation field (see for example the BRACED SR1.5 Guide for policymakers and practitioners).
More recently, the Green Climate Fund's Independent Evaluation Unit launched LORTA (Learning-Oriented Real-Time Impact Assessment) to improve the quality of its impact evaluations. Its website also includes resources on MEL in climate resilience.
What are the limits of impact evaluations?
- Identifying a counterfactual: Impact evaluations' ability to assess attribution relies on understanding the counterfactual: what would have happened in the absence of the intervention? With CCA interventions, this counterfactual can be difficult to establish because interventions are complex and rarely conducted in a controlled environment. Case studies, quasi-experimental designs or qualitative methods can then be used for that purpose, although this may weaken the strength of the evidence.
- Capturing complexity: Quantitative methods are limited in their ability to capture the complexity of the development context. This is why they are often combined with qualitative methods – such as surveys, focus groups or interviews. These methods can also be used to identify a counterfactual.
- Proving causality: To replicate and scale up an intervention in other contexts, it is necessary to understand how the intervention made outcomes happen. Because of the complexity of CCA and its interaction with other environmental, social and economic factors, this can be challenging to establish. Theory of change is a powerful tool used for that purpose. It requires understanding and making explicit the theory underlying the intervention, its core assumptions, and the impact of external factors on the final outcome. These narratives of attribution and their hypotheses can then be tested, informing decision-making in an iterative process (see the sketch below).
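As a minimal sketch of how a theory of change can be made explicit and testable, the example below encodes a hypothetical causal chain, attaching a means of verification to each link so that monitoring data can show where the attribution narrative holds and where it breaks. All names and results are invented for illustration.

```python
# Hypothetical theory of change for a drought-adaptation intervention,
# written down as an explicit causal chain. Each link carries its own
# means of verification, so attribution can be tested link by link
# rather than asserted for the chain as a whole.
theory_of_change = [
    ("training delivered",
     "farmers adopt drought-tolerant seed", "adoption survey"),
    ("farmers adopt drought-tolerant seed",
     "yields stabilise in dry years", "yield records vs rainfall data"),
    ("yields stabilise in dry years",
     "household food security improves", "food-security index"),
]

# Made-up verification results from monitoring data.
evidence = {
    "training delivered": True,
    "farmers adopt drought-tolerant seed": True,
    "yields stabilise in dry years": False,   # the chain breaks here
    "household food security improves": False,
}

for cause, effect, source in theory_of_change:
    status = "holds" if evidence[cause] and evidence[effect] else "breaks"
    print(f"{cause} -> {effect}: {status} (checked via {source})")
```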
Participatory methods
What are participatory methods?
A further category of diverse M&E (or MEL) approaches is participatory methods. It is important to gather stakeholders' perspectives on the adaptation intervention, including its objectives, its activities and what counts as 'success' (nb. success may mean different things for different stakeholders). Participatory methods can complement impact evaluations and iterative methods for M&E by addressing some of their limits, as mentioned in the previous sections.
In which context are participatory methods used?
Participatory approaches are most commonly used for community-based adaptation (CBA) activities rather than employed in the context of wider-scale programmes and development agency-led activities (which conventionally use RBM and LFA). Participatory monitoring and evaluation enables all stakeholders – and in particular beneficiaries – to be involved in the process and thereby transform the M&E into an opportunity to increase learning and improve the project implementation plans.
A good example of participatory methods from CBA literature is CARE International’s manual (Ayers et al. 2012) which also promotes continuous learning and reflection approaches, i.e. MEL. Some of these participatory methods are summarized below.
What is ‘Outcome Mapping’?
Outcome Mapping (OM) is an innovative approach to planning, monitoring and evaluating international development work. It focuses on changes in behaviour rather than on development impacts per se (impacts being defined as a significant and lasting change in the well-being of large numbers of intended beneficiaries), and also incorporates aspects of self-assessment and reflection.
An important aspect of Outcome Mapping is the role of boundary partners and organisations in creating impacts on the beneficiaries. Boundary partners lie at the interface between the project proponents or implementers and the beneficiaries. Outcomes in OM are a change in knowledge, attitudes and practices (i.e. behaviour) among boundary partners. According to the OM concept, these changes are a prerequisite of further, indirect impacts of the development (or adaptation) intervention. The idea is that boundary partners are influenced so that they in turn can facilitate developmental impacts on the beneficiaries.
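As an illustration, the sketch below records graduated behaviour-change markers for each boundary partner, using the 'expect to see / like to see / love to see' grading commonly associated with Outcome Mapping practice. The partners and markers shown are hypothetical.

```python
# Hypothetical Outcome Mapping journal: each boundary partner is tracked
# against graduated behaviour-change markers, from early signs ("expect
# to see") through to deep, self-sustained change ("love to see").
progress_markers = {
    "district water committee": [
        ("expect to see", "attends climate-risk briefings", True),
        ("like to see", "integrates flood maps into local plans", True),
        ("love to see", "allocates its own budget to adaptation", False),
    ],
    "farmers' cooperative": [
        ("expect to see", "requests seasonal forecasts", True),
        ("like to see", "adjusts planting dates to forecasts", False),
        ("love to see", "trains neighbouring cooperatives", False),
    ],
}

for partner, markers in progress_markers.items():
    observed = sum(1 for _, _, seen in markers if seen)
    print(f"{partner}: {observed}/{len(markers)} markers observed")
```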
What is ‘Most Significant Change’?
Most Significant Change is a participatory form of monitoring and evaluation based on listening to what people (beneficiaries/participants/stakeholders) consider the most significant change resulting from the project or initiative. It is based on storytelling and requires no special professional skills, which makes it easy to implement across settings and appealing in most cultures. It is also a good way to pick up unanticipated changes or to challenge your assumptions about what is happening. This approach encourages all stakeholders to engage in the data collection and analysis stages of a project, as they have to explain why they believe one change is more important than another. It can be used to monitor and evaluate bottom-up initiatives that do not have predefined outcomes against which to evaluate.
For further information see the weADAPT page on MEL
Organisations and initiatives working on Monitoring and Evaluation of Climate Change Adaptation
A selection of key organisations and partnerships responsible for conducting monitoring and evaluation and developing best practices are:
| Organisations/partnerships | Other national entities |
| --- | --- |
| Global Environment Facility (GEF) – Scientific and Technical Advisory Panel (STAP) and the Independent Evaluation Office (IEO) | UK's Climate Change Compass and the Independent Commission for Aid Impact (ICAI) |
| EvalNet, the OECD/DAC network for evaluation | Sweden's Expert Group for Aid Studies (EBA) |
| Technical Evaluation Reference Group of the Adaptation Fund (AF-TERG) | Germany's GIZ Evaluation and the German Institute for Development Evaluation (DEval) |
| International Initiative for Impact Evaluation (3ie) | Canada's International Development Research Centre (IDRC) |
| UNEP's Adaptation Gap reports | |
| Green Climate Fund (GCF) – Independent Evaluation Unit (IEU) | |
See also Dennis Bours’s M&E and climate change interventions newsletter
Discussion
This article has discussed some of the common methods used to monitor and evaluate adaptation, focusing on the approaches used by development cooperation agencies or funders, or by those working on collective adaptation, such as community-based organizations or NGOs. In each case, an M&E system that is tailored to meet specific needs is required; there are diverse activities that make up these systems.
However, we have not discussed the needs and systems used for M&E connected to national policy initiatives. Domestic activities and budgets that come under national programmes are important for reaching national climate adaptation goals, especially in the longer term, because they are likely to be sustained year after year. Interestingly, the UNEP Adaptation Gap Report found that only around a quarter of countries have a monitoring and evaluation framework in place. Clearly there is a lot of work to do to build in-country systems and institutional capacities in M&E, and to share the lessons from relevant programmes as well as knowledge of suitable frameworks, tools and methods.
Further resources
Useful articles and literature on weADAPT
- Maladaptation: An Introduction
- Resilience: An Introduction
- Transformational Adaptation: An Introduction
- An introduction to adaptation
- Climate-eval’s study on indicator development
- BRACED SR1.5 Guide for policymakers and practitioners
- PROVIA assessment framework on vulnerability, impacts and adaptation
- Monitoring, evaluation and learning
Other articles and literature
- Monitoring & evaluation for climate change adaptation: A synthesis of tools, frameworks and approaches
- Principles, guidelines and requirements of our evaluation practice
- Theory of change for GIZ’s evaluations
- Learning to ADAPT: monitoring and evaluation approaches in climate change adaptation and disaster risk reduction – challenges, gaps and ways forward (2011)
- National Climate Change Adaptation: Emerging Practices in Monitoring and Evaluation (2015)
- Developing National Adaptation Monitoring and Evaluation Systems: A Guidebook (2015)
- The Adaptation M&E Navigator: A Decision Support Tool for the Selection of Suitable Approaches to Monitor and Evaluate Adaptation to Climate Change (2017)
- Evaluating Climate Change Action for Sustainable Development (2017)
- Good Practice Study on Principles for Indicator Development, Selection, and Use in Climate Change Adaptation Monitoring and Evaluation (2015)
- Guidebook for Monitoring and Evaluating Ecosystem-based Adaptation Interventions (2020)
Sources
- Bours, D. (2014). Monitoring & evaluation for climate change adaptation and resilience: A synthesis of tools, frameworks and approaches. https://ukcip.ouce.ox.ac.uk/wp-content/PDFs/SEA-Change-UKCIP-MandE-review-2nd-edition.pdf.
- DfID (2011). How to Note – Guidance on Using the Revised Logical Framework. Department for International Development. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/253881/using-revised-logical-framework-external.pdf.
- GIZ, UNEP-WCMC and FEBA (2020) Guidebook for Monitoring and Evaluating Ecosystem-based Adaptation Interventions. Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH, Bonn, Germany. https://www.weadapt.org/knowledge-base/nature-based-solutions/monitoring-and-evaluating-ecosystem-based-adaptation-interventions
- IPCC (2022). Annex II: Glossary. In: Climate Change 2022: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, UK and New York, NY, USA, pp. 2897–2930. DOI: 10.1017/9781009325844.029.
- Lamhauge, N., Lanzi, E. and Agrawala, S. (2012). Monitoring and Evaluation for Adaptation: Lessons from Development Co-operation Agencies. DOI: https://doi.org/10.1787/5kg20mj6c2bw-en.
- McGray, H., Rai, N., Dinshaw, A., Fisher, S. and Schaar, J. (2014). Monitoring and Evaluation of Climate Change Adaptation. 74. http://www.oecd-ilibrary.org/environment/monitoring-and-evaluation-of-climate-change-adaptation_5jxrclr0ntjd-en.
- Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R. and Befani, B. (2012). Broadening the Range of Designs and Methods for Impact Evaluations. DFID Working Paper, 38. Department for International Development. DOI: 10.22163/fteval.2012.100.