
Results-Based Management for International Assistance Programming at Global Affairs Canada: A How-to Guide

Updated June 2022

Preface

This guide is a comprehensive introduction to how Global Affairs Canada applies the Results-Based Management approach to its international assistance programming, especially at the project level. It provides explanations of Results-Based Management concepts, principles, terminology and tools, as well as step-by-step guidance on their application.

Results-Based Management is also referred to as Managing for Results. These terms are used interchangeably throughout the Guide. In the context of international development, this approach is also called Managing for Development Results or Managing for Sustainable Development Results.

This guide is accompanied by various RBM tip sheets, checklists, templates and a Glossary of Managing for Results / Results-Based Management Terms. For external audiences, please see Global Affairs Canada’s external RBM web page.

The authors welcome readers’ feedback and questions at gar.rbm@international.gc.ca.

Audience

This guide is intended for Global Affairs Canada staff (at headquarters and in the field) responsible for international assistance programs/portfolios and projects, and the wide range of Canadian, international and local partners with whom Global Affairs Canada works. Although the underlying Results-Based Management/Managing for Results concepts and principles are the same for most organizations and donors, applicants and partners working with Global Affairs Canada can use this guide to understand Global Affairs Canada’s approach to Results-Based Management/Managing for Results and its application to the projects financed by Global Affairs Canada.

This guide will also be useful to all Global Affairs Canada staff interested in understanding Results-Based Management/Managing for Results in general and more specifically in the international assistance programming context.

Staff managing country, institutional and other programs/portfolios may also draw on these guidelines in managing their programs/portfolios for results.

All Canadians interested in Results-Based Management/Managing for Results will find useful information and specific examples to improve their knowledge about this topic.

The guide is also meant to be a companion to Global Affairs Canada’s application form for funding of an international assistance initiative that would contribute to meeting the Department’s expected results (outcomes) in international assistance.

Part One: An Introduction to Results-Based Management

1.0 Introduction

Part One explains Results-Based Management and provides an overview of its core principles.

The above core principles are further underpinned by:

These principles set the foundation for Global Affairs Canada’s approach to Results-Based Management/Managing for Results in its international assistance programming.Footnote 1

Box 1 - Results-Based Management

Results-Based Management is also referred to as Managing for Results.

These terms will be used interchangeably throughout the guide.

In the context of international development, it is also called Managing for Development Results or Managing for Sustainable Development Results.

Box 2 - Definition: Results/Outcomes

Result/Outcome:  Results are the same as outcomes. An outcome is a describable or measurable change that is derived from an initiative's outputs or lower-level outcomes. Outcomes are qualified as immediate, intermediate, or ultimate; outputs contribute to immediate outcomes; immediate outcomes contribute to intermediate outcomes; and intermediate outcomes contribute to ultimate outcomes. Outcomes are not entirely within the control of a single organization, policy, program or project; instead, they are within the organization's area of influence.

The terms results and outcomes will be used interchangeably throughout the guide.

Box 3 - Definitions: Types of Stakeholders

Stakeholders include beneficiaries, intermediaries, implementers and donors as well as other actors:

Beneficiary (Rights Holder): The set of individuals that experience the change of state, condition or well-being at the ultimate outcome level of a logic model. In its international assistance programming, Global Affairs Canada-funded implementers usually work through intermediaries to help achieve changes for beneficiaries. Global Affairs Canada implementers may also work directly with beneficiaries. In this case, beneficiaries may, like intermediaries, also experience changes in capacity (immediate outcome), and changes in behaviour, practices or performance (intermediate outcome).

Intermediary (Duty Bearer / Responsibility Holder): Individual, group, institution or government, that is not the ultimate beneficiary of the project, but that will experience a change in capacity (immediate outcome) and a change in behaviour, practices or performance (intermediate outcome) which will enable them to contribute to the achievement of a sustainable change of state (ultimate outcome) of the beneficiaries. Intermediaries are often mandate holders or duty bearers that are responsible for providing services to the ultimate beneficiaries. They are the entities that implementers work with directly.

Implementer: Private firm, non-governmental organization, multilateral organization, educational institution, provincial or federal government department or any other organization selected by Global Affairs Canada to implement a project in a partner country. Depending on the context, an implementer may be referred to as an implementing organization, executing agency, partner or recipient.

Donor: Global Affairs Canada or another donor organization that provides financial, technical and other types of support to a project.

Other Stakeholder: An individual, group, institution, or government with an interest or concern – economic, societal or environmental – in a particular measure, proposal or event.

1.1 Results-Based Management / Managing for Results

What is Results-Based Management?

The aim of Results-Based Management (RBM) / Managing for Results (MfR) is to optimize and improve the achievement of results.

Managing for Results (MfR), or Results-based Management (RBM), is a lifecycle approach to adaptive management that focuses on achieving expected results at every stage, from initiation, to design and planning, to implementation (results-based monitoring, including performance measurement, adapting/adjusting and reporting), to closure (final evaluations and reports, and integrating lessons learned into future programming). It is a way of working and thinking strategically – a mindset – to manage programs/portfolios, projects and other activities more effectively and efficiently to achieve expected outcomes.

According to Global Affairs Canada:

RBM is a life-cycle approach to [adaptive] management that integrates strategy, people, resources, processes, and measurements to improve decision-making, transparency, and accountability. RBM is essential for […] senior management to exercise sound stewardship in compliance with government-wide performance and accountability standards. The approach focuses on achieving outcomes, by implementing performance measurement, learning, and adapting, as well as reporting performance. RBM means:

In other words, Results-Based Management is not just a set of tools or instructions. It is a way of thinking strategically about your programs/portfolios, projects and other initiatives that helps you manage more effectively. By managing better, you can improve the achievement of results, that is, the positive changes you set out to achieve or contribute to with your programs/portfolios or projects.

Why use the Results-Based Management approach?

Over the past few decades, there has been constant pressure on governments around the world for greater transparency and accountability to taxpayers for the use of public resources. Public concern in the face of escalating national account deficits and the need for more transparent and accountable governance has been an important factor in the evolution of modern management.

Historically, government departments—and implementing organizations—focused their attention on inputs (what they spent), activities (what they did) and outputs (what they produced). While information about inputs, activities and outputs is important, it did not tell implementers whether or not they were making progress in addressing the issues they had identified. Losing sight of the results their programs were aiming to achieve limited the effectiveness of their programming.

A new management approach was needed to raise the standards of performance and define success in terms of actual results achieved. Results-Based Management/Managing for Results was introduced to meet this need.

The focus on activities at the expense of results is what management scholar Peter Drucker, in 1954, referred to as the “activity trap”Footnote 2. Instead, Results-Based Management requires that you look beyond activities and outputs to focus on actual results (outcomes): the changes to which your programming is contributing or has contributed. By establishing clearly defined, realistic expected outcomes, assessing risk, collecting information to assess progress on the outcomes on a regular basis during implementation, and adapting/making timely adjustments, practitioners can manage their projects and programs/portfolios better in order to optimize and improve the achievement of results.

This focus on measuring at the outcome level during implementation was one of the fundamental changes introduced by Results-Based Management / Managing for Results. While traditional approaches to management may have identified objectives or expected outcomes during planning, once implementation began, monitoring focused on inputs, activities and outputs. With the advent of Results-Based Management, the focus remains on outcomes, not only during design and planning, but also during implementation.

Box 4 - Definition: Development Results

Development results: Development results are a sub-set of Global Affairs Canada’s international assistance results (or outcomes), focused specifically on producing tangible improvements in the lives of the poor and vulnerable. In the Department’s results chain for international assistance programming, these would be changes described at the immediate, intermediate and the ultimate outcome levels.

The policies and processes established by the Treasury Board of Canada Secretariat commit the Government of Canada to a focus on results as an integrating principle of management in all departments and agencies.

Results-Based Management/Managing for Results is not only a Government of Canada requirement; it is also a widely accepted approach to management in international development (often referred to as "Managing for Development Results" or “Managing for Sustainable Development Results”) and in humanitarian actionFootnote 3 in crisis and post-crisis settingsFootnote 4.

Managing for Results is one of the principles of aid effectiveness.Footnote 5 It is used by most donors, multilateral organizations, non-governmental organizations and an increasing number of country partners, and features prominently within international agreements related to development and other international assistance cooperation.Footnote 6

Managing for Results helps Global Affairs Canada implement the results-related policies and directives of the Treasury Board of Canada Secretariat.

Global Affairs Canada’s Results-Based Management / Managing for Results approach in international assistance also aligns with OECD-DAC guidance.

Box 5 – RBM in Crisis and Post-Crisis Settings

“Generally, the principles of implementing RBM [Results-Based Management] in crisis and post-crisis settings are the same as in development settings. However, there are a number of key factors to be considered when using RBM in crisis and post-crisis settings.” …. For example, “... in crisis and post-crisis settings there is a shorter timeframe for planning and reporting on results. There may be a different role for the government, especially in humanitarian emergencies. It is also important to ensure that articulated results respond to root causes of conflict and ‘do no harm’ during programme development and implementation.”Footnote 7

Evidence-based decision-making

The information–or evidence–collected about progress on or toward results enables managers and staff to make evidence-based decisions. Without evidence of progress, decisions tend to be based on budgets or other inputs, activities and outputs. This is a bit like trying to navigate by referring to your car’s fuel gauge–you may never run out of fuel, but you may also never get to your destination. More concretely, if you do not keep an eye on your progress toward expected results, you will never know whether you need to make adjustments to achieve them.

When evidence is not used as a basis for decision-making, or the evidence is not accurate, this can undermine the achievement of the expected results. This is why results-based monitoring and evaluation are such vital components of Results-Based Management. See section 1.3 on Results-Based Monitoring and Evaluation.

Box 6 - Progress on vs. Progress toward

When reporting on outcomes, you can speak about progress “on” or “toward” the achievement of that outcome. This difference allows you to report on progress “toward” an outcome early in the life of the project even when there has not been a significant change in the value of the indicators for that outcome. For the difference between the two, please see Box 55 - Definition: Progress on vs. Progress toward under section 4.3 on Reporting on Outcomes below.

In sum, Results-Based Management is about effectiveness; it aims to maximize the achievement of ultimate outcomes, i.e. improvements in people’s lives. The nature of ultimate outcomes may vary depending on the type of programming. For example, in the case of international development, ultimate outcomes have to do with the sustained improvement in the lives of people in developing countries, such as improved economic prosperity, health and learning outcomes. In humanitarian assistance, they would describe a reduction in suffering, the maintenance of dignity or lives saved in crisis-affected populations. In international security, they may relate to the reduction of threats to the populations of countries where Global Affairs Canada delivers programming, and to Canadians.

The following example of a student’s journey through the education system provides a simple illustration of how Results-Based Management concepts are applied in everyday life all over the world, and why this approach is useful.

Box 7 - Simple Illustration of Results-Based Management Concepts

Imagine yourself as a student. Your school will have established a curriculum that outlines expected learning outcomes and targets (specific knowledge and skills, and their application) that you are required to attain by the end of the year in order to move to the next level. The curriculum is based on analysis of education research, evidence and best practices, and establishes learning outcomes and targets that are realistic and achievable for your grade or level. The school has put in place systems that enable you to monitor your performance in order to ensure that you are on track to achieve your end-of-year targets for the expected learning outcomes.

During the year, you monitor your progress through quantitative indicators (e.g. scores, marks, rank) and qualitative indicators (e.g. your level of confidence with the subject, and your engagement in the course). Data on these indicators is collected through various collection methods (e.g. tests, essays, observation). These data are assessed and you are provided with regular feedback and reports on your performance throughout the year. If your progress falls behind during the year, the information provided by this regular monitoring of outcomes gives you the evidence needed for you to take corrective action e.g. hire a tutor. If you have to hire a tutor, this means an adjustment to the activities you planned to do outside the school and may mean an adjustment in your budget.

In order to be useful, and enable you to manage your education and take corrective action, the information you get via regular feedback and reports focuses on your progress toward an actual change in your skills, abilities or performance, rather than on what was done or taught in class. A report that stated you attended math classes or that the school provided you with English and Science classes would not give you useful information. A report that provided an assessment of your progress toward the end-of-year learning outcomes, based on an analysis of the actual data from indicators (your marks, scores, etc.), on the other hand, provides you much more useful information for making decisions about your education, and thus helps you to manage your education better.

1.2 Results-Based Management and the Theory of Change

The theory of change is a fundamental part of managing for results. It can be described as follows:

Every program [and project] is based on a "theory of change" – a set of assumptions, risks and external factors that describes how and why the program [or project] is intended to work. This theory connects the program's [or project’s] activities with its [expected ultimate outcome]. It is inherent in the program [or project] design and is often based on knowledge and experience of the program [or project design team], research, evaluations, best practices and lessons learned.

Theory of change reinvigorates the analytic roots of Results-Based Management, emphasizing the need to understand the conditions that influence the project and the motivations and contributions of various actors. When Results-Based Management is properly applied, project design is based on a thorough analysis of the issue and the context in which it exists, which informs an evidence-based solution to the issue: the theory of change.

A theory of change explains how an [initiative] is expected to produce its results. The theory typically starts out with a sequence of events and results (outputs, immediate outcomes, intermediate outcomes and ultimate outcomes) that are expected to occur owing to the [initiative]. This is commonly referred to as the “program logic” or “logic model.” However, the theory of change goes further by outlining the mechanisms of change, as well as the assumptions, risks and context that support or hinder the theory from being manifested as observed outcomes.Footnote 8

The following is a simple example of theory of change borrowed from conflict resolution:

As applied to the conflict field, theories of change refer to the assumed connections between various actions and the results of reducing conflict or building peace. … one of the most popular conflict mitigation strategies entails bringing representatives of belligerent groups together to interact in a safe space. The expectation is that the interactions will put a human face on the "other", foster trust, and eventually lead to the reduction of tensions. This strategy relies on a theory of change known as the contact hypothesis that can be stated as: ‘If key actors from belligerent groups are given the opportunity to interact, then they will better understand and appreciate one another, be better able to work with one another, and prefer to resolve conflicts peacefully’Footnote 9.

As an approach to program or project design, implementation and evaluation, the theory of change is not new. In recent years, however, it has become increasingly mainstream in international assistance programming. It is being used by a wide range of international actors, from government agencies to multilateral institutions to civil society organizations, in order “to bring a more integrated approach to programme scoping, design, strategy development, right through implementation, evaluation and impact assessment.”Footnote 10

A program’s/portfolio’s or project’s theory of change will be revisited regularly during implementation, as the program/portfolio or project and the context in which it is being delivered evolve. This is in keeping with the Results-Based Management principle of continuous adjustment: monitoring progress, comparing expected outcomes to actual outcomes, learning and adapting/making adjustments as required.

The importance of assumptions

Assumptions are the conscious and unconscious beliefs we each have about how the world works. From the perspective of the design team, assumptions constitute beliefs (validated or otherwise) about existing conditions that may affect the achievement of outcomes and about why each level will lead to the next. In the context of the theory of change and logic model, assumptions are the necessary conditions that must exist if the relationships in the theory of change are to behave as expected. Accordingly, care should be taken to make explicit the important assumptions upon which the internal logic of the theory of change is based.

Assumptions can be difficult to identify, as they are often taken for granted or are linked to deeply held convictions. Participatory exercises with a wide variety of local and non-local stakeholders are a good way of uncovering assumptions. This is because assumptions tend to vary among stakeholders and will become apparent when there are differing views on whether or not a project will lead to the desired change.

The importance of identifying risks

Global Affairs Canada defines risk as the effect of uncertainty on expected results (outcomes). Developing a theory of change will also help identify any risks that would affect the achievement of outcomes.

Note: Once risks are identified, suitable response strategies should be developed and managed throughout the life of the project.

The results chain

Developing a theory of change combines a reflective process and analysis with the systematic mapping of the logical sequence from inputs to outcomes in a project. The results chain provides the conceptual framework for articulating this logical sequence. Global Affairs Canada defines a results chain as follows (see Box 8 below).

Box 8 - Definition: Results Chain

Results Chain: A visual depiction of the logical relationships that illustrate the links between inputs, activities, outputs, and the outcomes of a given policy, program or project.

The results chain addresses practitioners’ need for a concept that allows them to break complex change down into manageable building blocks or steps that leadFootnote 11 to one another, making it easier to sequence and identify changes during both analysis and planning. These steps also become the points at which practitioners will measure whether or not the expected change is actually occurring throughout project implementation.

Each organization will have its own results chain, which will depict and define the number and type of building blocks or levels it uses. Not all results chains look alike. While a Global Affairs Canada results chain has six levels (see example below), other organizations may have fewer levels and use different terms for the levels (e.g. the results chain of the Organisation for Economic Co-operation and Development – Development Assistance Committee may have only five levels: inputs, activities, outputs, outcomes and impact).

In sum, when practitioners approach a specific problem, their respective results chain will provide a structure to their project design, telling them what types of building blocks they should be identifying as they work on their theory of change.

Figure 1 - Global Affairs Canada Results Chain

Text version

Levels for the Global Affairs Canada Results Chain

  • Inputs
  • Activities
  • Outputs
  • Immediate outcomes (Development Result)
    • (short-term)
  • Intermediate outcomes (Development Result)
    • (medium-term)
  • Ultimate outcome (Development Result)
    • (long-term)

Simple examples for each level of Global Affairs Canada’s results chain:

  • Inputs - Funding, People, Material
  • Activities - Procure materials; hire builders; monitor construction
  • Outputs - Wells built according to specifications
  • Immediate Outcomes - Increased access to clean water in the community (Development Result)
  • Intermediate Outcomes - Increased usage of clean water in the community (Development Result)
  • Ultimate Outcome - Improved health of women, men, boys and girls in the community (Development Result)
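
The following is a minimal sketch in Python of how the six levels of this example could be captured as a simple ordered data structure. It is purely illustrative and not a Global Affairs Canada tool or template; the level names and entries are taken from the well-building example above.

from dataclasses import dataclass
from typing import List

@dataclass
class ResultsChainLevel:
    name: str              # one of the six levels of the results chain
    examples: List[str]    # illustrative entries from the well-building example above

results_chain = [
    ResultsChainLevel("Inputs", ["Funding", "People", "Material"]),
    ResultsChainLevel("Activities", ["Procure materials", "Hire builders", "Monitor construction"]),
    ResultsChainLevel("Outputs", ["Wells built according to specifications"]),
    ResultsChainLevel("Immediate outcome", ["Increased access to clean water in the community"]),
    ResultsChainLevel("Intermediate outcome", ["Increased usage of clean water in the community"]),
    ResultsChainLevel("Ultimate outcome", ["Improved health of women, men, boys and girls in the community"]),
]

# Walk the chain from the means (inputs, activities, outputs) up to the expected changes (outcomes).
for level in results_chain:
    print(f"{level.name}: {'; '.join(level.examples)}")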

Global Affairs Canada’s results chain

Global Affairs Canada’s results chain is divided into six levels. Each of these represents a distinct step in the logic of a project. The top three levels—ultimate, intermediate and immediate outcomes—constitute the actual changes expected to take place. In the context of development, these are also referred to as development results. The bottom three levels—inputs, activities and outputs—address the means to arrive at these changes.

Within the results chain, each level of outcomes is very distinct, with clear definitions of the type of change that is expected at that level. These definitions, along with those for inputs, activities and outputs, are provided below. They, along with the definition of development results above, were adapted from the Global Affairs Canada Results-based Management Policy Statement 2008.

Ultimate outcome – Change in state, condition or well-being of beneficiaries

Box 9 - Definition: Ultimate Outcome

Ultimate Outcome: The highest-level change to which an organization, policy, program, or project contributes through the achievement of one or more intermediate outcomes. The ultimate outcome usually represents the raison d'être of an organization, policy, program, or project, and it takes the form of a sustainable change of state among beneficiaries (rights holders).

The ultimate outcome represents the “why” of a project and should describe the changes in state, condition or well-being that a project’s ultimate beneficiaries should experience. These should not be confused with changes in surrounding circumstances, such as increased economic growth […]. An ultimate outcome should instead reflect changes in the lives of women, men, girls and boys in the partner country. For example:

An ultimate outcome usually occurs after the end of the project, but should, when feasible, still be measured during the life of the project as changes may occur earlier. Once the project is over, the achievement of the ultimate outcome can be assessed through an ex-post evaluation.

Box 10 - Definition: Ex-post Evaluation

Ex-post Evaluation: “Evaluation of a … [initiative] after it has been completed. Note: It may be undertaken directly after or long after completion. The intention is to identify the factors of success or failure, to assess the sustainability of results and impacts, and to draw conclusions that may inform other [initiatives]”Footnote 14.

Intermediate outcomes – Change in behaviour, practice or performance

Box 11 - Definition: Intermediate Outcome

Intermediate Outcome: A change that is expected to logically occur once one or more immediate outcomes have been achieved. In terms of time frame and level, these are medium-term outcomes that are usually achieved by the end of a project/program, and are usually changes in behaviour, practice or performance among intermediaries and/or beneficiaries.

Intermediate outcomes articulate the changes in behaviour, practice or performance that intermediaries and/or beneficiaries should experience by the end of a project. For example:

Intermediate outcomes usually stem from the application of the capacity built among intermediaries or beneficiaries at the immediate outcome level. For instance, “Improved antenatal care by health professionals in region X” may stem from the immediate outcomes “Increased knowledge of antenatal care practices by health professionals in region X” and “Improved access to equipment and infrastructure by rural clinics in region X.”

Immediate outcomes – Change in capacities

Box 12 - Definition: Immediate Outcome

Immediate Outcome: A change that is expected to occur once one or more outputs have been provided or delivered by the implementer. In terms of time frame and level, these are short-term outcomes, and are usually changes in capacity, such as an increase in knowledge, awareness, skills or abilities, or access* to... among intermediaries and/or beneficiaries.

* Changes in access can fall at either the immediate or the intermediate outcome level, depending on the context of the project and its theory of change.

Immediate outcomes articulate the changes in capacity that intermediaries and/or beneficiaries should experience during the life of a project. For example:

Immediate outcomes represent the first level of change that intermediaries or beneficiaries experience once implementers start delivering the outputs of a project. For instance, “Increased knowledge of antenatal-care practices by health professionals in region X” may result from the outputs of “Training on antenatal-care practices provided to selected nurses and midwives” and “Mentorship program established for trainee nurses.”

Outputs – Products and services

Box 13 - Definition: Output

Output: Direct products or services stemming from the activities of an organization, policy, program or project.

In Global Affairs Canada’s results chain for international assistance programming, outputs are the direct products or services stemming from the activities of an implementer. For example:

Activities

Box 14 - Definition: Activities

Activities: Actions taken or work performed through which inputs are mobilized to produce outputs.

In Global Affairs Canada-funded projects, activities are the direct actions taken or work performed by project implementers. Activities unpack an output into the set of tasks required to complete it. There can be more than one activity per output. For instance:

Example No. 1

Output: Gender sensitive skills development programs and on-the-job coaching on triage, diagnosis and primary health care provided to staff (f/m) in regional health centres of country Y

Activities:

Example No. 2

Output: Training on responses to sexual and other forms of gender-based violence provided to field investigative teams in province Y of country X

Activities:

Inputs

Box 15 - Definition: Inputs

Inputs: The financial, human, material and information resources used to produce outputs through activities in order to accomplish outcomes.

Together, inputs, activities and outputs represent “how” implementers will work to achieve a project’s expected outcomes.

Figure 2 - Global Affairs Canada’s Results Chain

Text Version

Ultimate Outcome: Change in state, conditions or well-being of ultimate beneficiaries (not surrounding circumstances).

Considerations: Why are we doing this? What changes in state, conditions or wellbeing (not surrounding circumstances) will the ultimate beneficiaries (rights holders) experience?

Dependent on the achievement of the intermediate outcomes. Can occur during, at the end of, or after the closing of the project or program/portfolio, and should be measured accordingly.

Examples, changes in: Gender equality, health, enjoyment of human rights, quality of life, prosperity, living conditions, well-being, human dignity, security (environmental, economic, personal, community, food, etc.)

Intermediate Outcome: Change in behaviour, practice or performance of intermediaries or beneficiaries.

Considerations: What changes in behaviour, practice or performance will intermediaries or beneficiaries experience? Dependent on the achievement of one or more immediate outcomes.

Achieved by the end of the project or program/portfolio and must be measured.

Examples, changes in: decision-making, services, participation, practice, protection of human rights, policy making, social norms, prevention of sexual and gender-based violence

Immediate Outcome: Change in capacities of intermediaries or beneficiaries.

Considerations: What changes in capacity will intermediaries or beneficiaries experience?

Dependent on the completion of outputs. Achieved during implementation of the project or program/portfolio and must be measured.

Examples, changes in: knowledge, opinions, skills, awareness, attitudes, ability, willingness, motivations.

Outputs: Products & services delivered by the project or program implementer(s).

Considerations: How will implementers work to achieve the above changes/outcomes?

Outputs depend on the completion of activities. Outputs must be measured. Completed during implementation according to work-plan schedule. The activity and input levels in the results chain are not included in Global Affairs Canada’s logic model. At the project level, activities are reflected in an Outputs and Activities Matrix and financial inputs are reflected in a budget.

Examples: workshops facilitated, training provided, policy advice provided, assessments conducted, report submitted, clinics built or refurbished.

Activities: Planned activities undertaken by project or program implementer(s).

Examples: Draft report, procure material, monitor implementation, analyze documentation, hire a GE specialist, conduct environmental assessment, provide technical assistance, develop training curriculum.

Inputs: Resources invested by implementer(s) & donor(s).

Examples: money, time, equipment, staff, materials and technology.

At what level does “access” belong in the results chain?

As mentioned above, changes in access can fall at either the immediate or the intermediate outcome level, depending on the context of the project and its theory of change.

If it is reasonable that a change in access can result directly from the delivery of one or more outputs, then “access” can be at the immediate outcome level. If, on the other hand, a change in capacity (or another change appropriate at the immediate outcome level) is needed in order for a change in access to take place, then “access” would be at the intermediate outcome level.

Purpose of the distinction between “How”, “What” and “Why”

Making a clear distinction between the “How” (inputs, activities and outputs, with outputs defined as “products and services” only), and the “What” and “Why” (outcomes), reinforces the point that results go beyond the products and services provided by implementers.

The illustration above shows simplified definitions of the distinct changes expected at each level of the results chain.

Attribution, Control, Contribution and Influence

The theory of change approach recognizes that each outcome may have more than one cause. This is why it is important that a project’s theory of change captures the complexity inherent in the project design.

This approach recognizes that at the intermediate and ultimate outcome levels, one organization or project cannot claim full attribution or sole responsibility for the achievement of these outcomes. Instead, organizations, programs/portfolios and projects contribute to, and influence the achievement of the changes described in the ultimate and intermediate outcomes. This contribution and influence works in tandem with other efforts, especially those of program/portfolio and project intermediaries and beneficiaries, and the contributions of other donors or actors.

Thus, as indicated by the double-headed arrow on the left side of the results chain diagram above, the input, activity, output and immediate outcome levels are where you will have the greatest degree of attribution and control. This will gradually give way to contribution and influence as you move up the results chain.

Box 16 - Definitions: Attribution and Accountability

Attribution: The extent to which a reasonable causal connection can be made between a specific outcome and the activities and outputs of a government policy, program or initiative.Footnote 15

Accountability: The obligation to demonstrate that responsibility is being taken both for the means used and the results achieved in light of agreed expectationsFootnote 16. While no one organization or project is entirely responsible for the achievement of outcomes—especially at higher levels in the results chain—the implementer is responsible for designing a project with achievable expected outcomes, and demonstrating that it is Managing for Results, i.e. that:

The Results Chain and the Logic Model

While some practitioners use the terms “results chain” and “logic model” interchangeably, Global Affairs Canada differentiates between the two. As described above, the results chain provides a conceptual model for how a given organization breaks change down into building blocks or steps. It establishes and names the levels that will be used when that organization undertakes the development of the theory of change as part of project design.

The logic model, however, is a more complex and nuanced tool. Because change, particularly the types of change expected from international assistance programming, is complex and multi-faceted, a theory of change includes several complementary pathways that, in combination, lead to one ultimate outcome. Thus, at Global Affairs Canada, the theory of change for a specific project is:

The pyramid structure of the logic model is particularly useful to illustrate the convergence of different pathways of change into one ultimate outcome. While the pathways of change flow vertically, keep in mind that in reality there is also a dynamic, complementary, horizontal relationship between the different pathways within a logic model.

Figure 3 - Illustration of the Pyramid Structure of the Logic Model

Text version

One ultimate outcome stems from the achievement of a combination of several different intermediate outcomes.

Each intermediate outcome stems from the achievement of a combination of different immediate outcomes.

Each immediate outcome stems from the completion of a combination of different outputs.
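
The pyramid structure can be illustrated with a minimal sketch in Python that represents a logic model as nested mappings, with the single ultimate outcome at the top and outputs at the bottom. This is purely illustrative and not a Global Affairs Canada template; the first pathway reuses the well-building example from Figure 1, while the second immediate outcome and its output are hypothetical additions included only to show convergence.

# Illustrative only: one ultimate outcome, with several pathways converging on it.
logic_model = {
    "ultimate_outcome": "Improved health of women, men, boys and girls in the community",
    "intermediate_outcomes": [
        {
            "statement": "Increased usage of clean water in the community",
            "immediate_outcomes": [
                {
                    "statement": "Increased access to clean water in the community",
                    "outputs": ["Wells built according to specifications"],
                },
                {
                    "statement": "Increased knowledge of safe water handling among households",  # hypothetical
                    "outputs": ["Training on safe water handling provided to community members"],  # hypothetical
                },
            ],
        },
    ],
}

def print_logic_model(model: dict) -> None:
    """Print the pyramid from the single ultimate outcome down to the outputs."""
    print("Ultimate outcome:", model["ultimate_outcome"])
    for intermediate in model["intermediate_outcomes"]:
        print("  Intermediate outcome:", intermediate["statement"])
        for immediate in intermediate["immediate_outcomes"]:
            print("    Immediate outcome:", immediate["statement"])
            for output in immediate["outputs"]:
                print("      Output:", output)

print_logic_model(logic_model)

The nesting mirrors the convergence described above: outputs feed immediate outcomes, immediate outcomes feed intermediate outcomes, and all pathways converge on one ultimate outcome.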

Tools related to the theory of change

In Global Affairs Canada international assistance programming, three tools encompass the theory of change:

1.3 Results-Based Monitoring and Evaluation

Monitoring and evaluation have always been fundamental aspects of good project and program management. Before the introduction of Results-Based Management, projects and programs used traditional monitoring and evaluation. The difference between traditional monitoring and evaluation and results-based monitoring and evaluation is well explained in the World Bank publication Road to Results: Designing & Conducting Effective Development Evaluations:

Traditional M&E [Monitoring and Evaluation] focuses on the monitoring and evaluation of inputs, activities, and outputs (that is, on project or program implementation).

Results-based M&E combines the traditional approach of monitoring implementation with the assessment of outcomes […].

It is this linking of implementation progress with progress in achieving the desired […] results of government policies and programs that makes results-based M&E useful as a public management tool. Implementing this type of M&E system allows the organization to modify and make adjustments to both the theory of change and the implementation processes in order to more directly support the achievement of desired […] outcomes.Footnote 17

Box 17 - Controlling the Cost of an Evaluation

The cost of evaluation can be reduced substantially if monitoring information on outcomes is available.

Results-based monitoring and evaluation are distinct, yet complementary. They both require collecting data on outcomes, along with critical thinking and analysis. They both aim to provide information that contributes to learning and can help inform decisions, improve performance and achieve better results.

Results-based monitoring is a continuous process of collecting and analyzing data on indicators and using these data to assess progress on or toward the expected outcomes. It provides information on, and evidence of, a program’s/portfolio’s or project’s status at any given time (and over any given time) relative to targets for outputs and expected outcomes at all levels: immediate, intermediate and ultimate. It is descriptive in intent, in that it assesses whether change is happening. In comparison, results-based evaluation provides in-depth evidence to support a specific purpose, such as learning or accountability, or sometimes both, at a specific point in time.

Monitoring is undertaken by different actors in different ways throughout implementation. At the project level, the implementer has primary responsibility for collecting and analyzing indicator data and assessing performance and progress on or toward the expected outcomes. However, Global Affairs Canada staff also monitor projects. Global Affairs Canada’s monitoring always entails reviewing performance reports provided by the implementers, but can also include site visits, cross-referencing with other stakeholders, or hiring external monitors, depending on the type of project.

Note that some organizations refer to Results-Based monitoring and evaluation as “performance monitoring and evaluation.”

Box 18 - Definitions: Results-Based Monitoring and Evaluation

Results-based monitoring: “… the continuous process of collecting and analyzing information on key indicators and comparing actual results with expected results in order to measure how well a project, program or policy is being implemented. It is a continuous process of measuring progress towards explicit short-, intermediate-, and long-term resultsFootnote 18 by tracking evidence of movement towards the achievement of specific, predetermined targets by the use of indicators. Results-based monitoring can provide feedback on progress (or the lack thereof) to staff and decision makers, who can use the information in various ways to improve performance.”Footnote 19

Evaluation: “Evaluation is the systematic and objective assessment of an on-going or completed project [or part of], programme or policy, its design, implementation and results”Footnote 20. “In the development context, evaluation refers to the process of determining the worth or significance of a development [initiative].”Footnote 21

Results-based Monitoring enables management

Measuring outputs and outcomes through monitoring becomes essential during implementation. Collecting and analyzing indicator data on a regular basis empowers managers and stakeholders with real-time information allowing them to assess the progress on and toward the achievement of outcomes. This helps identify strengths, weaknesses and problems as they occur, and enables program/portfolio and project managers to adapt and take timely corrective action during implementation. This in turn increases the likelihood of achieving the expected outcomes.

This cycle of performance measurement, evidence-based assessment of progress, learning and adjustment is what makes Results-Based Management / Managing for Results an adaptive management approach. It is not only an indicator data collection or reporting exercise.
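
As a minimal sketch of this cycle, the following Python example expresses progress toward an outcome target as the share of the distance from baseline to target covered so far, and flags whether adjustment may be needed. The indicator, baseline, target, collected values and straight-line expectation are all hypothetical assumptions for illustration; this is not a departmental tool or a prescribed formula.

def progress_toward_target(baseline: float, actual: float, target: float) -> float:
    """Share of the distance from baseline to target achieved so far (can exceed 1.0)."""
    if target == baseline:
        raise ValueError("Target must differ from baseline to measure progress.")
    return (actual - baseline) / (target - baseline)

# Hypothetical indicator: "% of households in the community using a clean water source"
baseline, target = 40.0, 80.0                # value at project start / expected value at project end
actuals_by_year = {2023: 46.0, 2024: 70.0}   # values collected through results-based monitoring

project_years = 3  # hypothetical three-year project starting in 2022
for year, actual in sorted(actuals_by_year.items()):
    share = progress_toward_target(baseline, actual, target)
    expected_share = (year - 2022) / project_years   # naive straight-line expectation, for illustration only
    status = "on track" if share >= expected_share else "review and adjust"
    print(f"{year}: {share:.0%} of the way from baseline to target -> {status}")

In practice, judgments about whether progress is adequate rest on analysis of the full body of monitoring evidence, not on a single calculation.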

Box 19 – Manageable Monitoring System

“If the monitoring system is to be a useful management tool, it needs to be manageable. Do not overload the system with too many indicators. Otherwise, too much time will be spent managing the system that produces the data, and not enough time will be spent using the data to manage.”Footnote 22

Results-based monitoring and evaluation tools

In order to monitor and evaluate a project, it is important to establish a structured plan for the collection and analysis of performance information during project design. The tools Global Affairs Canada uses for this are:

1.4 Taking a Participatory Approach

Effective Results-Based Management requires consensus among key actors on what is to be achieved, how to achieve it, and which monitoring and evaluation strategies will best inform any adjustments required to ensure expected results are achieved. Thus, Results-Based Management requires that projects be designed, planned and implemented using a participatory approach.

What is a participatory approach?

Shared ownership

Whether your project focuses on international development, humanitarian action, advancing democracy or international security, stakeholders must have a voice in decision-making and the project must make an active effort to meet their specific needs. In other words, the project must be “based on shared ownership of decision-making.”Footnote 23 In the context of development, participatory approaches came into practice in “response to ‘top down’ approaches to development, in which power and decision-making [was] largely in the hands of external development professionals.”Footnote 24

Projects focused on advancing democracy or international security may be mandated through instruments, such as United Nations Security Council Resolutions or Compacts, that do not enable stakeholders to provide input. In such cases, it remains important that the specific structure and design of the project allow for as much shared ownership as possible in order to ensure success.

Involving the appropriate people

Taking a participatory approach means that the design teamFootnote 25 should ensure that all key stakeholders—including intermediaries and beneficiaries, both female and male—are involved and consulted throughout the project’s life cycle, from planning and design to implementation, monitoring and reporting. While a participatory approach usually requires a good deal of time and resources during the project planning and design phases, this approach yields enormous and sustainable benefits over the long term.

Allocating appropriate time and resources during the project life cycle

Appropriate time and resources should be allocated to ensure that all key stakeholders are involved in planning, joint monitoring, evaluation and decision-making throughout the project life cycle.

Using the appropriate methodologies

A participatory approach can be facilitated through many different methodologies. Project teams should choose those most appropriate to the context in which they are working. Whatever methodologies are selected, it is vital that expected outcomes and indicators be developed through a consensus building process involving all key stakeholders. Any methodology chosen must also encourage equitable and gender sensitive participation.

Why is a participatory approach important?

A participatory approach increases effectiveness

A participatory approach is integral to the success of managing for results and increases the chances of achieving and maintaining expected outcomes. Here are three reasons to use a participatory approach.

1. It expands the information base needed for realistic project planning and design.

Results identification and assessment hinges on comprehensive information collection. Bringing together the project’s key stakeholders—including intermediaries and beneficiaries—will help ensure that their knowledge, experience, needs and interests inform project design. This is essential for obtaining information about local, cultural and socio-political contexts, and about other practices, institutions and capacities that may influence the project, thus ensuring a more realistic project design.

2. It encourages local ownership and engagement.

Close collaboration and participation of beneficiaries, intermediaries and other stakeholders during both the design and implementation phases increases the likelihood that outcomes will: reflect their needs and interests; be relevant to, and realistic for, the local context or situation; and be monitored on an ongoing basis. It creates a sense of ownership of the project and its expected outcomes.

3. It makes achievement of the expected outcomes and sustainability more likely.

When beneficiaries and intermediaries are fully engaged in the design, implementation and monitoring (including data collection) of a project, the expected outcomes are more likely to be achieved in a sustainable fashion. In other words, participation increases ownership of the results achieved and makes it more likely that local people will continue to be active agents in their own development.

Global Affairs Canada has obligations under the Official Development Assistance Accountability Act

Canada’s Official Development Assistance Accountability Act came into force on June 28, 2008, and applies to all federal departments and agencies that provide official development assistance (ODA). Section 4(1) of the Act sets out three criteria for how ODA is used:

4. (1) Official development assistance may be provided only if the competent minister is of the opinion that it:

  1. contributes to poverty reduction;
  2. takes into account the perspectives of the poor; and
  3. is consistent with international human rights standards.

These criteria have specific implications for the design of Global Affairs Canada projects and the formulation of their outcomes. Guidance notes have been developed to help Global Affairs Canada staff and prospective implementers exercise due diligence in meeting the Act’s requirements (see below). The Results-Based Management requirement that projects be planned, designed and implemented using a participatory approach helps Global Affairs Canada comply with the ODAAA, particularly the criterion of taking into account the perspectives of the poor.

1.5 Integration of Gender Equality, Environmental Sustainability and Governance

Three concepts are integrated into all of Canada's international assistance policies, programs/portfolios, and projects:

  • gender equality
  • environmental sustainability
  • governance

Integrating these concepts is much more than a paper exercise. They provide a lens through which all aspects of results-based project planning, design and implementation should be viewed. Integration of these themes strengthens development and other international assistance programming by enhancing its inclusiveness, sustainability and effectiveness, which leads to better outcomes.

Gender equality

Box 20 - Global Affairs Canada’s Gender Equality Policy for Development Assistance Objectives

Gender equality results are fundamental to program effectiveness, as they ensure that women and men receive the tailored support they need to achieve similar outcomes. This is why Global Affairs Canada has a policy on gender equality. Global Affairs Canada’s Results-Based Management methodology promotes gender equality by integrating gender dimensions.

Box 21 - Women’s Empowerment

According to this policy, gender equality outcomes should be incorporated into all of Global Affairs Canada’s international development projects. The key to addressing gender equality in projects is a combination of gender equality results based on gender-based analysis; gender-sensitive indicators and targets that aim for substantial reductions in gender inequalities; and activities within the project that contribute to gender equality. Gender equality results are formulated within the outcomes of the Logic Model, ideally at the intermediate and immediate outcome levels, to address the gaps and issues identified in the project’s gender-based analysis. Developing gender equality results does not mean adding “women and men” or “including women” in an outcome statement. The gender equality result needs to explicitly demonstrate changes in gender inequalities.

Sometimes it may be necessary to have a project focus specifically on addressing gender inequalities or women's empowerment. Such a project is considered gender equality specific, and is expected to have gender equality results at all levels of its logic model, starting at the ultimate outcome level.

Projects are assessed based on their level of gender equality integration, and this informs Canada's reporting on gender equality to Canadians and internationally.

Box 22 - Examples of Gender Equality Expected Outcomes and Indicators

Ultimate Outcome: Improved living conditions, especially for women, in poor rural areas of X, Y, and Z regions in country X

Indicators:

Intermediate Outcome: Strengthened local government policy commitments and programs that respond to sexual and gender-based violence in selected rural communities in country X

Indicators:

Immediate Outcome: Strengthened abilities, including advocacy and negotiation, of civil society, especially women, to participate in democratic-management bodies in country X

Indicators:

Immediate Outcome: Strengthened knowledge and skills of staff (f/m) in institution YZ to develop gender responsive economic-development policies in country X

Indicators:

Immediate Outcome: Increased awareness on gender-equality issues among decision-makers in country X

Indicators:

Box 23 - Definitions: Gender Equality Terms Frequently Used in Outcome Statements

Gender balanced refers to promoting equal numbers of women and men in consultations, decision-making structures, and other activities and roles. Gender balance implies full participation, voice and decision-making authority for both women and men. To achieve gender balance, special measures may need to be put in place. For example, increased gender balanced participation of women and men in decision-making at the community level.

Gender equitable refers to policies, practices, regulations, etc. that ensure equal outcomes for women and men based on gender analysis. For example, strengthened gender equitable economic growth.

Gender responsive refers to an approach to programs, policies, budgets, etc. that assesses and responds to the different needs/interests of women and men, girls and boys, as well as to the different impacts projects have on them. Through gender responsive programming, gender gaps in decision-making, access, control and rights can be reduced. For example, strengthened gender responsive planning and budgeting.

Gender sensitive refers to approaches incorporating gender analysis and gender equality perspectives. It reflects an awareness of the ways people think about gender, so that individuals rely less on assumptions about traditional and outdated views on the roles of men and women. For example, gender sensitive training will challenge gender stereotypes and bias, and provide examples to ensure that women and men (girls and boys) are involved and benefit equally; e.g. enhanced gender sensitive curriculum.

Environmental sustainability

Environmental sustainability is a critical factor in poverty reduction and sustainable development. Indeed, people around the world, but particularly in developing countries, are highly dependent on the natural environment for their physical, social and economic well-being. From the necessities of life, such as water, food and air, to the supply of resources for economic growth and resilience to natural hazards, their development is directly linked to the state of the natural environment and the opportunities it offers.

Environmental sustainability should be reflected in project outcomes in all international assistance projects, as appropriate, in accordance with the Department’s Sustainable Development Strategy, the Canadian Environmental Assessment Act (2012) and the Cabinet Directive on Environmental Assessment of Policy, Plan and Program Proposals. To ensure the integration of environmental sustainability, an Environmental Integration Process is applied, which includes an environmental analysis of proposed policies and programming and the integration of appropriate environmental sustainability considerations in their design, implementation and monitoring.

Box 24 - Environmental Sustainability Integration Principles

Do no harm: Initiatives (projects) will not pollute or degrade the environment or the natural resources of partner countries.

Mitigate environment related risks: Environmental risks, including those posed by climate change, will be considered, and mitigation measures will be integrated into strategies, policies, and programming.

Capitalize on environmental opportunities: Canada will seek to capitalize on opportunities offered by the natural environment and/or emerging environment related opportunities.

The application of Global Affairs Canada’s environmental-integration process leads to the adoption of the following approaches for international development projects.

An integrated approach is applied to safeguard or enhance results and the environment through the incorporation of environmental sustainability considerations into all projects. Specific environment indicators and targets, corresponding to the environmental sustainability considerations reflected in project outcomes, must be identified.

Box 25 - Examples of Expected Outcomes and Indicators in an Integrated Approach

Intermediate Outcome: Enhanced sustainable management of healthcare facilities in district X of country Y

Intermediate Outcome: Increased adoption of more productive and sustainable agriculture practices by small-scale farmers of province X in country Y

A targeted approach is used when environment related opportunities are aimed at specifically, or when the state of environmental degradation is such that other development efforts would be compromised in the absence of targeted initiatives. With the targeted approach, specific environment outcomes, indicators and targets must be developed.

Box 26 - Examples of Expected Outcomes and Indicators in a Targeted Approach

Environmental Intermediate Outcome: Enhanced water quality of rivers in district X of country Y

Environmental Intermediate Outcome: Strengthened environmental-legal framework for the mining sector in country X

Box 27 - Examples of Other Environmental Expected Outcomes and Indicators

Intermediate Outcome: Enhanced international, regional and cross-border cooperation on water and other environmental issues in region X

Immediate Outcome: Increased capacity of trade negotiators to promote stronger environmental governance regimes in region X

Intermediate Outcome: Increased integration of appropriate* measures for environmental protection in trade agreements by government X in country X

Intermediate Outcome: Increased access by civil society to information and policy fora on government policy and decision-making on environment and natural resources in country X

It is important to note that for a project to be considered as integrating environmental sustainability, its performance measurement framework must include at least one indicator measuring the environmental sustainability dimension reflected in an outcome at the intermediate (fully integrated) or immediate (partially integrated) level.

Governance

Effective governance is about how the state, individuals, non-state actors and civil society interact to effect change, allocate resources and make decisions. The achievement of sustainable results in all sectors of international assistance depends on efficient, stable and effective governance systems, and institutions that reflect the will of the people. Strengthening governance is therefore a key means of reducing poverty, achieving sustainable development and addressing the drivers of conflict and fragility in states at various levels of development. Conversely, political instability, arbitrary use of power and policy uncertainty have significant negative effects on the sustainability of development results.

Box 28 - Examples of Governance Expected Outcomes and Indicators

Intermediate Outcome: Increased security of land tenure for low-income citizens, especially women, in region Y of country X

Intermediate Outcome: Increased effectiveness of national human rights institutions and other mechanisms in investigating and taking action on violation of child rights in country X

Immediate Outcome: Increased capacity of Civil Society Organizations (CSOs) in country X to advocate with the government at local, regional and national levels for human rights, especially lesbian, gay, bisexual and transgender (2SLGBTQI+) rights

Immediate Outcome: Increased capacity of the national bureau of statistics in country X to disaggregate data on children and youth by sex, age, household income, geographic area, ethnicity and disability status

¶¶ÒùÊÓƵ has identified governance as an important component of international assistance programming. This means that governance considerations must be reflected in project and program/portfolio situation analysis, planning and design. They should also be reflected in expected outcomes and tracked with appropriate governance indicators. Governance considerations are also key to ensuring compliance with the Official Development Assistance Accountability Act. The Act specifies that for investments to be considered as official development assistance, the minister must be of the opinion that they contribute to poverty reduction, take into account the perspectives of the poor and are consistent with international human rights standards. The two latter criteria are key to the integration of governance in international assistance programming.

Consultation with governance specialists on the integration of governance in international assistance programming can help ensure that programs are both technically sound and politically feasible. The specific objectives for the integration of governance into international assistance and other international programming are to:

To better integrate governance in international assistance programming, a governance analysis of any proposed project should be conducted by the country-program and subject-matter specialists at the earliest possible time. The purpose of this analysis is to ensure that projects across sectors strengthen governance systems and processes as part of their program results, and also address governance risks. The analysis of the governance landscape should take into account the following considerations:

Guidance on governance is available on ¶¶ÒùÊÓƵ's website; here are links to guidance notes related to compliance with the Official Development Assistance Accountability Act:

Box 29 - Key Governance Considerations

Participation and Inclusion:

Transparency and Accountability:

Efficiency and Effectiveness:

Equity, Equality and Non-Discrimination:

Capacity and Responsiveness:

Responsiveness reflects the capacity of individuals, institutions and governments to accommodate, protect and serve stakeholders within a reasonable time frame and without discrimination.

Part Two: Results-Based Management Methodologies and Tools

2.0 Introduction

¶¶ÒùÊÓƵ has adopted a set of methodologies and tools to make managing for results easier for staff, implementers and other stakeholders.

For each project, the theory of change is housed in the logic model, the outputs and activities matrix, and the theory of change narrative.

The results-based monitoring and evaluation strategy is summarized in the performance measurement framework and expanded upon in the monitoring and evaluation plan.

These tools are meant to be used throughout the entire project life cycle. They should be developed during the project planning and design phase, validated during project inception as part of the development of the project implementation plan or its equivalent, and used as management tools during implementation.

Box 30 - Iterative Tools

“It is important to remember that the logic model is not static; it is an iterative tool. As the program changes, the logic model should be revised to reflect the changes, and these revisions should be documented.”

Retrieved from the Treasury Board of Canada Secretariat.

Since Results-Based Management is an iterative approach to managing complex change and encourages a cycle of continuous improvement, these tools are living documents. As the project changes, these tools can be adjusted and modified within certain parameters (see section 4.2 for more details) to reflect the change. This cycle of improvement enables proactive management for results throughout implementation.

Before developing these tools, it is important to have a good understanding of their components. The sections below define basic components such as outcomes, outputs, activities and indicators, describe each of ¶¶ÒùÊÓƵ’s Results-Based Management tools, and explain how to use them in project planning and implementation.

On using the tools of other partners

Keep in mind that different practitioners may use different tools to display the theory of change for a specific project or program. Whereas ¶¶ÒùÊÓƵ uses the logic model, the outputs and activities matrix, the theory of change narrative and the performance measurement framework to apply Results-Based Management at the project level, other practitioners use tools such as the logical-framework analysis and results frameworks. What remains important is that once an agreement has been reached about which tools and terminology will be used, all project partners use the same tools, whatever they may be, to ensure a common understanding of project expected outcomes and overall logic.

Although the tools and terminology may vary, the underlying principles of Results-Based Management remain the same. Much of the guidance outlined below will apply regardless of the template or terminology being used. As part of their general due diligence, ¶¶ÒùÊÓƵ officers are responsible for ensuring that a proposed project design is sound and that implementers are managing for results, whatever tools they use.

2.1 Outcomes and Outputs

As discussed in section 1.2, the logic model is the tool used to visually represent the logical relationships between a project's planned outputs, immediate outcomes, intermediate outcomes and ultimate outcome. Distinguishing between outcomes and outputs is key for ensuring that results (outcomes) go beyond the products and services (outputs) rendered by implementers. Before explaining more about the logic model, this section looks at both outcomes and outputs in detail.

What is an outcome or result?

As defined in Part One:

Results are the same as outcomes. An outcome is a describable or measurable change that is derived from an initiative's outputs or lower-level outcomes. Outcomes are qualified as immediate, intermediate or ultimate; outputs contribute to immediate outcomes; immediate outcomes contribute to intermediate outcomes; and intermediate outcomes contribute to ultimate outcomes. Outcomes are not entirely within the control of a single organization, policy, program or project; instead, they are within the organization's area of influence.

How to formulate an expected outcome statement

It is short but specific

An expected outcome is formulated as a one-sentence statement. It is a brief but specific description of a realistic change you expect the beneficiary or intermediary to experience. Its specificity ensures that it communicates your exact expectations and leaves as little room as possible for interpretation, despite its short length.

It describes a change

An outcome statement articulates a specific change that a policy, program or project is expected to achieve or contribute to, stemming from ¶¶ÒùÊÓƵ’s investment in a programming activity in cooperation with others. It describes a continuum rather than a static event or state.

It is relevant

An outcome statement must be relevant to the actual needs of the country, beneficiaries and intermediaries. This can be ensured through the sustained use of participatory approaches throughout planning and implementation. An outcome statement should also be relevant to the gender-equality, environmental and governance dimensions of the issue at hand. Finally, intermediate and ultimate outcome statements should be aligned with appropriate ¶¶ÒùÊÓƵ program, branch and corporate priorities.

It follows a specific syntax

An outcome statement is phrased in the past tense and should follow a specific syntax, indicating:

Table 1 - Illustration of the Syntax Structure of an Expected Outcome StatementFootnote 26

Direction | What | Who | Where
Increased | usage of agriculture extension services | by dairy farmers, especially women farmers | in selected communities in rural Sampleland
Increased | protection of the rights of minorities | by government X | in country X
Reduced | vulnerability to transnational threats posed by international crime | for the people | in region Y
Improved | early-warning mechanism | of ministry of interior | in country Z
Increased | exportation | by small- and medium-sized enterprises, especially those led by women | in country Y
Improved | provision of sexual and reproductive services, and antenatal care to women | by health professionals | in region X

Below is an alternative order. Regardless of which you use, the expected result statement should start with the direction of the expected change.

Direction | What | Who | What | Where
Increased | access | by civil society, particularly women’s organizations | to information and policy fora on government policy and decision-making on environment and natural resources | in country X
Increased | ability | of health workers | to address the nutrition challenges of women and children, especially girls | in country Z

*Note: The “where” (or location) must be identified at the ultimate and intermediate outcome level. If the location is different at the immediate outcome level (e.g. a specific village within the province or country identified in the ultimate or intermediate outcome), it should be included in the statement. If it is not different, or the location is implicit in the “who,” it can be left out.

The syntax used by ¶¶ÒùÊÓƵ for outcome statements helps demonstrate the incremental and continuous nature of positive change expected in the context of international assistance programming.

Outcome statements start with an adjective that indicates direction (increased, improved, strengthened, reduced, enhanced, etc.), and qualifies the expected change. The placement at the beginning (“Increased usage…”) suggests the possibility of further change and improvement.

In contrast, the use of the passive voice, with the directional word placed in the middle (“…usage is increased by…”), can make it unclear who is experiencing the change, altering the meaning of the statement. Placement at the end (“…is increased”) implies that no further change is necessary. In the context of international assistance programming, it is recognized that the change brought about is always incremental and rarely conclusive (e.g. there will always be more scope to increase the use of extension services).

Moreover, the inclusion of a verb preceding the adjective (“…is increased”) draws attention to the efforts (activities) to achieve the outcome rather than to the outcomes themselves.

It is simple and it expresses only one change

An outcome statement should be simply worded and easily understood by a general audience, such as beneficiaries or the Canadian public. If any technical terms are used, they should be footnoted and defined in the logic model and/or the theory of change narrative.

Outcome statements should never include words or expressions such as “via,” “through,” “in order to,” “leading to” or “stemming from.” Their use indicates that the outcome statement contains more than one level of change, because they point to relationships across different levels of the logic model—not in a single outcome. In such cases, the statement can likely be split into two outcome statements at two different levels of the logic model.

For example, the outcome statement “Improved economic prosperity of villagers through increased opportunities in the tourism sector” is incorrect because it contains two changes at different levels: “improved economic prosperity” (an ultimate outcome) and “increased opportunities” (an intermediate outcome).

Even at the same level, outcome statements should only express one change. For example, the statement “Increased production of quality nutritious food by smallholder farmers (women and men) and sale of locally grown food by vendors in region ABC” describes two different changes at the intermediate outcome level, and should be separated into two outcomes.
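For teams that track draft outcome statements in a spreadsheet or results database, the syntax and single-change rules above can be checked mechanically. The following Python sketch is illustrative only: the word lists, field names and checks are assumptions made for this example, not an official ¶¶ÒùÊÓƵ validation tool.

```python
# Illustrative sketch only: composes an expected outcome statement from the
# Direction / What / Who / Where parts described above and flags wording that
# usually signals more than one level of change. The word lists are assumptions
# for demonstration, not an official ¶¶ÒùÊÓƵ checklist.
from dataclasses import dataclass

DIRECTION_WORDS = {"increased", "improved", "strengthened", "reduced", "enhanced"}
LINKING_WORDS = {"via", "through", "in order to", "leading to", "stemming from"}


@dataclass
class OutcomeParts:
    direction: str   # e.g. "Increased"
    what: str        # e.g. "usage of agriculture extension services"
    who: str         # e.g. "by dairy farmers, especially women farmers"
    where: str = ""  # e.g. "in selected communities in rural Sampleland"


def compose(parts: OutcomeParts) -> str:
    """Assemble the one-sentence outcome statement."""
    pieces = [parts.direction, parts.what, parts.who, parts.where]
    return " ".join(p for p in pieces if p)


def review(statement: str) -> list[str]:
    """Return warnings for the most common drafting problems."""
    warnings = []
    lowered = statement.lower()
    if lowered.split()[0] not in DIRECTION_WORDS:
        warnings.append("Statement should start with a directional adjective.")
    for word in LINKING_WORDS:
        if word in lowered:
            warnings.append(f'Contains "{word}": may combine two levels of change.')
    return warnings


example = compose(OutcomeParts(
    "Increased",
    "usage of agriculture extension services",
    "by dairy farmers, especially women farmers",
    "in selected communities in rural Sampleland",
))
print(example)
print(review("Improved economic prosperity of villagers through increased opportunities"))
```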

It is measurable

An outcome statement must be clear and specific enough to be measured. Each outcome statement should be measurable by two to three indicators, ideally by a mixture of qualitative and quantitative indicators.

It is different from indicators

Remember!

Results are the same as outcomes

With the exception of very targeted programming, such as funds set up to address specific diseases, an outcome statement should not be so specific as to be measurable only by one indicator, nor should it mimic or duplicate that indicator. For example:

Keep in mind that very targeted or “vertical” programming is mainly used when it is complemented by, or is part of, more comprehensive, holistic approaches at the community, country or regional level.

It is realistic and achievable

An outcome statement needs to capture a realistic change given the project’s scope, timeframe and budget. For example, it is not realistic to have an ultimate outcome stated as “Increased health of men, women, girls and boys in country X” if the project takes place in municipality Y of country X and targets women. In this case, such a project’s ultimate outcome might be “Improved health of women in municipality Y of country X.”

While they may well communicate high expectations and good intentions, overambitious or unrealistic statements give a false impression of what can actually be achieved in a given timeframe and with the resources available. Furthermore, they skew a project's results-based monitoring tools. For instance, if a statement commits to “increased employment,” when only “increased employability” is realistic, then the indicators developed for the overambitious statement will likely not be sensitive to changes in employability, even if they happen. As such, they will prevent the proper Results-Based Management of the project.

Table 2 - Examples of Weak and Strong Expected Outcome Statements

Weak outcome statement: Increased literacy through training programs
Issues:
  • Does not identify for whom or where the expected change will occur.
  • It contains the word “through,” which is bad practice because it combines different levels into one statement.
  • The moment an outcome or output statement includes multiple levels of change, it becomes very difficult to know what to measure when selecting indicators. You also run the risk of repeating a change already described in the level below, leading to further confusion.
Strong outcome statement: Increased literacy among men and women in selected rural communities in northern districts of country X

Weak outcome statement: Women can get maternal healthcare services
Issues:
  • Static rather than dynamic
  • Does not indicate direction of change
  • Does not identify where the expected change will occur
Strong outcome statement: Improved access to gender-sensitive maternal-healthcare services for women in rural communities in country X

Weak outcome statement: Peace in country X
Issues:
  • Not achievable in the context of one project
  • Static rather than dynamic
  • Does not specify direction of expected change, nor whom, specifically, it will affect
Strong outcome statement: Enhanced security for women, men, and children in conflict affected areas of country X

What is an output?

As defined above in Part One:

Outputs are direct products or services stemming from the activities of an organization, policy, program or project.

In the context of a project funded by ¶¶ÒùÊÓƵ, outputs are the products and services stemming from the project activities undertaken by an implementer with the project funds. If there is more than one implementer, responsibility, whether individual or shared, should be clearly established.

An output is not:

Box 31 - Whose Outputs?

In most cases, the outputs in the logic model are the products and/or services funded by ¶¶ÒùÊÓƵ. You may, however, find yourself in a situation where some of the outputs are not being funded by ¶¶ÒùÊÓƵ, but are essential to the theory of change for the project.

For example, the ¶¶ÒùÊÓƵ-funded project may be a small technical assistance component of a larger program-based approach or grant. In these cases, in order to accurately represent the theory of change to which the ¶¶ÒùÊÓƵ-funded outputs will contribute, you can choose to present in the logic model the theory of change of the entire initiative and use font, colour or other markings to highlight those outputs stemming from ¶¶ÒùÊÓƵ funds and for which the implementer will be responsible.

In other situations, you may find yourself working on a project where multiple ¶¶ÒùÊÓƵ-funded implementers are working together to deliver the project. In this case, the responsibility of each implementer at the output level can be represented by a different font or colour in the logic model.

How to formulate an expected output statement

It clearly indicates what the implementer will deliver

An output statement describes a product or service to be provided by an implementer to a specific population, group or organization (i.e. project intermediaries or beneficiaries). Output statements should be specific and detailed enough so that it is clear what product or service the implementer will provide, yet they should not attempt to cover every activity required to deliver the output.

It follows a syntax different from that of outcome statements

Since outputs are not results, an output statement is different from an outcome statement. An output statement refers to what an implementer produces or provides, as opposed to an outcome statement which describes the changes intermediaries or beneficiaries experience. It should therefore not begin by describing a change and its direction, and should avoid words such as “increased” or “improved.”

Syntax of an output statement

Remember!

Outputs are not results

Table 3 - Illustration of the Syntax Structure of an Expected Output StatementFootnote 27

What | Verb | What subject | To or for whom
Technical assistance | provided | on gender-responsive and environmentally sensitive project management | to regional government staff (f/m)
Training | provided | on trade negotiation techniques | to staff (f/m) in organization X
Technical assistance | provided | on legal instruments (e.g. laws, policies, legislations, model laws and regulations) | to personnel (f/m) in organization Y
Technical assistance | provided | on standard operating procedures | to security personnel (f/m) in ministry X

Below is an alternative order.

What | What subject | Verb | To or for whom
Technical assistance | in project management | provided | to regional-government staff (f/m)
Trade mission to Canada | on promoting trade and investment in the region | organized | for representatives (f/m) of ministries of X and firms from region Y
Training | on how to respond to sexual and other forms of gender-based violence | provided | to field-investigative teams (f/m)

An output should never be confused with a result. An obvious difference in syntax allows the reader to make these distinctions more easily.

It should be objective

Outputs should be objective and contain no subjective terms. If words are added to further qualify the product or service the output describes, the words should have a standard and commonly understood definition. The definition can be included as a footnote in the logic model.

Box 32 – Example of Objective vs. Subjective Expected Output Statements

Objective output statements:

Output statement: Awareness campaign on the availability of health services in newly rehabilitated regional health centres provided to men and women in village X

Output statement with term defined: Gender-sensitive awareness campaign* on the availability of health services in newly rehabilitated regional health centres provided to men and women in village X

* Gender-sensitive is a standard term with a commonly understood definition. In this example, gender-sensitive campaign is defined as a campaign that is designed based on gender analysis to promote equal roles for women and men in healthcare (e.g. women and men as doctors).

Subjective output statements:

Awareness campaign on the availability of health services in newly rehabilitated regional health-centres provided to appropriate members* of local communities

User friendly* computer services provided to Y and Z in city X

*Note: In both of the subjective examples above, the terms “appropriate” and “user-friendly” are subjective and do not have a standard or commonly understood definition; they can be interpreted very differently by different stakeholders, leading to ambiguity regarding the nature of the outputs (products or services) the implementer has committed to deliver under an agreement with ¶¶ÒùÊÓƵ.

It represents a completed package of activities

In the logic model, an output statement is a package of completed work. In the outputs and activities matrix, each output is broken down into its component activities. Further breakdown below the activity level to sub-activities is possible. However, sub-activities should appear only in the project work breakdown structure and not in the outputs and activities matrix. Consequently, it is important to differentiate between the output itself, activities and sub-activities.

Box 33 – Example of Outputs vs. Activities vs. Sub-activities

Output: Technical assistance in project management provided to regional-government staff.

Activities: Conduct gap analysis with regional-government staff. Develop training package. Hire trainer. Facilitate delivery of training. Conduct ongoing mentoring with selected government staff.

Sub-activities (in this example, sub-activities are listed only for the activity “hire trainer”): Develop terms of reference. Create job poster. Post advertisements. Screen applications. Conduct interviews. Select candidate. Inform candidate and negotiate salary. Draft and conclude contract.

The degree to which outputs are broken down will depend on the scope and scale of the project, and the budget associated with each output. In small projects, the breakdown to activities in the outputs and activities matrix may provide a sufficient level of detail for scheduling and budgeting. A very large project, though, may need to break down activities even further in the work breakdown structure in order to plan effectively.

Box 34 – Definition: Work Breakdown Structure

Work Breakdown Structure: The Project Management Body of Knowledge (PMBOK) describes the work breakdown structure as a “deliverable-oriented hierarchical decomposition of the work to be executed by the team.” The work breakdown structure is a key project implementation tool that can be used to expand on the outputs and activities matrix by breaking the project outputs and sets of activities into corresponding sub-activities or tasks. In other words, the work breakdown structure subdivides the various components of project implementation into lower-level components that provide sufficient detail for planning and management purposes, and tasks that people can actually perform.
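Because the work breakdown structure is a deliverable-oriented hierarchy, it can be represented as a simple tree: an output at the top, activities beneath it, and sub-activities or tasks beneath those. The sketch below is a minimal illustration of that idea using the trainer-hiring example from Box 33; the data structure and function names are assumptions, not a prescribed ¶¶ÒùÊÓƵ format.

```python
# Minimal sketch of a work breakdown structure as a tree: an output broken into
# activities, and one activity broken into sub-activities (tasks). Names are
# taken from the example in Box 33; the structure itself is an illustrative
# assumption, not a prescribed ¶¶ÒùÊÓƵ format.
from dataclasses import dataclass, field


@dataclass
class WbsNode:
    name: str
    children: list["WbsNode"] = field(default_factory=list)

    def add(self, name: str) -> "WbsNode":
        child = WbsNode(name)
        self.children.append(child)
        return child

    def print_tree(self, indent: int = 0) -> None:
        print("  " * indent + self.name)
        for child in self.children:
            child.print_tree(indent + 1)


output = WbsNode("Technical assistance in project management provided to regional-government staff")
output.add("Conduct gap analysis with regional-government staff")
output.add("Develop training package")
hire = output.add("Hire trainer")
for task in ["Develop terms of reference", "Create job poster", "Screen applications",
             "Conduct interviews", "Select candidate", "Draft and conclude contract"]:
    hire.add(task)
output.add("Facilitate delivery of training")
output.add("Conduct ongoing mentoring with selected government staff")

output.print_tree()
```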

Table 4 - Common Mistakes to Avoid with Expected Outputs

Example of mistake: Regional Chamber of Commerce established by government is functioning
Issues:
  • The implementer does not have control over government actions, such as the formal establishment and day-to-day operations of such organizations.
  • The output does not describe the specific products or services the implementer is actually expected to deliver, such as technical assistance, training or mentorship.
  • Functioning of the Regional Chamber of Commerce is evidence of a change in performance on the part of the government.
Potential correction: Technical assistance for the operationalization of the Chamber of Commerce provided to selected staff

Example of mistake:
  • Facilitator and translators hired
  • Needs assessment and capacity-gap analysis, including gender dimensions, conducted with boy and girl students, teachers and primary-school administrators
  • Local school administration holds consultations with parents and teachers
  • Gender-sensitive teacher-training programsFootnote 28 developed
  • Gender-sensitive teacher-training program delivered
Issues:
  • These outputs are detailed at the level of activity, leading to a much longer and more detailed list of outputs than necessary for the logic model.
  • These outputs also contain elements over which the implementer does not have control or that will be conducted by other actors, such as consultations by the local school administration with parents and teachers.
Potential correction: Technical assistance provided to local school administration for the participatory development of new gender sensitive teacher training programs

Example of mistake: Improved gender-sensitive community participation in the design and planning of policies through increased knowledge of consultative mechanisms such as surveys and workshops
Issues:
  • This output is actually an intermediate outcome, because it describes a change in behaviour.
  • It contains the term “improved.” Only outcome statements start with an adjective that indicates direction (increased, improved, strengthened, etc.); outputs do not.
  • It contains the word “through,” which is bad practice because it combines different logic model levels into one statement.
  • The moment an outcome or output statement includes multiple levels of change, it becomes very difficult to know what to measure when selecting indicators. You also run the risk of repeating a change already described in the level below, leading to further confusion.
Potential correction: Training in gender sensitive community consultation and participation mechanisms for policy planning and design provided to selected regional government staff

Example of mistake: 80 women in refugee camps trained in human rights
Issues:
  • This output includes a target.
  • Targets, although necessary, are not displayed in the output or outcome statement; rather, they appear in the performance measurement framework. These will be discussed in further detail in section 2.6 and section 3.4.
Potential corrections:
  • Selected women in refugee camp X trained in human rights
  • Training in human rights provided to selected women in refugee camp X

2.2 The Logic Model

A roadmap for project outcomes

Like a roadmap or a blueprint, a logic model is a visual depiction of the main elements of a theory of change for a specific project or program, reflecting the series of changes that are critical to achieving project success. It depicts the logical connections between the planned outputs and the expected outcomes (immediate, intermediate and ultimate) that the project aims to achieve or contribute to. ¶¶ÒùÊÓƵ’s logic model starts at the ultimate outcome level and now ends at the output level.Footnote 29

The logic model forms a pyramid shape with multiple complementary pathways branching off below one ultimate outcome level. Each pathway addresses a different aspect or element of the issue targeted by the project. Achievement of the ultimate outcome depends on the achievement of all outcomes along each pathway. Arrows between the levels represent assumptions (explained in the theory of change narrative) about why the outputs or outcomes from one level should lead or contribute to the changes at the next level, and about existing conditions, including risks, which may affect the achievement of the outcomes.

Remember!

The logic model is a key Results-Based Management design and management tool—not a form to fill out and then file away

Keep in mind that while the pathways of change flow vertically, in reality there is also a dynamic, complementary, horizontal relationship between the different pathways within a logic model.

The logic model is used as both a planning and design tool during the development of a project, and a management tool during project implementation.

The purpose of the logic model is to:

The work of others

Note that the logic model captures only the relationships between the outputs delivered by the project and outcomes to which they contribute. In many cases, logic model outcomes are also dependent on the work of other actors, e.g., other donors or local organizations. The work of others is not usually captured in the logic model, but it should be captured as “assumptions” in the theory of change narrative. See section 2.4 below for more information.

Logic modelling

We use the process of logic modelling to help with the further development of the theory of change. This involves creating a shared understanding of how the project will work by first reflecting on the specific situation and examining everything the design team identified and learned through the situation analysis and consultations. The design team then applies this evidence and knowledge to the exploration of different pathways that can bring about the desired change. The pathways identified as the most appropriate provide the basis for how the project will work.

The collaborative, iterative process of developing the logic model contributes to a shared understanding of the project and will help you and other members of the design team clearly envision and articulate what you want to achieve and how to go about achieving it. The logic-modelling process also helps to identify common assumptions that are made in project design, as well as risks and risk-management strategies. See section 3.3 Step 3 below for more information.

The logic model is the final product of the logic-modelling process, and should not be created outside of this process.

Standard template – logic model

¶¶ÒùÊÓƵ has a standard template for a logic model.

Logic model structure

In a ¶¶ÒùÊÓƵ logic model, an ultimate outcome (change in state, conditions or well-being of beneficiariesFootnote 31) should be supported by two or three intermediate outcomes (changes in performance, behaviour or practice) that are expected to occur in order for it to be achieved. This is because there are usually multiple changes in performance, behaviour or practice among various actors that need to occur to make the change at the ultimate outcome level possible.

Each intermediate outcome should be supported by two or three immediate outcomes (changes in capacity: skills, ability, knowledge, etc.). This is because there are usually multiple needs in terms of capacity that need to be addressed in order for a change in performance, behaviour or practice (the intermediate outcome) to occur.

Each immediate outcome should be supported by two or three outputs (direct products or services stemming from the project activities). This is because it will often take more than one product or service to bring about a change in capacity.
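If the draft logic model is kept in electronic form, the recommended branching described above (two or three supporting elements at each level) can be checked automatically. Below is a minimal sketch assuming the logic model is held as nested records; the layout, the example statements and the check itself are illustrative assumptions rather than a required structure.

```python
# Illustrative sketch: a logic model held as nested dictionaries, with a check
# that each outcome is supported by two or three elements at the level below,
# as recommended above. The layout and statements are assumptions for demonstration.
logic_model = {
    "ultimate_outcome": "Improved health of women in municipality Y of country X",
    "intermediate_outcomes": [
        {
            "statement": "Improved access to gender-sensitive maternal-healthcare services "
                         "for women in rural communities in country X",
            "immediate_outcomes": [
                {
                    "statement": "Increased capacity of health workers to provide "
                                 "gender-sensitive maternal-healthcare services",
                    "outputs": [
                        "Training on gender-sensitive maternal healthcare provided to health workers (f/m)",
                        "Technical assistance on clinic management provided to district health staff (f/m)",
                    ],
                },
                # ... a second or third immediate outcome would normally appear here
            ],
        },
        # ... a second or third intermediate outcome would normally appear here
    ],
}


def check_branching(label: str, children: list) -> None:
    """Warn when an element is supported by fewer than 2 or more than 3 elements below it."""
    if not 2 <= len(children) <= 3:
        print(f"Check: '{label[:50]}...' has {len(children)} supporting element(s); 2-3 recommended.")


check_branching(logic_model["ultimate_outcome"], logic_model["intermediate_outcomes"])
for io in logic_model["intermediate_outcomes"]:
    check_branching(io["statement"], io["immediate_outcomes"])
    for imm in io["immediate_outcomes"]:
        check_branching(imm["statement"], imm["outputs"])
```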

Logic-model parameters

One page

The logic model must not exceed one page. As the logic model is intended to be a visual depiction of the main elements of the project’s theory of change, its level of detail should be comprehensive enough to adequately describe the project but concise enough to capture the key details on a single page.Footnote 32

Remember!

Enter only one outcome per box.

If you find that the level of complexity and detail in your logic model is forcing you to go beyond one page, try the following:

If this still does not address the issue, consider using nested logic models to “unpack” different elements of the design. A Results-Based Management specialist should be consulted for guidance on nested logic models.

Numbers of outputs and outcomes

Below are the recommended minimum and maximum numbers for each type of statement in the logic model. These parameters should also help keep the logic model down to one page.

How to develop a logic model

Please refer to section 3.3 for a detailed explanation of how to develop a logic model.

2.3 The Outputs and Activities Matrix

The outputs and activities matrix is a companion to the logic model and the theory of change narrative. Together, they capture the project’s theory of change along the ¶¶ÒùÊÓƵ results chain, from the ultimate outcome to the activities and, if the outputs and activities matrix is used to develop an outcome or output-based budget, to inputs.

The outputs and activities matrix breaks down the outputs into the activities required to produce them. As defined above in Part One, activities are “actions taken or work performed through which inputs are mobilized to produce outputs.” Activity statements should begin with a verb in the present imperative tense, for example: “Conduct geological survey and water testing.”

The outputs and activities matrix is presented as a table, unlike the visual diagram of the logic model. This both saves space and allows for other types of information to be added in extra columns (more on that below). It repeats the immediate outcome and output levels from the logic model in order to facilitate cross-referencing between both documents. This also allows the reader to follow the logic of the results chain from the activities to the immediate outcome level.

Box 35 - It wasn’t like this before!

In previous guidance, the activity level was the “package of work” required to produce an output. In effect, activities were mirror images of outputs. The new approach to activities is more useful and does not waste space and time repeating similar information.

Unlike the logic model, the outputs and activities matrix has no page limit. However, we recommend keeping “parent” and “child” statements, such as all of the statements under one immediate outcome or one output, on the same page, if possible.

Standard template – outputs and activities matrix

¶¶ÒùÊÓƵ has a standard template for an outputs and activities matrix.

Outputs and activities matrix – other considerations

Work breakdown structure thinking

One way of thinking about the outputs and activities matrix is from the perspective of the work breakdown structure. The work breakdown structure expands on the outputs and activities matrix by breaking the project outputs and sets of activities into corresponding sub-activities or tasks. The activity level corresponds to the first level under the outputs in the work breakdown structure. The sub-activities or tasks would correspond to the second or even third levels in the work breakdown structure.

In keeping with a widely accepted work breakdown structure best practice, the activities must represent 100 percent of the work required to achieve the output. It is also important that an activity not be repeated under another output; if it were, the sum of project activities would represent more than 100 percent of the work actually done.

In some cases, similar types of activities can happen under more than one output (e.g. “Hire trainer….”). However, each of these activities should be differentiated under their corresponding outputs (e.g. “Hire financial-management trainer” as distinct from “Hire community-outreach trainer”).
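Both the 100 percent rule and the no-repetition rule lend themselves to a quick automated check when the outputs and activities matrix is maintained electronically. The sketch below assumes the matrix is held as a simple mapping of outputs to activity lists; the entries and the mapping format are invented for illustration.

```python
# Minimal sketch: flag activities that appear verbatim under more than one
# output, which would make the sum of activities exceed 100% of the work.
# The mapping layout and the example entries are assumptions for illustration.
from collections import defaultdict

matrix = {
    "Output 1111: Training on financial management provided to cooperative staff (f/m)": [
        "Hire financial-management trainer",
        "Develop financial-management training package",
        "Deliver financial-management training",
    ],
    "Output 1121: Community-outreach sessions provided to village committees (f/m)": [
        "Hire community-outreach trainer",
        "Develop community-outreach materials",
        "Deliver financial-management training",   # repeated by mistake
    ],
}

appearances = defaultdict(list)
for output, activities in matrix.items():
    for activity in activities:
        appearances[activity].append(output)

for activity, outputs in appearances.items():
    if len(outputs) > 1:
        print(f"Repeated activity: '{activity}' appears under {len(outputs)} outputs.")
        print("  Differentiate it under each output (e.g. name the specific trainer or topic).")
```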

What about sub-activities?

Older Results-Based Management guidance often refers to the concept of sub-activities. Briefly, sub-activities are any tasks that make up an activity. In the context of the work breakdown structure, sub-activities are simply any tasks that further break down an activity.

Neither the logic model nor the outputs and activities matrix captures sub-activities. Sub-activities are restricted to the work breakdown structure.

How many activities per output?

Two to five activities are recommended per output, but the exact number will depend on the size and nature of a specific project. Each activity must represent a task necessary for producing the output, but no activity should be a task necessary for another activity (if it were, it would become a sub-activity at the next level of breakdown in the work breakdown structure).

Other possible uses of the outputs and activities matrix

For those interested, the outputs and activities matrix could also serve as the basis for the annual work plan schedules. For example, once a project has been approved (during inception stage) the implementer could use the outputs and activities matrix format as a basis for creating an outcome-based schedule by adding columns for timelinesFootnote 33 (please see Figure 4 below for an example).

Figure 4 - Example of an Outcome-Based Schedule

Outcome-Based Schedule (timing entered under Year 1, Year 2 or Year 3, as applicable)
Immediate Outcome 1110: The outcome statement from the logic model would be entered here.
  Output 1111: The output statement from the logic model would be entered here.
    Activity 1111.1: Activity statements would be entered here. (Apr. – Jun. 20XX)
    Activity 1111.2: Activity statements would be entered here. (Jun. – Aug. 20XX)
    Activity 1111.3: Activity statements would be entered here. (Feb. – Mar. 20XX)
  Output 1112: The output statement from the logic model would be entered here.
    Activity 1112.1: Activity statements would be entered here. (Apr. – Oct. 20XX)
    Activity 1112.2: Activity statements would be entered here. (Nov. – Mar. 20XX)
Immediate Outcome 1120: The outcome statement from the logic model would be entered here.
  Output 1121: The output statement from the logic model would be entered here.
    Activity 1121.1: Activity statements would be entered here. (May – Aug. 20XX)
    Activity 1121.2: Activity statements would be entered here. (Aug. – Dec. 20XX)
    Activity 1121.3: Activity statements would be entered here. (Jan. – Mar. 20XX)
  Output 1122: The output statement from the logic model would be entered here.
    Activity 1122.1: Activity statements would be entered here. (Jan. – Mar. 20XX)
    Activity 1122.2: Activity statements would be entered here. (Jan. – Mar. 20XX)

How to develop an outputs and activities matrix

Please refer to section 3.3, Step 3 d) for a detailed explanation of how to develop an outputs and activities matrix.

2.4 The Theory of Change Narrative

Box 36 - Assumptions

It is very important, during project planning and design, to identify, validate and document your assumptions.

Research and consultation can help refute or validate assumptions. Having a design team that includes both local and non-local participants can help prevent unconscious assumptions from negatively influencing project design.

Where assumptions are intentional, they must be based on evidence and should be documented in the theory of change narrative. You should use references, quotes and evidence from your analysis and consultations to justify the assumptions made at each level of the logic model.

For example: “The assumption being made with this outcome is that A & B will lead to C. Studies conducted by… and similar initiatives in neighbouring communities demonstrate that….”

The theory of change narrative is a crucial complement to the logic model and the outputs and activities matrix. It describes the project’s theory of change and focuses on what is not explicit in the logic model and outputs and activities matrix, such as the logical links between project outcomes and the key assumptions that underpin these links. It also justifies these links, assumptions and other project-design choices with evidence and lessons learned from other initiatives or practitioners. The narrative should also address any major risks to the achievement of outcomes and describe the measures that have been—or will be—implemented to respond to them. If there are any changes to the logic model and outputs and activities matrix, the theory of change narrative may need to be updated.

The theory of change narrative can be a helpful tool for anyone new to the project to more fully understand its logic. More specifically, it can communicate the details and considerations that were raised during the situation analysis and logic-modelling process, and that cannot be communicated using the logic model’s structure. It is the only part of the project documentation that explicitly discusses assumptions, which are just as crucial to understanding the logic of the project as the expected results. A well-written theory of change narrative can also serve as a project description.

How to draft a theory of change narrative

Please refer to section 3.3, Step 3 g) for a detailed explanation of how to draft a theory of change narrative.

2.5 Indicators

Indicators are the core component of the performance measurement framework.

Box 37 - Definition: Performance Measurement Framework

A performance measurement framework is the Results-Based Management tool used to systematically plan the collection of relevant indicator data over the lifetime of the project, in order to assess and demonstrate progress made in achieving expected results. The performance measurement framework is the “skeleton” of the monitoring plan: it documents the major elements of the monitoring system in order to ensure regular collection of actual data on the performance measurement framework indicators. The performance measurement framework contains all of the indicators used to measure progress on the achievement of the project’s outcomes and outputs. In addition, it specifies who is responsible for collecting data on the indicator, from what source, at what frequency and with what method. It also includes the baseline data and target for each indicator.

See section 2.6 Performance Measurement Framework for more information.

Box 38 - Definition: Indicator

Indicator: An indicator, also known as a performance indicator, is a means of measuring actual outcomes and outputs. It can be qualitative or quantitative, and is composed of a unit of measure, a unit of analysis and a context. Indicators are neutral; they neither indicate a direction of change, nor embed a target.

It is important that the stakeholders agree beforehand on the indicators that will be used to measure the performance of the project.

Quantitative indicators

Box 39 - Example of Quantitative Indicators

#/total children (f/m, age group and rural/urban) living within a one-hour walk of a provincially-funded public school

%/total children aged 6-15 (f/m and rural/urban) that have been immunized against influenza

#/total of national-investigative agencies with whom contact and cooperation have been established

%/total of individual citizens trained who report changes in their media consumption habits one month after participating in the propaganda-proof training (disaggregated by sex, age, province)

# of human rights violations reported (by women / by men)

Ratio of women-to-men in decision-making positions in the government

#/total of small-scale farmers (f/m, region) who have used extension services in the past year

%/total of women-owned businesses represented in trade fairs

Qualitative indicators

Note: There has been much debate regarding the value of quantitative data and that of qualitative information and whether quantitative measures (or indicators) are better than qualitative ones. This debate is now almost settled in the evaluation field with the growing usage of mixed methods. Practitioners have abandoned the idea that these sources of information are irreconcilable: both types of information are necessary. In fact, all quantitative measures are based on qualitative judgments and all qualitative measures can be coded and analyzed quantitatively.

To adequately assess the achievement of results, an officer/manager needs both quantitative and qualitative measures. For example, it is not enough to know how many women are participating in an activity; the quality of their participation and experience must also be captured to provide a full picture.

Because it is difficult to organize qualitative data for comparison or analysis, qualitative indicators should be quantified wherever possible. This can be done by using a scale, for example, “level of confidence (1-4 scale) of farmers (f/m) in the security of roads leading to local market”.

Box 40 – Example of a Qualitative Indicator with Scale

A project has, as one of its immediate outcomes, “Increased understanding of business application legislation by SMEs* in region Y of country X”.

Through consultation, it was decided that this would be measured in part by the following indicator: "%/total SMEs reporting “substantial” or “comprehensive” understanding of business application legislation (4 or 5 on a five-point scale)."

The baseline survey showed that 20% of SMEs (6 out of 30) reported that they had “substantial” or “comprehensive” understanding of the legislation. A survey conducted six months later showed that 50% of SMEs (15 out of 30) reported a “substantial” or “comprehensive” understanding of the legislation.

*Small- and Medium-sized Enterprises
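The arithmetic behind Box 40 is simply the share of respondents scoring 4 or 5 on the five-point scale. The short sketch below reproduces that calculation; the individual survey responses are invented so that they add up to the 20% baseline and 50% follow-up figures in the box.

```python
# Illustrative sketch of the Box 40 calculation: the share of SMEs reporting
# "substantial" (4) or "comprehensive" (5) understanding on a five-point scale.
# The response lists are invented to reproduce the 20% baseline and 50% follow-up.
def share_at_or_above(responses: list[int], threshold: int = 4) -> float:
    """Return the %/total of responses at or above the threshold."""
    meeting = sum(1 for r in responses if r >= threshold)
    return 100 * meeting / len(responses)

baseline = [4] * 6 + [3] * 14 + [2] * 10             # 6 of 30 SMEs at 4 or 5 -> 20%
follow_up = [5] * 7 + [4] * 8 + [3] * 10 + [2] * 5   # 15 of 30 SMEs at 4 or 5 -> 50%

print(f"Baseline: {share_at_or_above(baseline):.0f}% of SMEs")            # 20%
print(f"Six months later: {share_at_or_above(follow_up):.0f}% of SMEs")   # 50%
```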

Box 41 – Example of other Qualitative Indicators

#/total of local independent journalists (f/m) who plan to continue cross-border investigations beyond the life of the project

%/total individuals (f/m) who felt that they were completely or mostly able to participate in democratic management bodies

Remember!

Proper disaggregation of data is vitally important to the usefulness of the data collected.

Structure of a performance indicator

Performance indicators are composed of three elements: a unit of measure, a unit of analysis and a context.

The unit of measure is the first element of the indicator: number, percentage, level, ratio, etc. It is important to include in the unit of measure the notion of proportionality, by ensuring that it contains both a numerator and a denominator. This is often expressed by stating the unit of measure as number out of total (#/total) or percentage out of total (%/total).

The unit of analysis is who or what will be observed: individuals, institutions, social artifacts or social groups. The type of unit of analysis will determine whether the data will need to be disaggregated by sex, age, ethnicity, rural/urban setting, socio-economic status, ownership or any other category relevant to the project or program. This disaggregation is vitally important to the usefulness of the data collected. For example, it is impossible to measure changes in women’s access to basic services if the data collected during project monitoring does not disaggregate by sex. Similarly, a project that aims to improve the health of a specific marginalized ethnic group through rehabilitating and staffing remote regional health centres would need those centres to collect patient information in a way that allows disaggregation by ethnicity.
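In practice, disaggregation means keeping the relevant categories attached to each monitoring record so that totals can be broken down later. The sketch below illustrates this with invented records and field names; it is not a ¶¶ÒùÊÓƵ data standard.

```python
# Illustrative sketch of disaggregating indicator data. Each monitoring record
# keeps its categories (sex, setting) so that counts can be broken down later.
# Field names and values are invented for demonstration.
from collections import Counter

records = [
    {"used_extension_services": True,  "sex": "f", "setting": "rural"},
    {"used_extension_services": True,  "sex": "m", "setting": "rural"},
    {"used_extension_services": False, "sex": "f", "setting": "urban"},
    {"used_extension_services": True,  "sex": "f", "setting": "rural"},
    {"used_extension_services": False, "sex": "m", "setting": "urban"},
]

used = Counter(
    (r["sex"], r["setting"]) for r in records if r["used_extension_services"]
)
totals = Counter((r["sex"], r["setting"]) for r in records)

for group, total in totals.items():
    print(f"{group}: {used.get(group, 0)}/{total} used extension services")
```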

Table 5 - Unit of Analysis by Type

Individuals (female and male): Trainees, teachers, journalists, publishers, elected/appointed representatives, senior government officials, citizens, entrepreneurs, participants, law enforcement officials, judges, police, inspectors, persons with disabilities, indigenous children, trade officials, refugees, etc.
Institutions: Government departments, human rights commissions, state institutions, private-sector institutions, peace and security institutions, law-enforcement institutions, executive bodies (i.e. prime minister’s office, cabinet), chambers of commerce, non-governmental organizations, community-based organizations, businesses, etc.
Social artifacts: "A social artifact is any product of social beings [individuals/groups] or their behavior. Examples include: books, newspapers, paintings, poems ... songs, photos, etc."Footnote 34 Other examples could include: budgeting and reporting systems, arrests, codes of law, standard operating procedures, manuals, dialogue/forums, policies, official reports, maps, etc.
Social groups with shared defined characteristics: Social groups could include professional groups, nationalities, ethnicities, or groups sharing socio-economic conditions. For example: National Association for Pediatrics, local religious association, media associations, bar associations, veterans associations, provincial college and university association, etc.

The context is the set of circumstances that specify the particular aspect of the output or outcome that the indicator is intended to measure. For example, if the expected outcome is "Improved access to government-funded primary schools for girls and boys of province X in country Y", and it has been determined that one way to measure progress is to see how many children live within a certain distance from a publicly funded school, then the context could be “living within a one kilometre walk of a provincially-funded primary school.”

Table 6 - Illustration of the Structure of a Performance Indicator

Unit of Measure | Unit of Analysis | Context
#/total | girls and boys aged 6-11 (disaggregated by rural/urban setting) | living within a one-km. walk of a publicly-funded primary school
Level of confidence (on a five-point scale) | of rural farmers (f/m) | in the security of police-patrolled rural roads leading to and from markets
%/total | health institutions (public/private) | providing gender sensitive services to ethnic populations in their language of choice
%/total | of individual citizens trained (disaggregated by sex, age, and provinces) | reporting change in media consumption habits one month after participating in the propaganda-proof training
# | of policy proposals passed | that create conditions for national reconciliation in conflict zones
Ratio | of women to men | in decision-making positions in the government
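Because every performance indicator is built from the same three elements, it can be stored in that structured form and its text generated from the parts, which also makes gaps such as a missing denominator or missing disaggregation easier to spot. The sketch below is a minimal illustration under that assumption; the class and field names are invented.

```python
# Minimal sketch: a performance indicator stored as its three elements
# (unit of measure, unit of analysis, context), with the indicator text
# assembled from them. The class and field names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Indicator:
    unit_of_measure: str   # e.g. "#/total", "%/total", "Level of confidence (1-5 scale)"
    unit_of_analysis: str  # who or what is observed, with disaggregation noted
    context: str           # the aspect of the outcome or output being measured

    def text(self) -> str:
        return f"{self.unit_of_measure} {self.unit_of_analysis} {self.context}"


indicator = Indicator(
    unit_of_measure="#/total",
    unit_of_analysis="girls and boys aged 6-11 (disaggregated by rural/urban setting)",
    context="living within a one-km walk of a publicly-funded primary school",
)
print(indicator.text())
```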

Leading, lagging and coincident indicators

We generally use indicators to measure progress on outcomes in the logic model. Sometimes, however, you may also want to measure the assumptions articulated in the theory of change narrative and represented by the arrows in your logic model. In this case, you can use “leading” indicators to measure things preceding the change or “lagging” indicators to measure things that follow the change. Data on these indicators can help validate the assumptions. As explained above, at each level in the logic model, we are making assumptions. Leading and lagging indicators allow us to track those assumptions by measuring a little lower or a little higher than the actual outcome itself, without actually measuring the next level in the logic model.

Ideally, indicators would always measure things that directly coincide with the changes described in the expected outcomes of your logic model. In some cases it may be difficult or impossible to find such “coincident” indicators. In these cases, you can also use "leading" or "lagging" indicators.

The concept of leading, lagging, and coincident indicators is borrowed from the business cycle in economics. The following example of a traffic light is helpful to further explain the concept.

Box 42 – Definitions: Leading, Lagging and Coincident Indicators

Definitions adapted from Investopedia.

Leading Indicator: These types of indicators signal future events. Think of how the amber traffic light indicates the coming of the red light, letting you know that very soon, you will not be able to go through the intersection. In international programming, leading indicators work the same way but, of course, are less accurate than traffic lights. For example, # of new schools established and # of additional teachers recruited can be leading indicators of increased access to basic education. They measure something that happens before classes start, and thus they should give you a good idea of future access to education for children (though not always).

Lagging Indicator: A lagging indicator is one that follows an event. In the traffic light example, the amber light is a lagging indicator of a safe crossing situation. It tells you that, just before it came on, it was safe to go through the intersection. The importance of a lagging indicator is its ability to confirm that a pattern has occurred. For example, # of students graduating from primary school can be a lagging indicator of increased access to basic education, as more students graduating is typically associated with increased enrollment in schools.

Coincident Indicator:  Coincident indicators occur at approximately the same time as the conditions they signify. In the traffic light example, the green light would be a coincident indicator of the possibility of driving through the intersection safely. Rather than predicting future events, these types of indicators change at the same time as the expected outcome. For example, enrollment rates are a good coincident indicator of increased access to basic education, as increased enrollment rates should coincide with an increase in access.

Types of changes measured by indicators

Each indicator can be classified according to what level it measures in the logic model: outputs, immediate outcomes, intermediate outcomes and ultimate outcomes.

Criteria of a strong performance indicator

1. Validity: Does it measure what it is intended to measure?

2. Reliability: Will it be consistent over time?

3. Sensitivity: Will it measure changes as they happen?

4. Simplicity: How easy will it be to collect the data?

5. Usefulness: Will the information collected be useful for decision-making?

6. Affordability: Do you have the resources to collect data?

Other Considerations

As part of the Paris, Accra and Busan high-level forums on aid effectiveness, Canada has committed to making increased use of existing country systems for monitoring. For this reason, ¶¶ÒùÊÓƵ encourages project officers and project partners to use monitoring systems or indicators that may already be in place in the partner country.

Selecting an indicator that respects each of the criteria above can be challenging. Time, resources and other restrictions often mean settling for what is realistic rather than ideal. Choose performance indicators that provide the best possible measurement of the outcomes achieved within the budget available and wherever possible use existing data sources and collection methods. Look for a balance between rigour and realism. In the end, the most important indicator criterion is that you actually collect data for it.

2.6 The Performance Measurement Framework

At ¶¶ÒùÊÓƵ, the performance measurement framework is the Results-Based Management tool used to systematically plan the collection of relevant indicator data over the lifetime of the program/portfolio and project, in order to assess and demonstrate progress made in achieving expected results. The performance measurement framework is the “skeleton” of the monitoring plan: it documents the major elements of the monitoring system in order to ensure regular collection of actual data on the indicators identified in the performance measurement framework. The performance measurement framework contains all of the indicators used to measure progress on or toward the achievement of the program/portfolio’s and project’s expected outcomes and outputs. In addition, it specifies who is responsible for collecting data on the indicator, from what source, at what frequency and with what method. It also includes the baseline data and target for each indicator.

As with the logic model, the performance measurement framework should be developed and/or assessed in a participatory fashion with the inclusion of local partners, intermediaries (duty bearers / responsibility holders), beneficiaries (rights holders) and other stakeholders, and relevant ¶¶ÒùÊÓƵ staff.

Using the Performance Measurement Framework for Management

The performance measurement framework facilitates managing for results during program/portfolio and project implementation. It provides a plan for the collection of data during implementation. The actual data collected on the indicators identified in the performance measurement framework, and the program/portfolio and project team’s analysis of this data, allow the team to assess progress and detect issues that may interfere with the achievement of expected outcomes early enough to take corrective action or make adjustments. An operationalized performance measurement framework is thus necessary for evidence-based program/portfolio and project management decision-making. Of course, this can only be done if there is a basis for comparison. For this reason, it is always necessary to capture baseline data and to set targets in the performance measurement framework. Remember: without knowing where you started and where you want to go, it is impossible to properly assess progress.

In sum, the performance measurement framework will help you:

The data collected on the performance measurement framework indicators will help you:

Standard template – performance measurement framework

¶¶ÒùÊÓƵ has a standard template for a performance measurement framework.

Content of the performance measurement framework

The performance measurement framework is divided into eight columns: expected results, indicators, baseline data, targets, data sources, data collection methods, frequency, and responsibility. To complete a performance measurement framework, you will need to fill in each of the columns accurately.
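For teams that also keep their performance measurement framework in a spreadsheet or database, the eight columns can be thought of as the fields of one record per indicator. The short sketch below is purely illustrative (Python, with hypothetical field names and example values drawn loosely from Box 44 further below); it is not a ¶¶ÒùÊÓƵ template.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PMFRow:
    """One row of a performance measurement framework: one indicator for one expected result."""
    expected_result: str        # output or outcome copied verbatim from the logic model
    indicator: str              # the performance indicator for that result
    baseline: Optional[str]     # value at the outset (None until the baseline study is done)
    target: str                 # end-of-project target (annual targets may be added)
    data_source: str            # who or what the data will come from
    collection_method: str      # how the data will be gathered
    frequency: str              # how often the data will be collected (e.g. annual, midterm)
    responsibility: str         # role responsible for collection (a title, not a person's name)

# Illustrative example row:
row = PMFRow(
    expected_result="Improved equitable access to clean drinking water in region Y",
    indicator="% of single-parent households (f/m) within a one-km walk of a potable well",
    baseline="2012: 5% of 2000 female-headed and 15% of 75 male-headed households",
    target="2017: 65% of female-headed and 65% of male-headed households",
    data_source="Household survey (beneficiaries)",
    collection_method="Baseline survey, then annual panel survey",
    frequency="Annual",
    responsibility="Project monitoring officer",
)
print(row.indicator)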

Expected results (first column)

This column of the performance measurement framework simply reflects the outputs and outcomes of the logic model. It is critical that any changes made in one document be reflected in the other, so that the outputs and outcomes identified in the performance measurement framework and logic model match at all times during the life of the program/portfolio and project. See section 2.1 and section 2.2.

Indicators (second column)

Indicators must be identified for each outcome and output of the logic model. See section 2.5.

How many outcome indicators?

For each outcome, select two to three indicators (you may include more if needed). Include at least:

Indicator(s) should measure specific dimensions of an outcome, such as access or quality, including gender inequalities or environmental sustainability, as applicable. Whether additional indicators should be qualitative or quantitative depends on the specific dimension of an outcome you want to measure. Note that wherever qualifiers such as transparent, participatory, effective, equitable or sustainable are added to an outcome statement, they need to be measured. In other words, ensure the indicators selected measure each element of the outcome statement. This may mean that more than three indicators are required.

Box 43 – Definition: Triangulation

Triangulation: “The use of three or more theories, sources or types of information, or types of analysis to verify and substantiate an assessment. Note: by combining multiple data sources, methods, analyses or theories, evaluators seek to overcome the bias that comes from single informants, single methods, single observer or single theory studies.”

Additional indicators could also contribute to triangulation, which is the process of gathering information on the same issue from multiple sources. Multiple lines of evidence increase the reliability of data. For example, you will have a more complete picture of the quality of services if you ask service users, talk to service providers, and check service records.

This helps to better validate outcomes, while keeping the overall number of indicators manageable. Remember that it is the cumulative evidence of data collected on a cluster of indicators that managers examine to see if their projects and programs are making progress. No outcome should be measured by just one indicator.
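As a purely illustrative sketch of what triangulation looks like in data terms (Python; the figures, source names and the divergence threshold are invented), the snippet below compares estimates of the same indicator gathered from three independent lines of evidence before settling on a single reported figure.

# Hypothetical estimates of one indicator ("% of users rating clinic services adequate"),
# each gathered from a different line of evidence.
estimates = {
    "service-user exit survey": 62.0,
    "service-provider interviews": 70.0,
    "clinic service records": 65.0,
}

values = list(estimates.values())
average = sum(values) / len(values)
spread = max(values) - min(values)

print(f"Triangulated estimate: {average:.1f}% (spread across sources: {spread:.1f} points)")

# A wide spread (the 15-point threshold here is arbitrary) signals that the sources disagree
# and that the data, or the collection methods, deserve a closer look before reporting.
if spread > 15:
    print("Sources diverge noticeably - investigate before reporting a single figure.")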

How many output indicators?

For each output, select one to two indicators (you may include more if needed).

An output indicator can measure different aspects of a product or service, for example:

Baseline data (third column)

Box 44 – Example: An Indicator with its Baseline and Targets

Indicator: Percentage out of total single-parent households (f/m) in region Y living within a one-km. walk on maintained paths of a potable well.

Baseline: In 2012, 5% out of 2000 single female-headed households and 15% of 75 single male-headed households in region Y live within a one-km. walk on maintained paths of a potable well.

Target, First year of Project/Year one (2013): 15% out of 2000 single female-headed households and 20% out of 75 single male-headed households in region Y live within a one-km. walk on maintained paths of a potable well.

Target, End of Project/Year 5 (2017): 65% out of 2000 single female-headed households; 65% out of 75 single male-headed households in region Y live within a one-km. walk on maintained paths of a potable well.

Note: In this example, the Year 5 target is considered realistic given that the percentage was low to begin with (as identified in the baseline study) and that some communities in region Y are very remote and potentially difficult to work in. The disaggregation by head of household will provide important information that will be factored into selecting locations for the wells that will benefit all types of households.

Baseline data provides a specific value for an indicator at the outset of a project or program. Baseline data is collected at one point in time, and is used as a point of reference against which progress on the achievement of outcomes will be measured or assessed.

It is required in order to establish realistic, achievable targets. Baseline data is needed for each performance indicator in the performance measurement framework, and should be disaggregated by sex, ethnicity, age, socio-economic status or any other category relevant to the indicator.
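To see what baseline data contributes in concrete terms, the short calculation below (a sketch in Python, using only the figures given in Box 44) converts the baseline and end-of-project percentages into numbers of households, which is often how a target's realism is judged.

# Figures taken from Box 44 (single-parent households in region Y).
totals = {"female-headed": 2000, "male-headed": 75}
baseline_share = {"female-headed": 0.05, "male-headed": 0.15}   # 2012 baseline
target_share = {"female-headed": 0.65, "male-headed": 0.65}     # Year 5 (2017) target

for group, total in totals.items():
    at_baseline = round(baseline_share[group] * total)
    at_target = round(target_share[group] * total)
    print(f"{group}: {at_baseline} households at baseline -> {at_target} at end of project "
          f"(+{at_target - at_baseline} additional households)")

# Output: female-headed: 100 -> 1300 (+1200); male-headed: 11 -> 49 (+38).
# The disaggregation makes clear that most of the additional coverage must reach
# female-headed households.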

When should it be collected?

Baseline data should be collected before project implementation. Ideally, this would be undertaken during project design. However, if this is not possible, baseline data must be collected as part of the inception stage of project implementation in order to ensure that the data collected corresponds to the situation at the start of the project, not later. The inception stage is the period immediately following the signature of the agreement, and before the submission of the Project Implementation Plan (or equivalent).

Targets (fourth column)

A target specifies a particular value, or range of values, that you would like to see in relation to one performance indicator by a specific date in the future. Together, the targets established for the various indicators of a specific expected outcome will help you determine the level of achievement of that outcome. Targets should be set in light of baseline data to ensure that they, in fact, are a good measure of achievement. Without this information, there is a risk of setting unrealistic targets or even of setting targets that are too easily, or already, achieved. In the performance measurement framework, the target column must show end of project targets, but annual targets can also be included. Ideally, annual targets should continue to be updated through the annual work plan process.

Targets provide tangible and meaningful points of discussion with intermediaries, beneficiaries, and other stakeholders. If key targets are missed, it is a signal for stakeholders and managers to collectively analyze how and why plans or strategies have gone off track, how they could be brought back on track, and then take corrective measures in constructive and mutually supportive ways so that outcomes are achieved.

Targets should never be embedded in expected outcome statements

Targets should only appear in the performance measurement framework and not be included in the expected outcome statements themselves. At the planning stage, targets are often indicative until a project implementation plan or a first annual work plan has been approved. They can also be adjusted within reason as part of sound management for results during the life of the project. This is one of the reasons that targets should not be embedded in the expected outcome statements.

Moreover, excluding targets from the expected outcome statements allows the theory of change to stand alone. The logic model does not need to be adjusted if a target is adjusted, and can even be replicated across similar programming and subsequent phases where the same theory of change could apply. In these cases, context-specific indicators and targets can be established separately.

What to keep in mind when developing targets

Data sources (fifth column)

Data sources are the individuals, organizations or documents from which data about your indicators will be obtained. The implementer will need to identify data sources for indicators. Data sources can be primary or secondary.

Primary data will always be project specific. Using secondary data, when relevant to your indicators and outcomes, can help the project save funds and generate synergies with partner country systems, other projects, or between donors/organizations.

Table 7 - Examples of Data Sources

Primary
  • Beneficiaries
  • Intermediaries

Secondary
  • Financial market data
  • Demographic health survey data
  • UNICEF Multiple Indicator Cluster Survey data
  • Human Development Report
  • Global Peace Index
  • Stockholm International Peace Research Institute military expenditure data
  • Amnesty International human rights reports
  • International Crime Victims Survey
  • United Nations Comtrade Database
  • United Nations Human Rights Council and Universal Periodic Review reports (UN-UPR)
  • Freedom House’s Freedom in the World report
  • Ibrahim Index of African Governance
  • Transparency International’s Corruption Perceptions Index

Data collection methods (sixth column)

Data collection methodsFootnote 35 represent how data on indicators are collected. Choosing a data collection method depends on the type of indicator and the purpose of the information being gathered. Data collection methods can be informal and less structured, or more formal and more structured. Different methods involve “trade-offs with respect to cost, precision, credibility and timeliness.”Footnote 36

When choosing data collection methods, it is important to ensure that those who will be using the performance information, including ¶¶ÒùÊÓƵ, are comfortable with the trade-offs that stem from the collection methods chosen, and thus the type of performance information they will be receiving.Footnote 37 Data sources and collection methods should be established by implementers in collaboration with stakeholders and with support from monitoring/evaluation specialists.

The figure below illustrates some possible data collection methods. “The more structured and formal methods for collecting data generally tend to be more precise, costly and time consuming.”Footnote 38 If your indicators are disaggregated (by age, sex, ethnicity, etc.), it is necessary to ensure that the related data collection methods can indeed enable the collection of disaggregated data.

Figure 5 – Data Collection Methods

Text version

This diagram illustrates Data Collection Methods that can range from informal and less-structured methods, such as:

  • conversation with concerned individuals;
  • community interviews;
  • field visits;
  • reviews of official records (management information system and administrative data);
  • key informant interviews;
  • participant observation; and
  • focus group interviews.

To formal and more-structured methods, such as:

  • direct observation;
  • questionnaires;
  • one-time survey;
  • panel surveys;
  • census; and
  • field experiments.

Source: World Bank, Ten Steps to a Results-Based Monitoring and Evaluation System, p. 85.

Choosing a data collection method depends on:

Data collection methods should not be chosen in an ad hoc manner. They should be carefully selected as part of the indicator development process, while recognizing associated costs and limitations. In fact, the identification of data collection methods and data sources can help with the selection and validation of realistic and affordable performance indicators.
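When indicators are disaggregated, the disaggregation has to be carried from the collection instrument right through to the analysis. As a minimal sketch (Python, with invented survey records), the snippet below derives a sex-disaggregated value for the well-access indicator from raw household responses.

# Invented survey records: one per household, flagged by sex of household head
# and whether the household is within a one-km walk of a potable well.
records = [
    {"head": "female", "within_1km_of_well": True},
    {"head": "female", "within_1km_of_well": False},
    {"head": "female", "within_1km_of_well": True},
    {"head": "male",   "within_1km_of_well": False},
    {"head": "male",   "within_1km_of_well": True},
]

for sex in ("female", "male"):
    group = [r for r in records if r["head"] == sex]
    with_access = sum(r["within_1km_of_well"] for r in group)
    share = 100 * with_access / len(group)
    print(f"{sex}-headed households within 1 km of a potable well: "
          f"{share:.0f}% ({with_access} of {len(group)})")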

Selecting appropriate data collection methods and sources

Table 8 - Additional Data Collection Methods

  • Conduct case studies
  • Record testimonials
  • Review diaries and journals
  • Take photos and videos
  • Review logs
  • Review reports or documents

Frequency (seventh column)

Frequency looks at the timing of data collection: how often will information for each indicator be collected or validated? Will information for a performance indicator be collected regularly (quarterly or annually) as part of ongoing management for results and reporting, or at specific times during the project cycle, such as at midterm or end of project?

Considerations for deciding how frequently to collect data for a performance indicator include:

Responsibility (eighth column)

Responsibility refers to who is responsible for collecting the data for indicators in the performance measurement framework. It is important to be specific when identifying the responsible actors in the performance measurement framework. Use a title or role rather than the name of an individual (for example, field officer, gender expert, project manager, etc.).

How to develop a performance measurement framework

Please refer to section 3.4 for a detailed explanation of how to develop a performance measurement framework.

2.7 The Results-Based Monitoring and Evaluation Plan

A results-based monitoring and evaluation plan is a detailed plan that expands on the performance measurement framework and specifies the logistics, budgets and other operational details of data collection and analysis. It is important to note that the performance measurement framework, while being the “skeleton” of the plan for the systematic collection of data, does not contain enough information to guide the implementation of a monitoring system. A preliminary results-based monitoring and evaluation plan should be developed before the project is submitted for approval, so that required resources are taken into consideration during the budgeting process. The monitoring and evaluation plan can be finalized by the implementer as part of the project implementation plan or equivalent.

Monitoring

The results-based monitoring and evaluation plan should establish specific monitoring activities, responsibilities (for collection, analysis and storage of the data) and timelines. It should provide a detailed explanation of the data collection tools identified in the data collection methods column of the performance measurement framework. This includes describing how samples will be selected and how data will be analysed, captured, stored and used. It should highlight any expected challenges related to the collection and analysis of data (including baseline data) and target setting, as well as outline strategies for addressing these challenges. Finally, it should also commit specific financial and human resources to these results-based monitoring activities, which should be reflected in the project budget. This may involve the hiring of a project monitor, the allocation of dedicated project staff and financial resources to monitoring, and the establishment of a monitoring system to collect data on the output and outcome indicators in the performance measurement framework.

Evaluation

The results-based monitoring and evaluation plan should specify any evaluations to be undertaken and ensure that sufficient project resources are set aside. Evaluations may be commissioned by ¶¶ÒùÊÓƵ, the implementer, or jointly with the implementer or other stakeholders. An evaluability assessment may also be included in the plan. See section 3.4, Step 4 f) for more details.

Synergy between monitoring and evaluation

There are significant opportunities for synergy between monitoring and evaluation, which can translate into significant savings in data collection at midterm and at the end of the project. See section 3.4, Step 4 f) for more details.

Part Three:
Step-by-Step Instructions

3.0 Introduction

There are four main steps to results-based project planning and design.

Figure: The four main steps of results-based project planning and design
Text version

Step 1: Identify design team and stakeholders

Step 2: Conduct situation analysis

Step 3: Develop theory of change, including LM, OAM and narrative

Step 4: Develop PMF and M&E plan

Part three of this guide presents steps to help project teams understand the processes or techniques used to develop ¶¶ÒùÊÓƵ’s Results-Based Management tools.

Step 1: Identify design team and stakeholders.

The composition of the design team can have a significant impact on the quality of project design. As outlined in section 1.4, a gender equitable, participatory approach to project planning and design can yield tremendous benefits.

Step 2: Conduct situation analysis.

The description of what Results-Based Management entails starts with “Results-Based Management means: defining realistic expected results based on appropriate analyses […]” (emphasis added). Situation analysis is therefore a fundamental step in results-based project planning and design.

Step 3: Develop theory of change, including logic model (LM), outputs and activities matrix (OAM) and narrative.

This step focuses on how to determine a project's expected outcomes and the means to achieve them, and how to document the assumptions that are being made and the external factors and risks that may influence the achievement of the outcomes.

Step 4: Develop a performance measurement framework (PMF) and a results-based monitoring and evaluation (M&E) plan.

The final step of results-based project planning and design is the development of tools that will enable the gathering and analysis of the information needed for proper Results-Based Management of the project throughout its implementation.

3.1 Step 1: Identify Design Team and Stakeholders

Get the right people on your design team

Identify the team to be involved in the project design. Ensure your design team includes local stakeholders, if possible. This will help the team avoid incorrect (and often unconscious) assumptions about the local context that could lead to poor project design and negatively influence the achievement of expected outcomes. The composition of your design team may vary depending on the type of programming but should always include:

Make sure they’re available

Once you have identified your team, check that they are available and willing to participate in all four steps of the project-design process. Consider holding this process where the project will be implemented to facilitate the participation of local team members and stakeholders.

Identify stakeholders and keep them involved

Since you will need to design the project in a participatory way, the design teamFootnote 40 should always identify key stakeholders, including local intermediaries and beneficiaries, and ensure that they are involved and consulted regularly during the design process.

3.2 Step 2: Conduct a Situation Analysis

Situation analysis is a structured exercise that helps the design team: a) identify the issues they plan to address; and b) understand the complex context (national, regional, political, cultural, social, gender, environmental, etc.) in which those issues exist. This should be done through research, consultation, analysis and discussion. As such, a situation analysis is a fundamental part of results-based project planning and design. It provides a critical part of the evidence behind the theory of change.

The situation analysis will help you and other members of the design team:

How to conduct a situation analysis?

The first step of a situation analysis is to pinpoint the issue or need to be addressed. Common sources of ideas include:

If none of these are available or useful, consider undertaking needs assessments, analyses or evaluations, as required.

Understanding the context

Once you have identified an idea or issue to be addressed, the next step is to understand the context in which this issue takes place (cultural, socio-political, gender equality, economic, and environmental), the roles played by stakeholders, and the issue's different impacts on the lives of women, men, boys and girls. You can use data and information from a number of different sources as the basis for this analysis.

For example:

Using the information

Once preliminary data has been gathered, there are many ways of using this information to establish a picture of the context and narrow the focus of the project. Common tools for this stage include:

Situation analysis tools used at ¶¶ÒùÊÓƵ

Problem tree analysis

Box 45 - A Solution Tree

A solution tree is a diagram that translates selected elements of the problem tree into a rudimentary theory of change.

Once the first four steps of the problem-tree exercise have been completed, compare the findings with those of other exercises, such as a program/portfolio review, donor mapping, and budget and organizational priorities, to determine which elements of the situation the project will attempt to address. Next, develop a solution tree for the selected elements. For each selected negative statement, the solution tree should contain a corresponding outcome statement and a corresponding output or activity statement.

The problem tree is one of the methods used most frequently at ¶¶ÒùÊÓƵ—although staff and partners may choose to use others. This is a visual situation analysis tool that enables its users to break down a very complex issue into its components, and then to examine and explore the cause-and-effect relationships between these components. It enables users to identify potential reach (intermediaries and beneficiaries), activities, outputs and outcomes for a project and gives users an idea of other key stakeholders and how they relate to and experience the issues. As such, it is particularly well suited to supporting the articulation of a theory of change and the development of a logic model.

Its key steps are:

  1. Identify the core problem(s).
  2. Identify the causes and effects.
  3. Note the relationships.
  4. Review the problem tree.
  5. Create a solution tree.

Figure 6 - The Problem Tree

Text version

In a problem tree, the trunk represents the core problem(s), the roots represent the causes of the core problem and the branches represent the effects.
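A problem tree can also be recorded in a simple data structure, which makes it easy to keep alongside the eventual solution tree. The sketch below is illustrative only (Python; the statements and the rewordings are examples, not a ¶¶ÒùÊÓƵ template): it stores a core problem with its causes and effects, then pairs each selected negative statement with a draft positive statement for a rudimentary solution tree.

# A tiny, illustrative problem tree for the water example used throughout this guide.
problem_tree = {
    "core_problem": "Poor health among inhabitants of region Y due to waterborne illness",
    "causes": [   # the roots
        "Limited access to clean drinking water",
        "Wells fall into disrepair",
    ],
    "effects": [  # the branches
        "Reduced school attendance and productivity in region Y",
    ],
}

# In practice the design team drafts each positive statement deliberately;
# these rewordings are illustrative only.
rewordings = {
    "Poor health among inhabitants of region Y due to waterborne illness":
        "Improved health of women, men, girls and boys in region Y",
    "Limited access to clean drinking water":
        "Improved equitable access to clean drinking water",
    "Wells fall into disrepair":
        "Increased ability of community water collectives to maintain wells",
}

solution_tree = {
    "ultimate_outcome": rewordings[problem_tree["core_problem"]],
    "lower_level_outcomes": [rewordings[cause] for cause in problem_tree["causes"]],
}
print(solution_tree)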

Stakeholder mapping

Stakeholder mapping is another tool used during the situation analysis stage. Stakeholder mapping enables the design team to identify key stakeholders—including intermediaries and beneficiaries—their relationships to each other, and their level of interest in, and influence over, the issues at hand.

Stakeholder mapping can be done as a separate exercise or as part of the problem tree exercise. Key questions to ask for every issue explored are:

3.3 Step 3: Develop the Project’s Theory of Change, including Logic Model, Output and Activity Matrix, and Narrative

The logic-modelling process

Once the situation analysis is complete, you should be ready to develop the project’s theory of change and its logic model. This will involve determining the outcomes and outputs of the project, the activities best suited to producing the outputs, as well as identifying assumptions and evidence to explain how one change is expected to lead to another. This process is also known as logic modelling; see the “Logic modelling” heading under section 2.2 for more details.

Remember!

The logic model’s pyramid structure enables practitioners to illustrate the complex and multifaceted nature of international assistance programming—the convergence of different but complementary pathways of change under one ultimate outcome.

Different intermediate outcomes represent different pathways leading to the same ultimate outcome.

Each pathway addresses a different aspect of the problem.

There are different ways of undertaking logic modelling. A commonly used approach is to bring together the key stakeholders in one room and use sticky notes to brainstorm on the theory of change. Because sticky notes can be moved and re-ordered, this is a helpful and accessible way to engage in logic modelling in the early stages of the process; draft outcome and output statements (one per note) and then organize them to depict your project's theory of change.

Reminders

Use participatory methodologies to ensure equitable and valuable participation from relevant stakeholders throughout the entire process, from brainstorming together to completing a final draft. This will also help meet the requirements of relevant Government of Canada policy commitments.

When developing your theory of change, always keep in mind the country context and priorities, as well as ¶¶ÒùÊÓƵ program, branch and corporate priorities. Also consider potential limiting factors such as duration and budget.

The best way to develop a theory of change is to start with the ultimate outcome before determining the intermediate and immediate outcomes and deciding what programmatic approaches are needed.

When developing your outcomes and outputs, consider the level of gender-equality integration in your project in order to determine at which level of the logic model gender-equality outcomes should be included and how.

Step 3 a) Identify the expected ultimate outcome

You should work backwards from the ultimate outcome, as described in the steps below. Starting with the outcomes (ultimate, intermediate and immediate) will ensure that the outputs and activities selected are those that are required to lead to the changes described.

Box 46 – Definition: Tautology

Avoid tautologies in the logic model

Tautology means saying the same thing with different words. In the logic model this often manifests as an outcome which summarizes the level below and does not describe a substantively different change.

The example below shows an ultimate outcome that merely summarizes the changes described in the intermediate outcomes rather than describing a substantively different change stemming from them.

Example of tautology at the ultimate outcome level in a logic model:

Ultimate outcome: Improved use of well managed water, waste and sanitation infrastructure by women, men, girls and boys in community Y

Intermediate outcomes:

  • Increased proper usage of safe drinking water by women, men, girls and boys in community Y
  • Improved management of water, waste and sanitation infrastructure in community Y

In this example, the ultimate outcome is not at the right level: it is another intermediate outcome that summarizes the two intermediate outcomes. This is incorrect and should be avoided.

A correct ultimate outcome could be:

  • Improved health of women, men, girls and boys living in community Y. OR
  • Reduced vulnerability to waterborne illnesses for men, women, girls and boys in community Y

Working together as a team:

Box 47 - Example of an Issue and an Expected Ultimate Outcome

Issue: Poor health among male and female inhabitants of region Y of country X due to waterborne illness.

Ultimate Outcome: Improved health of women, men, girls and boys in region Y of country X

Reminders

Box 48 - Definition: Ultimate Outcome

The highest-level change to which an organization, policy, program, or project contributes through the achievement of one or more intermediate outcomes. The ultimate outcome usually represents the raison d'être of an organization, policy, program, or project, and it takes the form of a sustainable change of state among beneficiaries (the rights holders)

The ultimate outcome is the “why” of the project. It should describe a sustainable positive change in state, conditions or well-being of the beneficiaries (the rights holders).

Although the ultimate outcome usually takes place after the end of the project, it is important to measure it during the life of the project. This is to assess whether the project is:

If your project is specific to gender equality (i.e. the project was designed specifically to address gender inequalities or women's empowerment and would not otherwise be undertaken), you should have gender-equality results at all levels of the logic model, starting at the ultimate outcome level.

The ultimate outcome has to be realistically grounded in the project’s theory of change. For example, if the project is working in a village Y in country X to improve the health of single mothers, then the ultimate outcome cannot be “improved health of all men and women in country X.” It should reflect the reality of the project: “improved health of single mothers in village Y of country X.”

Box 49 - Definition: Intermediate Outcome

A change that is expected to logically occur once one or more immediate outcomes have been achieved. In terms of time frame and level, these are medium-term outcomes that are usually achieved by the end of a project and program/portfolio, and are usually changes in behaviour, practice or performance among intermediaries (the duty bearers / responsibility holders) and/or beneficiaries (rights holders)

Step 3 b) Identify expected intermediate outcomes

Once you have identified the ultimate outcome, continue brainstorming as a team to develop the intermediate outcomes.

Box 50 - Example of Expected Intermediate Outcomes

Increased equitable use of clean drinking water by women, men, girls and boys in region Y

Improved provision of front-line gender responsive health services to women, men, girls and boys in region Y

If your project fully integrates gender equality, gender equality results should be included at the intermediate outcome level and below.

For projects with moderate or high environmental relevance, environmental considerations should be integrated into the outcome statements. Ideally, this should be done at both the immediate and intermediate levels and, at a minimum, at the immediate level.

Step 3 c) Identify expected immediate outcomes

Box 51 - Definition: Immediate Outcome

A change that is expected to occur once one or more outputs have been provided or delivered by the implementer. In terms of time frame and level, these are short-term outcomes, and are usually changes in capacity, such as an increase in knowledge, awareness, skills or abilities, or access* to... among intermediaries and/or beneficiaries.

* Changes in access can fall at either the immediate or the intermediate outcome level, depending on the context of the program/portfolio and project and its theory of change.

Once you have identified your intermediate outcomes, brainstorm the immediate outcomes making sure to identify everything required to allow each intermediate outcome to occur.

Box 52 - Example of Expected Immediate Outcomes

Improved equitable access to clean drinking water for women, men, girls and boys in region Y

Increased ability to maintain wells among female and male members of community water collectives in region Y

Improved equitable access to health facilities for women, men, girls and boys in region Y

Improved skills of local health-centre male and female staff in gender-sensitive triage, diagnosis and primary health care in region Y

Immediate outcomes will lead or contribute to the intermediate outcomes and represent the changes that are directly linked to the existence of outputs (products and services).

If your project has limited gender-equality integration, gender-equality results should be included at the immediate outcome level and below.

For projects with moderate or high environmental relevance, environmental considerations should be integrated into the outcome statements. Ideally, this should be done at both the immediate and intermediate levels and, at a minimum, at the immediate level.

Step 3 d) Identify main expected outputs and planned activities

Box 53 - Definitions: Outputs and Activities

Outputs: Direct products or services stemming from the activities of an organization, policy, program or project.

Activities: Actions taken or work performed through which inputs are mobilized to produce outputs.

Remember, the outputs represent completed products or services stemming from the activities of an implementer. Activities represent the separate components required to complete those products or services. Another way to think about activities is in terms of the work breakdown structure: activities are the next level of breakdown under the outputs. Refer to section 2.3.

Step 3 e) Validate the theory of change

Arrange all of the sticky notes with your outcome and output/activity statements in the pyramid shape of the logic model. Check back and forth through the levels (from ultimate outcome to activities and from activities to ultimate outcome) to make sure everything flows in a logical manner and that the theory of change is sound and evidence-based, incorporates sectoral best practices and lessons learned, and integrates  gender equality, environmental sustainability and governance in international assistance programming. Make sure that each outcome is well supported by the level below. Make sure that all activities and outputs contribute directly to the immediate outcome for which they were identified. Make any adjustments required, such as moving or adding outcomes or outputs and activities.

Validate any assumptions and risks and make sure you document them so you can use them when you write your theory of change narrative (see section 3.3 Step 3 g)).

Check that your outcomes statements are robust and meet the criteria identified in section 2.1. One way to do this is to brainstorm potential indicators for each outcome. This helps you ensure that you will be able to measure the achievement of your outcomes. It also helps ensure you have not identified an indicator and tried to formulate it as an outcome (e.g. reduced maternal mortality rate) rather than use a proper outcome (e.g. improved maternal health ....). Early identification of indicators is a useful technique for refining outcome statements and the overall theory of change of a project.

¶¶ÒùÊÓƵ has developed the Logic Model Checklist to help staff and partners assess the soundness of the project’s design/theory of change as reflected in the logic model and outputs and activities matrix, or in other results framework tools.

Step 3 f) Pull it all together

Fill out the logic model template using the outcome and output statements you’ve developed during your brainstorming sessions.

Figure 7 - Completed Logic Model

Text version

This is an example of a completed Logic Model. Each level of the results chain stems from the level below.

Ultimate outcome: 1000 Improved health of women, men, girls and boys living in region Y of country X.

Intermediate outcomes:
1100 Increased equitable use of clean drinking water by women, men, girls and boys in region Y.
1200 Improved provision of front line gender-responsive health services to women, men, girls and boys in region Y

Immediate outcomes:
1110 Improved equitable access to clean drinking water for women, men, girls and boys in region Y.
1120 Increased ability to maintain wells among female and male members of community water collectives in region Y
1210 Increased equitable access to health facilities for women, men, girls and boys in region Y.
1220 Improved skills of local health centre male and female staff in gender-sensitive triage, diagnosis, and primary health care in region Y.

Outputs:
1111 Wells built using gender equitable participatory approaches in region Y.
1112 Existing wells of region Y rehabilitated using gender equitable participatory approaches.
1121 Training on well maintenance developed and delivered to female and male members of community water collectives in region Y.
1122 Technical assistance provided to community water collectives for the sourcing of parts from local and regional suppliers.
1211 Regional health centres in region Y rehabilitated and equipped.
1212 Gender-sensitive awareness campaign on the availability of health services in newly rehabilitated regional health centres conducted.
1221 Gender-sensitive materials for skills development programs and on-the-job coaching on triage, diagnosis and primary health care developed.
1222 Gender-sensitive skills development programs and on-the-job coaching on triage, diagnosis and primary health care provided to male and female staff in regional health centres.

*Note: In the context of this project, gender sensitive is defined as: gender sensitive awareness campaign, training materials, and programs that are designed based on gender analysis to promote equal roles for women and men in healthcare (e.g. women and men as doctors and women and men as care providers); to challenge gender stereotypes and biases that lead to discrimination and harmful practices (e.g. boy preference, sexual abuse/harassment, gender-based violence); to support the rights of women and girls in health decision-making, particularly in sexual and reproductive rights; and to promote equal participation of, and benefit to, women and men (girls and boys).

Fill out the outputs and activities matrix, copying the immediate outcomes and outputs from the Logic Model and listing the activities for each output.

Figure 8 - Completed Outputs and Activities Matrix

Outputs and Activities Matrix
Immediate Outcome 1110: Improved equitable access to clean drinking water for women, men, girls and boys in region Y.
  Output 1111: Wells built in community X, in consultation with local stakeholders, especially women as primary water managers in the community.
    Activity 1111.1: Undertake gender sensitive consultations with community members, especially women.
    Activity 1111.2: Prepare well construction plan.
    Activity 1111.3: Conduct geological survey and water testing.
    Activity 1111.4: Procure construction materials and equipment.
    Activity 1111.5: Contract construction firm.
    Activity 1111.6: Facilitate community oversight of well construction.
  Output 1112: Existing wells of region Y rehabilitated using gender equitable participatory approaches.
    Activity 1112.1: Conduct water testing. [Remaining activities removed for the purposes of the How-to Guide.]
Immediate Outcome 1120: Increased ability to maintain wells among female and male members of community water collectives in region Y.
  Output 1121: Training on well maintenance developed and delivered to female and male members of the community water collectives in region Y.
    Activity 1121.1: Conduct project management gap analysis with male and female community members and gender equality and environmental technical advisors.
    Activity 1121.2: Design training and handouts.
    Activity 1121.3: Deliver training.
    Activity 1121.4: Evaluate course.
    Activity 1121.5: Conduct ongoing mentoring with selected male and female community members.
  Output 1122: Technical assistance provided to community water collectives of region Y for the sourcing of parts from local and regional suppliers.
    Activity 1122.1: Research suppliers. [Remaining activities removed for the purposes of the How-to Guide.]
Immediate Outcome 1210: Improved equitable access to health facilities for women, men, girls and boys living in region Y.
  Output 1211: Regional health centres in region Y rehabilitated and equipped.
    Activity 1211.1: Conduct needs assessments with health centres’ staff.
    Activity 1211.2: Prepare procurement plan.
    Activity 1211.3: Implement procurement plan.
    Activity 1211.4: Prepare rehabilitation plan.
    Activity 1211.5: Implement rehabilitation plan.
  Output 1212: Gender sensitive awareness campaign on the availability of health services in newly rehabilitated health centres conducted.
    Activity 1212.1: Develop messaging. [Remaining activities removed for the purposes of the How-to Guide.]
Immediate Outcome 1220: Improved skills of local health centre male and female staff in gender sensitive triage, diagnosis, and primary healthcare in region Y.
  Output 1221: Gender sensitive materials for skills development programs and on-the-job coaching on triage, diagnosis and primary healthcare developed.
    Activity 1221.1: Conduct project management gap analysis with regional government staff and gender equality and environmental technical advisors.
    Activity 1221.2: Design gender sensitive training slides and handouts.
  Output 1222: Gender sensitive skills development programs and on-the-job coaching on triage, diagnosis and primary healthcare provided to male and female staff in regional health centres.
    Activity 1222.1: Deliver gender sensitive training sessions to female and male staff.
    Activity 1222.2: Evaluate training sessions.
    Activity 1222.3: Conduct ongoing mentoring with selected male and female staff.
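One convenient property of the numbering used in Figures 7 and 8 is that each element’s code points to the element it supports: activities (1111.1, 1111.2, ...) roll up to their output (1111), outputs roll up to their immediate outcome (1110), immediate outcomes to their intermediate outcome (1100), and intermediate outcomes to the ultimate outcome (1000). The small sketch below (Python, illustrative only) shows how that nesting can be checked mechanically.

def parent_code(code: str) -> str:
    """Return the logic model code one level up from the given code."""
    if "." in code:                      # activity -> its output
        return code.split(".")[0]
    digits = list(code)
    for i in range(len(digits) - 1, -1, -1):
        if digits[i] != "0":             # zero out the last significant digit
            digits[i] = "0"
            break
    return "".join(digits)

# A few codes from the completed logic model and outputs and activities matrix above.
assert parent_code("1111.1") == "1111"   # activity under output 1111
assert parent_code("1112") == "1110"     # output under immediate outcome 1110
assert parent_code("1120") == "1100"     # immediate under intermediate outcome 1100
assert parent_code("1200") == "1000"     # intermediate under the ultimate outcome
print("Numbering nests correctly.")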

Step 3 g) Write a narrative description of the theory of change

Every logic model should be accompanied by a narrative that describes the theory of change for the project. This narrative should be developed iteratively with the logic model. It should focus on what is not explicit in the logic model, and how the expected outcomes of the project will unfold. It should explain the linkages between each level, including assumptions and risks, and provide reference to the evidence and best practices that justify the design choices made. These linkages and assumptions should include the roles and contributions of other actors not directly involved in the project but on whom the achievement of project outcomes also depends. For example, reference should be made to any recipient-country government commitments, policies and programs important to achieving project outcomes. See section 1.2 and section 2.4.

One to three pages long

Ideally, the narrative should be one to three pages long. However, you may find that large projects will require more pages to cover the breadth of the initiatives.

Structure it by outcome

Start the narrative with a section discussing how the intermediate outcomes (end-of-project results) will contribute to the ultimate outcome. Focus your attention on the relationships between each intermediate outcome and the ultimate outcome, explaining the theory, best practice, assumptions and risks underlying your choice of intermediate outcomes.

Create separate sections/paragraphs for each intermediate outcome, where you explain how the project’s outputs will lead to the immediate outcomes, and how the immediate outcomes will lead to that intermediate outcome. Focus your explanation on the relationships between the outputs and the immediate outcome, and the immediate outcomes and intermediate outcomes, explaining the theory, best practice, assumptions and risks underlying your choices. Where applicable, include how environmental sustainability, gender equality and governance are integrated throughout the logic model to the intermediate outcome level.

Focus on assumptions

Describe the most important assumptions made with each step of the logic model (i.e. the ones without which the next level of outcomes could not be achieved). You should use references, quotes and evidence from your socio-economic, cultural, political, environmental, and gender analyses and consultations to justify the assumptions made at each step of the logic model.

Identify risks and response strategies

Include a brief mention of any key risks and contextual factors that could influence the achievement of outcomes. Identify any risk response strategies that you are undertaking or are planning to undertake.  

Refer to the work of other actors

Refer to the work other organizations are doing in the area and describe how their outcomes may relate to the project’s theory of change. In some cases, the work of others may explain the choices made in project design (e.g. choosing not to undertake an activity because it is being done by another donor).

Describe how participation will be encouraged

Describe the methods that you will use to foster participation of a broad range of stakeholders (including intermediaries and beneficiaries) throughout the project’s life cycle.

Step 3 h) Assessing the logic model and the outputs and activities matrix

Once a project has been designed, it must be reviewed, both for quality control and as part of the decision-making and approval process before moving to implementation. Whether reviewing your own design or that of a proposed project from another organization, the same guidance applies.

¶¶ÒùÊÓƵ staff should use the Logic Model Checklist when reviewing the logic model submitted by applicants with their proposals. Note that although many of the questions in the checklists are specific to ¶¶ÒùÊÓƵ’s terminology and tools, the Logic Model Checklist includes questions related to the use of non-¶¶ÒùÊÓƵ Results-Based Management tools. The kinds of questions asked can inform a review of proposals where ¶¶ÒùÊÓƵ has agreed to use partners’ own templates.

The assessment of the logic model and outputs and activities matrix should also be done in a participatory manner. Share the draft logic model, outputs and activities matrix and narrative with colleagues; thematic, sectoral, gender equality, environmental and governance specialists; and stakeholders, including beneficiaries. This will help ensure that the final version reflects the shared understanding of the project’s theory of change, including the expected outcomes, assumptions and design.

Box 54 - Common Logic Model Problems to Avoid


Process

  • The logic model was developed by only one person, e.g. a manager, in-house expert or consultant.
  • Project team is engaged after the logic model is developed and no effort is made to validate it with them.
  • No local stakeholders were involved in developing the logic model.
  • Outcome statements are not realistic/overly ambitious.

Logic

  • Logic model is not linked to any problem or stakeholder analysis.
  • The priority problems are not apparent.
  • Desired changes have been reduced to overly simplistic results statements.
  • Gender equality is not integrated to ¶¶ÒùÊÓƵ standards.
  • There are gender equality activities, but no gender equality outcomes.
  • Tautology - saying the same thing with different words. In the logic model this often manifests as an outcome which summarizes the level below and does not describe a substantively different change.  (For further explanation and an example, please see Box 46 - Definition: Tautology, under section 3.3 Step 3 a).)

Outcome Statements

  • Statements are general and generic.
  • The intended change is not clear.
  • The logic model has too many intermediate outcomes.
  • Statement includes more than one idea or change (“and”).
  • “Through,” “by,” “in order to” or other expressions in the statement that describe linkages to other levels of the logic model.
  • Statement includes targets.
  • Logic model contains too many details and is confusing.
  • Statements describe changes at the wrong level of the logic model.

Output Statements

  • Output statements include change words like “strengthened.”
  • Statement includes targets.
  • Statement is too long, vague or wordy to communicate the output being delivered.
  • The output represents an activity that could fall under another output.
  • The range of activities presented in the outputs and activities matrix is too limited to allow for the production of the output.

3.4 Step 4: Develop a Performance Measurement Framework and a Results-Based Monitoring and Evaluation Plan

Remember!

The performance measurement framework is not a paper exercise, or a form to fill out and file away. It is the implementer’s framework for results-based monitoring and reporting, and the foundation of evaluations.

Key considerations

The development of a performance measurement framework starts at the design phase as an iterative process that goes hand-in-hand with logic-modelling. You should ensure that the content for the performance measurement framework is developed in a gender-balanced, participatory fashion. Include key local stakeholders, partners, beneficiaries, and appropriate specialists (sectoral, gender equality, environmental and governance) in the process.

As you develop your performance measurement framework, consider the following factors:

Step 4 a) Copy expected outcome and output statements from the logic model to the performance measurement framework

Copy and paste the most recent outcomes and outputs from your logic model into the boxes of the first column in the performance measurement framework template.

Step 4 b) Formulate indicators

Establish performance indicators for all of your expected outcomes and outputs, following the guidance outlined in section 2.5. You will normally have started thinking about appropriate indicators during the process of validating your logic model and theory of change.

The process of identifying and formulating indicators may lead you to adjust your outcome and output statements. Ensure any changes made to these statements in the performance measurement framework are reflected in the logic model. If the logic model has already been approved by ¶¶ÒùÊÓƵ and stakeholders, keep in mind that any changes in scope, scale or intent of the project need to be approved by the original approval authority at ¶¶ÒùÊÓƵ. See section 4.2 for more details on this subject.

Validate and check the quality of your performance indicators, using ¶¶ÒùÊÓƵ’s Performance Measurement Framework Checklist. For example, are the indicators valid, reliable, sensitive, simple, useful and affordable? Are they gender sensitive? Where appropriate, do they include proportionality? If they deal with people, are they disaggregated by sex and any other categories of concern to the project?

Step 4 c) Determine data sources, and collection methods, frequency and responsibility

Determine the data sources and data collection methods for your chosen performance indicators. Look to include multiple lines of evidence wherever possible to increase the reliability of the data you will collect on your indicators. Consider sampling strategies. If you collect data from beneficiaries, will you collect it at the household, community or other level? For example, if your data source is school-aged children, will you collect data in the classroom or, assuming not all children are in school, within households? At first glance, the data collection at the household level may seem too costly, but with a statistically valid sampling methodology, it might well be within your reach. Sampling can help reduce cost while maintaining data reliability and validity.

Your examination of available data sources and relevant data collection methods may lead you to adjust your choice of indicators.

The frequency of data collection and the responsibility for gathering it are often a function of the data source. For instance, the data source might be an annual government report. In this case, collection frequency would be annual, in line with the frequency of the report, while the responsibility would likely rest with the implementer to collect it from the government source.

Determine the frequency and responsibility (for data collection and analysis) for each performance indicator. This would also be a good time to assess the cost of data collection.
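Because frequency drives the monitoring calendar, it can be useful to translate the frequency column into planned collection dates early on. The snippet below is a simplified illustration (Python; the indicator names, frequencies and project dates are invented), not a prescribed tool.

from datetime import date

def collection_dates(start: date, end: date, frequency: str) -> list:
    """Planned data collection dates for one indicator, based on its PMF frequency."""
    if frequency == "annual":
        return [date(year, start.month, start.day) for year in range(start.year + 1, end.year + 1)]
    if frequency == "midterm and end":
        return [start + (end - start) / 2, end]
    if frequency == "end of project":
        return [end]
    raise ValueError(f"unrecognized frequency: {frequency}")

project_start, project_end = date(2013, 1, 1), date(2017, 12, 31)
frequencies = {
    "% of households within 1 km of a potable well": "annual",
    "Beneficiary satisfaction with health services": "midterm and end",
}
for indicator, frequency in frequencies.items():
    dates = [d.isoformat() for d in collection_dates(project_start, project_end, frequency)]
    print(f"{indicator}: {dates}")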

Step 4 d) Enter baseline data

Use the data collected during the project baseline study to complete the baseline data column in the performance measurement framework.

If the baseline data study was not conducted during project design, but will be conducted during the inception stage of the project:

Step 4 e) Define targets

Select your targets and determine expected achievement date

Your targets set the expectations for performance by the end of a fixed period of time, usually the duration of the project. This will help you determine realistic budgeting, allocation of resources and end-of-project scope and reach. You are also encouraged to set annual targets through the annual work plan, which will help you to better monitor progress over time.

If a baseline study has been conducted:

If the baseline data will be collected later:

Step 4 f) Draft a preliminary results-based monitoring and evaluation plan

Monitoring

Resources for the development and testing of data-collection instruments, as well as for training staff and stakeholders, need to be allocated in the project budget.

By the time the monitoring and evaluation plan is finalized, the implementer should be able to answer questions related to the cost of data collection, sampling methodologies, sample sizes, statistical analyses to be used, data-capture templates and data-storage systems.

Evaluation

The evaluation component of your monitoring and evaluation plan should specify the following:

Evaluability Assessments

An evaluability assessment goes far beyond providing information regarding whether or not an initiative or a project can be evaluated. It also:

Synergy between Monitoring and Evaluation

Answer the following questions during the development of the monitoring and evaluation plan to strengthen the synergy between monitoring and evaluation:

Finalize the Monitoring and Evaluation Plan as part of the Project Implementation Plan

The results-based monitoring and evaluation plan should be finalized and submitted to ¶¶ÒùÊÓƵ for approval as part of the project implementation plan or equivalent.

Step 4 g) Assess the performance measurement framework

Whether reviewing your own design or that of a proposed project from another organization, the same guidance applies.

¶¶ÒùÊÓƵ staff should use the Performance Measurement Framework Checklist when reviewing the performance measurement framework submitted by potential implementers with their proposals. Note that although many of the questions in the checklist are specific to ¶¶ÒùÊÓƵ’s terminology and tools, the Performance Measurement Framework Checklist includes questions related to the use of non-¶¶ÒùÊÓƵ tools. The kinds of questions asked can inform a review of proposals where ¶¶ÒùÊÓƵ has agreed to use partners’ own templates.

Implementers and ¶¶ÒùÊÓƵ staff can also use this checklist during the iterative process of developing their performance measurement framework in order to validate and improve it.

The assessment of the performance measurement framework should be done in a participatory manner with the relevant stakeholders and subject matter experts.

Part Four: Managing for Results during Implementation

4.0 Introduction

Managing for Results during program/portfolio and project implementation entails collecting data on indicators that measure both outputs and outcomes, using this and other information to compare expected outcomes with actual outcomes, and adjusting/adapting operations during implementation in order to optimize the achievement of the expected outcomes.

This cycle of measurement, evidence-based assessment of progress on or toward the expected outcomes, learning and adjustment during implementation (as illustrated by the diagram below) is what makes Results-Based Management / Managing for Results an adaptive management approach as opposed to just an indicator data collection or reporting exercise.

Figure: Managing for results during implementation
Text version

Managing for results during implementation. What do we do during implementation to manage for results to achieve expected outcomes?

  • We collect data on indicators (qualitative and quantitative) that measure outputs and especially outcomes.
  • We analyze the collected indicator data and other information.
  • We use the analysis for an assessment of progress on or toward the expected outcomes.
  • This assessment, supported by indicator data, is used in a purposeful way for:
    • Learning what works and what does not, and integrating lessons learned into evidence-based decision making.
    • Adjusting and adapting programming as needed to achieve our expected outcomes.
    • Evidence-based narrative results reporting and communication.

Lessons learned should be shared and integrated into the decision-making process for other similar current, as well as future programming.

4.1 Monitoring and Indicator Data Collection

Throughout implementation, ¶¶ÒùÊÓƵ staff and the implementer monitor the project in different ways, according to their roles and responsibilities. The implementer has primary responsibility for collecting and analyzing data on all the indicators of the performance measurement framework, according to the frequency and data collection method indicated. More detailed information on data collection, including schedules and tools such as questionnaires and forms, is normally set out in the results-based monitoring and evaluation plan. Monitoring by ¶¶ÒùÊÓƵ staff varies according to the type of project or investment. It always entails reviewing reports, but can also include site visits, cross-referencing with other stakeholders, or the hiring of external monitors.
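For implementers who track indicator data in a spreadsheet or simple database, the sketch below illustrates, in Python, one possible way to structure an indicator record so that actual data can be logged against the baseline, target and planned collection frequency from the performance measurement framework. It is a minimal illustration only; the field names and example values (drawn from indicator 1100.2 in the illustrative example in section 4.3) are assumptions, not a prescribed ¶¶ÒùÊÓƵ format or tool.

    # Minimal sketch (not a ¶¶ÒùÊÓƵ tool): one way to structure an indicator record
    # from the performance measurement framework so that actual data can be logged
    # against the planned collection frequency. All field names are illustrative.
    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class IndicatorRecord:
        code: str                 # e.g. "1100.2"
        statement: str            # indicator wording from the performance measurement framework
        baseline: float
        target: float             # end-of-project target
        frequency: str            # e.g. "bi-annual household survey"
        collection_method: str    # e.g. "survey", "well inspection log"
        actuals: List[Tuple[str, float]] = field(default_factory=list)  # (reporting period, value)

        def latest_actual(self) -> Optional[float]:
            """Return the most recently recorded actual value, if any."""
            return self.actuals[-1][1] if self.actuals else None

    # Example values taken from indicator 1100.2 (community A) in the illustrative
    # example in section 4.3; the frequency shown here is an assumption.
    river_use = IndicatorRecord(
        code="1100.2",
        statement="%/total women (community A and B) walking to river for water daily",
        baseline=80.0,
        target=15.0,
        frequency="annual",
        collection_method="household survey",
    )
    river_use.actuals.append(("2009", 70.0))
    river_use.actuals.append(("2010", 65.0))
    print(river_use.latest_actual())  # 65.0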

As discussed in section 1.3, collecting data on the project’s indicators on a regular basis empowers managers and stakeholders with real-time information on progress on or toward the achievement of outcomes. This helps identify strengths, weaknesses, and problems as they occur, and enables project managers to take timely corrective action during project implementation. This in turn increases the likelihood of achieving the expected outcomes.

Data collected during implementation is also a crucial foundation for evaluations. As mentioned earlier, the cost of an evaluation can be greatly reduced by diligent monitoring and documentation of the achievement of results. Moreover, evaluators come on board at a particular point in time. Even when they devote significant effort to data gathering at the time of the evaluation, the resulting data set can never replace the wealth of information generated through continuous results-based monitoring and performance measurement. For example, even the best evaluation-recall techniques cannot replace missing monitoring data needed to properly analyze trends for the project or program/portfolio being evaluated. A lack of data can limit the quality of an evaluation or, in some cases, make an evaluation too expensive to conduct. In short, monitoring indicator data is essential.

4.2 Making Adjustments to the Logic Model and Performance Measurement Framework of an Operational Project

The logic model and performance measurement framework are developed in the planning and design stage, but, as discussed in section 2.0 and section 2.6, they are not static. They are iterative tools that can and should be adjusted as required during implementation as part of ongoing management for results. The logic model and the performance measurement framework should be validated during the development of a project implementation plan or equivalent. As project circumstances evolve, or as the analysis of the data collected on the indicators suggests that adjustments are required to achieve the expected outcomes, additional changes to these tools may be needed.

Advisable times to make these changes are at the submission of the project implementation plan and, thereafter, at the submission of the annual work plan or during project steering committee meetings. Regardless of timing, any such changes must be discussed and agreed upon by all project stakeholders, including ¶¶ÒùÊÓƵ. In addition, such adjustments must be justified and documented, and must not change the scope, scale or intent of the project.

In particular, any change to intermediate or ultimate outcomes, or to targets at any level of the logic model, should be discussed with ¶¶ÒùÊÓƵ to assess whether it constitutes a change in scope. Examples of changes in scope include changing the geographic scope of a project, changing the project reach (i.e., the number and type of beneficiaries) or removing, adding or significantly altering an outcome or targets at the outcome and output levels. If ¶¶ÒùÊÓƵ determines that the changes constitute a change in scope and/or imply significant increases to the resources/funding required, this will trigger an amendment to the financial instrument used by the project, which will need to be approved by the original approval authority at ¶¶ÒùÊÓƵ.

Changes to the logic model and performance measurement framework that trigger the Canadian Environmental Assessment Act (2012) will require that steps be taken in compliance with the Act.

4.3 Reporting on Outcomes

Reporting is an important part of an organization’s ongoing operations and decision-making. Reporting helps to promote a continuous feedback loop in which reports on activities, outputs, and outcomes provide information and analysis for decision-making over the life of a project.

What is reporting on outcomes?

Results-based performance reporting is the process of reporting on progress on or toward the achievement of the expected outcomes: comparing what you expected to achieve with what you have actually achieved, and explaining any variation between the two. To report on outcomes, implementers must assess actual outcomes based on actual data collected during implementation on the qualitative and quantitative indicators identified in the performance measurement framework.

Box 55 - Definitions: Progress on vs. Progress toward

When reporting on outcomes, you can speak about progress “on” or “toward” the achievement of that outcome. This difference allows you to report on progress “toward” an outcome early in the life of the project even when there has not been a significant change in the value of the indicators for that outcome.

  • Progress on is defined as actual change in the value of indicators being tracked for the respective outcome or output. An outcome or output is considered to have been achieved when its targets have been met.
  • Progress toward is defined as actual change in the value of indicators tracked at the next level down in the logic model (i.e. the intermediate outcomes, or their supporting immediate outcomes, or their supporting outputs depending on the level in question), with an explanation of how they are expected to lead to the higher-level outcome.

When there has been no perceptible change in the actual value of indicators at the respective outcome level, go to the next level down in the logic model. For example, if there has been no perceptible change in the actual value of indicators at the intermediate outcome level, go to the supporting immediate outcomes and their indicators.

In each case, provide evidence (actual quantitative and qualitative data/information). Explain how these interim accomplishments, at the next level down in the logic model, will, over time, lead to the achievement of the higher level outcome.
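To make the decision rule in Box 55 concrete, the following sketch (a hypothetical illustration in Python, not a departmental tool) checks whether the indicators at the outcome’s own level show a perceptible change; if they do not, it falls back to the indicators one level down in the logic model and labels the assessment as progress “toward” rather than progress “on”.

    # Illustrative sketch of the Box 55 decision rule; the structure and the
    # tolerance threshold are assumptions, not ¶¶ÒùÊÓƵ requirements.
    def reporting_basis(own_level_changes, lower_level_changes, tolerance=0.0):
        """Decide whether to report progress 'on' or 'toward' an outcome.

        own_level_changes:   (actual - baseline) values for the outcome's own indicators
        lower_level_changes: the same, for indicators one level down in the logic model
        """
        if any(abs(change) > tolerance for change in own_level_changes):
            return "progress on"      # actual change at the outcome's own level
        if any(abs(change) > tolerance for change in lower_level_changes):
            return "progress toward"  # interim change at the next level down
        return "no perceptible change yet: provide context and explain next steps"

    # Example: no change yet at the intermediate-outcome level, but supporting
    # immediate outcomes have moved
    print(reporting_basis(own_level_changes=[0.0], lower_level_changes=[5.0, 2.0]))
    # -> progress toward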

Why report on outcomes?

Box 56 – Reporting Weaknesses to Avoid

Lack of balanced reporting: The reports focus on good news only and neglect the discussion of expected outcomes not achieved and of lessons learned.

Credibility of performance information: Actual data from performance indicators are not used to substantiate progress on or toward the achievement of expected outcomes, nor are they compared to baseline data and targets.

Reports that lack high-level analysis: Reports include a lot of detail but do not draw conclusions or tell the performance story.

Gaps in the performance story: Variance between planned and actual performance is not described or explained. There is limited discussion of risks and challenges faced.

Activity/output-based reporting: Reports focus heavily on activities (what is done) and outputs (what is produced) and not enough on actual outcomes (changes that have occurred).

Too much jargon and complex language: Reports should use clear language, keeping in mind diverse audiences, while still respecting any sector-specific and Results-Based Management technical terminology.

Too early to assess: Reports should avoid using the phrase “too early to assess.” Even in the first year of a project, reports can briefly assess whether or not the project is on track to achieve intermediate outcomes, based on progress on outputs and immediate outcomes to date. In other words, assess the progress toward the expected intermediate outcomes.

Reporting on outcomes, and not only on outputs, supports decision-making, ensures accountability to ¶¶ÒùÊÓƵ, local stakeholders and Canadians, and provides a basis for citizen engagement in Canada and partner countries.

Reporting is thus more than a vehicle for meeting accountability requirements. Reports are important management tools that allow implementers, key stakeholders and ¶¶ÒùÊÓƵ staff to:

Reporting includes a systematic analysis of the progress the project is making on or toward its expected outcomes, which supports a rigorous results-based approach to project management. It also provides a basis for assessing and communicating a project's contribution to broader programming.

Reporting guidance

The following guidance offers general advice on how to report on actual outputs and outcomes (i.e., outputs and outcomes achieved).

A results-based report (quarterly, midyear, annual or final) is a performance story about actual outcomes (substantiated by data collected on the indicators identified in the performance measurement framework) as compared to the expected outcomes from the logic model. Any variance between the two should be explained, and the explanation should include an assessment of its significance and impact on the project. The performance story should be contextualized, including a discussion of any risks that materialized and how they were addressed.
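Where targets and actual data are numeric, part of that comparison can be expressed as simple arithmetic: how much of the planned change from baseline to target has actually been achieved. The short Python sketch below is purely illustrative (it is not a prescribed ¶¶ÒùÊÓƵ formula) and uses figures from indicator 1100.1 in the illustrative example later in this section.

    # Sketch of the expected-versus-actual comparison behind a performance story:
    # the share of the planned baseline-to-target change achieved so far.
    def share_of_target_achieved(baseline: float, target: float, actual: float) -> float:
        planned_change = target - baseline
        if planned_change == 0:
            raise ValueError("target equals baseline; nothing to measure against")
        return (actual - baseline) / planned_change

    # Indicator 1100.1, community A: baseline 24%, end-of-project target 90%, actual 40%
    print(round(share_of_target_achieved(24, 90, 40), 2))  # 0.24, i.e. roughly a quarter of the planned change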

For every indicator

Describing progress

Box 57 - Definition: Actual Data

Actual data is:

  • collected on each indicator (quantitative and qualitative) as per the collection frequency identified in the performance measurement framework during implementation and documented in various reports and data systems
  • analysed, and this analysis is used to assess progress on or toward expected outcomes, in comparison to baseline data and targets
  • used as evidence of progress on or toward the achievement of an expected outcome in the narrative of performance reports

See Illustrative example - Reporting on outcomes below.

When describing progress made on or toward achieving outcomes and outputs, implementers should provide an evidence-based narrative that uses the actual data collected on the indicators (qualitative and quantitative) identified in the performance measurement framework. In other words, actual data provides the evidence that supports the assessment and assertion made by the implementer about the status of progress on the expected outcome or the outcomes achieved.

For each output and immediate outcome

Box 58 – Definition: Unexpected Outcome

Unexpected Outcome: A negative or positive change that is not part of the logic model but can be linked to the project. Not to be confused with a risk occurring or with other results not linked to the project.

For each intermediate outcome and ultimate outcome

Other considerations

Ensure that all narrative text on outcomes not only describes the change that has taken place, but also provides sufficient context and gives a sense of proportionality, for example:

Ensure that all of your explanations are clear and concise. If unexpected outcomes occur, report on these as well.

Illustrative example - Reporting on outcomesFootnote 41

Expected Intermediate Outcome: 1100 Increased environmentally sustainable use of potable drinking water by households in Region X

Indicators, baselines, targets and actual data (targets are end of project unless marked otherwise):

Indicator 1100.1: #/total households (community A and B) using wells as source of water for drinking and cooking
  • Baseline: 60/250 (24%) households (community A); 10/100 (10%) households (community B)
  • Target: 225/250 (90%) households (community A); 85/100 (85%) households (community B)
  • Actual data 2009: 100/250 (40%) (community A); 15/100 (15%) (community B)
  • Actual data, reporting period 2010: data to be collected December 2010 during the bi-annual household survey (as per the frequency identified in the performance measurement framework)
  • Actual data, cumulative: 100/250 (40%) (community A); 15/100 (15%) (community B)

Indicator 1100.2: %/total women (community A and B) walking to river for water daily
  • Baseline: 80% (416/520) women (community A); 95% (247/260) women (community B)
  • Target: 15% (78/520) women (community A); 20% (52/260) women (community B)
  • Actual data 2009: 70% (224/520) (community A); 93% (242/260) (community B)
  • Actual data, reporting period 2010: 65% (338/520) (community A); 90% (234/260) (community B)
  • Actual data, cumulative: 65% (338/520) (community A); 90% (234/260) (community B)

Indicator 1100.3: %/total well inspections passed
  • Baseline: 0%
  • Target: 80% (estimated 80/100)
  • Actual data 2009: 90% (9/10)
  • Actual data, reporting period 2010: 70% (7/10)
  • Actual data, cumulative: 80% (16/20)

Indicator 1100.4: %/total women (community A and B) who feel they are using safe drinking water “most of the time” or “all of the time” (levels 4 or 5 on a 1-5 scale)
  • Baseline: 35% (182/520) women (community A); 15.4% (40/260) women (community B)
  • Target: 90% (468/520) women (community A); 85% (221/260) women (community B)
  • Actual data 2009: 50% (260/520) (community A); 20% (52/260) (community B)
  • Actual data, reporting period 2010: 60% (312/520) (community A); 23% (60/260) (community B)
  • Actual data, cumulative: 60% (312/520) (community A); 23% (60/260) (community B)

Progress from Project Inception to Date (Cumulative):

There has been a modest increase in the use of potable water by households in Region X since the start of this project in early 2008, from 24% of 250 community A households and 10% of 100 community B households, to at least 40% and 15% respectively as of 2009 (last reporting period). Although a household survey is not being conducted this year, evidence gathered through observation and conversations with stakeholders in the community, including the Women’s Water Collective (which has members from both communities) indicates that more households of both communities are using the wells this year.

The female head of a small farming household (community A) on the outskirts of Region X, said, “I got water from the well this year, instead of the river like I used to. Last year my children got sick often and my daughter did not have time to go to school. Now they seem healthier and she can go to classes almost every morning.”

The completion of several outputs to date, including the construction of 5 of 6 wells, two training sessions on well maintenance and 10 community awareness-raising sessions on the use of safe drinking water, has led to an increase in access to safe drinking water and in understanding of its importance among the members of the two communities (see actual data on immediate outcomes 1110 and 1120), both of which have contributed to this increase in use.

Furthermore, this increased use of safe drinking water is demonstrated by the fact that fewer women from both communities are using the river as their source of water. At the start of the project, 80% of 520 women (community A) and 95% of 260 women (community B) used the river daily. As of January 2010, 65% of 520 (community A) and 90% of 260 (community B) were using the river. As trends for this indicator show, the desired changes are not being experienced equally by both communities. As discussed in reporting on output 1111, construction of the last of the three wells planned for community B neighbourhoods (the one for the most populous neighbourhood) had to be postponed because of a risk of contamination from latrines located too close to the planned site. The construction of the remaining well next year should correct this imbalance.

Sixty percent of women in community A and 23% of women in community B now feel they use clean water most or all of the time, an increase from 35% and 15.4%, respectively, since the start of the project. The discrepancy between the percentage of women who feel they use clean water and the percentage who are likely actually using safe water (considering the numbers still drawing water from the river) indicates the need for more community awareness-raising.

Members of the Women’s Water Collective are continuing to inspect the wells to ensure their use doesn’t lead to water pooling and that water isn’t being wasted because of leaks. The percentage of well inspections passed has decreased from 90% (9/10) during year one to 70% (7/10) in year two. This decrease is a sign that users need to be encouraged to report pooling and leaks to the Women’s Water Collective right away, so parts can be sourced and repairs done in a timely manner.
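As an aside on the arithmetic, the cumulative figure for indicator 1100.3 pools the inspections from both years rather than averaging the two yearly pass rates. The short Python calculation below (illustrative only) reproduces it.

    # Illustrative check of the cumulative pass rate for indicator 1100.3:
    # inspections are pooled across years, not averaged as percentages.
    year_results = [(9, 10), (7, 10)]  # (inspections passed, inspections conducted) per year
    passed = sum(p for p, _ in year_results)
    conducted = sum(c for _, c in year_results)
    print(f"{passed}/{conducted} = {passed / conducted:.0%}")  # 16/20 = 80%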

Variance and Unexpected Outcomes:

Daily use of the river among women of both communities remains higher than expected. In the case of community A, there was a variance of 15 percentage points between this year’s target for this indicator (50% of 520 women, or 260 women, as set out in the annual work plan) and the actual usage (65% of 520, or 338 women). An informal survey of members of the Women’s Water Collective is being carried out to find out why women who already have access to the wells are still going to the river, and adjustments to the awareness-raising sessions will be made based on the findings.

In the case of community B, the variance of 35 percentage points between the annual target of 55% of 260 women (143 women) set in this year’s annual work plan and the actual usage of 90% of 260 (234 women) is largely due to the construction delays described above. New sites have been proposed, and an environmental analysis of these sites supported their selection by the community. In the meantime, since observation and anecdotal evidence indicated that many women of community B did not feel comfortable using wells located in primarily community A neighbourhoods on their own, we are supporting the Women’s Water Collective’s project to organize inter-community water collections.
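For readers who wish to retrace the variance figures above, the short worked check below (Python, illustrative only) uses the annual work plan targets quoted in this section and expresses each variance in percentage points.

    # Worked check of the variances discussed above, in percentage points,
    # using the annual work plan targets quoted in this section.
    def variance_points(annual_target_pct: float, actual_pct: float) -> float:
        return actual_pct - annual_target_pct

    # Community A: annual target of 50% of 520 women still using the river; actual 65%
    print(variance_points(50, 65))  # 15 -> the 15-point variance reported above

    # Community B: annual target of 55% of 260 women; actual 90%
    print(variance_points(55, 90))  # 35 -> the variance attributed to the construction delay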

For comprehensive guidance on reporting, please see ¶¶ÒùÊÓƵ’s International Assistance Results Reporting Guide for Partners, and its companions, RBM Check List 5.1 - Reviewing Results Reports from ¶¶ÒùÊÓƵ Implementers and Results-Based Management Tip Sheet No. 3.2 – Outcomes, Indicators, Baseline, Targets and Actual Data: What’s the Difference?

Conclusion

Remember

The main purpose of Results-Based Management is to optimize and improve the achievement of the expected outcomes that a program/portfolio and project sets out to achieve.

This means managing the program/portfolio and project for results from start to finish and ensuring a continuous focus on the achievement of outcomes by:

The guide will be updated periodically as required. Enquiries concerning this guide should be directed to gar.rbm@international.gc.ca.

References

Bradley, D. and H. Schneider. . Part 1. Kingston upon Thames, UK: Voluntary Service Overseas, 2004.

Canada. ¶¶ÒùÊÓƵ. Policy on Gender Equality. Ottawa: Author, 2010.

Canada. Canadian International Development Agency. Ottawa: Author, 1999.

Canada. Canadian International Development Agency (since 2015 ¶¶ÒùÊÓƵ). . Ottawa: Author, 2008.

Canada. ¶¶ÒùÊÓƵ. . Ottawa: Author, 2008.

Canada. Justice Canada. Canadian Environmental Assessment Act, 2012. Statutes of Canada 2012, c. 19, s. 52.

Canada. Justice Canada. Official Development Assistance Accountability Act. Statutes of Canada 2008, c. 17, s. 4.

Canada. Treasury Board of Canada Secretariat. Results for Canadians: A Management Framework for the Government of Canada. Ottawa: Author, 2000.

Canada. Treasury Board of Canada Secretariat. . Ottawa: Author, 2010.

Canada. Treasury Board of Canada Secretariat. . Ottawa: Author, 2012.

Crossman, Ashley, .

Drucker, Peter F. The Practice of Management. New York: Harper & Row, 1954.

Funnell, Sue C., and Patricia J. Rogers. Purposeful Program Theory: Effective Use of Theories of Change and Logic Models. San Francisco: John Wiley and Sons, Inc., 2011.

International Committee of the Red Cross. , May 2008.

Investopedia. “?”

Morra Imas, Linda G. and Ray C. Rist. The Road to Results: Designing and Conducting Effective Development Evaluations. Washington, DC: The World Bank, 2009. License: CC BY 3.0 IGO.

Organisation for Economic Co-operation and Development. . Development Assistance Committee (DAC) Guidelines and Reference Series. Paris: Author, 2010.

Organisation for Economic Co-operation and Development. , 2010.

Organisation for Economic Co-operation and Development. . Development Assistance Committee (DAC).

Rietbergen-McCracken, Jennifer, ed. “Participation in Practice: The Experience of the World Bank and Other Stakeholders.” World Bank Discussion Papers, no. WDP 333. Washington, DC: The World Bank, 1996.

Vogel, Isabel. Review of the Use of “Theory of Change” in International Development. London: UK Department for International Development, April 2012.

United Nations Development Group, .

United Nations Development Group. Results-Based Management Handbook. October 2011.

University of Wisconsin-Extension, .

Zall Kusek, Jody and Ray C. Rist. Ten Steps to a Results-Based Monitoring and Evaluation System: A Handbook for Development Practitioners. Washington, DC: The World Bank, 2004. License: CC BY 3.0 IGO.
