
Planning and managing feminist evaluations


On this page

  1. About this guide
  2. What is feminist evaluation at Global Affairs Canada?
    2.1 Key principles of the feminist evaluation approach
  3. Questions to ask when considering a feminist approach to evaluation
  4. Planning and designing feminist evaluations
    4.1 When to start planning for an evaluation
    4.2 Identifying who should be involved and how
    4.3 Engage key partners and stakeholders
    4.4 Set up the evaluation governance structure
  5. Ethical considerations in evaluation
    5.1 Mitigate power dynamics and avoid causing harm
    5.2 Respect stakeholders’ time
    5.3 Engage participants in data analysis and sensemaking
    5.4 Clarify data ownership
  6. Develop the evaluation terms of reference
    6.1 Identify the purpose and objectives of the evaluation
    6.2 Develop the evaluation questions
    6.3 Qualifications of the evaluation team
    6.4 Budget and level of effort
  7. Managing feminist evaluations
    7.1 Orientation meeting with the evaluation team
    7.2 Managing the evaluation
    7.3 Reviewing evaluation deliverables
    7.4 Management response
    7.5 Dissemination and learning
  Annex I: Overview of feminist evaluation and good practices
  Annex II: Guidance on the evaluation use and influence plan
  Annex III: Elements to look for in feminist evaluation deliverables
  Annex IV: Resources

1. About this guide

This document was developed by Data, Evaluation and Results at Global Affairs Canada. It is not a methodological guide; rather, organizations may find it useful as a practical reference for commissioning feminist evaluations.

This guide provides a practical approach for commissioning and managing feminist evaluations. Feminist evaluation approaches can be applied to all projects, regardless of the sector. The approach outlined in the guide should be adapted based on:

Incorporating feminist principles into evaluation can:

2. What is feminist evaluation at Global Affairs Canada?

Canada’s international engagement is based on its Feminist International Assistance Policy and its policy on Gender-based Analysis Plus (GBA Plus)Footnote 1. Together, these policies ensure that monitoring, evaluation and learning systems measure and sustain transformative change in support of gender equality and inclusion. They commit the Government of Canada to improving its evidence-based decision-making by investing in:

Canada has been exploring how to apply feminist principles to evaluative work. We have taken an incremental and pragmatic approach, building on efforts in programs and evaluations.  

Feminist evaluation emphasizes participatory, empowering and inclusive approaches that actively support social justice agendas and aim to shift unequal power dynamics.Footnote 2 Rather than a framework or precise approach, feminist evaluation is often defined as a way of thinking about evaluationFootnote 3 and is described as “fluid, dynamic and evolving.”Footnote 4

Feminist evaluation focuses on gender inequalities that lead to social injustice as they intersect with other causes of discrimination. A feminist evaluation challenges and changes inequalities at every step of the evaluation. It encourages the evaluation process to be transformative and recognizes that evaluation itself can be a tool for positive change and for rebalancing the distribution of power.

How ‘feminist’ does my project need to be to use feminist evaluation?

2.1 Key principles of the feminist evaluation approachFootnote 5

A focus on gender equality is foundational to feminist approaches to evaluation.

You do not need to include all the other principles in this guide for an evaluation to be considered feminist. You should aim to use as many of these elements as possible, as relevant to the needs and context of the evaluation.

Focus on gender equality

Feminist evaluation makes gender equality and the empowerment of women, girls and other gender-diverse people central in all stages of the evaluation. It examines how discrimination based on gender is systemic and structural and leads to social injustice.

Gender equality is a core factor in shaping the evaluation questions, methodology, findings, conclusions and recommendations. Feminist evaluation assesses results as they relate to gender equality objectives and identifies lessons learned.

Foster an inclusive and intersectional approach

Feminist evaluation examines the ways that different forms of discrimination intersect to create power inequalities and marginalization.Footnote 6 Discrimination can be based on gender, race, age, culture, sexual orientation, disability and many other factors.

Feminist evaluation asks why a group (e.g. women, gender-diverse people, the elderly, people with disabilities, marginalized individuals) is treated differently or benefits differently from a policy, a program or a project and what can be done about it.

Feminist evaluation uses processes that enable a diversity of stakeholders, including marginalized and hard-to-reach groups, to meaningfully participate and shape the evaluation. It provides a platform for voices that are often unheard and ensures that knowledge generation is inclusive.

Support transformative change

Transformative approaches go beyond describing differences and disaggregating data. Instead, they push for more structural changes to power systems and gender relations based on an understanding of the root causes of inequality.

An effective feminist evaluation helps enhance our understanding of how to achieve transformative change. It recognizes that social change, especially as related to gender dynamics, is non-linear. It allows for flexibility and complexity. Feminist evaluation:

Shift power and promote ownership  

Knowledge generated through evaluation is a powerful resource that is first owned and used by participating stakeholders and partners. This knowledge supports their own social change agendas.

Feminist evaluation uses participatory and empowering methods to shift power, involving participants as full partners in the creation and use of knowledge, while ensuring that safeguarding and privacy protocols are respected.

Learning objectives and evaluation methods are decided on jointly with the partners and stakeholders and are embedded in their local contexts. It is important to contextualize evaluation, recognizing that cultural, social and temporal factors are at play. Feminist evaluation encourages partner-led or joint evaluations and the meaningful involvement of local expertise to the greatest extent possible.

Acknowledge position and privilege

Feminist evaluation acknowledges and takes into account that both evaluators and stakeholders have personal experiences, perspectives and characteristics that come from and lead to a particular stance, worldview or bias.

Instead of emphasizing neutrality or independence, feminist evaluation encourages transparency, reflection and engagement with these potential biases. It adopts a reflective approach that encourages regular consideration of positionality, privilege and the assumptions/values that individuals bring to the evaluation process.

Feminist evaluation respects multiple ways of knowingFootnote 7 and recognizes that some ways are privileged over others.Footnote 8

Take an activist stance

Feminist evaluation encourages the use of the evaluation process and evaluation findings to positively influence the rights of women, girls and other traditionally excluded groups. One factor distinguishing feminist evaluation from other evaluation approaches is its activist stance. Evaluators act on opportunities to create, advocate for and support change.

Feminist evaluations aim to go beyond acknowledging inequality. They address inequality throughout the evaluation process and in the messages of the final report.

Feminist evaluation does more than make recommendations—it makes the evaluator or those initiating the evaluation responsible for actively promoting change.

At the beginning of the evaluation process, evaluation managers and key stakeholders should consider which key principles of feminist evaluation will guide the evaluation process and how, given:

In addition, they should ensure that there is time expressly dedicated throughout the evaluation process to check in and reflect on how a feminist approach is guiding their work, what is working and what is not. 

Please also consult Annex I for an Overview of feminist evaluation and good practices.

3. Questions to ask when considering a feminist approach to evaluation

Below is a list of questions that can guide the evaluation manager’s thinking when considering a feminist approach to evaluation.

Questions for reflectionFootnote 9

4. Planning and designing feminist evaluations

4.1 When to start planning for an evaluation

Guiding questions

Ideally, initial planning for an evaluation will start with the implementing partner(s) and other local stakeholders at the project design stage, as appropriate. This will allow sufficient time for joint planning with partners to agree on evaluation needs and timing.

All planned evaluations should align with partners’ monitoring and evaluation plans. Early discussions and planning will also help to ensure that evaluation findings are relevant and timely. They can also lead to a timely decision on who will commission the evaluation (i.e. the donor or the implementing partner[s]), or if it will be managed jointly.

Funds should be set aside at the project approval stage for any evaluations.

Practical tips

4.2 Identifying who should be involved and how

Guiding questions

4.3 Engage key partners and stakeholders

There is sometimes a concern that stakeholder involvement can affect the neutrality or independence of an evaluation. A feminist approach recognizes that all people have personal experiences, perspectives and characteristics. These result in some form of bias, which shapes the evaluation.

As a first step, it is important to identify and consult with the intended users of the evaluation to determine:

An Evaluation Use and Influence Plan (see Annex II) can be developed to guide the discussions and consultations about how the evaluation will be used, and how learning opportunities can be built into the evaluation process itself. The plan helps identify:

The Evaluation Use and Influence Plan should be revised regularly throughout the evaluation. Based on what is being learned during the evaluation, adapt the plan to reflect:

In some instances, additional users may also be identified at later stages of the evaluation.

In a feminist evaluation, a participatory approachFootnote 10 is used with partners and stakeholders, who are the primary users of the evaluation and also the most prominent participants. Evaluation participants may need additional resources and/or capacity to participate meaningfully in the evaluation. Ideally, these will be identified early in the process, and planned and budgeted for as much as possible.

When the Evaluation Use and Influence Plan is being developed, determine how to make participation easy for each group. If additional resources, coaching or guidance are needed, these can be included in the evaluation timeline and/or terms of reference.

Early engagement and ongoing communication are key

Once the primary users of the evaluation have been identified, they should meet to share their priorities, needs and concerns about the evaluation. The session should establish a common understanding of the planned evaluation process. Additional sessions may need to be organized to build a deeper understanding of feminist evaluation approaches and/or data collection and analysis. These can be facilitated by the evaluator, the implementing partner(s) or by an external facilitator.

Practical tips

4.4 Set up the evaluation governance structure

Guiding questions

Inclusive and diverse stakeholder involvement in the planning, design, implementation, and follow-up of evaluations is critical to ensuring the ownership, relevance, credibility and use of the evaluation.

Processes should be in place to ensure the participation of individuals or parties:

In particular, feminist evaluation recognizes that there is an inherent power imbalance between those who commission and conduct evaluations and those who are asked to provide information. Intentionally shape the evaluation so that power and decision-making are distributed. Local users and stakeholders, who usually do not have power in evaluation processes, can thus influence the nature and course of the evaluation. This helps create a supportive context for feminist evaluation practices.Footnote 11

While there is no uniform approach to establishing an evaluation governance structure, the chosen model should enhance the distribution of knowledge and information. This ensures multi-directional flows of information.

A common way to support more horizontal decision-making and leadership in evaluation is to establish an evaluation steering committee. The role of the committee is to facilitate the engagement of key stakeholders to ensure that their perspectives are adequately represented throughout the evaluation from beginning to end.

The committee serves as the link between program managers, implementing partners, other stakeholders, and the evaluation team. The participation of different stakeholder groups ensures a broad ownership of the results and follow-up on the recommendations stemming from the evaluation.

The committee’s responsibilities include:

Membership of the evaluation steering committee

Members of the evaluation steering committee may include:

The specific role of governing bodies may differ across evaluations and larger evaluations may also have additional reference, learning or working groups focused on specific issues or themes (some could also include learning networks or hubs). These groups can be formalized with terms of reference specifying their roles and expected engagement or can be more informal. In most cases, the evaluation manager will coordinate and convene these groups.

Regardless of the structure and its degree of formality, the chosen governance mechanism needs to allow local partners and stakeholders and other key evaluation users to participate meaningfully, freely and without repercussions. That may mean:

Participants should be free to communicate with all other members of the groups horizontally and vertically. There should be clarity on how the input provided is going to be tracked and used, and a feedback mechanism should be established to ensure that participants understand how their input has influenced the evaluation.  

5. Ethical considerations in evaluation

Guiding questions

All those engaged in commissioning, designing, conducting and managing evaluations should conform to agreed ethical standards. Ethical principles for evaluation include:

Contracted evaluators and donors who fund evaluations have a responsibility to those who will be affected by the evaluation. They need to consider how the evaluation will address the following areas, some of which are the responsibility of the evaluation managers, while others may need to be clearly articulated in the terms of reference for the evaluation team to address.

5.1 Mitigate power dynamics and avoid causing harm

Individuals who are planning, conducting and managing evaluations should:

There are many types of harm to anticipate and consider in evaluations. Examples include discomfort, embarrassment, intrusion, devaluation of worth, unmet expectations, stigmatization, physical injury, distress and trauma. Political and social factors may also jeopardize the safety of participants before, during or after an evaluation. Evaluation managers should discuss these issues and potential mitigation strategies beforehand with the implementing partner(s).

5.2 Respect stakeholders’ time

To ensure a fair and effective participatory process, it is important to manage power dynamics carefully and to consider participants’ time and costs, including transportation, childcare and Internet access for virtual participation.

Involving participants, such as recipient organization staff, in reviewing, analyzing and making sense of data can cause harm to the organization if it takes time away from other essential activities. A feminist evaluation often requires a significant time commitment. It is important to ensure that participants are not overburdened and that the process is of value to those participating in the evaluation.

Participatory vs. extractive

Efforts to increase the involvement of project participants in the evaluation, without considering how findings will be returned to them or how they will benefit from these efforts, may result in extractive experiences. This is particularly true for the participation of women, who tend to bear a large share of household and childcare responsibilities, and of other marginalized groups, who take time away from other essential productive activities.

If the evaluation process and purpose are not clear, project participants may also have false expectations about how their input will be used or how they and their communities may benefit (e.g. will their groups or communities receive more funding in the future if they participate or treat the evaluators in a particular way?).

5.3 Engage participants in data analysis and sensemaking

Collaborative knowledge creation is critical to counteracting power imbalances or misinterpretations. Certain methods such as storytellingFootnote 12 or outcome harvestingFootnote 13 may be well-suited to support collaborative knowledge creation.

Participants are encouraged to share their stories in the way they wish and to shape the sensemaking process.Footnote 14 This is a process in which people develop a shared understanding. It assumes that individuals have different interests and perspectives, and often see information in different ways. When used for monitoring and evaluation purposes, sensemaking can draw on information acquired through both formal and informal processes.Footnote 15

5.4 Clarify data ownership

Issues of data ownership need to be clarified at the beginning of the evaluation process. A feminist evaluation encourages as much of the data ownership as possible to reside with local stakeholders and communities. This empowers these stakeholders to have a final say in how data is used.

The storage of data is a key consideration for the safety and security of at-risk groups, such as two-spirit, lesbian, gay, bisexual, transgender, queer, intersex and additional sexually and gender-diverse people (LGBTQI+), women human rights defenders, and Indigenous peoples.

Practical tips

In consultation with the affected groups, adjust engagement strategies to eliminate the risk.

6. Develop the evaluation terms of reference

The terms of reference (ToR) define the key parameters of the evaluation. The development of an accurate and precise ToR is critical for hiring the right evaluator, guiding the evaluation and managing a high-quality evaluation. The initial planning and consultations that take place before the launch of the evaluation will contribute to the development of the ToR.

Below are the recommended steps for evaluation managers to take in this stage:

  1. Program managers should contact their internal evaluation unit (if applicable) and involve them early in the process. Conversely, if evaluation managers are responsible for planning the evaluation, they should work closely with their program counterparts.
  2. If an evaluation steering committee is established, decide together with the committee on the purpose, objectives and scope of the evaluation (see section 6.1).
  3. Develop the evaluation questions or areas of investigation jointly with the key users/evaluation committee (see section 6.2).
  4. Agree on the evaluation timelines with key users/stakeholders.

Practical tips

6.1 Identify the purpose and objectives of the evaluation

Guiding questions

Feminist evaluations need to demonstrate a clear intention about the purpose and use of findings to improve the work of achieving gender equality outcomes and changes in power structures. Evaluation may be conducted for the purposes of:

Ultimately, evaluation seeks to inform social action, help solve social problems and contribute to organizational or social value.Footnote 16

The learning objectives of the primary users, particularly those who are marginalized or facing discrimination, should determine the purpose and objectives of the evaluation.

Some examples of the evaluation purpose(s) include:

Some examples of evaluation objectives include:

The users’ needs, purpose, objectives, and evaluation budget and timeline will determine the scope of the evaluation.  

Practical tips

6.2 Develop the evaluation questions

The evaluation manager, in collaboration with the implementing partner(s) and other users, will develop the key evaluation questions. There should be no more than 5 to 6 evaluation questions. The list should include questions related to gender equality and structural barriers in the specific context. While feminist evaluations tend to emphasize learning and the co-creation of evaluation questions, the evaluation manager may also wish to consult existing guidance when developing evaluation questions.

A list of sample evaluation questions is presented below. In all these questions, it is important to consider and assess intersectionality.

The evaluation questions may be refined during the participatory scoping phase, based on input from stakeholders and changing needs and contexts.

6.3 Qualifications of the evaluation team

Guiding questions

Having the right team is essential to the success of any evaluation. The expertise required will depend on the purpose and objectives of the evaluation and other factors. Most importantly, a feminist evaluation leverages experts from the programming country/region, such as local feminist researchers and/or evaluators, to the greatest extent possible.

The goal is to assemble a team that understands the local context and reflects it in the evaluation’s methodological considerations. The team should bring a diversity of backgrounds, experiences, skills and perspectives.

These differences enrich the evaluation because they allow the evaluators to see the world in different ways and broaden their understanding of the issues the evaluation addresses. The evaluators’ background and experience should also reflect the background and experience of the project participants.

Hiring consultants from the country/region where the project is implemented will enable the evaluation to be more culturally responsive, relevant and rigorous. The ability to speak the local language and understand the underlying social dynamics is important for mobilizing different groups to participate meaningfully in an evaluation. Hiring local evaluators to lead evaluations can also help to strengthen evaluation capacity in developing countries, while also building more diverse evaluation systems.

The required qualifications and experience should be determined based on the specific needs and budget of the evaluation. It is important to identify the most essential qualifications while not being too narrow in the requirements. It may be very challenging to attract bidders if the list of required qualifications is too long and/or too specific.

In line with the qualifications and experience criteria identified above, the evaluation manager may wish to assess the bidders’ understanding of the assignment and evaluation needs. The following elements could be considered:

Practical tips

6.4 Budget and level of effort

The budget and level of effort are key components of the ToR. A feminist evaluation may require more time (and therefore budget) in order to allow for the engagement and participation of different stakeholders. It is essential that the estimated budget be based on real costs and a sound analysis of the level of effort.
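As a purely illustrative sketch, the example below shows one way a level-of-effort analysis might be rolled up into an estimated budget. Every role, day count, rate and cost category shown is a hypothetical placeholder rather than a Global Affairs Canada figure; real budgets should be built from actual local costs and the participation needs identified during planning.

```python
# Illustrative only: all roles, days, rates and cost figures below are
# hypothetical placeholders, not prescribed or official amounts.

level_of_effort = {
    # role: (estimated days, daily fee)
    "local lead evaluator": (45, 600),
    "national gender equality specialist": (30, 450),
    "data analyst": (20, 400),
}

# Costs of enabling meaningful participation, which feminist evaluations
# often require in addition to professional fees
participation_costs = {
    "translation and interpretation": 4000,
    "participant transportation and childcare": 2500,
    "connectivity support for virtual sessions": 1000,
    "sensemaking workshops with stakeholders": 3500,
}

fees = sum(days * rate for days, rate in level_of_effort.values())
participation = sum(participation_costs.values())
contingency = 0.10 * (fees + participation)  # buffer for iterative, adaptive steps

total = fees + participation + contingency
print(f"Professional fees:      {fees:,.2f}")
print(f"Participation costs:    {participation:,.2f}")
print(f"Contingency (10%):      {contingency:,.2f}")
print(f"Estimated total budget: {total:,.2f}")
```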

7. Managing feminist evaluations

Managing feminist evaluations requires that the evaluation manager and everyone involved do things differently. Feminist evaluations differ from other types of evaluations in that they:

The following tips focus on specific steps in managing an evaluation contract.

7.1 Orientation meeting with the evaluation team

After the contract has been signed, organize an orientation meeting with the evaluation team.

7.2 Managing the evaluation

The evaluation manager leads the evaluation together with the evaluation steering committee. In most cases, the evaluation manager will be the main point of contact for the evaluation and will facilitate communication between the evaluation team, the evaluation steering committee and other stakeholders (as appropriate).

The evaluation manager will coordinate meetings and inputs from the evaluation steering committee and other stakeholders, unless an alternative arrangement is agreed among key evaluation stakeholders.

Evaluation managers are encouraged to play a facilitation role in shifting power for the benefit of local stakeholders in the conduct of the evaluation.

The evaluation manager and the evaluation steering committee should periodically check in with the evaluation team and the implementing partner(s)/local partners to see how the process is going. The following guiding questions can help assess progress and identify any adjustments that need to be made:

7.3 Reviewing evaluation deliverables

The evaluation manager, together with the evaluation steering committee and other stakeholders, is responsible for reviewing deliverables submitted by the evaluator. They also ensure that all deliverables meet the quality standards established for the evaluation. It is up to the evaluation manager and the evaluation steering committee to decide who should provide input on which deliverables.

Who should be involved in the process?

While these groups will probably already be represented on the evaluation steering committee, new stakeholders may emerge during the evaluation process.

Broad consultation is encouraged. However, some users may choose to limit their involvement due to limited staff capacity (or interest). Refer to Annex III: Elements to look for in feminist evaluation deliverables. It lists some elements that should be reflected in each deliverable.

The evaluation steering committee may choose to follow an established set of quality standards for evaluation to assess the quality of deliverables.

7.4 Management response

Once the evaluation report has been finalized, it is common practice to develop a management response that establishes how each organization will follow up on the evaluation recommendations. However, the key users/stakeholders may agree on an alternative mechanism to keep parties accountable for responding to the recommendations.

7.5 Dissemination and learning

Feminist evaluation challenges evaluators to consider who needs to know the findings, who wants to know the findings, and who should know the findings. Understanding these different perspectives informs not only how the information is written but also how it should be presented and shared at the conclusion of the evaluation.

Planning for the dissemination of evaluation knowledge products should begin early in the evaluation planning process (see Annex II, Evaluation Use and Influence Plan). The Use and Influence Plan should outline the products that the different users need and how they plan to use the knowledge and learning gained from the evaluation.

Additional dissemination products and strategies may be considered at the end of the evaluation. There may be important lessons that the stakeholders would like to share with broader audiences or specific project implementation strategies that stakeholders would like to showcase.

The advocacy/communication needs of relevant stakeholders may also have changed with potential shifts in their socio-political contexts. All these needs can lead to a variety of knowledge products and engagements, such as:

It is important to remember that in a feminist evaluation, partners and stakeholders are the primary owners of the knowledge and should be consulted on its use prior to any further dissemination. It is also imperative to ensure the protection of any evaluation participants who may be harmed if identified through any of the dissemination efforts.

Some additional dissemination strategies may include:

This guide will continue to be adapted and strengthened as we learn more about feminist evaluation and how to apply it in various contexts. We welcome all feedback. Please send any comments to evaluation@international.gc.ca.

Annex I: Overview of feminist evaluation and good practices

Feminist evaluation

Feminist evaluator

Notes on methodology

Good practices

Annex II: Guidance on the evaluation use and influence plan

The Evaluation Use and Influence Plan aims to promote opportunities for stakeholder involvement and iterative learning throughout an evaluation in order to enhance the use of evaluation results.

The tip sheet helps you consider and identify:

Tip 1: Consider which groups have capacity to act on the evaluation findings and in what ways

Tip 2: Identify the potential barriers or constraints to use. For example, factors might include:

Tip 3: Agree on the plan with as wide a range of stakeholders as possible to gain buy-in. Early engagement also helps manage stakeholders’ expectations.

Who should be involved in this discussion: Evaluation managers can facilitate the discussion at the initial stages of the evaluation planning. They are encouraged to collaborate and consult widely, especially with those who can represent end-user groups or have knowledge of users’ information needs. If an evaluation steering committee has been established, it should also be involved in preparing the plan.

Identify the intended end users: Evaluation managers may consider conducting an analysis to identify the range of potential users for the evaluation. After conducting this type of analysis, evaluation managers can use the Evaluation Use and Influence Plan to facilitate a discussion with key partners and stakeholders.

Use: Evaluation managers should verify with identified users what they hope to do with the evaluation findings. Knowing what information is needed by each group will inform the evaluation questions asked. The intended purpose will also affect the type of evidence generated (e.g. data gathered, data collection methods used). These elements will inform the development of the terms of reference.

User engagement: Evaluation managers should clarify with identified users what role they wish to take within the evaluation process and identify opportunities for the users to be part of the evaluation (e.g. participate in the committee, formulate evaluation questions, discuss appropriate ways to collect data from target groups, participate in data analysis and sensemaking, be engaged throughout the process, etc.). Evaluation managers should share knowledge about and from the evaluation in a format that is tailored to the needs and preferences of the various user groups (e.g. accessible language, graphic format). Strategies can include opportunities for user groups to engage with learning and evidence throughout the evaluation, and products/activities that are appropriate, relevant and appealing to different intended users. Note that the final evaluation report is rarely the best way to ensure impact and influence. It is important to consider whether all users can engage with the evaluation in their preferred way and whether their needs are being met. This will help determine whether capacity building is needed to support them throughout the process.

Responsible party: Each identified strategy should have an individual or individuals responsible for its completion.

Timing: To the extent possible, schedule the evaluation to accommodate the upcoming planning and decision-making timelines of the various user groups. Time stakeholder engagement opportunities and products to meet those needs.

Plan for follow-up: Follow up with the end users to understand if and how they used the evaluation to inform their work. This can be done through a survey or feedback form, or through informal conversations.

Iterate: As needs change, evaluation questions or other elements of the evaluation plan may change. The terms of reference can include a statement that the evaluation plan must be revised as necessary to address changing needs during the evaluation process.

Evaluation use and influence plan

The following elements can be used by evaluation managers to guide conversations about the intended use and influence of an evaluation. Together, they make up what is called an evaluation use and influence plan. This type of plan can help facilitate conversations with key partners and stakeholders and guide the different stages of the evaluation. Here is what typically goes into an evaluation use and influence plan. (Remember that any actions that will be the responsibility of the evaluation consultant should be included in the terms of reference.)
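As a minimal sketch only, the structure below shows one way the elements described above (intended users, use, engagement, responsible party, timing and follow-up) might be recorded for a single user group. The group, uses and strategies named are hypothetical examples, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class UsePlanEntry:
    """One entry of an evaluation use and influence plan (illustrative fields only)."""
    intended_user: str       # who is expected to use the evaluation
    intended_use: str        # what they hope to do with the findings
    engagement: list[str]    # how they will take part in the process
    products: list[str]      # knowledge products tailored to this group
    responsible_party: str   # who will make this happen
    timing: str              # when, relative to the group's decision points
    follow_up: str           # how use will be checked after the evaluation

# Hypothetical example entry
example = UsePlanEntry(
    intended_user="Local women's rights organization (implementing partner)",
    intended_use="Adjust outreach strategies for the next project phase",
    engagement=[
        "member of the evaluation steering committee",
        "co-develops evaluation questions",
        "participates in data analysis and sensemaking workshops",
    ],
    products=["plain-language summary in the local language", "short slide deck"],
    responsible_party="Evaluation manager, with the evaluation team",
    timing="Findings shared before the partner's annual planning meeting",
    follow_up="Informal check-in six months after the final workshop",
)
```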

Annex III: Elements to look for in feminist evaluation deliverables

1. Work plan

In a feminist evaluation, evaluators work with partners and stakeholders to co-create the different parts of the evaluation. This includes the final evaluation questions; appropriate data collection tools; who should participate in data collection; how to make sense of collected data (analyze); and deciding on the type of evaluation products that work for their evaluation uses (e.g. a long final report is rarely the most suitable product). The work plan should reflect these features of feminist evaluation.

It is recommended that the work plan include an evaluability assessment (unless this has been done as a separate component prior to the launch of the evaluation). The evaluability assessment outlines the following:

The work plan also:

2. Evaluation report or other final evaluation product

User-focused knowledge products. All evaluation products should be written in a language that is accessible to the key users, and all products should be specifically reviewed for any biases. To add a human touch to the reports, they can include photos (with consent), and have rich, focused stories or supporting quotes that bring the voices of women and marginalized people to life.

The evaluation report is still the most common final evaluation product. However, the rich data and findings of an evaluation can also be captured in other formats (e.g. a deck, a short document on a particular group/country/theme etc.). The final evaluation product should:

In addition, the evaluation team should ensure that partners and stakeholders understand the process by which the findings and recommendations were formulated and that they agree with the interpretation of the findings and with the recommendations.

Other deliverables (as applicable)

Some evaluations will include additional deliverables such as an executive summary, an infographic, impact or success stories etc. These products should:

Annex IV: Resources

Global Affairs Canada

Government of Canada

External

Online tools
