
EVALUATION POLICY
Results-based Management and Accountability Framework (RMAF)


ACKNOWLEDGEMENT

INTRODUCTION

PROFILE OF THE POLICY

ONGOING PERFORMANCE MEASUREMENT STRATEGY

EVALUATION STRATEGY - ISSUES

EVALUATION STRATEGY - DATA COLLECTION

EVALUATION STRATEGY - PERFORMANCE INDICATORS

REPORTING STRATEGY


 

Centre of Excellence for Evaluation
Treasury Board Secretariat

Acknowledgement

This RMAF was developed by the Centre of Excellence for Evaluation in collaboration with an inter-departmental Working Group of evaluation representatives and the Evaluation Senior Advisory Committee, and with the assistance of Hara Associates Inc.

The new role for evaluation sees the principles of evaluation embedded into management practices, facilitating the move towards results-based management. This strategic plan adheres to these emerging principles.

As part of these policies, TBS created a Centre of Excellence for Evaluation and a Centre of Excellence for Internal Audit. The broad mandates of these Centres are derived from the new policies, which require TBS to take a leadership role for the respective communities across the federal government. The Centre of Excellence for Evaluation was created on April 1, 2001, and is headed by a Senior Director, with a core group of 12 FTEs assigned to the Centre. The Centre is currently completing staffing and has begun a series of proactive outreach activities in accordance with its mandate.

Introduction

On April 1, 2001, Treasury Board replaced the former policy on Review with two separate policies: one for the practice of evaluation within the federal government and one for internal audit. Both policies are to be evaluated on an interim basis after 18 months (fall 2002) and fully evaluated at five years (2005/2006).

In parallel with establishing a new Evaluation Policy (the Policy), Treasury Board also funded the first two years of a proposed four-year program to assist implementation. Departments and agencies that submitted qualifying Business Cases are receiving additional resources to improve their evaluation capacity (Business Case Funding). A Centre of Excellence for Evaluation (CEE) has also been established to monitor the success of the Policy and to assist in implementation. Funding for the first two years of the program is $8.6 million. Funding for the third and fourth years is contingent on the interim evaluation of the Policy in the second and third quarters of 2002.

This RMAF defines the strategy for both the 18-month and 5-year evaluations of the Policy and its associated measures: the Business Case Funding and the Centre of Excellence. The RMAF defines the performance indicators, the strategies for data collection and reporting, and the evaluative questions that will be used to judge the progress and success of Policy implementation and the actual contribution of the CEE to it.

Ongoing Performance Monitoring for the Evaluation Policy

A key undertaking accompanying the approved Policy was that Treasury Board Secretariat (TBS), in consultation with its stakeholders, would include in this RMAF a measurement strategy for tracking the ongoing performance of the policy.

Consistent with this undertaking, an annual system of performance monitoring is proposed, with roles and responsibilities for maintaining key performance indicators defined for the department/agency and Treasury Board partners who share responsibility for implementing the Policy.

Shared Responsibility

Responsibility for Policy implementation is shared among departments and agencies, whose evaluation functions are directed by the Policy, and central agencies such as Treasury Board and the Privy Council Office, who must provide the appropriate support.

In the context of shared responsibility, the CEE is responsible for the evaluations of the Policy, and for the ongoing monitoring of the function. Departments and agencies are responsible for providing the necessary data collection and self-assessments to support these efforts.

Specific roles and responsibilities for departments, the CEE, and other stakeholders are proposed within this document.

Development of the RMAF

In the spirit of partnership, this RMAF was developed in consultation with the Senior Advisory Committee (SAC) on Evaluation and an ad hoc working group established by SAC. SAC membership is drawn from department and agency members of the evaluation community within the federal Public Service. Drafts of key elements of the RMAF, including the logic model, roles and responsibilities, and evaluation questions, were also presented at the Heads of Evaluation Strategic Planning Session in October 2001.


PROFILE OF THE POLICY

Context: Renewal of the Evaluation Function

The new Evaluation Policy was introduced within a climate of renewal for the federal Public Service. "The revised policies on internal audit and evaluation are part of the Government's ongoing commitment to continuous management improvement and accountability" (Minister Robillard, Feb. 14 press release). Both the Evaluation and Audit policies are the result of parallel reviews that included public and private sector consultations. Improved internal audit and evaluation support the management agenda, as articulated in Results for Canadians: A Management Framework for the Government of Canada (Treasury Board Secretariat, 2000).

Focus on Serving Results-based Management

The revised Evaluation Policy gives evaluation a key role in supporting public service managers as they manage for results. Evaluation can support managers' efforts to track and report on actual performance and help decision makers objectively assess program or policy results. This distinguishes evaluation from internal audit - a function that provides assurances on a department or agency's risk management strategy, management control framework and information, both financial and non-financial.

The stated objective of the policy is "to ensure that the government has timely, strategically focused, objective and evidence based information on the performance of its policies, programs and initiatives to produce better results for Canadians."

Acknowledged Capacity Gap

Along with other parts of the Public Service, the evaluation function is facing the challenges of renewal. The number of personnel and the level of other resources devoted to evaluation have declined significantly since the early 1990s. An administrative survey of major departments and agencies prior to the introduction of the new Policy suggests that capacity may be as low as 56% of the level needed to fully implement the new requirements (internal communications). In addition, there is the challenge of the aging public service and the associated high departure rates among experienced evaluation staff.

Added to this capacity gap is the increased scope of work required by the new Evaluation Policy. To assist in closing this capacity gap, Treasury Board has established the previously mentioned Business Case Funding for qualifying departments and agencies, and the Centre of Excellence for Evaluation.

Progress in closing this capacity gap is one of the significant evaluation issues for this RMAF.

Pressure from Related Policies - Transfer Payments Policy and Active Monitoring Policy

An important environmental factor for the Evaluation Policy is the pressure on capacity created by other related policies on risk management and results-based management. Two policies are having a particularly significant impact.

  • Transfer Payments Policy. Treasury Board's recent Transfer Payments Policy requires all department/agency grants and contributions programs to develop RMAFs before they can be approved. RMAFs are part of the core work of the evaluation function (see Policy requirements below). This has created a significant backlog of work in creating RMAFs. An even larger future workload awaits department evaluation units and program units in carrying out the performance monitoring and evaluation work committed to in those RMAFs.
  • Policy on Active Monitoring. A key component of "Results for Canadians", active monitoring requires departments to actively monitor management practices and controls used to deliver programs. The intent is to ensure early warning and effective control, without returning to the command and control approach of years past. Implementing active monitoring means increased demands on the role of the evaluation function in each department. Evaluation is a key tool supporting results-based management, accountability, and reporting.

Requirements of the Policy

A copy of the new Policy is provided in Appendix A. Key requirements are:

  • A required capacity, integrated with senior management. Deputy heads are to establish an appropriate evaluation capacity. They should appoint a senior head of evaluation and establish an evaluation committee chaired by a senior departmental executive. Departments/agencies may still combine the audit and evaluation functions in one branch.
  • Increased Scope. Evaluations are now to cover policies, programs and initiatives, rather than just the traditional program areas. Further, they are to cover any of these delivered through partnership mechanisms (inter-department, inter-governmental, etc.).
  • Increased Emphasis on Performance Monitoring & Early Results. Departments are to embed the discipline of evaluation into the life-cycle of programs, policies and initiatives to:
    • Develop results-based management and accountability frameworks (RMAFs) for new or renewed policies, programs and initiatives.
    • Establish ongoing performance measurement practices.
    • Evaluate issues related to early implementation and administration of the policy, program, or initiative.
    • Evaluate issues related to relevance, results, and cost-effectiveness.

To this end, Heads of Evaluation are to work with managers to help them enhance design, delivery and performance measurement.

  • Strategic Planning for Evaluation. To ensure the most effective balance of evaluation work, and to best serve department and government priorities, departments/agencies should develop a strategically focused evaluation plan.
  • Integration with Management and Strategic Decision-making. The policy defines factors contributing to success of evaluation, including establishing a conducive environment where managers embed the practice of evaluation into their work; and where evaluation discipline is used in synergy with other management tools to improve decision-making. Heads of Evaluation are to work to ensure that evaluation in their organization is healthy with respect to these factors. To this end, there is an emphasis on objectivity of the evaluation function, not independence.
  • Revised Standards of Practice. The policy includes a simplified and consolidated set of guidelines for professional practice in evaluation.

All of the above is to achieve the objective of the policy "to ensure that the government has timely, strategically focused, objective and evidence based information on the performance of its policies, programs and initiatives to produce better results for Canadians."

Centre of Excellence for Evaluation (CEE)

The CEE was established within the Treasury Board Secretariat (TBS) to provide leadership and to support implementation of the new Policy. It serves as a focal point for leadership of the evaluation function within the federal government; takes the initiative on shared challenges within the community, such as devising a human resources framework for long-term recruiting, training and development needs; and provides support for capacity building, improved practices, and a stronger evaluation community within the federal Public Service.

The CEE's activities are to include:

  • Policy Implementation. The CEE supports system-wide implementation on behalf of TBS. This includes ensuring that the Policy and associated standards are well understood by departmental management and deputy heads, through presentations and representation at department senior management committees and Audit & Evaluation Committees. It also includes coordination of the government's $8.6 million investment over two years to support implementation (Business Case Funding to departments and agencies).
  • Monitoring. The CEE, in partnership with departments and agencies, is responsible for monitoring policy implementation and the health of the evaluation community across government. Activities to collect and share data include: monitoring evaluation plans and reports; review of RMAFs; and a variety of data collection and reporting roles identified in this RMAF.
  • Capacity Building. The CEE will, in consultation with clients, partners and stakeholders, develop a Community Development Strategy and Plan. The plan will address a variety of capacity needs, including a human resources framework for recruiting, training and development. Supporting activities may include:
    • Information sessions.
    • Internship program for evaluators.
    • Mentoring programs.
    • Training & development.
    • Recruitment strategies.
    • Workshops.
    • Electronic tools, templates and guides.
    • Advice on an as-needed basis.

For the most part, the CEE will not be directly involved in the delivery of training, but may play a role in development, partnering, and coordinating, or in endorsing content.

  • Strategic Advice. The CEE will provide advice, guidance and support to departments on how to meet Policy requirements. The CEE will also share strategic and ad hoc advice with key stakeholders in departments and in TBS on horizontal, cross-jurisdictional or system-wide issues.
  • Communications and Networking. In partnership with clients and stakeholders, the CEE will foster increased communication and networking within the evaluation community. This will include facilitating communications and networking opportunities, such as hosting annual events, creating an Evaluation Network, and creating and maintaining a web site.

Appendix B provides a more detailed statement of the Strategic Role for the TBS Centre of Excellence for Evaluation.

Partnership in Implementation

Responsibility for implementing the Evaluation Policy is shared among all stakeholders. It is a partnership between the Treasury Board Secretariat and departments and agencies. Within departments, success is not the sole responsibility of Heads of Evaluation - departmental senior managers and program managers each have a role in ensuring a conducive environment "where managers embed the practice of evaluation into their work" and "where evaluation discipline is used in synergy with other management tools to improve decision-making" (Evaluation Policy - "Effective Deployment of Evaluation"). Within Treasury Board, the CEE and other parts of TBS must work together in supporting the evaluation community and in "using the products of evaluation to support decision making at the centre" (Evaluation Policy - "Effective Deployment of Evaluation"). Finally, other senior agencies, such as the Privy Council Office, also have a role in ensuring the products of evaluation are well used. To this end, Table 1 describes the roles and responsibilities for implementing the Evaluation Policy.

This partnership is also an integral part of the joint effort to implement Active Monitoring of programs, policies and initiatives by the federal government.

Table 1: Evaluation Policy Roles and Responsibilities

Departments & Agencies

Ensure a supporting environment that:
  • Values and implements Results-based Management.
  • Supports maintenance of ongoing performance monitoring for policies, programs & initiatives, as well as for evaluation practices.
  • Recognizes managers for implementing performance monitoring and for learning from both good and poor results.
  • Uses the evaluation function to support strategic decision-making
  • Provides adequate resources to fulfill the expanded scope of the policy and implement Results Based Management.
  • Maintains an objective competent evaluation workforce through retention, recruiting, and training & development.
  • Integrates performance monitoring and evaluation with other accountability and performance assurance systems.

Implement the Evaluation Policy to achieve:

  • An active monitoring and planning structure for evaluation involving senior management and integrated with department/agency strategic decision-making.
  • Integration of ongoing performance monitoring into policy, program and initiatives.
  • Adequate evaluation coverage of policies, programs and initiatives.
  • Good quality evaluations and RMAFs consistent with the Standards of Practice set by the policy and by TBS.
  • Improved design, implementation and resourcing of programs, policies and initiatives, through timely and actionable evaluation recommendations.

Work in partnership with the CEE and other departments and agencies to:

  • Actively monitor the health of the evaluation function and the implementation of the Evaluation Policy.
  • Achieve and maintain an effective evaluation function in the Public Service.

TBS (& PCO)

Ensure a supporting environment that:
  • Uses evidence based performance monitoring and evaluation results to make decisions (e.g. TB submissions, Memoranda to Cabinet, budgets)
  • Recognizes good planning for performance monitoring and evaluation in submissions and memoranda.
  • Provides adequate resourcing to implement Evaluation Policy requirements.
  • Ensures a consistent policy regime with other policies impacting the Evaluation function.

TBS Centre of Excellence for Evaluation

Be responsible to Treasury Board, in partnership with departments and agencies, for:
  • Monitoring and reporting on the implementation of the policy.
  • Monitoring and reporting on the health of the evaluation function.
  • Providing leadership to the evaluation function.
  • Providing advice to TB on evaluation related questions.

Serve as an advocate for the evaluation community of the Public Service by:

  • Advising TBS, PCO and other central agencies on consistency of vision and approach between other policies and the Evaluation Policy.
  • Communicating to departmental senior managers the Evaluation Policy and the role of evaluation in the modern management framework.
  • Communicating to Parliament and Canadians the value and contributions of evaluation.
  • Assessing and making representations on resource requirements.

Build department/agency evaluation capacity by:

  • Providing tools, guides, best practices, policies, and advice.
  • Developing a community-based human resources framework, including training and certification, competency profiles, and a human resources strategy.
  • Fostering an informed and collaborative evaluation community through consultation, workshops, information sessions, events and other networking activities.

From Activities to Outcomes

Activities and Outputs

Figure 1 illustrates how the Evaluation Policy is intended to achieve its desired outcomes. Five kinds of activities are undertaken by both departments/agencies and by TBS through the Centre of Excellence:

  • Resourcing provides the human and financial resources necessary to carry out the evaluation function.
  • Training and development includes the provision of developmental programs, workshops, development experiences and other activities to ensure staff are qualified.
  • Facilitation activities by both departments/agencies and the CEE lead to information sharing events, guides to best practices, workshops, and other outputs that develop and share information and practices and provide professional tools.
  • Planning occurs at a number of levels, including the planning of individual evaluations, the planning by departments/agencies of which priority areas to evaluate, and the broad strategic planning across the federal government for the function as a whole. These in turn lead to outputs such as RMAFs, plans for specific evaluations, and annual and multi-year Evaluation Plans.
  • Evaluation is the actual work of examining key issues in systematic and disciplined fashion, applying appropriate methodologies and standards so as to produce credible and timely assessments. Key outputs are evaluation studies and reports, and assistance provided to managers in developing ongoing performance measurement. Reports include those generated within departments/agencies for themselves and their partners, and those generated to address government-wide issues.

Immediate Outcomes

Resourcing, training, and development are expected to lead to a competent evaluation workforce and adequate capacity to carry out the needed work. The facilitation outputs, such as guides, tools and information sharing, are expected to result in improved evaluation practices. Better planning is expected to lead to more strategic use of evaluation within departments and across government, and to improved integration of evaluation with the planning, design, management, and accountability needs of programs/policies/initiatives. Evaluation studies and reports are expected to provide evidence-based reporting on a timely and credible basis.

Figure 1: Logic Model of TB Evaluation Policy

Intermediate Outcomes

A competent workforce with sufficient capacity, following improved evaluation practice integrated with the work of managers, and producing timely, credible, evidence-based reporting, is expected to result in:

  • Improved use of evaluation results for department decision-making.
  • Improved design, implementation and resourcing of programs, policies, and initiatives.

Final Outcome

Improved use of evidence-based results in decision making, and improved programs, policies and initiatives, will in turn lead to better results for Canadians, better accountability to Parliament, and more effective use of resources.


ONGOING PERFORMANCE MEASUREMENT STRATEGY

As part of the decision to implement the new Policy, there is a Treasury Board requirement to develop, in consultation with stakeholders, an ongoing performance measurement strategy for the policy.

A set of core measures of performance is proposed for monitoring on an annual basis. These measures will be supplemented by one-time data collection efforts in the 18-month and 5-year evaluations, discussed in later sections.

Shared Responsibility

The monitoring strategy calls for joint implementation by departments and by the Centre of Excellence. Respective roles are proposed below. While some effort is required, it is recognized that accurate reporting on the health of the evaluation function is of mutual interest and benefit to all stakeholders. For example, the early results will assist TBS in assessing whether to extend Business Case Funding for an additional two years (2003/04 and 2004/05).

Departmental Role in Ongoing Performance Measurement of the Policy

Departments and agencies are where the principal impacts of the Policy will be felt. There is a need to assess progress in a way that respects:

  • Self-assessment of progress by each department/agency.
  • The need for a shared format to permit the CEE to report on the health of the evaluation function and to efficiently roll up to a government wide assessment of progress.

Baseline Data Collection - 4th Quarter 2001/02

As an initial step in developing baseline data, departments would be asked to respond to a request for baseline data issued by the CEE during the fourth quarter of 2001/02. The survey would collect basic information available or anticipated for the fiscal years 2000/01 and 2001/02. The initial survey is proposed for these reasons:

  • There is an immediate need for baseline information to assist the CEE in monitoring and reporting on the health of the evaluation function, as required by Treasury Board.
  • Timely collection is necessary to serve as a starting point for the 18-month evaluation, whose work must begin early in 2002/03 to feed into the next budget cycle.
  • The initial effort of the survey will help departments prepare for implementing the annual process proposed below.

Annual Process

The Evaluation Policy itself calls for departments to develop a "strategically focused plan" for evaluation. Most departments already maintain such a plan. Heads of Evaluation are to forward such plans to the CEE annually (once they have passed through the department approval process). It is proposed that:

  • Departments and agencies include in their plans a section to report on commonly agreed indicators to be shared across departments. A guideline to be issued by the CEE will include a suggested title; for the purposes of this document, the section is referred to as the "Progress Report in Implementing the New Evaluation Policy."
  • This section of the Evaluation Plan would be revised annually.
  • The section would contain indicators and assessments as proposed below.
  • CEE, in consultation with departments and agencies, will develop and circulate a guideline for the format of the monitoring section, including the provision of an optional statement describing its purpose and context.
  • The first edition of this section would be incorporated in the next round of the department's evaluation plan, following the issue of the Guideline. Given the annual planning and budgeting cycle, some departments may be able to accomplish this in the final drafts of their 2002/03 plans. All departments would accomplish this in their 2003/04 plans.

The proposed time-line of events for 2001/02 and 2002/03 is described below. Key events for the 18-month evaluation are also shown. Data collection and preliminary analysis for the 18-month evaluation are intended to be complete prior to the Strategic Planning Meeting of Heads of Evaluation in the fall of 2002.

Initial Time-Line

Commitment to Gap Analysis

It is recognized that a gap in evaluation capacity exists in many departments, and that the evaluation function in government is in a period of renewal. It is also recognized that the new Policy expands the scope of evaluation activities to include evaluation of policies, initiatives, and shared programs. In parallel, the new Treasury Board Transfer Payments Policy, which requires that RMAFs be in place for grants and contributions programs, also has implications for workload and capacity, both in the immediate future and in the evaluation activity that will follow.

Therefore, the Progress Report section of the Evaluation Plan will include an ongoing assessment of progress in closing the gap between current capacity and the capacity necessary to fully implement the Policy. It will also address current challenges and the likely time-lines for closing that gap.

In making this assessment, Departments may draw upon earlier gap analyses provided to Treasury Board Secretariat, through previous surveys and submissions for funding.

Departmental Indicators for Ongoing Monitoring

The following indicators are proposed for inclusion, and annual revision, in the Progress Report section of department Evaluation Plans. Where the department capacity does not exist to generate the proposed indicator, the department will either report plans and timelines for the development of such capacity, or an assessment of why such a capacity is not needed or not possible at this time. In the annual revisions to the Progress Report section, changes in indicators, and degree of success in fulfilling previous timelines will be briefly summarized. Indicators are proposed for the following areas. A guideline issued by the CEE would elaborate on each:

  • Current Infrastructure and Capacity: Proposed indicators are:
    • Membership of the department Evaluation (or Audit & Evaluation) Committee.
    • Brief description of role of the Committee, or alternate mechanism if there is no Committee.
    • Previous fiscal year's frequency of meetings of the Committee, and the name and position of the person who chaired.
    • Number of FTEs currently devoted to the evaluation function, by group and level.
    • Size of consulting budgets.
  • Key Outputs. Proposed Indicators, divided by policy, program or initiative, are:
    • Number of evaluation reports completed.
    • Number of RMAFs completed.
    • Number of performance measurement studies.
    • Other special studies/reviews.

A list of the above, showing the document title and the relevant program/policy/initiative, would be provided or appended. Significant outputs not covered by the above might be addressed in the summary narrative section defined below.

  • Outcomes. Proposed indicator:
    • List of major impacts or contributions for the previous year, identified by relevant evaluation, RMAF, or other project.

Examples of impacts include accepted recommendations changing programs/policies/initiatives, validation resulting in significant renewal/extensions of mandates, or improvements/achievements in establishing ongoing performance monitoring.

  • Coverage. Proposed indicators are the % of programs, policies and initiatives covered during the previous year by each of evaluations and RMAFs:
    • A-based programs.
    • Other programs
    • Grants and Contributions
    • Policies
    • Initiatives.

It is expected that coverage percentages will be measured in terms of the dollar value of expenditure for programs and for grants and contributions. For example, if evaluations completed during the year covered programs accounting for $200 million of a department's $1 billion in program spending, program coverage would be reported as 20%. Recognizing that many departments lack a framework to enumerate the universe of policies and initiatives, the percentages for these categories are expected to be rough.

  • Backlog Assessment on Required Work. This item addresses work required by Treasury Board policy, commitments made by the department in Treasury Board submissions, and any other similar commitments. The backlog of commitments not met or overdue would be reported for:
    • Number of evaluations backlogged.
    • Number of RMAFs backlogged.
    • Number of other efforts (e.g. performance measurement projects) backlogged.

The above would be accompanied by a statement of when and how the backlogs are expected to be closed and/or a forecast of the backlog under current conditions. Departments will likely wish to address the impact of the Transfer Payments Policy on RMAF requirements and future evaluation requirements under this heading.

  • Resource Gap Assessment. Proposed indicators are:
    • Number of FTEs assessed to be necessary on an ongoing basis to achieve full implementation of the Policy.
    • Any additional FTEs needed on a temporary basis to address current or anticipated backlogs.
    • Gap, if any, in FTEs relative to current levels.
    • An assessment of the level of experience and qualifications of evaluation officers relative to the average requirements of the department for implementing the Evaluation Policy and associated standards of practice. Departments would be asked to indicate whether current experience and qualifications were: a) significantly better than required; b) better than required; c) adequate; d) needing further upgrade; or e) needing significant upgrade.

Departments may wish to add a statement relating the resource assessment to the expectations of when any assessed backlog will be addressed.

  • Summary Narrative. A qualitative assessment by the department's evaluation function of changes/progress in the last 12 months, including:
    • Listing key achievements not covered above, such as major contributions of evaluation to improved performance monitoring, improved programs/policies/initiatives, and controlled risks.
    • An assessment of the degree to which the department has been successful in implementing the Policy and embedding the discipline of evaluation into the lifecycle management of policies, programs and initiatives. Key sub-topics include:
      • Department progress on integration of evaluation into management decision-making.
      • Department progress in establishing ongoing performance measurement practices.
      • Any key barriers to implementation of the Evaluation Policy.

It is recognized that some departments/agencies lack a framework to assess the degree to which programs/policies/initiatives are covered by evaluation work or ongoing performance monitoring. In these cases, it is anticipated that initial progress reports and plans will address timing and milestones for implementing this framework. It is also recognized that the degree of organization and formality of reports will vary between small and large departments.

CEE Role in Ongoing Performance Measurement of the Policy

Roll-Up of Department/Agency Assessments in Evaluation Plans

The Centre of Excellence for Evaluation will conduct an annual assessment of the health of the Evaluation Function and the implementation of the Evaluation Policy. A key input to this assessment will be a roll-up of the assessments in department/agency annual revisions to their departmental Evaluation Plans. This will be supplemented by the CEE's own activities in reviewing the quality of evaluation products, and reviewing department Evaluation Plans.

The CEE's annual report will address:

  • Current Infrastructure and Capacity. A review of current department and agency institutional infrastructure and human resources position. The infrastructure assessment will include a cross public service analysis of the membership and level of activity of Evaluation Committees, and the level of direction and content of Evaluation Plans. Also assessed will be the implications for Policy goals (e.g. integration of evaluation into department strategic decision making).
  • Outputs and Achievements of Evaluation Function. A summary of the volume of outputs, and a review of the level of contribution of Evaluation to Results for Canadians.
  • Quality of Evaluation Products. A review of current status and improvement of evaluation studies. At minimum, this will be based on random sampling at 18 months and 5 years, as proposed further below under Evaluation. Included in the review will be degree of fulfillment of the professional standards of practice established by the Policy.
  • Progress in Ongoing Performance Monitoring. An assessment of the status of departmental implementation of ongoing performance monitoring and the contributions of the Evaluation Function to progress in this area.
  • Backlog Assessment. A roll-up and analysis of departmental reports.
  • Resource Gap Assessment. A roll-up and analysis of departmental reports.
  • Summary Analysis: Health of Evaluation Function and Progress in Evaluation Policy Implementation.

To monitor performance in these areas, the CEE will employ the set of Key Performance Indicators shown in Table 2.

Monitoring the CEE's Own Activities

In the annual report, the CEE will also report on its own performance and achievements in supporting the Evaluation Function and facilitating implementation of the Evaluation Policy. Reporting will include an assessment of CEE contributions to:

  • Capacity Building
  • Strategic Advice
  • Communications & Networking

Table 2: Key Performance Indicators for CEE Monitoring of Policy Implementation & Health of Evaluation

Current Infrastructure and Capacity.
  • Frequency of meetings of Evaluation Committees and the level of the individual actually chairing them.
  • Degree of involvement of Evaluation Committees in Evaluation Plans (planning, approval, review/approval of individual evaluation studies).
  • Quality of Evaluation Plans (linked with department & government priorities, strategically focussed, based on risk assessment)
  • Number of FTE's currently devoted to evaluation function, by group and level.
  • Size of consulting budgets.

Outputs and Achievements of Evaluation Function.

  • Number of evaluation reports completed.
  • Number of RMAFs completed.
  • Number of performance measurement studies.
  • Assessment of outputs and achievements relative to previous years (Qualitative).

Quality of Evaluations.

  • Degree of adherence to standards attached to policy (e.g. consultation practices, actionable recommendations, etc.)
  • Timeliness.
  • Proportion that provide balanced reporting.
  • Average degree of credibility of results.

Progress in Ongoing Performance Monitoring.

  • Proportion of Evaluation Plans reporting achievement of adequate ongoing performance monitoring within departments.
  • Proportion of Plans reporting a supportive environment and actual progress in improving ongoing performance monitoring.

Backlog Assessment.

  • Proportion of departments reporting significant backlogs of required work, in evaluations and RMAFs.
  • Proportion of departments forecasting significant backlogs.
  • Qualitative assessment of degree of department risk left unaddressed.

Resource Gap Assessment.

  • Estimated shortfall/surplus in FTEs.
  • Number of departments reporting officer qualifications and experience adequate to average requirements.
  • Estimated shortfall/surplus in total resources.
  • Change in resources since implementation of new policy.

To monitor its own performance, the CEE will:

  • Record major activities and outputs. This includes information sessions, formalized mentoring, courses, workshops, and the development of strategic plans for the community (e.g. recruitment), tools, guides, and templates.
  • Provide forms and collect client assessments for individual courses, workshops, events, and the circulation of new tools and guides.
  • Monitor achievement of milestones identified by strategic plans, such as the human resources plan.

Aggregate effectiveness of the CEE's activities will be assessed through stakeholder and client consultation in association with the 18-month and 5-year evaluations (see Section 4, further below).


EVALUATION STRATEGY - ISSUES

Ten evaluative questions, or issues, have been identified for evaluation. These are summarized in Table 3, along with sub-issues related to each question. The text below provides a capsule summary of each question and its sub-questions. Performance indicators to resolve the questions, and associated data collection, are discussed in subsequent sections.

The ten issues are lettered A to J, and are grouped according to Success, Cost-Effectiveness, and Continued Relevance. Sub-issues under each heading progress from "output" oriented questions to more difficult to assess questions about whether the desired "outcomes" are being achieved.

The issues are also structured to address the degree to which each of the partners in implementation of the Policy has fulfilled its role. Separate evaluative questions address the role of the evaluation function within departments, the role of senior management, the role of program/policy/initiative managers, the CEE, and the role of other branches of TBS.

Progress/Success Issues

A. Have Departments Implemented the Policy?

This issue asks whether the basic features of the Policy have been put into place, ranging from expanded coverage of evaluation and RMAFs, to an effective Evaluation Committee, to a strategic Evaluation Plan. Whether adequate coverage of policies, programs and initiatives has been achieved must be judged in the context of the resources made available to the evaluation function, addressed under Issue C.

Table 3: Summary of Issues for the Evaluation Policy

PROGRESS/SUCCESS ISSUES

A. HAVE DEPARTMENTS IMPLEMENTED THE POLICY?

  1. Have departments expanded the scope of evaluation beyond traditional areas?
  2. Is there an effective Evaluation Committee or other internal process contributing to setting department priorities, planning, and decision-making?
  3. Is there an effective Strategic Evaluation Plan, with follow-up? Has capacity been established to support such planning?
  4. Is there adequate coverage of department programs, policies and initiatives by RMAFs and evaluations?

B. HAVE DEPARTMENTS MADE PROGRESS IN IMPLEMENTING ONGOING PERFORMANCE MEASUREMENT?

  1. Is there the capacity or framework to assess the degree to which performance measurement is in place?
  2. Are there ongoing performance measurement systems in place for programs, policies & initiatives that are integrated with RMAFs?
  3. Has the evaluation function contributed to the development of ongoing performance monitoring?
  4. Are department/agency environments supportive of the development of ongoing performance monitoring systems?
  5. Is performance data being generated and used?

C. HAVE DEPARTMENTS BEEN ABLE TO PROVIDE ADEQUATE EVALUATION CAPACITY?

  1. Have departments been able to increase the number of qualified personnel, within the evaluation function or elsewhere in the organization?
  2. Have previously identified gaps in resourcing been closed?
  3. Has progress on capacity been adequate relative to available resourcing? Does a relevant resource gap remain?
  4. Would a change in resourcing levels improve value for money?

D. HAS THE CEE PROVIDED A SUPPORTIVE ENVIRONMENT?

  1. Has the CEE provided appropriate tools, events, advice and other elements of a supportive environment for evaluation?
  2. Has the CEE met its Community Development Strategy goals?
  3. Is there an adequate framework in place for training and certifying evaluators?
  4. Has the CEE implemented a monitoring system?
  5. Has the CEE been effective in reporting monitoring system results and conclusions to senior agencies?
  6. Has the CEE been an effective advocate for the evaluation community in the Public Service?

E. IS THE EVALUATION FUNCTION PRODUCING TIMELY AND EFFECTIVE INSIGHT INTEGRATED WITH DEPT. DECISION MAKING?

  1. Are evaluation studies effectively using and supported by department performance monitoring systems?
  2. Is the quality of reports acceptable? Has there been an improvement? How successful has the evaluation function been in identifying potential improvements in programs, policies and initiatives?
  3. Are recommendations produced on a timely basis?
  4. To what extent has the new Evaluation Policy contributed to the above?

F. HAVE DEPARTMENTS IMPROVED THEIR USE OF EVALUATION RESULTS FOR DECISION MAKING?

  1. Has senior management confidence in the evaluation function improved?
  2. To what extent have senior managers "bought into" the revised Evaluation Policy?
  3. To what extent has a supportive department environment been established for results based management and evaluation?
  4. Is there a well functioning department/agency decision-making process for acting on evaluation study recommendations?
  5. Is there follow-up monitoring on implementation of accepted recommendations? Is senior management involved?
  6. Has the evaluation function been used well? To what extent has department senior management participated in setting evaluation priorities?
  7. To what extent have evaluation results been used in department strategic decision-making? Has there been an improvement?

G. HAS THE EVALUATION POLICY CONTRIBUTED TO IMPROVED DESIGN, IMPLEMENTATION AND RESOURCING OF DEPARTMENT/AGENCY PROGRAMS, POLICIES & INITIATIVES?

  1. To what extent have program managers "bought into" the revised Evaluation Policy?
  2. Do program managers feel "safe" in working with and implementing objective monitoring and evaluation?
  3. To what extent have programs, policies, and initiatives been improved by evaluation findings and recommendations?

H. ARE SENIOR AGENCIES (TBS, PCO) USING EVALUATION AND PERFORMANCE MONITORING RESULTS FOR DECISION MAKING?

  1. Do parliamentary reporting systems support and encourage the utilization of RMAFs, performance measures, and evaluation?
  2. Are good quality and implementable RMAFs, performance measures, and evaluation results required and/or recognized in support of submissions to senior agencies?
  3. Are commitments to provide RMAFs and performance measures followed up?
  4. Are performance results and evaluation findings used to make decisions on TB Submissions, Cabinet memoranda, and other decisions?
  5. Have senior agencies provided a consistent policy regime that recognizes the impact of other policies on the evaluation function?

COST-EFFECTIVENESS

I. IS THE EVALUATION POLICY COST EFFECTIVE?

  1. Are there ways in which the evaluation function could be delivered more cost effectively?
  2. Has the CEE delivered value for money?
  3. Are there alternative methods of organizing delivery of the Evaluation Policy?

CONTINUED RELEVANCE

J. IS THERE A NEED FOR FURTHER CHANGES TO THE POLICY OR TO ITS IMPLEMENTATION?

  1. Does the evaluation function deliver results for Canadians?
  2. Are there better ways of delivering the evaluation function within the federal government?
  3. Are there improvements that can be made in the Evaluation Policy?

B. Have departments made progress in implementing ongoing performance measurement?

Performance measurement issues are identified separately for the evaluation because of the emphasis placed upon ongoing measurement in the new Evaluation Policy. This issue starts with the most basic question: do departments have sufficient knowledge and control of performance measurement to assess how much has been achieved? Actual results will vary by department. For departments that have this basic capability, the next questions address levels of achievement in sequence. Achievement begins with operational performance monitoring, integrated with RMAFs, and ends with the effective generation and use of performance data. Contributing factors to be assessed are the supportive environment provided by departments and the role of the evaluation function itself in achieving ongoing performance monitoring.

Again, progress must be judged in the context of actual resources made available to evaluation.

C. Have departments been able to provide adequate evaluation capacity?

This is an important issue for which departments and Treasury Board have joint responsibility. It speaks to the immediate outcomes intended by the Policy: a competent evaluation workforce and adequate capacity. Measurement of capacity gaps begins with increases in personnel and other measures that departments may have undertaken to build capacity. Here, the gap analyses to be incorporated in department annual Evaluation Plans will be very useful.

Combined with issues A and B, the focus is on whether sufficient progress has been made relative to the resources provided. Although TBS has provided $8.6 million in Business Case Funding, the actual need identified by the pre-Policy survey was substantially greater. Also important is the question of the value for money generated by the current two years of Business Case Funding. It is under this issue that the question of funding for years three and four of the Business Case Funding will be addressed.

D. Has the CEE provided a supportive environment?

The CEE's own role is examined under this issue. Sub-issues begin with outputs such as tools and events, and then move on to broader questions of community development and the human resources initiatives. A monitoring system for the policy is another structure that the CEE is intended to achieve (beginning with this RMAF). Finally, the question may be asked whether the broader roles in advising TBS and serving as an advocate on behalf of the evaluation community have been met. This issue speaks to the CEE's contribution to the immediate outcomes intended by the Policy: improved evaluation practices, competent evaluation workforce, and adequate capacity.

E. Is the evaluation function producing timely and effective insight, integrated with department decision-making?

This issue speaks to the quality of the work of the evaluation function under the new Policy. It addresses the immediate outcomes: evidence based reporting and timely credible reporting. The issue of good quality evaluations is separate from whether departments make good use of them, the subject of issues F and G further below.

F. Have departments improved their use of evaluation results for strategic decision-making?

This issue speaks to the intermediate outcome intended by the Policy, improved use of results for department decision-making, and to the final outcome of better accountability to Parliament. Assessment begins by examining the pre-conditions for improved use: increased senior management confidence in the function, and a structure able to direct evaluation and to process and use its results. The final question is whether evaluation results have been used effectively.

G. Has the Evaluation Policy contributed to improved design, implementation and resourcing of department/agency programs, policies & initiatives?

This issue speaks to the intermediate outcome intended by the Policy, improved design, implementation and resourcing of programs, policies, and initiatives, and to the final outcomes of better results for Canadians and more effective use of resources. Again, assessment begins by examining important pre-conditions - whether managers support the new Policy and feel safe in cooperating with it. The final sub-issue deals with the actual outcome.

H. Are senior agencies (TBS, PCO) using evaluation and performance monitoring results for strategic decision making?

This issue examines whether TBS (and PCO where relevant) are carrying out their own role under the Policy to "use products of evaluation to inform decision making at the centre". In consultations over this RMAF, many parties noted that if RMAFs and evaluation findings did not affect funding and policy decisions at the centre, then departments and agencies would be less motivated to fully implement the Evaluation Policy.

Cost Effectiveness and Continued Relevance Issues

I. Is the Evaluation Policy Cost Effective?

Potential cost savings and alternative methods will be considered here, emerging from the analysis of the earlier issues. The value for money of the CEE will also be assessed, informed by the analysis of the success of the CEE's activities under Issue D above.

J. Is there a need for further changes to the policy or its implementation?

This is the summative question assessing the success of the Policy and its implementation, and suggesting needed changes where relevant.

Role of the 18-Month and 5-Year Evaluations

It is usual practice to resolve formative questions about outputs of a program/policy/initiative in the interim evaluation, and address summative questions of program success in the final evaluation.

However, for the Evaluation Policy it will be important to give all of the above issues some attention, even if inconclusive, in the evaluation at 18 months. Consultation with department and agency representatives over the RMAF revealed two reasons for this:

  • There is a need to resolve the continuity and level of Business Case Funding for years three and four.
  • There is a need to develop baseline assessments of the situation in departments and agencies in order to better direct policy implementation and judge progress from 18 months to 5 years of implementation.

EVALUATION STRATEGY - DATA COLLECTION

Core to the data collection strategy is the annual performance monitoring through department strategic evaluation plans, discussed in Section 2. This section addresses additional data collection associated with the 18-month and 5-year evaluation studies. Table 5 at the end of this section summarizes data collection activities, frequency, and responsibility for completion.

Role of Business Case Funding Recipients

A large number of departments/agencies are receiving additional evaluation funding through the TBS-administered Business Cases for Resourcing Evaluation program. Participants have received funding for the first two years of the four-year program. Funding for the third and fourth years is contingent on the results of the 18-month interim evaluation study. Data collected from participants is a key source for both the 18-month and 5-year evaluations.

Existing Baseline Information

There are three principal sources of baseline information on evaluation capacity and gaps:

  • Survey of Evaluation Capacity. Results of a survey of departments and agencies conducted by TBS in year 2000, prior to the implementation of the new Evaluation Policy.
  • Business Case Submissions for the Resourcing Evaluation program administered by TBS, collected in 2001. The Business Case submission guidelines requested a gap analysis.
  • Survey of Hiring Intentions. Heads of Evaluation were surveyed on their hiring intentions in November of 2001.

Data Collection at 18 Months

Eighteen months is insufficient time to fully implement the Evaluation Policy or assess its impacts. However, given the necessity of assessing whether to extend Business Case Funding to a third and fourth year, some preliminary assessments will need to be made of:

  • How much progress has been made in Policy implementation
  • How much is left to do to achieve full implementation.
  • Whether value has been achieved from Business Case funding in years one and two.
  • The benefits of extending funding to years 3 and 4.
  • The risk exposures that may occur if funding for years 3 and 4 is not extended.

Data collection at 18 months will focus on progress in initial implementation steps, such as an accountable evaluation function, expanding the scope of evaluation, and narrowing of capacity gaps. Significant effort will also be made to extend baseline data for such indicators as current satisfaction with the evaluation function by deputy heads. Data collection activities are elaborated below.

Baseline Data Collection & Department Evaluation Plans

As discussed in the time-line proposed under Performance Monitoring, there will be a preliminary baseline data collection exercise completed in the first quarter of 2002/03, and ongoing collection of the same performance information in subsequent department Evaluation Plans. The proposed common monitoring section of the Plans is expected to include at least a qualitative retrospective on changes/improvements relative to the previous year. In addition, the gap analyses may be compared to the earlier Survey of Evaluation Capacity for departments that participated in that survey.

Business Case Progress Reports & Submissions

Recipients of Business Case Funding will be asked to provide a progress report in order to support continuation of funding for the remaining two years of the program. These reports will be asked to compare progress to the gap analysis and milestones originally proposed. It will be fully recognized that most departments/agencies received substantially less funding than they applied for, and that this will affect the expected progress proportionately. Departments/agencies will be asked to report on what they have been able to achieve given their own resources and the funds made available through the Business Cases.

It is anticipated that the contents of some department/agency Strategic Evaluation Plans will also address most or all of the necessary information for a progress report. Where appropriate, departments/agencies may wish to append the Plan to a shorter letter or document providing additional content as needed.

Assessment of Sample Evaluation Reports

Departments and agencies file completed evaluation reports with the CEE. It is proposed that a random sample of 30 evaluation reports be taken to establish a baseline for quality of practice in the 18-month evaluation, with a follow-up to assess progress in the 5-year evaluation. Reports will be assessed for conformance to standards of practice, specificity of recommendations, and similar criteria. The status of recommendations within the department will be determined by telephone interviews with the department's evaluation function, the program manager, and the TBS portfolio officer. The analysis will also address the degree to which department/agency ongoing performance monitoring systems were available to be integrated into the evaluation.

The intention is to repeat this exercise at year 5 of implementation to assess any relative improvements in quality of practice.

Annual Retreat & Strategic Planning Session - Heads of Evaluation

Heads of Evaluation are among the most informed about progress and challenges in implementation of the Policy. The annual strategic planning retreat provides an opportune place to check in with the community on the status of evaluation issues. The timing of the fall retreat is approximately in line with the timing of an 18-month review. Plenary and workgroup sessions will address the progress of the Policy and the state of evaluation.

Case Studies

Two other key groups to hear from are Deputy Heads of departments, and front-line managers of programs/policies/initiatives. There is also a need to more closely assess the progress of implementation in departments, and validate the broader findings collected from department annual revisions to their Evaluation Plans.

To address both of these needs, a set of department/agency case studies is proposed. The experience of selected departments and agencies would be investigated twice, once at 18 months and again at 5 years. Findings at 18 months would largely set a baseline from which progress could be measured at 5 years, but would also capture any preliminary progress in implementation.

Key informants within the case studies would include:

  • Deputy Heads
  • Managers of recently evaluated programs, or of recent RMAF or performance monitoring efforts
  • Heads of Evaluation

Data to be collected are reflected in Table 6: Issues, Performance Indicators, and Data Sources.

A minimum of eleven case studies is proposed: seven from departments/agencies receiving Business Case funding, and four from others. The seven from the Business Case funding group would be drawn from three strata based on department size and the initial status/strength of the evaluation function as assessed during the allocation of Business Case funds:

Sample From Business Case Funding Recipients

  • Three from large departments with active evaluation functions.
  • Two from small departments with active evaluation functions.
  • Two from departments with only nominal evaluation functions.

Sample from Non-Business Case Funding Recipients

  • Two large departments
  • Two small departments

The above sample is intended to capture the impact of the Policy on departments of different sizes and with different levels of development in their evaluation capacity. The sample is also intended to help assess the impact of Business Case funding, an important question for Treasury Board in reviewing funding for the remaining two years of the program.
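As a purely illustrative sketch of the stratified selection described above (the stratum keys and the department inventory are hypothetical assumptions, not part of the RMAF), the eleven case studies could be drawn as follows:

    # Illustrative only: stratum keys and the department inventory are assumptions.
    import random

    # Quotas as proposed in this RMAF: seven Business Case recipients, four others.
    CASE_STUDY_QUOTAS = {
        ("business_case", "large_active"): 3,
        ("business_case", "small_active"): 2,
        ("business_case", "nominal"): 2,
        ("no_business_case", "large"): 2,
        ("no_business_case", "small"): 2,
    }

    def select_case_studies(departments_by_stratum, seed=None):
        """Randomly select case-study departments within each stratum.
        `departments_by_stratum` maps a stratum key to the eligible departments."""
        rng = random.Random(seed)
        selection = {}
        for stratum, quota in CASE_STUDY_QUOTAS.items():
            eligible = departments_by_stratum.get(stratum, [])
            selection[stratum] = rng.sample(eligible, min(quota, len(eligible)))
        return selection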

Survey of Evaluation Clients and Stakeholders

As with the case studies, there is a need to hear directly from Deputy Heads of departments and from front-line managers of programs/policies/initiatives, to assess the progress of implementation more closely, and to validate the broader findings collected from departments' annual revisions to their Evaluation Plans.

To address both these needs, a survey is proposed. Separate surveys would be conducted of a sample of each of these groups:

  • Deputy Heads
  • Senior departmental managers responsible for units recently evaluated
  • Managers of programs recently evaluated
  • Heads of Evaluation
  • Senior managers whose branches include the evaluation function

A baseline survey would be conducted at 18 months and repeated for the 5-year evaluation. The purpose of the survey is to obtain an independent assessment of how well the evaluation function is doing its job under the new Policy. Managers involved in evaluation are also included to permit comparison and to provide quantitative indicators of how positively those implementing the new Policy assess their own progress.

Questions would include multiple-choice assessments of the degree of satisfaction with, or perceived performance of, evaluation with respect to the indicators identified in this RMAF. This allows the degree of satisfaction, and relative improvement over time, to be quantified. The approach also allows cost-effective data collection to validate the information collected through the Progress Reports proposed for department Evaluation Plans.

A minimum sample of 30 for each group will be taken. Selection would be random, based on reported evaluations for the previous year, but stratified according to:

  • Large departments, small departments, and departments with only nominal evaluation functions at the beginning of 2001/02.
  • Departments that received Business Case funding, and those that did not.

The survey would be conducted in person for Deputy Heads. For the other groups, the survey would be circulated via Heads of Evaluation and returned directly to the CEE.
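As a purely illustrative sketch of how the multiple-choice responses could be quantified for comparison across groups, strata, and survey rounds (the response fields and the 1-5 scale are hypothetical assumptions, not part of the RMAF), the aggregation might look like this:

    # Illustrative only: response fields and the 1-5 satisfaction scale are assumptions.
    from collections import defaultdict

    def mean_satisfaction(responses):
        """Average satisfaction ratings by respondent group, stratum, and survey round,
        e.g. to compare Business Case recipients with non-recipients, or the 18-month
        round with the 5-year round."""
        buckets = defaultdict(list)
        for r in responses:
            key = (r["group"], r["stratum"], r["round"])
            buckets[key].append(r["satisfaction"])  # rating on a 1 (low) to 5 (high) scale
        return {key: sum(ratings) / len(ratings) for key, ratings in buckets.items()}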

Key Informant Interviews

In addition to the above exercises, interviews will be conducted to cover all client/stakeholder communities. These will include:

  • Parliamentary representatives
  • Privy Council Office
  • Program Branch of TBS
  • Additional departmental senior managers and evaluators, as needed
  • Other stakeholders such as the Office of the Auditor General and the Canadian Evaluation Society

The number of these interviews will be limited at 18 months and expanded for the 5-year evaluation.

Data Collection Activities at 5 Years

The same activities will be repeated at 5 years, with more extensive key informant interviews and a longer record of progress from department/agency annual revisions to their Evaluation Plans.

Table 4: Data Collection Activities, Frequency, and Responsibility

Activity | Frequency | Responsibility
Baseline Survey of Evaluation Capacity | Completed | TBS
Business Case Submissions (Round 1) | Completed | Departments/Agencies
Baseline data collection from departments in preparation for the first round of revised Evaluation Plans | Once | CEE
Department Evaluation Plans | Annual | Departments/Agencies
Survey of Evaluation Clients and Stakeholders | 18 months and 5 years | CEE
Annual CEE Report and roll-up of Department Evaluation Plan information on gaps | Annual | CEE
Business Case Reports & Submissions (Round 2) | Once | Departments/Agencies with CEE guidance
Assessment of Sample Evaluation Reports | 18 months and 5 years | CEE
Annual Strategic Planning Retreat | Annually, with RMAF components at 18 months and 5 years | CEE
Case Studies | 18 months and 5 years | CEE
Key Informant Interviews | 18 months and 5 years | CEE


 

EVALUATION STRATEGY - PERFORMANCE INDICATORS

For each of the evaluative questions identified in Section 3, performance indicators are needed to provide the basis for an answer. Table 2 earlier provided a list of key indicators for ongoing performance monitoring. Table 6, at the end of this document, provides an expanded set of indicators for each evaluation issue and lists the associated data collection activities. Consistent with the plan to evaluate all issues on a preliminary basis after 18 months, a distinction is drawn between the performance indicators expected at 18 months and those expected in the full evaluation at 5 years.


REPORTING STRATEGY

Table 5 lists the reports expected under this RMAF.

Table 5: Expected Reports

Results Measurement | Product | Date
Progress Report on Implementation of the New Evaluation Policy (by each department) | Section of the annual revision to the department/agency Evaluation Plan | Annual, by departments and agencies
Roll-up of Progress Reports for the evaluation function, plus report on CEE outputs and performance assessment | CEE Annual Report | Annual, by the CEE
Interim evaluation and supporting reports from data collection | Interim Evaluation Report | Second quarter 2002/03
Full evaluation and supporting reports from data collection | Evaluation Report | 2005/2006

Table 6: Issues, Performance Indicators, and Data Sources

ISSUE

PERFORMANCE INDICATORS

Data Source

@18 Months

@ 5 years

PROGRESS/SUCCESS ISSUES
A. HAVE DEPARTMENTS IMPLEMENTED THE POLICY & EXPANDED CAPACITY?
1. Have departments expanded the scope of evaluation beyond traditional areas? Use of expanded scope in Evaluation Plans.

RMAFs and evaluations underway or complete in new areas.

Same as 18 month. Baseline Data Collection & Evaluation Plans

Case Studies and Survey of Stakeholders

2. Is there an effective Evaluation Committee or similar internal process contributing to setting department priorities, planning, and decision-making? Frequency of meetings and relevance of agenda.

Committee membership in terms of level and representation.

Chairing of Committee by senior executive.

Consistency of role and mandate of committees with Evaluation Policy requirements.

Whether Committee approves the evaluation plan.

Same as 18 month plus:

Role of committee in setting strategic direction of evaluation.

Follow up monitoring of accepted recommendations

Role of committee in ensuring integration with department/ agency performance measurement and linking evaluation results with department strategic choices.

Baseline Data Collection & Evaluation Plans

Case Studies and Survey of Stakeholders

Key Informant Interviews

3. Is there an effective departmental Evaluation Plan, with follow-up? Has capacity been established to support such planning? Existence and quality of Plans.

Evidence of process for follow-up.

Existence of a framework accounting for the potential evaluation universe.

Quality of mechanisms to track progress on previous plan goals/intentions.

Same as 18 month plus:

Degree of follow-up on plans commitments and strategy.

Year to year consistency of plans in reporting progress and tracking shifts in strategy

Baseline Data Collection & Evaluation Plans

Case Studies and Survey of Stakeholders

Key Informant Interviews

4. Is there adequate coverage of department programs, policies and initiatives by RMAFs and evaluations? Existence of department assessment on gaps in coverage.

Progress and plans/commitments for coming year in closing gaps.

Progress on RMAFs for grants and contributions

Same as 18 months, but without allowing for future plans/commitments.

Baseline Data Collection & Evaluation Plans

Case Studies and Survey of Stakeholders

Key Informant Interviews

ISSUE

PERFORMANCE INDICATORS

Data Source

@18 Months

@ 5 years

B. HAVE DEPARTMENTS MADE PROGRESS IN IMPLEMENTING ONGOING PERFORMANCE MEASUREMENT?
1. Is there the capacity or framework to assess the degree to which performance measurement is in place? Existence of sufficient information to permit Heads of Evaluation to assess progress.

Same as 18 months.

Baseline Data Collection & Evaluation Plans

Case Studies and Survey of Stakeholders

Key Informant Interviews

2. Are there ongoing performance measurement systems in place for programs, policies & initiatives that are integrated with RMAFs? Proportion of programs that have performance monitoring system in place, and which have generated at least one report.

Links with RMAFs, evaluations.

Same as 18 month, plus:

Role of shared measurement systems in ongoing guidance of department.

Degree of integration of performance measurement systems for assessment of overall department/agency performance.

Use of results from performance measurement in department/agency reporting to Parliament.

Views of senior managers.

Baseline Data Collection & Evaluation Plans

Case Studies and Survey of Stakeholders

Key Informant Interviews

3. Has the evaluation function contributed to the development of ongoing performance monitoring? Activity reported in Evaluation Plans.

Awareness of roles by program managers.

Role and recent activity of evaluation function in establishing program & department performance measurement systems.

Baseline Data Collection & Evaluation Plans

Case Studies and Survey of Stakeholders

Key Informant Interviews

4. Are department/agency environments supportive of the development of ongoing performance monitoring systems? Demand for RMAFs and other performance management systems.

Content of demand - is there a demonstrated intent to use?

Same as 18 months plus:

Actual use (see issue below).

Baseline Data Collection & Evaluation Plans

Case Studies and Survey of Stakeholders

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

5. Is performance data being generated and used? Circulation and use of results reports by program managers and senior managers.

Use of results from performance measurement in department/agency reporting to Parliament.

Baseline Data Collection & Evaluation Plans

Case Studies and Survey of Stakeholders

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

ISSUE

PERFORMANCE INDICATORS

Data Source

@18 Months

@5 Years
C. HAVE DEPARTMENTS BEEN ABLE TO PROVIDE ADEQUATE EVALUATION CAPACITY?
1. Have departments been able to increase the number of qualified personnel, either in the evaluation function or elsewhere in the organization? Staffing Actions

Numbers of staff recruited.

Success in retaining staff (relative to benchmark turnover data).

Proportion of evaluation officers assessed by department as requiring further training or experience before being fully qualified to fulfill the responsibilities of their level, outside the normal ongoing training requirements of a learning culture.

Same as 18 month, plus

Training

Existence of training programs and consistency of programs with evaluation professional standards and the Evaluation Policy.

Frequency and quality of external training

Enrollment and graduation of personnel in recognized training programs

Survey of Evaluation Capacity; Business Case Submissions; Business Case Progress Reports

Baseline Data Collection & Evaluation Plans

Case Studies and Survey of Stakeholders

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

2. Have previously identified gaps in resourcing been closed? Is there still a relevant gap? Progress on closure of identified staffing gaps and demographic analysis of evaluation staff before and after.

Gap assessments in Evaluation Plans.

Same as 18 month Survey of Evaluation Capacity; Business Case Submissions; Business Case Progress Reports

Baseline Data Collection & Evaluation Plans

Case Studies and Survey of Stakeholders

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

3. Has progress on capacity been adequate relative to available resourcing? Does a relevant resource gap remain? Progress on closure of gaps in coverage relative to closure in identified resource gaps.

Improvements in evaluator qualifications.

Absence of bottlenecks in training, retention etc.

Presence of good demand for evaluation output.

Same as 18 month Survey of Evaluation Capacity; Business Case Submissions; Business Case Progress Reports

Baseline Data Collection & Evaluation Plans

Case Studies and Survey of Stakeholders

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

4. Would a change in resourcing levels improve value for money? Progress on closure of gaps in coverage relative to closure in identified resource gaps.

Improvements in evaluator qualifications.

Absence of bottlenecks in training, retention etc.

Presence of good demand for evaluation output.

Same as 18 month Survey of Evaluation Capacity; Business Case Submissions; Business Case Progress Reports

Baseline Data Collection & Evaluation Plans

Case Studies and Survey of Stakeholders

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

ISSUE

PERFORMANCE INDICATORS

Data Source

@18 Months

@5 Years
D. HAS THE CEE PROVIDED A SUPPORTIVE ENVIRONMENT?
1. Has the CEE provided appropriate tools, events, advice and other elements of a supportive environment for evaluation? Progress on delivering elements of a well-functioning environment, including provision of tools, policies, training and certification structure, and support for networking and communication.

Value of outputs to departments and agencies in achieving department/agency goals for evaluation.

Value of policy and ad hoc advice provided.

Recognition of Centre by peers and decision makers.

Same as 18 months plus:

Opinions of deputy heads, heads of evaluation, and evaluation practitioners about the events, advice and other elements of a supportive environment for Evaluation provided by the CEE.

Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report.

2. Has the CEE met its Community Development Strategy goals? Number and importance of goals met.

Same as 18 months.

Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report.

3. Is there an adequate framework in place for training and certifying evaluators? Existence of a structure.

Availability of courses etc. in time and available places.

Same as 18 month. Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report.

4. Has the CEE implemented a monitoring system? Ability to generate a timely report on the state of evaluation in the Public Service.

Capacity of report to provide timely identification of issues in department practices of performance management, evaluation, and strategic direction of programs, policies and initiatives.

Same as 18 months, plus:

Circulation and use of reports

Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report.

5. Has the CEE been effective in reporting monitoring system results and conclusions to senior agencies? Reports and advice generated.

Quality of responses.

Views of evaluation managers.

Views of senior agency managers.

Impacts of reports on decision-making in central agencies and individual departments/agencies.

Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report.

6. Has the CEE been an effective advocate for the evaluation community? Quality and content of reports and advice provided to senior agencies.

Existence of consultation/communication events or meetings to assess community issues.

Opinions of heads of evaluation, and CEE managers

Impacts of CEE on TBS/CEE approach to evaluation.

Same as 18 month. Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report.

ISSUE

PERFORMANCE INDICATORS

Data Source

@18 Months

@5 Years
E. IS THE EVALUATION FUNCTION PRODUCING TIMELY AND EFFECTIVE INSIGHT, INTEGRATED WITH DEPARTMENT DECISION MAKING?
1. Are evaluation studies effectively using and supported by department performance monitoring systems? Proportion of recent studies able to place significant reliance on ongoing performance management systems.

 

Same as 18 month. Assessment of Sample Evaluation Reports

Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

2. Is the quality of reports acceptable? Has there been an improvement? How successful has the evaluation function been in identifying potential improvements in programs, policies and initiatives? Proportion of reports with clear recommendations indicating actions to be taken and time frame.

Views of senior managers.

Same as 18 month. Assessment of Sample Evaluation Reports

Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

3. Are recommendations produced on a timely basis? Views of senior managers.

Same as 18 months.

Assessment of Sample Evaluation Reports

Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

4. To what extent has the new Evaluation Policy contributed to the above? Role and history of Sr. Evaluation Committee (see above).

Awareness of Sr. managers of expanded scope of policy.

Views of evaluation managers and senior managers.

Same as 18 month. Assessment of Sample Evaluation Reports

Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

ISSUE

PERFORMANCE INDICATORS

Data Source

@18 Months

@5 Years
F. HAVE DEPARTMENTS IMPROVED THEIR USE OF THE EVALUATION RESULTS FOR DECISION MAKING?
1. Has senior management confidence in the evaluation function improved? Degree of confidence in the evaluation function by deputy heads and senior managers.

Same as 18 months.

Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

2. To what extent have managers "bought into" the revised Evaluation Policy? Level of commitment and support by deputy heads for the evaluation function.

Evidence of commitment to evaluation (e.g. as documented in internal memos, business plans, etc.)

Mechanisms in place to ensure dissemination of performance measurement results and evaluation results to managers.

Mind-set of executives & decision-making managers towards integrating results of evaluation reports into planning and decision-making.

Degree to which evaluation activities are considered a priority and receive appropriate resourcing.

Degree to which managers understand role and responsibilities of Heads of Evaluation.

Same as 18 month. Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

3. To what extent has a supportive department environment been established for results based management and evaluation? Positive recognition for managers implementing performance monitoring.

Positive recognition for managers learning from both good and poor performance results.

Use of results in department performance reporting.

Sufficiency of resourcing.

Indicators for other sub-issues in this section.

Same as 18 month. Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

4. Has the evaluation function been used well? To what extent has senior management participated in setting evaluation priorities? Role of department priorities in setting Evaluation Plan, and in Evaluation Committee decisions.

Degree to which evaluation outputs are collectively matching the current perceived needs by the Dept./agency.

Same as 18 month. Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

5. Is there a well-functioning department/agency decision-making process for acting on evaluation study recommendations? See evaluation committee issue above.

Same as 18 months.

Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

6. Is there follow-up monitoring on implementation of accepted recommendations? Is senior management involved? Existence of a system.

Current levels of cooperation on receipt of progress reports from program managers.

Same as 18 month. Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

7. To what extent have evaluation results been used in department strategic decision-making? Has there been an improvement? Contribution of evaluation reports to key decisions on department programs, policies and initiatives.

Implementation of results.

Results of implementation.

Same as 18 months. Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

ISSUE

PERFORMANCE INDICATORS

Data Source

@18 Months

@5 Years
G. HAS THE EVALUATION POLICY CONTRIBUTED TO IMPROVED DESIGN, IMPLEMENTATION AND RESOURCING OF DEPARTMENT/AGENCY PROGRAMS, POLICIES AND INITIATIVES?
1. To what extent have program managers "bought into" the revised Evaluation Policy? Mechanisms in place to ensure dissemination of performance measurement results and evaluation results to managers.

Mind-set of executives towards integrating results of evaluation reports into planning and decision-making.

Degree to which evaluation activities are considered a priority and receive appropriate resourcing.

Degree to which managers understand role and responsibilities of Heads of Evaluation.

Demand by managers for evaluation services.

Same as 18 months. Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

2. Do program managers feel "safe" in working with and implementing objective monitoring and evaluation? Policies and practices in dealing with poor performance results and evaluation findings.

Method, if present, of linking performance-monitoring systems to executive compensation.

Opinions of program managers, senior managers, and heads of evaluation.

Same as 18 months. Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

3. To what extent have programs, policies and initiatives been improved by evaluation findings and recommendations? Commitments to implementation of results from recent evaluations.

Implementation of results.

Results of implementation.

Same as 18 months. Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

ISSUE

PERFORMANCE INDICATORS

Data Source

@18 Months

@5 Years
H. ARE SENIOR AGENCIES USING EVALUATION AND PERFORMANCE MONITORING RESULTS FOR STRATEGIC DECISION-MAKING?
1. Does implementation of parliamentary reporting systems by senior agencies support and encourage the utilization of RMAFs, performance measures, and evaluations? Content of policies, practices and guidelines issued by senior agencies on parliamentary reporting.

Use evidenced in department/agency reports to Parliament and TB.

Consistency of department/agency reports with major recent commitments to monitor performance through RMAFs.

Assessment by heads of evaluation, senior managers, & senior agency managers.

Degree to which evaluation and performance results reach Parliamentarians.

Same as 18 months. Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

2. Are good quality RMAFs, performance measures, and evaluation results required and/or valued in support of submissions to senior agencies? Degree to which RMAFs and/or past performance measures are required as part of TB submissions or memoranda to cabinet.

Assessment by heads of evaluation, senior managers, & senior agency managers.

Existence of processes and reference to standards for quality.

Use of processes by senior agencies.

Assessment by heads of evaluation, senior managers, & senior agency managers.

Same as 18 months. Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

3. Are commitments to provide RMAFs and performance measures followed up? Existence of processes by senior agencies to check previous performance measurement commitments.

Use of processes.

Assessment by heads of evaluation, senior managers, & senior agency managers.

Same as 18 months. Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

4. Are performance results and evaluation findings used for strategic decision-making? Views of senior agency managers, senior dept/agency managers, and heads of evaluation.

Examples of negative decisions for lack of performance information or poor evaluation results.

Number of positive examples where Performance or Evaluation Results were instrumental in determining the direction of a decision.

Same as 18 months. Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

5. Have senior agencies provided a consistent policy regime that recognizes the impact of other policies on the evaluation function? Gaps and backlogs in coverage by RMAFs and evaluations.

Evidence attributing gaps & backlogs to other senior agency policies' impact on evaluation.

Examples of other policies cross-referencing evaluation.

Examples of other policies not cross-referencing or missed opportunities to gain efficiencies and effectiveness.

Opinions of heads of evaluations, senior managers in departments, and senior agency managers.

Same as 18 months. Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

ISSUE

PERFORMANCE INDICATORS

Data Source

@18 Months

@5 Years
COST-EFFECTIVENESS
I. IS THE EVALUATION POLICY COST EFFECTIVE?
1. Are there ways in which the evaluation function could be delivered more cost-effectively? Opinions of deputy heads, senior managers, program managers, heads of evaluation, and Parliamentarians.

Same as 18 months.

Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

2. Has the CEE delivered value for money? Opinions of deputy heads, senior managers, program managers and heads of evaluation.

Same as 18 months.

Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

3. Are there alternative methods of organizing delivery of the Evaluation Policy? Opinions of deputy heads, senior managers, program managers and heads of evaluation.

Same as 18 months.

Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

ISSUE

PERFORMANCE INDICATORS

Data Source

@18 Months

@5 Years
CONTINUED RELEVANCE
J. IS THERE A NEED FOR FURTHER CHANGES TO THE POLICY OR TO ITS IMPLEMENTATION?
1. Does the evaluation function deliver results for Canadians? Consensus among senior managers and evaluation heads on rationale for the new Evaluation Policy and its relevance to the overall government agenda.

Same as 18 months, plus:

Success of the Policy and related initiatives in achieving modernization of management practices (e.g. the "Results for Canadians" integrated results-based management framework)

Degree to which new Evaluation Policy is integrated with these achievements.

Opinions of selected members of Parliament.

Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

2. Are there better ways of delivering the evaluation function within the federal government? Opinions of deputy heads, senior managers, program managers, and heads of evaluation.

Same as 18 months.

Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

3. Are there improvements that can be made in the Evaluation Policy? Opinions of deputy heads, senior managers, program managers and heads of evaluation.

Same as 18 months.

Case studies

Baseline Data Collection & Evaluation Plans

Key Informant Interviews

Annual Retreat of Heads of Evaluation.

CEE Annual Performance Monitoring report

Business Case Progress Reports

 

 
