ACKNOWLEDGEMENT

INTRODUCTION

PROFILE OF THE POLICY

ONGOING PERFORMANCE MEASUREMENT STRATEGY

EVALUATION STRATEGY - ISSUES

EVALUATION STRATEGY - DATA COLLECTION

EVALUATION STRATEGY - PERFORMANCE INDICATORS

REPORTING STRATEGY


 

Centre of Excellence for Evaluation
Treasury Board Secretariat

Acknowledgement

This RMAF was developed by the Centre of Excellence for Evaluation in collaboration with an inter-departmental Working Group of evaluation representatives and the Evaluation Senior Advisory Committee, and with the assistance of Hara Associates Inc.

The new role for evaluation sees the principles of evaluation embedded into management practices, facilitating the move towards results-based management. This strategic plan adheres to these emerging principles.

As part of these policies, TBS created a Centre of Excellence for Evaluation and a Centre of Excellence for Internal Audit. The broad mandates of these Centres are derived from the new policies, which require TBS to take a leadership role for the respective communities across the federal government. The Centre of Excellence for Evaluation was created on April 1, 2001, and is headed by a Senior Director, with a core group of 12 FTEs assigned to the Centre. The Centre is currently completing staffing and has begun a series of proactive outreach activities in accordance with its mandate.

Introduction

On April 1, 2001, Treasury Board replaced the older policy on Review with two separate policies, one for the practice of Evaluation within the federal government and one for Audit. Both policies are to be evaluated on an interim basis after 18 months (fall 2002), and fully evaluated at five years (2005/2006).

In parallel with establishing a new Evaluation Policy (the Policy), Treasury Board also funded the first two years of a proposed four-year program to assist implementation. Departments and agencies that submitted qualifying Business Cases are receiving additional resources to improve their evaluation capacity (Business Case Funding). A Centre of Excellence for Evaluation (CEE) has also been established to monitor the success of the Policy and to assist in implementation. Funding for the first two years of the program is $8.6 million. Funding for the third and fourth years is contingent on the interim evaluation of the Policy in the second and third quarters of 2002.

This RMAF defines the strategy for the 18-month and 5-year evaluations of the Policy and associated measures, of the Business Case Funding, and of the Centre of Excellence. The RMAF defines the performance indicators, the strategies for data collection and reporting, and the evaluative questions that will be used to judge the progress and success of Policy implementation, and the actual contribution of the CEE to it.

Ongoing Performance Monitoring for the Evaluation Policy

A key undertaking accompanying the approved Policy was that Treasury Board Secretariat (TBS), in consultation with its stakeholders, would include in this RMAF a measurement strategy for tracking the ongoing performance of the policy.

Consistent with this undertaking, an annual system of performance monitoring is proposed, with roles and responsibilities for maintaining key performance indicators defined for the department/agency and Treasury Board partners who share responsibility for implementing the Policy.

Shared Responsibility

Responsibility for Policy implementation is shared between departments and agencies, whose evaluation functions are directed by the Policy, and central agencies such as Treasury Board and the Privy Council Office, which must provide the appropriate support.

In the context of shared responsibility, the CEE is responsible for the evaluations of the Policy, and for the ongoing monitoring of the function. Departments and agencies are responsible for providing the necessary data collection and self-assessments to support these efforts.

Specific roles and responsibilities for departments, the CEE, and other stakeholders are proposed within this document.

Development of the RMAF

In the spirit of partnership, this RMAF was developed in consultation with the Senior Advisory Committee (SAC) on Evaluation and an ad hoc working group established by SAC. SAC membership is drawn from department and agency members of the evaluation community within the federal Public Service. Drafts of key elements of the RMAF, including the logic model, roles and responsibilities, and evaluation questions, were also presented at the Heads of Evaluation Strategic Planning Session in October 2001.


PROFILE OF THE POLICY

Context: Renewal of the Evaluation Function

The new Evaluation Policy arrives within a climate of renewal for the federal Public Service. "The revised policies on internal audit and evaluation are part of the Government's ongoing commitment to continuous management improvement and accountability" (Minister Robillard, Feb. 14 press release). Both the Evaluation and Audit policies are the result of parallel reviews that included public and private sector consultations. Improved internal audit and evaluation support the management agenda, as articulated in Results for Canadians: A Management Framework for the Government of Canada (Treasury Board Secretariat, 2000).

Focus on Serving Results-based Management

The revised Evaluation Policy gives evaluation a key role in supporting public service managers in managing for results. Evaluation can support managers' efforts to track and report on actual performance, and help decision makers objectively assess program or policy results. This distinguishes evaluation from internal audit - a function that provides assurances on a department or agency's risk management strategy, management control framework and information, both financial and non-financial.

The stated objective of the policy is "to ensure that the government has timely, strategically focused, objective and evidence based information on the performance of its policies, programs and initiatives to produce better results for Canadians."

Acknowledged Capacity Gap

Along with other parts of the Public Service, the evaluation function faces the challenges of renewal. The number of personnel and the level of other resources devoted to evaluation have declined significantly since the early 1990s. An administrative survey of major departments and agencies conducted prior to the introduction of the new Policy suggests that capacity may be as low as 56% of the level needed to fully implement the new requirements (internal communications). In addition, there is the challenge of an aging public service and the associated high departure rates among experienced evaluation staff.

Added to this capacity gap is the increased scope of work required by the new Evaluation Policy. To assist in closing this gap, Treasury Board has established the previously mentioned Business Case Funding for qualifying departments and agencies, and the Centre of Excellence for Evaluation.

Progress in closing this capacity gap is one of the significant evaluation issues for this RMAF.

Pressure from Related Policies - Transfer Payments Policy and Active Monitoring Policy

An important environmental factor for the Evaluation Policy is the pressure on capacity created by other related policies on risk management and results-based management. Two policies are having a particularly significant impact.

Requirements of the Policy

A copy of the new Policy is provided in Appendix A. Key requirements are:

To this end, Heads of Evaluation are to work with managers to help them enhance design, delivery and performance measurement.

All of the above is to achieve the objective of the policy "to ensure that the government has timely, strategically focused, objective and evidence based information on the performance of its policies, programs and initiatives to produce better results for Canadians."

Centre of Excellence for Evaluation (CEE)

The CEE was established within the Treasury Board Secretariat (TBS) to provide leadership and support the implementation of the new Policy. It serves as a focus for leadership of the evaluation function within the federal government; takes initiative on shared challenges within the community, such as devising a human resources framework for long-term recruiting, training and development needs; and provides support for capacity building, improved practices, and a stronger evaluation community within the federal Public Service.

The CEE's activities are to include:

For the most part, the CEE will not be directly involved in the delivery of training, but may play a role in development, partnering, and coordinating, or in endorsing content.

Appendix B provides a more detailed statement of the Strategic Role for the TBS Centre of Excellence for Evaluation.

Partnership in Implementation

Responsibility for implementing the Evaluation Policy is shared among all stakeholders. It is a partnership between the Treasury Board Secretariat and departments and agencies. Within departments, success is not the sole responsibility of Heads of Evaluation: departmental senior managers and program managers each have a role in ensuring a conducive environment "where managers embed the practice of evaluation into their work" and "where evaluation discipline is used in synergy with other management tools to improve decision-making" (Evaluation Policy - "Effective Deployment of Evaluation"). Within Treasury Board, the CEE and other parts of TBS must work together in supporting the evaluation community and in "using the products of evaluation to support decision making at the centre" (Evaluation Policy - "Effective Deployment of Evaluation"). Finally, other senior agencies, such as the Privy Council Office, also have a role in ensuring the products of evaluation are well used. To this end, Table 1 describes the roles and responsibilities for implementing the Evaluation Policy.

This partnership is also an integral part of the joint effort to implement Active Monitoring of programs, policies and initiatives by the federal government.

Table 1: Evaluation Policy Roles and Responsibilities

Departments & Agencies

Ensure a supporting environment that:
  • Values and implements Results-based Management.
  • Supports maintenance of ongoing performance monitoring for policies, programs & initiatives, as well as for evaluation practices.
  • Recognizes managers for implementing performance monitoring and for learning from both good and poor results.
  • Uses the evaluation function to support strategic decision-making.
  • Provides adequate resources to fulfill the expanded scope of the policy and implement Results-based Management.
  • Maintains an objective, competent evaluation workforce through retention, recruiting, and training & development.
  • Integrates performance monitoring and evaluation with other accountability and performance assurance systems.

Implement the Evaluation Policy to achieve:
  • An active monitoring and planning structure for evaluation involving senior management and integrated with department/agency strategic decision-making.
  • Integration of ongoing performance monitoring into policies, programs and initiatives.
  • Adequate evaluation coverage of policies, programs and initiatives.
  • Good-quality evaluations and RMAFs consistent with the Standards of Practice set by the policy and by TBS.
  • Improved design, implementation and resourcing of programs, policies and initiatives, through timely and actionable evaluation recommendations.

Work in partnership with the CEE and other departments and agencies to:
  • Actively monitor the health of the evaluation function and the implementation of the Evaluation Policy.
  • Achieve and maintain an effective evaluation function in the Public Service.

TBS (& PCO)

Ensure a supporting environment that:
  • Uses evidence-based performance monitoring and evaluation results to make decisions (e.g. TB submissions, Memoranda to Cabinet, budgets).
  • Recognizes good planning for performance monitoring and evaluation in submissions and memoranda.
  • Provides adequate resourcing to implement Evaluation Policy requirements.
  • Ensures a policy regime consistent with other policies affecting the evaluation function.

TBS Centre of Excellence for Evaluation

Be responsible to Treasury Board, in partnership with departments and agencies, for:
  • Monitoring and reporting on the implementation of the policy.
  • Monitoring and reporting on the health of the evaluation function.
  • Providing leadership to the evaluation function.
  • Providing advice to TB on evaluation-related questions.

Serve as an advocate for the evaluation community of the Public Service by:
  • Advising TBS, PCO and other central agencies on consistency of vision and approach between other policies and the Evaluation Policy.
  • Communicating to departmental senior managers the Evaluation Policy and the role of evaluation in the modern management framework.
  • Communicating to Parliament and Canadians the value and contributions of evaluation.
  • Assessing and making representations on resource requirements.

Build department/agency evaluation capacity by:
  • Providing tools, guides, best practices, policies, and advice.
  • Developing a community-based human resources framework, including training and certification, competency profiles, and a human resources strategy.
  • Fostering an informed and collaborative evaluation community through consultation, workshops, information sessions, events and other networking activities.

From Activities to Outcomes

Activities and Outputs

Figure 1 illustrates how the Evaluation Policy is intended to achieve its desired outcomes. Five kinds of activities are undertaken, both by departments/agencies and by TBS through the Centre of Excellence:

Immediate Outcomes

Resourcing, training, and development are expected to lead to a competent evaluation workforce and adequate capacity to carry out the needed work. The facilitation outputs, such as guides, tools and information sharing, are expected to result in improved evaluation practices. Better planning is expected to lead to more strategic use of evaluation within departments and across government, and to improved integration of evaluation with the planning, design and management, and accountability needs of programs/policies/initiatives. Evaluation studies and reports are expected to provide evidence-based reporting on a timely and credible basis.

Figure 1: Logic Model of TB Evaluation Policy

Intermediate Outcomes

A competent workforce with sufficient capacity, following improved evaluation practice integrated with the work of managers, and producing timely, credible, evidence-based reporting, is expected to result in:

Final Outcome

Improved use of evidence-based results in decision making, and improved programs, policies and practices, will in turn lead to better results for Canadians, better accountability to Parliament, and more effective use of resources.


ONGOING PERFORMANCE MEASUREMENT STRATEGY

As part of the decision to implement the new Policy, there is a Treasury Board requirement to develop, in consultation with stakeholders, an ongoing performance measurement strategy for the policy.

A set of core performance measures is proposed for monitoring on an annual basis. These will be supplemented by one-time data collection efforts in the 18-month and 5-year evaluations, discussed in subsequent sections.

Shared Responsibility

The monitoring strategy calls for joint implementation by departments and by the Centre of Excellence. Respective roles are proposed below. While some effort is required, it is recognized that accurate reporting on the health of the evaluation function is of mutual interest and benefit to all stakeholders. For example, the early results will assist TBS in assessing whether to extend Business Case Funding for an additional two years (2003/04 and 2004/05).

Departmental Role in Ongoing Performance Measurement of the Policy

Departments and agencies are where the principal impacts of the Policy will be felt. There is a need to assess progress in a way that respects:

Baseline Data Collection - 4th Quarter 2001/02

As an initial step in developing baseline data, departments would be asked to respond to a baseline data survey conducted by the CEE during the fourth quarter of 2001/02. The survey would collect basic information available or anticipated for the fiscal years 2000/01 and 2001/02. The initial survey is proposed for these reasons:

Annual Process

The Evaluation Policy itself calls for departments to develop a "strategically focused plan" for evaluation. Most departments already maintain such a plan. Heads of Evaluation are to forward such plans to the CEE annually (once they have passed through the department approval process). It is proposed that:

The proposed time-line of events for 2001/02 and 2002/03 is described below. Key events for the 18-month evaluation are also shown. Data collection and preliminary analysis for the 18-month evaluation are intended to be complete prior to the Strategic Planning Meeting of Heads of Evaluation in the fall of 2002.

Initial Time-Line

Commitment to Gap Analysis

It is recognized that a gap in evaluation capacity exists in many departments, and that the evaluation function in government is in a period of renewal. It is also recognized that the new Policy expands the scope of evaluation activities to include evaluation of policies, initiatives, and shared programs. In parallel, the new Treasury Board Transfer Payments Policy, which requires that RMAFs be in place for grants and contributions programs, also has implications for workload and capacity, both in the immediate future and in the evaluation activity that will follow.

Therefore, the Progress Report section of the Evaluation Plan will include an ongoing assessment of progress in closing the gap between current capacity and the capacity necessary to fully implement the policy. It will also address current challenges and likely time-lines for closing that gap.

In making this assessment, Departments may draw upon earlier gap analyses provided to Treasury Board Secretariat, through previous surveys and submissions for funding.

Departmental Indicators for Ongoing Monitoring

The following indicators are proposed for inclusion, and annual revision, in the Progress Report section of department Evaluation Plans. Where a department lacks the capacity to generate a proposed indicator, it will report either plans and timelines for developing that capacity, or an assessment of why the capacity is not needed or not feasible at this time. Annual revisions to the Progress Report section will briefly summarize changes in indicators and the degree of success in meeting previous timelines. Indicators are proposed for the following areas. A guideline issued by the CEE would elaborate on each:

A list of the above, showing document titles and the relevant program/policy/initiative, would be provided or appended. Significant outputs not covered by the above might be addressed in the summary narrative section defined below.

Examples of impacts include accepted recommendations changing programs/policies/initiatives, validation resulting in significant renewal/extensions of mandates, or improvements/achievements in establishing ongoing performance monitoring.

It is expected that percentages will be measured in terms of the dollar value of expenditures for programs, and for grants and contributions. Recognizing that many departments lack a framework to enumerate the universe of policies and initiatives, the percentage indications for these are expected to be rough.

The above would be accompanied by a statement of when and how the backlogs are expected to be closed, and/or forecasts of the backlog under current conditions. Departments will likely wish to address, under this heading, the impact of the Transfer Payments Policy on RMAF requirements and future evaluation requirements.

Departments may wish to add a statement relating the resource assessment to the expectations of when any assessed backlog will be addressed.

It is recognized that some departments/agencies lack a framework to assess the degree to which programs/policies/initiatives are covered by evaluation work or ongoing performance monitoring. In these cases, it is anticipated that initial progress reports and plans will address timing and milestones for implementing this framework. It is also recognized that the degree of organization and formality of reports will vary between small and large departments.

CEE Role in Ongoing Performance Measurement of the Policy

Roll-Up of Department/Agency Assessments in Evaluation Plans

The Centre of Excellence for Evaluation will conduct an annual assessment of the health of the evaluation function and the implementation of the Evaluation Policy. A key input to this assessment will be a roll-up of the assessments in department/agency annual revisions to their Evaluation Plans. This will be supplemented by the CEE's own review of the quality of evaluation products and of department Evaluation Plans.

The CEE's annual report will address:

To monitor performance in these areas, the CEE will employ the set of Key Performance Indicators shown in Table 2.

Monitoring the CEE's Own Activities

In the annual report, the CEE will also report on its own performance and achievements in supporting the Evaluation Function and facilitating implementation of the Evaluation Policy. Reporting will include an assessment of CEE contributions to:

Table 2:
Key Performance Indicators for CEE Monitoring
of Policy Implementation & Health of Evaluation

Current Infrastructure and Capacity.
  • Frequency of meetings of Evaluation Committees and the level of the individual actually chairing them.
  • Degree of involvement of Evaluation Committees in Evaluation Plans (planning, approval, review/approval of individual evaluation studies).
  • Quality of Evaluation Plans (linked with department & government priorities, strategically focussed, based on risk assessment).
  • Number of FTEs currently devoted to the evaluation function, by group and level.
  • Size of consulting budgets.

Outputs and Achievements of Evaluation Function.

  • Number of evaluation reports completed.
  • Number of RMAFs completed.
  • Number of performance measurement studies.
  • Assessment of outputs and achievements relative to previous years (Qualitative).

Quality of Evaluations.

  • Degree of adherence to standards attached to policy (e.g. consultation practices, actionable recommendations, etc.)
  • Timeliness.
  • Proportion that provide balanced reporting.
  • Average degree of credibility of results.

Progress in Ongoing Performance Monitoring.

  • Proportion of Evaluation Plans reporting achievement of adequate ongoing performance monitoring within departments.
  • Proportion of Plans reporting a supportive environment and actual progress in improving ongoing performance monitoring.

Backlog Assessment.

  • Proportion of departments reporting significant backlogs of required work, in evaluations and RMAFs.
  • Proportion of departments forecasting significant backlogs.
  • Qualitative assessment of degree of department risk left unaddressed.

Resource Gap Assessment.

  • Estimated shortfall/surplus in FTEs.
  • Number of departments reporting office qualifications and experience adequate to average requirements.
  • Estimated shortfall/surplus in total resources.
  • Change in resources since implementation of new policy.

To monitor its own performance, the CEE will:

Aggregate effectiveness of the CEE's activities will be assessed through stakeholder and client consultation in association with the 18-month and 5-year evaluations (see Section 4, below).


EVALUATION STRATEGY - ISSUES

Ten evaluative questions, or issues, have been identified for evaluation. These are summarized in Table 3, along with sub-issues related to each question. The text below provides a capsule summary of each question and its sub-questions. Performance indicators to resolve the questions, and associated data collection, are discussed in subsequent sections.

The ten issues are lettered A to J and are grouped under Success, Cost-Effectiveness, and Continued Relevance. Sub-issues under each heading progress from "output"-oriented questions to harder-to-assess questions about whether the desired "outcomes" are being achieved.

The issues are also structured to address the degree to which each of the partners in implementing the Policy has fulfilled its role. Separate evaluative questions address the role of the evaluation function within departments, the role of senior management, the role of program/policy/initiative managers, the role of the CEE, and the role of other branches of TBS.

Progress/Success Issues

A. Have Departments Implemented the Policy?

This issue asks whether the basic features of the Policy have been put in place, ranging from expanded coverage by evaluations and RMAFs, to an effective Evaluation Committee, to a strategic Evaluation Plan. Whether adequate coverage of policies, programs and initiatives has been achieved must be judged in the context of the resources made available to the evaluation function, addressed under Issue C.

Table 3:
Summary of Issues for Evaluation Policy

PROGRESS/SUCCESS ISSUES

A. HAVE DEPARTMENTS IMPLEMENTED THE POLICY?

  1. Have departments expanded the scope of evaluation beyond traditional areas?
  2. Is there an effective Evaluation Committee or other internal process contributing to setting department priorities, planning, and decision-making?
  3. Is there an effective Strategic Evaluation Plan, with follow-up? Has capacity been established to support such planning?
  4. Is there adequate coverage of department programs, policies and initiatives by RMAFs and evaluations?

B. HAVE DEPARTMENTS MADE PROGRESS IN IMPLEMENTING ONGOING PERFORMANCE MEASUREMENT?

  1. Is there the capacity or framework to assess the degree to which performance measurement is in place?
  2. Are there ongoing performance measurement systems in place for programs, policies & initiatives that are integrated with RMAFs?
  3. Has the evaluation function contributed to the development of ongoing performance monitoring?
  4. Are department/agency environments supportive of the development of ongoing performance monitoring systems?
  5. Is performance data being generated and used?

C. HAVE DEPARTMENTS BEEN ABLE TO PROVIDE ADEQUATE EVALUATION CAPACITY?

  1. Have departments been able to increase the number of qualified personnel, within the evaluation function or elsewhere in the organization?
  2. Have previously identified gaps in resourcing been closed?
  3. Has progress on capacity been adequate relative to available resourcing? Does a relevant resource gap remain?
  4. Would a change in resourcing levels improve value for money?

D. HAS THE CEE PROVIDED A SUPPORTIVE ENVIRONMENT?

  1. Has the CEE provided appropriate tools, events, advice and other elements of a supportive environment for evaluation?
  2. Has the CEE met its Community Development Strategy goals?
  3. Is there an adequate framework in place for training and certifying evaluators?
  4. Has the CEE implemented a monitoring system?
  5. Has the CEE been effective in reporting monitoring system results and conclusions to senior agencies?
  6. Has the CEE been an effective advocate for the evaluation community in the Public Service?

E. IS THE EVALUATION FUNCTION PRODUCING TIMELY AND EFFECTIVE INSIGHT INTEGRATED WITH DEPT. DECISION MAKING?

  1. Are evaluation studies effectively using and supported by department performance monitoring systems?
  2. Is the quality of reports acceptable? Has there been an improvement? How successful has the evaluation function been in identifying potential improvements in programs, policies and initiatives?
  3. Are recommendations produced on a timely basis?
  4. To what extent has the new Evaluation Policy contributed to the above?

F. HAVE DEPARTMENTS IMPROVED THEIR USE OF EVALUATION RESULTS FOR DECISION MAKING?

  1. Has senior management confidence in the evaluation function improved?
  2. To what extent have senior managers "bought into" the revised Evaluation Policy?
  3. To what extent has a supportive department environment been established for results based management and evaluation?
  4. Is there a well functioning department/agency decision-making process for acting on evaluation study recommendations?
  5. Is there follow-up monitoring on implementation of accepted recommendations? Is senior management involved?
  6. Has the evaluation function been used well? To what extent has department senior management participated in setting evaluation priorities?
  7. To what extent have evaluation results been used in department strategic decision-making? Has there been an improvement?

G. HAS THE EVALUATION POLICY CONTRIBUTED TO IMPROVED DESIGN, IMPLEMENTATION AND RESOURCING OF DEPARTMENT/AGENCY PROGRAMS, POLICIES & INITIATIVES?

  1. To what extent have program managers "bought into" the revised Evaluation Policy?
  2. Do program managers feel "safe" in working with and implementing objective monitoring and evaluation?
  3. To what extent have programs, policies, and initiatives been improved by evaluation findings and recommendations?

H. ARE SENIOR AGENCIES (TBS, PCO) USING EVALUATION AND PERFORMANCE MONITORING RESULTS FOR DECISION MAKING?

  1. Do parliamentary reporting systems support and encourage the utilization of RMAFs, performance measures, and evaluation?
  2. Are good quality and implementable RMAFs, performance measures, and evaluation results required and/or recognized in support of submissions to senior agencies?
  3. Are commitments to provide RMAFs and performance measures followed up?
  4. Are performance results and evaluation findings used to make decisions on TB Submissions, Cabinet memoranda, and other decisions?
  5. Have senior agencies provided a consistent policy regime that recognizes the impact of other policies on the evaluation function?

COST-EFFECTIVENESS

I. IS THE EVALUATION POLICY COST EFFECTIVE?

  1. Are there ways in which the evaluation function could be delivered more cost effectively?
  2. Has the CEE delivered value for money?
  3. Are there alternative methods of organizing delivery of the Evaluation Policy?

CONTINUED RELEVANCE

J. IS THERE A NEED FOR FURTHER CHANGES TO THE POLICY OR TO ITS IMPLEMENTATION?

  1. Does the evaluation function deliver results for Canadians?
  2. Are there better ways of delivering the evaluation function within the federal government?
  3. Are there improvements that can be made in the Evaluation Policy?

B. Have departments made progress in implementing ongoing performance measurement?

Performance measurement issues are identified separately for the evaluation because of the emphasis placed on ongoing measurement in the new Evaluation Policy. This issue starts with the most basic question: do departments have sufficient knowledge and control of performance measurement to assess how much has been achieved? Actual results will vary by department. For departments that have this basic capability, the next questions address levels of achievement in sequence. Achievement begins with operational performance monitoring, integrated with RMAFs, and ends with the effective generation and use of performance data. Contributing factors to be assessed are the supportive environment provided by departments and the role of the evaluation function itself in achieving ongoing performance monitoring.

Again, progress must be judged in the context of actual resources made available to evaluation.

C. Have departments been able to provide adequate evaluation capacity?

This is an important issue for which departments and Treasury Board share responsibility. It speaks to the immediate outcomes intended by the Policy: a competent evaluation workforce and adequate capacity. Measurement of capacity gaps begins with increases in personnel and other measures that departments may have undertaken to build capacity. Here, the gap analyses to be incorporated in department annual Evaluation Plans will be very useful.

Combined with issues A and B, the focus is on whether sufficient progress has been made relative to the resources provided. Although TBS has provided $8.6 million in Business Case funding, the actual need identified by the pre-Policy survey was substantially greater. Also important is the question of value for money generated by the current two years of Business Case Funding. It is under this issue that the question of funding for years three and four of the Business Case Funding will be addressed.

D. Has the CEE provided a supportive environment?

The CEE's own role is examined under this issue. Sub-issues begin with outputs such as tools and events, and then move on to broader questions of community development and the human resources initiatives. A monitoring system for the policy is another structure that the CEE is intended to achieve (beginning with this RMAF). Finally, the question may be asked whether the broader roles in advising TBS and serving as an advocate on behalf of the evaluation community have been met. This issue speaks to the CEE's contribution to the immediate outcomes intended by the Policy: improved evaluation practices, competent evaluation workforce, and adequate capacity.

E. Is the evaluation function producing timely and effective insight, integrated with department decision-making?

This issue speaks to the quality of the work of the evaluation function under the new Policy. It addresses the immediate outcomes: evidence-based reporting and timely, credible reporting. The issue of good-quality evaluations is separate from whether departments make good use of them, the subject of issues F and G below.

F. Have departments improved their use of evaluation results for strategic decision-making?

This issue speaks to the intermediate outcome intended by the Policy, improved use of results for department decision-making, and to the final outcome, better accountability to Parliament. Assessment begins with examining the pre-conditions for improved use: increased confidence by senior management in the function, and a structure able to direct evaluation and to process and use its results. The final question is whether the evaluation results have been effectively used.

G. Has the Evaluation Policy contributed to improved design, implementation and resourcing of department/agency programs, policies & initiatives?

This issue speaks to the intermediate outcome intended by the Policy: improved design, implementation and resourcing of programs, policies, and initiatives; and to the final outcomes better results for Canadians, and more effective use of resources. Again, assessment begins by examining important pre-conditions - whether managers support the new policy and feel safe in cooperating with it. The final sub-issue deals with the actual outcome.

H. Are senior agencies (TBS, PCO) using evaluation and performance monitoring results for strategic decision making?

This issue examines whether TBS (and PCO where relevant) are carrying out their own role under the Policy to "use products of evaluation to inform decision making at the centre". In consultations over this RMAF, many parties noted that if RMAFs and evaluation findings did not affect funding and policy decisions at the centre, then departments and agencies would be less motivated to fully implement the Evaluation Policy.

Cost Effectiveness and Continued Relevance Issues

I. Is the Evaluation Policy Cost Effective?

Potential cost-savings and alternative methods will be considered here, emerging from the analysis of the earlier issues. The value-for-money of the CEE will also be assessed, informed by the analysis of the success of the CEE's activities under Issue D above.

J. Is there a need for further changes to the policy or its implementation?

This is the summative question assessing the success of the Policy and its implementation, and suggesting needed changes where relevant.

Role of 18 month and 5 year Evaluations

It is usual practice to resolve formative questions about outputs of a program/policy/initiative in the interim evaluation, and address summative questions of program success in the final evaluation.

However, for the Evaluation Policy it will be important to give all the above issues some attention, however inconclusive, in the evaluation at 18 months. Consultation with department and agency representatives over the RMAF revealed two reasons for this:


EVALUATION STRATEGY - DATA COLLECTION

Core to the data collection strategy is the annual performance monitoring through department strategic evaluation plans, discussed in Section 2. This section addresses additional data collection associated with the 18-month and 5-year evaluation studies. Table 4 at the end of this section summarizes data collection activities, frequency, and responsibility for completion.

Role of Business Case Funding Recipients

A large number of departments/agencies are receiving additional evaluation funding through the TBS-administered Business Cases for Resourcing Evaluation program. Participants have received funding for the first two years of the four-year program. Funding for the third and fourth years is contingent on the results of the 18-month interim evaluation study. Data collected from participants is a key source for both the 18-month and 5-year evaluations.

Existing Baseline Information

There are two principal sources of baseline information on evaluation capacity and gaps:

Data Collection at 18 Months

Eighteen months is insufficient time to fully implement the Evaluation Policy or assess its impacts. However, given the need to assess whether to extend Business Case funding to a third and fourth year, some preliminary assessments will need to be made of:

Data collection at 18 months will focus on progress in initial implementation steps, such as establishing an accountable evaluation function, expanding the scope of evaluation, and narrowing capacity gaps. Significant effort will also be made to extend baseline data for such indicators as current satisfaction with the evaluation function by deputy heads. Data collection activities are elaborated below.

Baseline Data Collection & Department Evaluation Plans

As discussed in the time-line proposed under Performance Monitoring, a preliminary baseline data collection exercise will be completed in the first quarter of 2002/03, with ongoing collection of the same performance information in subsequent Department Evaluation Plans. The proposed common monitoring section of the Plans is expected to include at least a qualitative retrospective on changes/improvements relative to the previous year. In addition, the gap analyses may be compared to the earlier Survey of Evaluation Capacity for departments that participated in the Survey.

Business Case Progress Reports & Submissions

Recipients of Business Case funding will be asked to provide a progress report in order to support continuation of funding for the remaining two years of the program. These reports will be asked to compare progress against the gap analysis and milestones originally proposed. It is fully recognized that most departments/agencies received substantially less than the funding applied for, and that this will affect expected progress proportionately. Departments/agencies will be asked to report on what they have been able to achieve given their own resources and the funds made available through the Business Cases.

It is anticipated that the contents of some department/agency Strategic Evaluation Plans will also address most or all of the necessary information for a progress report. Where appropriate, departments/agencies may wish to append the Plan to a shorter letter or document providing additional content as needed.

Assessment of Sample Evaluation Reports

Departments and agencies file completed evaluation reports with the CEE. It is proposed that a random sample of 30 evaluation reports be taken to establish a baseline for quality of practice in the 18-month evaluation, with a follow-up to assess progress in the 5-year evaluation. Reports will be assessed for conformance to standards of practice, specificity of recommendations, etc. The status of recommendations within the department will be determined by telephone interviews with the department evaluation function, the program manager, and the TBS portfolio officer. The analysis will also address the degree to which department/agency ongoing performance monitoring systems were available to be integrated into the evaluation.

The intention is to repeat this exercise at year 5 of implementation to assess any relative improvements in quality of practice.

Annual Retreat & Strategic Planning Session - Heads of Evaluation

Heads of evaluation are among those most informed about progress and challenges in implementation of the Policy. The annual strategic planning retreat provides an opportune place to check in with the community on the status of evaluation issues. The timing of the fall retreat is approximately in line with the timing of an 18-month review. Plenary and workgroup sessions will address the progress of the policy and the state of evaluation.

Case Studies

Two other key groups to hear from are Deputy Heads of departments, and front-line managers of programs/policies/initiatives. There is also a need to more closely assess the progress of implementation in departments, and validate the broader findings collected from department annual revisions to their Evaluation Plans.

To address both these needs, a set of department/agency case studies is proposed. The experience of selected departments and agencies would be investigated twice, once at 18 months and again at 5 years. Findings at 18 months would largely set a baseline from which progress could be measured at 5 years, but would also capture any preliminary progress in implementation.

Key informants within the case studies would include:

Data to be collected is reflected in Table 6: Issues, Performance Indicators, and Data Sources.

A minimum of eleven case studies is proposed: seven from departments/agencies receiving Business Case funding, and four others. The seven from the Business Case funding group would be drawn from three groups based on department size and the initial status/strength of the evaluation function as assessed during the allocation of Business Case funds:

Sample From Business Case Funding Recipients

Sample from Non-Business Case Funding Recipients

Two large departments

Two small departments

The above sample is intended to capture the impact of the policy on departments of different sizes and with different levels of development in their evaluation capacity. The sample is also intended to help assess the impact of Business Case funding, an important question for Treasury Board in reviewing funding for the remaining two years of the program.

Survey of Evaluation Clients and Stakeholders

As with the case studies, the two key groups to hear from are Deputy Heads of departments and front-line managers of programs/policies/initiatives, and there is a need to validate the broader findings collected from department annual revisions to their Evaluation Plans.

To address both these needs, a survey is proposed. Separate surveys would be conducted of a sample of each of these groups:

A baseline survey would be conducted at 18 months, and repeated for the 5-year evaluation. The purpose of the survey would be to obtain an independent assessment of how well the evaluation function is doing its job under the new policy. Managers involved in evaluation are also included to permit comparison, and to provide quantitative indicators of the degree of positive self-assessment on the implementation of the new policy.

Questions would include multiple-choice assessments of the degree of satisfaction with, or perceived performance of, evaluation with respect to the indicators identified in this RMAF. This is intended to allow quantification of the degree of satisfaction and of relative improvement over time. The approach also allows cost-effective data collection to validate the information collected through the Progress Reports that are proposed for department Evaluation Plans.

A minimum sample of 30 for each group will be taken. Selection would be random based on reported evaluations for the previous year, but stratified according to

The survey would be conducted in person for Deputy Heads. For the others, the survey would be circulated via Heads of Evaluation, and returned directly to the CEE.

Key Informant Interviews

In addition to the above exercises, interviews will be conducted to include all client/stakeholder communities. These will include:

The number of these interviews will be limited at 18 months, but extended for the 5-year evaluation.

Data Collection Activities at 5 Years

The same activities will be repeated at 5 years, with more extended key informant interviews, and a more extended record of progress in department/agency annual revisions to Evaluation Plans.

Table 4: Data Collection Activities, Frequency, and Responsibility

Activity | Frequency | Responsibility

Baseline Survey of Evaluation Capacity | Completed | TBS

Business Case Submissions (Round 1) | Completed | Departments/Agencies

Baseline data collection from departments in preparation for first round of revised Evaluation Plans | Once | CEE

Department Evaluation Plans | Annual | Departments/Agencies

Survey of Evaluation Clients and Stakeholders | 18 months and 5 years | CEE

Annual CEE Report and roll-up of Department Evaluation Plan information on gaps | Annual | CEE

Business Case Reports & Submissions (Round 2) | Once | Departments/Agencies with CEE guidance

Assessment of Sample Evaluation Reports | 18 months and 5 years | CEE

Annual Strategic Planning Retreat | Annually, with RMAF components at 18 months and 5 years | CEE

Case Studies | 18 months and 5 years | CEE

Key Informant Interviews | 18 months and 5 years | CEE


 

EVALUATION STRATEGY - PERFORMANCE INDICATORS

For each of the evaluative questions identified in Section 3, performance indicators are needed to provide the basis for an answer. Table 2 earlier provided a list of key indicators for ongoing performance monitoring. Table 6, at the end of this document, provides an expanded set of indicators for each evaluation issue and lists associated data collection activities. Consistent with the plan to evaluate all issues on a preliminary basis after 18 months, a distinction is drawn between the performance indicators expected at 18 months and those expected in the full evaluation at 5 years.


REPORTING STRATEGY

Table 5 lists the reports expected under this RMAF.

Table 5: Expected Reports

Results Measurement | Product | Date

Progress Report on Implementation of the New Evaluation Policy (by each department) | Section of annual revision to department/agency Evaluation Plan | Annual, by departments and agencies

Roll-up of Progress Reports for the evaluation function, plus report on CEE outputs and performance assessment | CEE Annual Report | Annual, by CEE

Interim evaluation and supporting reports from data collection | Interim Evaluation Report | Second quarter 2002/03

Full evaluation and supporting reports from data collection | Evaluation Report | 2005/2006

Table 6: Issues, Performance Indicators, and Data Sources

For each issue below, performance indicators are listed at 18 months and at 5 years, together with the associated data sources.

PROGRESS/SUCCESS ISSUES
A. HAVE DEPARTMENTS IMPLEMENTED THE POLICY & EXPANDED CAPACITY?
1. Have departments expanded the scope of evaluation beyond traditional areas?

   Indicators @18 Months: Use of expanded scope in Evaluation Plans. RMAFs and evaluations underway or complete in new areas.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Baseline Data Collection & Evaluation Plans; Case Studies and Survey of Stakeholders.

2. Is there an effective Evaluation Committee or similar internal process contributing to setting department priorities, planning, and decision-making?

   Indicators @18 Months: Frequency of meetings and relevance of agenda. Committee membership in terms of level and representation. Chairing of the Committee by a senior executive. Consistency of the role and mandate of committees with Evaluation Policy requirements. Whether the Committee approves the evaluation plan.

   Indicators @5 Years: Same as 18 months, plus: Role of the committee in setting the strategic direction of evaluation. Follow-up monitoring of accepted recommendations. Role of the committee in ensuring integration with department/agency performance measurement and linking evaluation results with department strategic choices.

   Data Sources: Baseline Data Collection & Evaluation Plans; Case Studies and Survey of Stakeholders; Key Informant Interviews.

3. Is there an effective departmental Evaluation Plan, with follow-up? Has capacity been established to support such planning?

   Indicators @18 Months: Existence and quality of Plans. Evidence of a process for follow-up. Existence of a framework accounting for the potential evaluation universe. Quality of mechanisms to track progress on previous plan goals/intentions.

   Indicators @5 Years: Same as 18 months, plus: Degree of follow-up on plan commitments and strategy. Year-to-year consistency of plans in reporting progress and tracking shifts in strategy.

   Data Sources: Baseline Data Collection & Evaluation Plans; Case Studies and Survey of Stakeholders; Key Informant Interviews.

4. Is there adequate coverage of department programs, policies and initiatives by RMAFs and evaluations?

   Indicators @18 Months: Existence of a department assessment of gaps in coverage. Progress and plans/commitments for the coming year in closing gaps. Progress on RMAFs for grants and contributions.

   Indicators @5 Years: Same, without allowing for future plans/commitments.

   Data Sources: Baseline Data Collection & Evaluation Plans; Case Studies and Survey of Stakeholders; Key Informant Interviews.


B. HAVE DEPARTMENTS MADE PROGRESS IN IMPLEMENTING ONGOING PERFORMANCE MEASUREMENT?
1. Is there the capacity or framework to assess the degree to which performance measurement is in place?

   Indicators @18 Months: Existence of sufficient information to permit Heads of Evaluation to assess progress.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Baseline Data Collection & Evaluation Plans; Case Studies and Survey of Stakeholders; Key Informant Interviews.

2. Are there ongoing performance measurement systems in place for programs, policies & initiatives that are integrated with RMAFs?

   Indicators @18 Months: Proportion of programs that have a performance monitoring system in place and have generated at least one report. Links with RMAFs and evaluations.

   Indicators @5 Years: Same as 18 months, plus: Role of shared measurement systems in ongoing guidance of the department. Degree of integration of performance measurement systems for assessment of overall department/agency performance. Use of results from performance measurement in department/agency reporting to Parliament. Views of senior managers.

   Data Sources: Baseline Data Collection & Evaluation Plans; Case Studies and Survey of Stakeholders; Key Informant Interviews.

3. Has the evaluation function contributed to the development of ongoing performance monitoring?

   Indicators @18 Months: Activity reported in Evaluation Plans. Awareness of roles by program managers.

   Indicators @5 Years: Role and recent activity of the evaluation function in establishing program and department performance measurement systems.

   Data Sources: Baseline Data Collection & Evaluation Plans; Case Studies and Survey of Stakeholders; Key Informant Interviews.

4. Are department/agency environments supportive of the development of ongoing performance monitoring systems?

   Indicators @18 Months: Demand for RMAFs and other performance management systems. Content of demand: is there a demonstrated intent to use?

   Indicators @5 Years: Same as 18 months, plus: Actual use (see issue below).

   Data Sources: Baseline Data Collection & Evaluation Plans; Case Studies and Survey of Stakeholders; Key Informant Interviews; Annual Retreat of Heads of Evaluation.

5. Is performance data being generated and used?

   Indicators @18 Months: Circulation and use of results reports by program managers and senior managers.

   Indicators @5 Years: Use of results from performance measurement in department/agency reporting to Parliament.

   Data Sources: Baseline Data Collection & Evaluation Plans; Case Studies and Survey of Stakeholders; Key Informant Interviews; Annual Retreat of Heads of Evaluation.

C. HAVE DEPARTMENTS BEEN ABLE TO PROVIDE ADEQUATE EVALUATION CAPACITY?
1. Have departments been able to increase the number of qualified personnel, either in the evaluation function or elsewhere in the organization?

   Indicators @18 Months: Staffing actions. Numbers of staff recruited. Success in retaining staff (relative to benchmark turnover data). Proportion of evaluation officers assessed by the department as requiring further training or experience before being fully qualified to fulfill the responsibilities of their level, outside the normal ongoing training requirements of a learning culture.

   Indicators @5 Years: Same as 18 months, plus training: existence of training programs and consistency of programs with evaluation professional standards and the Evaluation Policy; frequency and quality of external training; enrollment and graduation of personnel in recognized training programs.

   Data Sources: Survey of Evaluation Capacity; Business Case Submissions; Business Case Progress Reports; Baseline Data Collection & Evaluation Plans; Case Studies and Survey of Stakeholders; Key Informant Interviews; Annual Retreat of Heads of Evaluation.

2. Have previously identified gaps in resourcing been closed? Is there still a relevant gap?

   Indicators @18 Months: Progress on closure of identified staffing gaps, and demographic analysis of evaluation staff before and after. Gap assessments in Evaluation Plans.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Survey of Evaluation Capacity; Business Case Submissions; Business Case Progress Reports; Baseline Data Collection & Evaluation Plans; Case Studies and Survey of Stakeholders; Key Informant Interviews; Annual Retreat of Heads of Evaluation.

3. Has progress on capacity been adequate relative to available resourcing? Does a relevant resource gap remain?

   Indicators @18 Months: Progress on closure of gaps in coverage relative to closure of identified resource gaps. Improvements in evaluator qualifications. Absence of bottlenecks in training, retention, etc. Presence of good demand for evaluation output.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Survey of Evaluation Capacity; Business Case Submissions; Business Case Progress Reports; Baseline Data Collection & Evaluation Plans; Case Studies and Survey of Stakeholders; Key Informant Interviews; Annual Retreat of Heads of Evaluation.

4. Would a change in resourcing levels improve value for money?

   Indicators @18 Months: Progress on closure of gaps in coverage relative to closure of identified resource gaps. Improvements in evaluator qualifications. Absence of bottlenecks in training, retention, etc. Presence of good demand for evaluation output.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Survey of Evaluation Capacity; Business Case Submissions; Business Case Progress Reports; Baseline Data Collection & Evaluation Plans; Case Studies and Survey of Stakeholders; Key Informant Interviews; Annual Retreat of Heads of Evaluation.

D. HAS THE CEE PROVIDED A SUPPORTIVE ENVIRONMENT?
1. Has the CEE provided appropriate tools, events, advice and other elements of a supportive environment for evaluation?

   Indicators @18 Months: Progress on delivering elements of a well-functioning environment, including provision of tools, policies, training and certification structure, and support for networking and communication. Value of outputs to departments and agencies in achieving department/agency goals for evaluation. Value of policy and ad hoc advice provided. Recognition of the Centre by peers and decision makers.

   Indicators @5 Years: Same as 18 months, plus: Opinions of deputy heads, heads of evaluation, and evaluation practitioners about the events, advice and other elements of a supportive environment for evaluation provided by the CEE.

   Data Sources: Case Studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring Report.

2. Has the CEE met its Community Development Strategy goals?

   Indicators @18 Months: Number and importance of goals met.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Case Studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring Report.

3. Is there an adequate framework in place for training and certifying evaluators?

   Indicators @18 Months: Existence of a structure. Availability of courses etc., in time and in available places.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Case Studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring Report.

4. Has the CEE implemented a monitoring system?

   Indicators @18 Months: Ability to generate a timely report on the state of evaluation in the Public Service. Capacity of the report to provide timely identification of issues in department practices of performance management, evaluation, and strategic direction of programs, policies and initiatives.

   Indicators @5 Years: Same as 18 months, plus: Circulation and use of reports.

   Data Sources: Case Studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring Report.

5. Has the CEE been effective in reporting monitoring system results and conclusions to senior agencies?

   Indicators @18 Months: Reports and advice generated. Quality of responses. Views of evaluation managers. Views of senior agency managers.

   Indicators @5 Years: Impacts of reports on decision making in central agencies and individual departments/agencies.

   Data Sources: Case Studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring Report.

6. Has the CEE been an effective advocate for the evaluation community?

   Indicators @18 Months: Quality and content of reports and advice provided to senior agencies. Existence of consultation/communication events or meetings to assess community issues. Opinions of heads of evaluation and CEE managers. Impacts of the CEE on the TBS/CEE approach to evaluation.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Case Studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring Report.

E. IS THE EVALUATION FUNCTION PRODUCING TIMELY AND EFFECTIVE INSIGHT, INTEGRATED WITH DEPARTMENT DECISION MAKING?
1. Are evaluation studies effectively using and supported by department performance monitoring systems?

   Indicators @18 Months: Proportion of recent studies able to place significant reliance on ongoing performance management systems.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Assessment of Sample Evaluation Reports; Case Studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring Report.

2. Is the quality of reports acceptable? Has there been an improvement? How successful has the evaluation function been in identifying potential improvements in programs, policies and initiatives?

   Indicators @18 Months: Proportion of reports with clear recommendations indicating actions to be taken and time frame. Views of senior managers.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Assessment of Sample Evaluation Reports; Case Studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring Report.

3. Are recommendations produced on a timely basis?

   Indicators @18 Months: Views of senior managers.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Assessment of Sample Evaluation Reports; Case Studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring Report.

4. To what extent has the new Evaluation Policy contributed to the above?

   Indicators @18 Months: Role and history of the Senior Evaluation Committee (see above). Awareness among senior managers of the expanded scope of the policy. Views of evaluation managers and senior managers.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Assessment of Sample Evaluation Reports; Case Studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring Report.

F. HAVE DEPARTMENTS IMPROVED THEIR USE OF THE EVALUATION RESULTS FOR DECISION MAKING?
1. Has senior management confidence in the evaluation function improved?

   Indicators @18 Months: Degree of confidence in the evaluation function by deputy heads and senior managers.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Case Studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring Report; Business Case Progress Reports.

2. To what extent have managers "bought into" the revised Evaluation Policy?

   Indicators @18 Months: Level of commitment and support by deputy heads for the evaluation function. Evidence of commitment to evaluation (e.g., as documented in internal memos, business plans, etc.). Mechanisms in place to ensure dissemination of performance measurement results and evaluation results to managers. Mind-set of executives and decision-making managers towards integrating results of evaluation reports into planning and decision-making. Degree to which evaluation activities are considered a priority and receive appropriate resourcing. Degree to which managers understand the role and responsibilities of Heads of Evaluation.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Case Studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring Report; Business Case Progress Reports.

3. To what extent has a supportive department environment been established for results-based management and evaluation?

   Indicators @18 Months: Positive recognition for managers implementing performance monitoring. Positive recognition for managers learning from both good and poor performance results. Use of results in department performance reporting. Sufficiency of resourcing. Indicators for other sub-issues in this section.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Case Studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring Report; Business Case Progress Reports.

4. Has the evaluation function been used well? To what extent has senior management participated in setting evaluation priorities?

   Indicators @18 Months: Role of department priorities in setting the Evaluation Plan and in Evaluation Committee decisions. Degree to which evaluation outputs collectively match the current perceived needs of the department/agency.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Case Studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring Report; Business Case Progress Reports.

5. Is there a well-functioning department/agency decision-making process for acting on evaluation study recommendations?

   Indicators @18 Months: See the Evaluation Committee issue above.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Case Studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring Report; Business Case Progress Reports.

6. Is there follow-up monitoring on implementation of accepted recommendations? Is senior management involved?

   Indicators @18 Months: Existence of a system. Current levels of cooperation on receipt of progress reports from program managers.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Case Studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring Report; Business Case Progress Reports.

7. To what extent have evaluation results been used in department strategic decision-making? Has there been an improvement?

   Indicators @18 Months: Contribution of evaluation reports to key decisions on department programs, policies and initiatives. Implementation of results. Results of implementation.

   Indicators @5 Years: Same as 18 months.

   Data Sources: Case Studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring Report; Business Case Progress Reports.

ISSUE | PERFORMANCE INDICATORS (@18 Months / @5 Years) | DATA SOURCE
G. HAS THE EVALUATION POLICY CONTRIBUTED TO IMPROVED DESIGN, IMPLEMENTATION AND RESOURCING OF DEPARTMENT/AGENCY PROGRAMS, POLICIES AND INITIATIVES?
1. To what extent have program managers "bought into" the revised Evaluation Policy?

Performance indicators @18 months:
- Mechanisms in place to ensure dissemination of performance measurement and evaluation results to managers.
- Mind-set of executives towards integrating results of evaluation reports into planning and decision-making.
- Degree to which evaluation activities are considered a priority and receive appropriate resourcing.
- Degree to which managers understand the role and responsibilities of Heads of Evaluation.
- Demand by managers for evaluation services.

Performance indicators @5 years: Same as 18 months.

Data sources: Case studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring report; Business Case Progress Reports.

2. Do program managers feel "safe" in working with and implementing objective monitoring and evaluation?

Performance indicators @18 months:
- Policies and practices in dealing with poor performance results and evaluation findings.
- Method, if present, of linking performance-monitoring systems to executive compensation.
- Opinions of program managers, senior managers, and heads of evaluation.

Performance indicators @5 years: Same as 18 months.

Data sources: Case studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring report; Business Case Progress Reports.

3. To what extent have programs, policies and initiatives been improved by evaluation findings and recommendations?

Performance indicators @18 months:
- Commitments to implementation of results from recent evaluations.
- Implementation of results.
- Results of implementation.

Performance indicators @5 years: Same as 18 months.

Data sources: Case studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring report; Business Case Progress Reports.

H. ARE SENIOR AGENCIES USING EVALUATION AND PERFORMANCE MONITORING RESULTS FOR STRATEGIC DECISION-MAKING?
1. Does implementation of parliamentary reporting systems by senior agencies support and encourage the utilization of RMAFs, performance measures, and evaluations?

Performance indicators @18 months:
- Content of policies, practices and guidelines issued by senior agencies on parliamentary reporting.
- Use evidenced in department/agency reports to Parliament and TB.
- Consistency of department/agency reports with major recent commitments to monitor performance through RMAFs.
- Assessment by heads of evaluation, senior managers, and senior agency managers.
- Degree to which evaluation and performance results reach Parliamentarians.

Performance indicators @5 years: Same as 18 months.

Data sources: Case studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring report; Business Case Progress Reports.

2. Are good-quality RMAFs, performance measures, and evaluation results required and/or valued in support of submissions to senior agencies?

Performance indicators @18 months:
- Degree to which RMAFs and/or past performance measures are required as part of TB submissions or memoranda to Cabinet.
- Existence of processes and reference to standards for quality.
- Use of processes by senior agencies.
- Assessment by heads of evaluation, senior managers, and senior agency managers.

Performance indicators @5 years: Same as 18 months.

Data sources: Case studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring report; Business Case Progress Reports.

3. Are commitments to provide RMAFs and performance measures followed up?

Performance indicators @18 months:
- Existence of processes by senior agencies to check previous performance measurement commitments.
- Use of processes.
- Assessment by heads of evaluation, senior managers, and senior agency managers.

Performance indicators @5 years: Same as 18 months.

Data sources: Case studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring report; Business Case Progress Reports.

4. Are performance results and evaluation findings used for strategic decision-making?

Performance indicators @18 months:
- Views of senior agency managers, senior department/agency managers, and heads of evaluation.
- Examples of negative decisions for lack of performance information or poor evaluation results.
- Number of positive examples where performance or evaluation results were instrumental in determining the direction of a decision.

Performance indicators @5 years: Same as 18 months.

Data sources: Case studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring report; Business Case Progress Reports.

5. Have senior agencies provided a consistent policy regime that recognizes the impact of other policies on the evaluation function?

Performance indicators @18 months:
- Gaps and backlogs in coverage by RMAFs and evaluations.
- Evidence attributing gaps and backlogs to other senior agency policies' impact on evaluation.
- Examples of other policies cross-referencing evaluation.
- Examples of other policies not cross-referencing, or missed opportunities to gain efficiencies and effectiveness.
- Opinions of heads of evaluation, senior managers in departments, and senior agency managers.

Performance indicators @5 years: Same as 18 months.

Data sources: Case studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring report; Business Case Progress Reports.

COST-EFFECTIVENESS
I. IS THE EVALUATION POLICY COST EFFECTIVE?
1. Are there ways in which the evaluation function could be delivered more cost-effectively?

Performance indicators @18 months: Opinions of deputy heads, senior managers, program managers, heads of evaluation, and Parliamentarians.

Performance indicators @5 years: Same as 18 months.

Data sources: Case studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring report; Business Case Progress Reports.

2. Has the CEE delivered value for money?

Performance indicators @18 months: Opinions of deputy heads, senior managers, program managers and heads of evaluation.

Performance indicators @5 years: Same as 18 months.

Data sources: Case studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring report; Business Case Progress Reports.

3. Are there alternative methods of organizing delivery of the Evaluation Policy?

Performance indicators @18 months: Opinions of deputy heads, senior managers, program managers and heads of evaluation.

Performance indicators @5 years: Same as 18 months.

Data sources: Case studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring report; Business Case Progress Reports.

CONTINUED RELEVANCE
J. IS THERE A NEED FOR FURTHER CHANGES TO THE POLICY OR TO ITS IMPLEMENTATION?
1. Does the evaluation function deliver results for Canadians?

Performance indicators @18 months: Consensus among senior managers and evaluation heads on the rationale for the new Evaluation Policy and its relevance to the overall government agenda.

Performance indicators @5 years: Same as 18 months, plus:
- Success of the Policy and related initiatives in achieving modernization of management practices (e.g. the "Results for Canadians" integrated results-based management framework).
- Degree to which the new Evaluation Policy is integrated with these achievements.
- Opinions of selected Members of Parliament.

Data sources: Case studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring report; Business Case Progress Reports.

2. Are there better ways of delivering the evaluation function within the federal government?

Performance indicators @18 months: Opinions of deputy heads, senior managers, program managers, and heads of evaluation.

Performance indicators @5 years: Same as 18 months.

Data sources: Case studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring report; Business Case Progress Reports.

3. Are there improvements that can be made in the Evaluation Policy?

Performance indicators @18 months: Opinions of deputy heads, senior managers, program managers and heads of evaluation.

Performance indicators @5 years: Same as 18 months.

Data sources: Case studies; Baseline Data Collection & Evaluation Plans; Key Informant Interviews; Annual Retreat of Heads of Evaluation; CEE Annual Performance Monitoring report; Business Case Progress Reports.
