
Results Reporting Capacity Check

Archived information

Archived information is provided for reference, research or recordkeeping purposes. It is not subject to the Government of Canada Web Standards and has not been altered or updated since it was archived. Please contact us to request a format other than those available.

Purpose

When Can Departments Use the RRCC?

Overview of RRCC Implementation 

Step-by-Step Implementation

Timing and Estimated Level of Effort

Working with the Results of the RRCC

Annex A: Capacity Check Template

Purpose

The Centre of Excellence for Evaluation (CEE) has developed the Results Reporting Capacity Check (RRCC) to assist departments and agencies in systematically assessing the capacity of selected programs and initiatives to manage, achieve and demonstrate results for Canadians.

Results of the RRCC give departmental executives a level of confidence that a program or initiative has the capacity, as shown by its performance, to demonstrate and account for results. Positive RRCC results give senior management confidence that performance information is readily available and/or that an evaluation could be executed easily. Poor RRCC results signal the need for a formal program evaluation.

It is important to note that conducting an RRCC does not preclude departmental senior executives or TB ministers from requesting that an evaluation be conducted should conditions or circumstances change (e.g., a change in departmental or government priorities or in the external environment). It is merely one tool that can assist departments in assessing results reporting capacity.

When Can Departments Use the RRCC? 

The RRCC is intended for programs, policies or initiatives under the Transfer Payment Policy that have the following characteristics:

  • Low financial risk (i.e., $2 million in annual expenditures or less);
  • Demonstrated capacity to deliver (i.e., clear accountabilities, demonstrated capacity to manage and monitor);
  • Low potential for public controversy/low profile; and,
  • Few or well-aligned stakeholders and partners.

Overview of RRCC Implementation 

The RRCC is a rapid assessment of the state of a program, policy or initiative's capacity to manage, achieve and demonstrate results. It focuses on three areas:

  • Strategic Capacity demonstrates that the program is thinking and acting strategically with respect to program delivery and the achievement of results;
  • Operational Capacity demonstrates that appropriate systems, practices and processes are in place to manage for results; and,
  • Results Capacity demonstrates that results are being achieved and corrective action has been taken when necessary to maintain a results focus.

The Capacity Check Template (see Annex A) guides the collection and analysis of data and information in these three key areas.  

The department or agency's Evaluation Unit should conduct the capacity check. The CEE recommends that a senior analyst with in-depth knowledge of evaluation and performance measurement practices across the department conduct the assessment.

It is important to note that the RRCC provides a snapshot of a program's[1] capacity to manage and achieve results at a single point in time; it cannot be applied to future years.

Step-by-Step Implementation 

The following outlines a step-by-step process for completing the RRCC. It is important to be familiar with the Capacity Check Template (Annex A) before beginning.

Step 1: Document Review
The RRCC begins with a review of key documents: the program's Results-Based Management and Accountability Framework (RMAF) and other key strategic or policy-related documents. The purpose of the document review is to ensure that the analyst has a good understanding of the original purpose and plan for the program so that he or she can engage in an effective dialogue with program management. Particular attention should be given to understanding the program's intended results/outcomes.

Step 2: Key Informant Interview
The analyst should meet with the Program Director to complete the Capacity Check Template. Responses should be recorded, and for each response documentation should be provided at the interview to substantiate it (see the potential data sources suggested for each question in Annex A). The template can be shared with the Program Director in advance to expedite data collection.

Step 3: Consultation with Key Stakeholders
Once Step 2 is completed, the analyst should consult with key stakeholders to further validate the information presented. At a minimum, key stakeholders should include the TBS Program Sector analyst(s), the department's policy unit, and the ADM responsible for the program. If applicable and time permits, the analyst may also wish to consult third-party delivery organizations.

Step 4: Capacity Analysis
An analysis of the data and information collected should be completed, and an overall capacity rating should be presented with an associated recommendation on future evaluation activities.

Because of the nature and purpose of the RRCC, it is inappropriate for recommendations to extend into program design or implementation. The RRCC merely signals the need for further evaluation work and potential evaluation issues. The results of the capacity analysis should be shared with the Program Director and signed off by the Head of Evaluation prior to presentation to departmental senior management and TBS.

Annex A provides guidance on how to determine the overall reporting capacity of the program. Some elements require a mandatory "good" capacity rating; others can be assessed as either "good" or "adequate". To obtain an overall rating of "good", the program must meet the mandatory ratings as identified and must not have limited capacity in any other area being assessed.
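
As a minimal illustration, this decision rule can be expressed as a simple check. The Python sketch below is illustrative only; the function name and the element list are hypothetical and are not part of the RRCC or any TBS tool.

```python
# Minimal sketch of the overall-rating rule described above (illustrative only;
# names are hypothetical and not part of the RRCC).

def overall_rating_is_good(elements):
    """elements: list of (rating, mandatory_good) pairs, where rating is
    "good", "adequate" or "limited", and mandatory_good flags the elements
    that require a mandatory "good" rating."""
    # Rule 1: no element may show limited (or no) capacity.
    no_limited = all(rating != "limited" for rating, _ in elements)
    # Rule 2: every element flagged as mandatory must actually be rated "good".
    mandatory_met = all(rating == "good"
                        for rating, mandatory in elements if mandatory)
    return no_limited and mandatory_met

# Example: a mandatory element rated only "adequate" rules out an overall "good".
print(overall_rating_is_good([("good", True), ("adequate", True), ("good", False)]))  # False
```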

Step 5: Recommendation, Reporting and Action
The RRCC report should consist of a one-page summary of the findings, conclusions and the final recommendation. The detailed results template can be appended. The final recommendation as to whether an evaluation is required for the program lies with the Head of Evaluation and/or TBS. The results of the RRCC should be presented to the department's Audit and Evaluation Committee for review and approval prior to submission to TBS.

Timing and Estimated Level of Effort 

Assuming the availability of key informants, the entire capacity assessment process should take no more than four days of effort, spread over a two-week period.

Working with the Results of the RRCC 

As its use requires an exemption to the Transfer Payment Policy, a department cannot use the RRCC without prior approval from the TBS Program Sector and the CEE. Should the RRCC demonstrate that a program has overall good capacity to report on results, an exemption from conducting a formal program evaluation will be possible. If, however, the RRCC proves otherwise (i.e., limited capacity to report on results), the department is required to complete a formal program evaluation to support the renewal process. The evaluation must follow the established evaluation standards outlined in the 2001 Evaluation Policy. With this in mind, it is important to complete the RRCC on a timely basis, leaving sufficient time to conduct an evaluation if one is warranted.

Annex A: Capacity Check Template

 

Strategic Capacity

How do your program's objectives link to the strategic outcomes of the department?
  • Potential data sources: Program Director; PAA; RMAF
  • Good capacity: Program objectives clearly link to the strategic outcome(s) of the department, and this linkage is well understood by program management.
  • Adequate capacity: There is reference to strategic outcomes, but the linkage is not clear (i.e., difficult to make).
  • Limited or no capacity: There is no link to the strategic outcome(s) of the department (i.e., not evident in the PAA).
  • Capacity rating: mandatory "good" rating.

Who are the key stakeholders and how are they engaged in the program?
  • Potential data sources: Program Director; RMAF
  • Good capacity: All key stakeholders are identified and formally engaged in program delivery and/or monitoring.
  • Adequate capacity: Key stakeholders are identified and are engaged in an ad hoc or informal manner (i.e., there is no formal engagement strategy).
  • Limited or no capacity: Some key stakeholders are identified but are not engaged in program delivery or monitoring activities.
  • Capacity rating: "good" or "adequate".

What other organizations are involved in this or similar programs?
  • Potential data sources: Program Director; other key stakeholders; RMAF
  • Good capacity: There is good knowledge of other programs or initiatives in the federal government, other levels of government, or the private or voluntary sectors that are involved in similar activities. Program management has a strategy and is beginning to work towards avoiding overlap and duplication.
  • Adequate capacity: There is some knowledge of other programs or initiatives in the federal government, other levels of government, or the private or voluntary sectors that are involved in similar activities. Program management has not yet developed a strategy to avoid overlap and duplication.
  • Limited or no capacity: There is no knowledge of other programs or initiatives in the federal government, other levels of government, or the private or voluntary sectors that are involved in similar activities.
  • Capacity rating: mandatory "good" rating.

What in the environment has changed since the program started, and how has the program been adapted to reflect these changes?
  • Potential data sources: Program Director; environmental scan reports; strategy papers; other key stakeholders
  • Good capacity: Program management has a long-term plan in place that is grounded in strategic analysis.
  • Adequate capacity: Program management does not have a long-term plan in place but does have a long-term perspective grounded in strategic analysis.
  • Limited or no capacity: There is no long-term plan in place and no evidence of strategic analysis to support decision-making.
  • Capacity rating: "good" or "adequate".

Operational Capacity

To what extent has the program been delivered as planned?
  • Potential data sources: Program Director; RMAF; operational plans
  • Good capacity: All aspects of the program have been delivered as originally planned. Where changes have been required, there is a rationale for them.
  • Adequate capacity: Most aspects of the program have been delivered as originally planned.
  • Limited or no capacity: The program is not being delivered as planned (i.e., as outlined in the RMAF or Ts&Cs).
  • Capacity rating: mandatory "good" rating.

What other delivery approaches are being, or have been, considered?
  • Potential data sources: Program Director; RMAF; operational plans; decks or briefings
  • Good capacity: Program management considers alternatives to program delivery, with cost-effectiveness one of the considerations when assessing options.
  • Adequate capacity: Program management considers alternatives to program delivery.
  • Limited or no capacity: Program management has not considered alternatives to program delivery at any point in program planning or delivery.
  • Capacity rating: mandatory "good" rating.

To what extent is an appropriate governance structure in place?
  • Potential data sources: Program Director; RMAF; governance documents (terms of reference, minutes, action plans, etc.)
  • Good capacity: An effective governance structure is in place and meets regularly (i.e., a decision-making body with appropriate representation is in place; it meets, and decisions are made, recorded, communicated and acted on in a timely manner).
  • Adequate capacity: A governance structure with appropriate representation is in place but does not meet regularly. Decisions are made and action is taken, but not always monitored.
  • Limited or no capacity: There is no formal governance structure in place.
  • Capacity rating: mandatory "good" rating.

How do you monitor your performance?
  • Potential data sources: Program Director; RMAF; performance reports
  • Good capacity: Clear and well-understood performance monitoring systems, structures, policies and practices are in place. Performance monitoring is being used to support decision-making, and changes to the program have resulted from monitoring activities.
  • Adequate capacity: Performance monitoring/measurement is being done on an ad hoc basis. Performance measurement reports are sometimes used to support decision-making.
  • Limited or no capacity: No performance monitoring is being done.
  • Capacity rating: mandatory "good" rating.

Results Capacity

What intended results have you realized?
  • Potential data sources: Program Director; performance reports; annual report; evaluations
  • Good capacity: There is evidence that results (at the output and outcome levels) are being achieved as intended.
  • Adequate capacity: There is evidence that outputs are being delivered.
  • Limited or no capacity: There is no evidence of whether results (outputs or outcomes) are being achieved, or the evidence is anecdotal.
  • Capacity rating: "good" or "adequate".

OVERALL RATING: ________


[1] For the remainder of this document, the word "program" is used to mean "program, policy or initiative".
