
Models for Evaluation and Performance Measurement for Small Agencies, Summary Report


Table of Contents

Acknowledgements

Executive Summary

1.0 Introduction

2.0 Overview of Approach

3.0 Findings from Interviews and Case Studies

4.0 Model Validation Exercise

5.0 Description of Validated Models

MODEL A: Straightforward Information Needs
MODEL B: Blended Information Needs
MODEL C: Complex Information Needs

Appendices


Acknowledgements

Representatives from small agencies formed an Inter-Agency Steering Committee with the goal of developing models of how evaluation and performance measurement activities could be developed and conducted within the small agency context. We would like to thank the following participants:

  • Louise Guertin (Chair), Office of the Commissioner of Official Languages
  • Ronald Pitt, Hazardous Materials Information Review Commission
  • Annette Ducharme, Canadian Forces Grievance Board
  • Robert Sauvé, Patented Medicine Prices Review Board
  • Michael Glynn, Canadian Human Rights Tribunal
  • Guy Savard, Canadian Centre for Management Development
  • Pierre Couturier, National Parole Board

They provided guidance, reviewed drafts, provided feedback, and persevered in assisting the consulting team to understand the context of small agencies.

We are most grateful to Robert Lahey, Yolande Andrews and Kim Cronkwright of the Centre of Excellence for Evaluation, Treasury Board Secretariat, for their ongoing support as well as to the administrators of the Modern Comptrollership Innovation Fund, through which the production of this report was funded.

Members of the Inter-Agency Steering Committee worked in collaboration with Celine Pinsent, Goss Gilroy Inc.


Executive Summary

Objectives

This report presents a summary of a project that developed different potential models of evaluation and performance measurement functions within federal small agencies. The goal of the project was to explore what types of models would be most appropriate given the various characteristics, needs and resources of small agencies. The project was not designed to produce a "how-to" implementation tool-kit for small agencies developing evaluation and performance measurement functions. Rather, its purpose was to develop models that would guide any implementation work to follow. Indeed, a tool-kit or other implementation assistance may be an appropriate next step based on the models developed for the current project.

Background and Rationale

The small-agency community in the federal government is extremely diverse in such dimensions as organizational structure, relationship with larger departments, nature of work, and organization size. Small agencies also have unique challenges and characteristics when compared with medium-sized or large federal departments. These dimensions and characteristics contribute to the type and nature of information that small agencies need for decision-making and ensuring accountability.

Most small agencies are subject to the Treasury Board Evaluation Policy. However, many have neither an evaluation function, nor significant capacity to develop and conduct performance measurement activities.1 It is anticipated that an appropriately sized evaluation function and/or performance measurement capability in small agencies will contribute to an environment that will increase accountability through improved reporting of results achieved for various policies, programs or initiatives. As well, quality information on results achieved will assist in the decision-making process among managers within small agencies.

The Modern Comptrollership Office of the Treasury Board Secretariat (TBS) funded the project. The project was overseen by representatives from a group of small agencies that made up the Project Steering Committee. The Centre of Excellence for Evaluation located within TBS also provided assistance throughout the project.

1 This lack of capacity has been noted in other reports, and was generally confirmed by the current project, which was conducted in 2003–04.

Approach to Model Development

The current project took into account both the diversity of the federal small-agency community and the differences and unique challenges that small agencies encounter in comparison with medium-sized and large departments. The model development was conducted in two phases, with the initial phase including data collection (key informant interviews, case studies, a literature review, and a review of approaches in other jurisdictions). The second phase involved validating the draft models with the small-agency community through a self-assessment and gap analysis exercise.

Overview of Findings from Phase One – Data Collection

The main findings from the key informant interviews, case studies, literature review and review of other jurisdictions' approaches were divided into two main areas:

  1. characteristics of the small-agency community that need to be understood and taken into account when discussing potential models of evaluation and performance measurement; and
  2. current approaches and practices in addressing evaluation and performance measurement in small agencies.

These were the main findings with regard to community characteristics.

  • There is significant diversity in agency size, which is connected to the amount and types of information needed to make decisions and report on activities.
  • The capacity to make shifts in agency culture (towards one of results-based measurement) varies greatly between agencies.
  • In small agencies, the Head of the agency is often a political appointee who does not necessarily have experience within the public service. In some agencies, this can result in a lack of support or understanding from the Head with regard to issues of performance reporting and evaluation within a public service context.
  • Resource limitations of small agencies are notable. There is very little flexibility in their allocation of resources for the development of new internal processes that are not directly part of their mandated business line.
  • With regard to human resource considerations, there is difficulty in attracting internal capacity, even where positions exist.

These were the main findings with regard to current approaches.

  • There is a large amount of variation within the small-agency community with regard to how advanced agencies are in the areas of performance measurement and evaluation.
  • Agencies that had developed the greatest capacity for performance measurement and evaluation often had an identifiable "champion" in the agency who understood and worked consistently at explaining the benefits of performance measurement and evaluation to other members of the organization.
  • Most agencies identified senior management support as being necessary to produce the overall cultural shifts required in an organization as it integrates the concepts and processes of performance measurement and evaluation within the day-to-day activities of the organization.

Development of Models

The findings from Phase One contributed to the development of three different models of approaches to evaluation and performance measurement in small agencies. The choice of which model would be most appropriate for a particular agency is based on the answers to two underlying questions that form the rationale for evaluation and performance measurement:

  • What types of decision-making must managers in agencies perform?
  • What types of information do they need to make those decisions?

Some types of decision-making may require very complex information, while other decision-making requires relatively straightforward information. The three models are based on the level of complexity of information required from the evaluation and performance measurement activities within an organization (straightforward, blended, and complex). Determination of information needs was based on some common characteristics such as number of business lines, level of risk associated with decisions, number and types of stakeholders, predictability of workload, agency size, nature of legislation associated with agency, proportion of budget allocated to grants and contributions, number of offices, and the balance of emphasis on process and/or impacts of the agency.

Each of the three models has three main components, namely:

  • Rationale for the evaluation and performance measurement function based on information needs;
  • Design and delivery of the evaluation and performance measurement function within a particular agency, such as evaluation planning, indicator development, internal capacity vs. the use of external resources for performance measurement, and integration of the function's activities with other management activities; and,
  • Outcomes of the evaluation and performance measurement function, such as frequency of evaluation studies, frequency of internal reporting and integration with planning, integration of activities with external reporting, and using information to support decision-making.

Overview of Findings from Phase Two – Model Validation

The objective of Phase Two was to validate the models within the small-agency community. The majority of those who participated in the validation exercise indicated that they had a blend of relatively straightforward information needs combined with one or two aspects that may have required more specialized or complex information. As would be expected from the findings in Phase One, most respondents indicated that there were gaps between what their organizations were currently doing and the activities outlined in the appropriate models.

Feedback received from the respondents with regard to the models was positive, with the vast majority of respondents indicating that the models were appropriate for both their specific agency and the small-agency community in general. One comment received from a number of agencies was that the models were a reasonable start, but the exercise did not indicate the next steps to ameliorate gaps or to implement the activities described in the models. As previously noted, this was not within the scope of the current project. However, it is a good indication that the next step in this process is to assist small agencies with implementation.


1.0 Introduction

This report presents a summary of a project that developed different models of how issues of accountability and performance measurement could be addressed among the different types of federal small agencies. The current project took into account both the diversity of the federal small-agency community and the differences and unique challenges that small agencies encounter in comparison with medium-sized and large departments.

The project was designed to address the modern comptrollership area of developing, strengthening and implementing better accountability frameworks among small agencies by strengthening the evaluation and performance measurement functions in small agencies. Most small agencies are subject to the Treasury Board Evaluation Policy, but many do not have an evaluation function or significant capacity to develop and conduct performance measurement activities. It is anticipated that an appropriately sized evaluation function and/or performance measurement capability in small agencies will contribute to an environment that will increase accountability through improved reporting of results achieved for various policies, programs or initiatives. As well, quality information on results achieved will assist in the decision-making process among managers.

The small agency community in the federal government is extremely diverse when it comes to such dimensions as organizational structure, relationship with larger departments, nature of work, and organization size. These dimensions, in addition to others, contribute to the type and nature of information that agencies need for decision-making and ensuring accountability.

Small agencies also have unique challenges and characteristics when compared with medium-sized or large federal departments. One difference is that small agencies often have one or two business lines in comparison with medium-sized and large departments that often have multiple business lines containing many programs. Another difference is that, in comparison with departments, small agencies often have more limited flexibility in financial resources. Given the differences, models of meeting accountability and performance requirements in medium-sized and large departments may not be as applicable to small agencies.

The project was initiated and implemented by a group of small agencies led by the Office of the Commissioner of Official Languages. The group received funding for the project from the Modern Comptrollership Office, Treasury Board Secretariat of Canada (TBS). Goss Gilroy Inc., a management consulting firm, was hired to work in conjunction with the group of small agencies to develop appropriate models.

The report is divided into five main sections. This brief introduction to the project is followed by a description of the overall approach used to develop the models (Section 2.0). Section 3.0 contains the findings from the different methods used to collect information to develop the models. Section 4.0 describes the exercise that was conducted to validate the models with small agencies. Finally, Section 5.0 contains descriptions of the three models and their various components.


2.0 Overview of Approach

The project team used a two-phase approach for the project. The initial phase consisted of key informant interviews, case studies, and a review of approaches used for addressing accountability and performance measurement requirements in other jurisdictions. Using the information collected from these sources, the project team developed draft models of how various types of small agencies could address accountability and performance measurement requirements.

Phase Two consisted of developing and conducting a validation exercise for the models developed in Phase One, and of compiling the information from the two phases in this summary report.

Figure 1 depicts the overall approach employed for the project.

Figure 1: Overall approach employed for the project

3.0 Findings from Interviews and Case Studies

The purpose of the key informant interviews and case studies was to collect information that would assist in the development of the draft models for addressing accountability and performance measurement requirements in small agencies. For details on the participants in key informant interviews and the case studies, data collection instruments, and brief descriptions of the 15 individual case studies, please refer to Appendix A.

The information collected during these activities was of two main types.

  • Community characteristics – These included the identification of specific characteristics of the small agency community. This was an important type of information to collect given that the overall purpose of the project was to develop models that would be appropriate for the community, and not just adaptations of models used by large or medium-sized departments. Some characteristics that were considered included information needs, organizational structure, agency mandates, agency size, flexibility in resources, current challenges, and legislation requirements.
  • Current approaches to addressing accountability and performance measurement requirements – This type of information was particularly useful in identifying any best practices or lessons learned among the agencies that participated in the case studies, as well as any gaps or areas that were particularly challenging to address. This information was used directly in the development process to ensure that the models captured the approaches that had been successfully used in some agencies and that they addressed the gaps identified in some agencies.

3.1 Characteristics of the community

The key informant interviews and case studies produced information on various characteristics of the small-agency community. These are highlighted briefly below.

Agency size
The small-agency community is extremely diverse in terms of size. Depending on the definition of "small," organizations can range from fewer than ten full-time equivalent staff (FTEs) to more than 1,000 FTEs. It is apparent that the amount and types of information needed to make decisions and report on activities within an agency with seven staff are significantly different from those needed to manage an agency with more than 800 staff.

Making shifts in agency culture
For some agencies, staff turnover and movement are lower than in many large or medium-sized departments. For these agencies, there may be a significant challenge in implementing approaches that are not already part of the agency culture. However, many also noted that one advantage of being small is that a change in culture can often be implemented relatively quickly.

Mandated independence
Many agencies are legislated as independent bodies (e.g., quasi-judicial). They are mandated to remain at arm's length from departments or other agencies in conducting their work. This makes the potential for partnerships and resource sharing somewhat limited for many agencies.

Support from the Head of the agency
In small agencies, the Head of the agency is often a political appointee who does not necessarily have experience within the public service. In some agencies, this can result in a lack of support or understanding from the Head with regard to issues of performance reporting and evaluation within a public service context. In organizations where the Head is knowledgeable about and supportive of performance measurement development activities, not surprisingly, the agency is relatively advanced in its ability to meet accountability and performance measurement requirements.

Limitations on resource flexibility
The resource limitations of small agencies are notable. There is very little flexibility in their allocation of resources to develop new internal processes that are not directly part of their mandated business line. Unlike medium-sized and large departments—where there are often opportunities to implement pilot projects, contract external expertise, or make changes to information systems—small agencies often do not have flexibility in allocating resources.

Human resource considerations
During the case studies and key informant interviews, a number of comments were made with regard to human resource issues that must be taken into account when developing models to meet accountability and performance measurement requirements.

  • Difficulties exist in attracting internal capacity, even where positions exist. There is a recognized shortage within the federal community of trained evaluators and performance measurement specialists, and small agencies have an even more difficult time attracting these professionals when competing with medium-sized and large departments.
  • Individuals hired within small agencies likely need more advanced skill levels on many fronts, because they are often required to play multiple roles (e.g., evaluation, performance measurement, strategic planning, audit, etc.). Recruiting this level of professional into a small agency can be problematic (e.g., limited opportunities for professional advancement or for working with other evaluators).

3.2 Current approaches and gaps

Information was collected from the case studies and key informant interviews about current approaches to addressing accountability and performance measurement requirements and about some of the current gaps that exist within the community.

Diversity within the community
There is a large amount of variation within the small agency community with regard to how advanced agencies are in the areas of performance measurement and evaluation. Some agencies are just starting to approach the concepts of performance measurement and see how they apply to their agency. Within these agencies, performance measurement and evaluation are not familiar concepts for managers. These agencies recognize the need to begin the process of understanding their information needs and the ways they can meet accountability and performance measurement requirements. At the other end of this developmental continuum, there are agencies that are well advanced in applying the concepts of performance measurement and evaluation within their agencies. There are accountability frameworks developed and implemented, ongoing monitoring using established performance indicators, and periodic evaluations occurring.

Mandated external review process
For some agencies, an external review process is mandated within the legislation that governs their agency. It was recognized that any model used to meet accountability and performance measurement requirements should take this process into account. This could mean ensuring that quality performance data is available to external reviewers or that there is a reduced need for internal periodic evaluations.

Need for a "champion"
In the case studies, the project team found that those agencies that had developed the greatest capacity for performance measurement and evaluation often had an identifiable person in the agency who understood and worked consistently at explaining the benefits of performance measurement and evaluation to other members of the organization. With an identified "champion," agencies appear to be able to not only increase the acceptance of these concepts, but also to develop significant levels of internal capacity through working groups, planning sessions, etc. In some cases this "champion" was at a director/manager level; in other instances the champion was the Head of the agency.

Support from senior management
While the Head of the agency does not necessarily need to be identified as the "champion" in the agency, most agencies participating in the case studies indicated that there needs to be support from senior management in order to integrate performance measurement and evaluation activities with overall agency activities. The senior management support was often cited as being necessary to produce the overall cultural shifts required in an organization as it integrates the concepts and processes of performance measurement and evaluation within the day-to-day activities of the organization.

Emphasis on process
For many of the agencies that participated in the case studies, a large emphasis was placed on the need to understand, assess, and report on various processes within the agencies. The project team attributed this to the nature of the work of many of these agencies, which is often judicial, quasi-judicial, or investigative. This may differ from many large and medium-sized departments, where the emphasis is on delivering programs or interventions. The types of performance measurement and evaluation that are most appropriate for these special types of work may differ significantly from the types of information required to manage "programs."

Difficulty in evaluating "rationale"
The concept of evaluating "rationale" is difficult for many agencies. Many have a single business line that is mandated by a specific piece of legislation; their rationale is defined by that legislation. This is in contrast to evaluating the rationale for a specific program or policy within a large or medium-sized department, where there is the possibility of re-focusing the program, cutting it, expanding it, and so on. These options are not available to many of the small agencies that have one business line mandated by legislation. As a result, these agencies tend to place more emphasis on evaluating and reporting on process (see the point above).

Performance measurement as a first step
Many of the agencies reported that, in addressing the overall requirements for accountability, they are focussing initially on developing performance measurement within their agencies. This includes the development of a framework that links activities to expected outcomes, performance indicators and data collection strategies. Once a solid performance measurement system is in place, they plan to then address the need for periodic evaluations for different aspects of their organization.
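To illustrate what such a framework can look like in practice, the following minimal sketch (in Python) links a hypothetical activity to its output, expected outcome, performance indicators and data collection details. The agency activity, indicator names and data sources are invented for illustration; the report describes the concept of such a framework but does not prescribe any particular format.

    # Illustrative results-chain structure. All names below are hypothetical;
    # the report prescribes no specific format for such a framework.
    from dataclasses import dataclass, field

    @dataclass
    class Indicator:
        name: str
        data_source: str   # where the data will come from
        frequency: str     # how often the data are collected

    @dataclass
    class ChainLink:
        activity: str
        output: str
        expected_outcome: str
        indicators: list = field(default_factory=list)

    framework = [
        ChainLink(
            activity="Process complaints",
            output="Completed case decisions",
            expected_outcome="Timely, consistent rulings for clients",
            indicators=[
                Indicator("median days to decision", "case tracking system", "quarterly"),
                Indicator("decisions upheld on review (%)", "review records", "annual"),
            ],
        ),
    ]

    for link in framework:
        print(f"{link.activity} -> {link.output} -> {link.expected_outcome}")
        for ind in link.indicators:
            print(f"  indicator: {ind.name} ({ind.data_source}, {ind.frequency})")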

Approaches to addressing capacity building
Participants in the interviews and case studies suggested different approaches to addressing capacity issues within their agencies. Many indicated that capacity building around areas of performance measurement and evaluation must be done within the overall modern management agenda. Being separated from an overall management approach makes it difficult for individuals to understand the rationale for the processes and makes them more likely to dismiss performance measurement and evaluation as "the flavour of the month." Successful approaches to building capacity cited by participants included the following.

  • Continuing to identify and communicate best practices and lessons learned within the small-agency community. Many pointed to the need to avoid "reinventing the wheel" over and over again for each small agency.
  • Developing demonstration projects that allow various small agencies to observe and participate. This participative learning approach allows agencies with different levels of capacity to observe how different performance measurement and evaluation processes are designed and implemented in other small agencies.
  • Developing the capacity of staff requires more than the delivery of workshops. Assistance is often needed in actually helping them to do something the first few times.
  • Ensuring that staff and managers see results from performance measurement and evaluation quickly (e.g., facilitating planning, reporting, etc.). Without a relatively quick demonstration of some "pay-off," the interest and capacity developed are likely to fade.

Access to expertise
During the case studies and interviews, many participants indicated that one of the main challenges they face in attempting to develop approaches to performance measurement and evaluation in their organizations is gaining easy access to expertise in these areas. These experts also need to understand and adapt to the context of small agencies.


4.0 Model Validation Exercise

One main component of Phase Two of the project was the development and implementation of a model validation exercise. Once information was collected from the research on approaches used in other jurisdictions, key informant interviews, and case studies, the project team worked with the Steering Committee to develop three draft models of approaches to performance measurement and evaluation within the small agency community. Phase Two required that a validation exercise for the draft models be developed and implemented.


4.1 Description of exercise

The validation exercise consisted of four main components. The initial component consisted of a brief description of the project and the draft models. The second component asked participants to briefly assess their organization's information needs to determine which model was most appropriate. Once an appropriate model was chosen, the participant moved to the third component, a brief gap analysis to determine which aspects of the model currently existed within the agency, which were planned, and which were not applicable. Finally, the fourth component encouraged the participant to provide feedback to the project team on the appropriateness of the models, the assessment of information needs, and the identification of gaps.

The Chair of the Steering Committee sent an initial letter to notify small agencies that did not participate in the case studies of the validation exercise. Approximately three days later, these agencies were sent a validation package (see Appendix B for copy of validation package). Each small agency that received a validation package was then contacted by phone to ascertain that they had received the package and to offer assistance in completing the package if required.

As of the end of July 2003, the project team received completed validation exercises from 22 small agencies. This resulted in a response rate of approximately 50 percent.


4.2 Overview of findings from the validation exercise

In this section, an overview of the findings from the validation exercise is presented. For specific detailed data tables, please refer to Appendix C.

Those who completed the validation exercise varied in many ways. Agencies ranged from fewer than 50 FTEs to more than 150 FTEs. Self-assessed level of knowledge of performance measurement and evaluation ranged from "novice" to "advanced." There was a mixed response with regard to the emphasis on process vs. impact of organization activities.

The majority of those who participated in the exercise indicated that they had a blend of relatively straightforward information needs combined with one or two aspects that may have required more specialized or complex information (Model B). Most respondents indicated that there were gaps between what their organization was currently doing and the activities outlined in the appropriate models. Where gaps did exist, in most cases there were plans to implement the specific activity within three years.

Feedback received from the respondents with regard to the models was positive, with the vast majority of respondents indicating that the models were appropriate for both their specific agency and the small agency community in general. With regard to the usefulness of the information needs assessment and the gap analysis, those who had relatively well established performance measurement and evaluation activities found the exercise less useful, but did tend to comment that it was a good confirmation of their own analysis. Those who had less developed performance measurement and evaluation activities tended to provide feedback that the exercise was useful.

One comment received from a number of agencies was that the models were a reasonable start; however, the exercise did not indicate the next steps to ameliorate gaps or to implement the activities described in the models. While the implementation portion was not within the purview of the current project, it is likely the essential next step as small agencies continue to develop these activities within the models provided by the project.


5.0 Description of Validated Models

The three resulting models were:
  • Model A – Straightforward Information Needs
  • Model B – Blended Information Needs
  • Model C – Complex Information Needs
The three models all contain three main components:
  • Rationale for Evaluation and Performance Measurement Activities
  • Design and Delivery of Evaluation and Performance Measurement Activities
  • Outcomes from Evaluation and Performance Measurement Activities.
Component 1:
Rationale for Evaluation and Performance Measurement Activities

Why would an agency invest in evaluation and performance measurement activities? What is the rationale or raison d'être for their presence within an agency? The quick answer is that if these activities are designed and delivered appropriately, they provide management with information needed to make decisions.

As a result, the two underlying questions that were consistently asked during the development of the models of evaluation and performance measurement were:

  • What types of decision-making must managers in agencies perform?
  • What types of information do they need to make those decisions?

Useful evaluation and performance measurement activities are those that can address the information needs of managers in a results-based environment. In order to understand the type of information that is required from these activities, an agency needs to clearly delineate the types of decision-making that managers must perform. Some types of decision-making may require very complex information, while other decision-making requires relatively straightforward information. The models are based on the level of complexity of information required from the evaluation and performance measurement activities within an organization.

Through the initial phase of the project, the project team identified the main characteristics of agencies related to the complexity of the information they required for decision-making. These included:

  • Number of business lines or programs that the organization managed;
  • Both the number and types of stakeholders that were involved with the organization, such as partner agency/departments, delivery organizations, clients, etc.;
  • Level of risk associated with the various decisions and/or activities of the organization;
  • Centrality of the organization measured through the number of regional/national offices involved;
  • Fluctuations in budget and/or resources available;
  • Predictability of workload or demand for services of the organization;
  • Agency size (e.g., FTEs);
  • Proportion of budget that is allocated to grants and contributions;
  • Balance of emphasis on process within the agency or impacts of agency's activities; and
  • Nature of legislation associated with the agency (e.g., mandated external reviews, mandated activities/rationale, etc.).

By systematically assessing each of these characteristics, an agency should be able to determine whether its overall information needs are complex, straightforward, or positioned somewhere in between. Depending on those needs, it will need to design and deliver its evaluation and performance measurement activities accordingly.
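To make the assessment concrete, the following minimal sketch (in Python) shows how an agency might rate itself against a handful of the characteristics above. The thresholds echo those given in the Model A, B and C descriptions later in this section, but the simple majority-vote scheme and the sample agency are assumptions for illustration only; the report does not reduce the assessment to a formula, and ties or mixed ratings would call for judgement (and often point to Model B, blended needs).

    # Illustrative self-assessment. Assumption: a simple majority vote across
    # characteristics; the report does not prescribe any scoring formula.
    from collections import Counter

    def rate(value, a_max, b_max):
        """Return 'A', 'B' or 'C' depending on where a value falls."""
        if value <= a_max:
            return "A"
        if value <= b_max:
            return "B"
        return "C"

    def suggest_model(business_lines, stakeholder_groups, ftes, offices, grants_pct):
        votes = [
            rate(business_lines, 1, 3),        # 1 vs. 2-3 vs. 4 or more
            rate(stakeholder_groups, 4, 10),   # fewer than 5 vs. 5-10 vs. more than 10
            rate(ftes, 49, 150),               # fewer than 50 vs. 50-150 vs. more than 150
            rate(offices, 1, 4),               # one office vs. 1-3 plus national office vs. more
            rate(grants_pct, 4, 25),           # under 5% vs. 5-25% vs. over 25%
        ]
        return Counter(votes).most_common(1)[0][0], votes

    model, votes = suggest_model(business_lines=2, stakeholder_groups=6,
                                 ftes=80, offices=2, grants_pct=10)
    print(f"Suggested model: {model}; ratings: {votes}")
    # Prints: Suggested model: B; ratings: ['B', 'B', 'B', 'B', 'B']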

Component 2:
Design and Delivery of Evaluation and Performance Measurement Activities

 

Once an organization has determined the type of information that it requires (e.g., complex, straightforward), it can then determine how the evaluation and performance measurement activities should be designed and delivered within the organization. Some areas of consideration include:

  • Evaluation planning;
  • Logic model development;
  • Indicator development;
  • Development of a performance measurement system;
  • Internal capacity for activities;
  • Use of external resources; and
  • Integration of evaluation and performance measurement with other management activities.

How each of these areas is addressed should be linked directly to the actual type of information required from these activities. For example, indicator development could range from multiple levels of indicators with various levels of roll-up for complex information needs to the systematic collection of data on three or four good indicators for relatively straightforward information needs.
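As an illustration of what "roll-up" can mean in practice, the short sketch below (in Python) averages hypothetical program-level indicator values into a single agency-level view, i.e., two levels of indicators. The indicator names and the simple averaging rule are assumptions for illustration; an actual agency would choose indicators and aggregation rules to suit its own measurement strategy.

    # Illustrative two-level indicator roll-up. Assumption: agency-level values
    # are plain averages of program-level values; real systems might weight them.
    from statistics import mean

    program_indicators = {
        "Program 1": {"cases processed on time (%)": 92, "client satisfaction (%)": 81},
        "Program 2": {"cases processed on time (%)": 87, "client satisfaction (%)": 78},
    }

    def roll_up(by_program):
        """Aggregate each indicator across programs into an agency-level value."""
        agency = {}
        for indicators in by_program.values():
            for name, value in indicators.items():
                agency.setdefault(name, []).append(value)
        return {name: mean(values) for name, values in agency.items()}

    for name, value in roll_up(program_indicators).items():
        print(f"Agency-level {name}: {value:.1f}")
    # Prints: Agency-level cases processed on time (%): 89.5
    #         Agency-level client satisfaction (%): 79.5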

Component 3:
Outcomes of Evaluation and Performance Measurement Activities

The final component of each of the models is the actual outcomes from the evaluation and performance measurement activities. These should relate directly to the original rationale for the activities (Component 1) and the assessment of management's information needs for decision-making. The main areas to be considered in this component are:

  • Frequency of evaluation studies;
  • Frequency of internal reporting and integration with planning;
  • Integration of activities with external reporting; and
  • Using performance measurement and evaluation information to support decision-making.

The following sections contain the detailed descriptions of each of the models, including the three components of each model.


Model A
Straightforward Information Needs

Component one - Rationale

External Considerations
  • Subject to TBS Evaluation Policy
  • Need information to produce RPPs/DPRs/Annual Reports
  • Primary stakeholders have moderate to high expectations for performance
  • Lower levels of public visibility
  • Must remain independent and/or at arm's length from other agencies and departments
Internal Considerations
  • Mission statement or core/organizational values contain statements of performance required
  • Agency objectives contain measurable outcomes at different levels at various time periods
Factors Associated with Information Needs for Management Decision-Making
Number of business lines: single business line or single program

Number/type of stakeholders: primary stakeholder groups are well defined and limited in number (fewer than 5 distinct groups)

Risk associated with decisions: overall there is a relatively low level of risk associated with decisions

Centrality: one office, one setting

Fluctuations in budget/resources: small fluctuations in budget (less than +/- 15% over 3 years)

Fluctuations in workloads/demand: fluctuations in workloads are relatively predictable 12 months ahead

Size: fewer than 50 FTEs

Proportion of budget as grants and contributions: less than 5%
Greater emphasis on process or impacts:
Main emphasis within agency is on knowing that the processes within the organization are efficient and follow a prescribed approach

Legislation governing activities or agency:
  • Organization is mandated by legislation to perform certain activities with little or no flexibility in how they are to be conducted
  • Legislation contains a mandated external review of agency

Assessment of Information Needs
The information needs for management decision-making are relatively simple overall. The types of evaluation and performance measurement activities undertaken by these types of agencies should match this need by being relatively straightforward, uncomplicated, and very clear with respect to the outcomes they produce.


Straightforward Information Needs

Component two – Design and Delivery

External Considerations
  • Skill sets and competition for potential new employees are increasing, which may make it difficult for small agencies to attract and retain new employees with the proper skill sets to work in a results-based environment.
  • There will likely be a need to have some reliance on external expertise and resources when designing and implementing these types of activities.
Internal Considerations
  • Current staff will need to understand the concepts of PM and evaluation and how they apply to their work.
  • Senior level support for the activities is required in order for the activities to be consistently undertaken and the resulting information used in decision-making.
  • While external resources can be used for design and implementation, it is important to have at least one individual in the organization who knows enough about the concepts and process to monitor the activities, ensure that the design and implementation are appropriate for the organization, etc. This will likely be only one part of the individual's job at the organization.
Design and Delivery Requirements for PM/Evaluation Activities
Evaluation planning: periodic evaluation studies on an as needed basis with focus on process issues

Logic model development: linking of activities, outputs, outcomes on an overall agency perspective (results chain)

Indicator development: one or two levels of indicators according to different levels of need for information

Development of performance measurement system:
  • Clear plan describing which type of information is needed, by whom, and when
  • Relatively simple information system needed to track a small number of indicators
Internal capacity for activities:
  • Internal capacity to plan and manage periodic evaluation studies focussed on process
  • Internal capacity to monitor activities and recognize the need to adjust the performance measurement strategy
Use of external resources:
  • Periodic external resources required to implement and conduct focussed evaluation studies
  • Periodic external resources required to provide expertise in the development and adjustment of frameworks, measurement strategies, development of TORs for RFPs, etc.

Integration of activities with other management activities: basic understanding of how ongoing performance measurement and periodic evaluation information can be integrated into other management activities

Assessment of Gaps in Design and Delivery of Evaluation and Performance Measurement Activities
Gaps in the extent to which an agency currently has a sufficient system of evaluation and performance measurement can be assessed against the requirements stated above. Emphasis on each of the requirements will vary from agency to agency depending on a number of both internal and external considerations.


Straightforward Information Needs

Component three - Outcomes

External Considerations
  • External reporting requirements with regard to schedule, timing and content
  • Requirements for reporting to stakeholder groups
Internal Considerations
  • Reporting according to different levels and needs (senior management, Board)
  • Need for integration of information from PM/evaluation activities with other management information
Key Outcomes from Evaluation and Performance Measurement Activities
Frequency of evaluation studies:
  • Opportunities to identify periodic evaluation studies as needed
  • Reports from evaluation studies are relevant and timely in meeting the information needs of management
Frequency of internal reporting and integration with other management activities:
  • Performance measurement information fed at least annually into the planning activities in agency
  • Performance reporting on a scheduled basis (at least annually)
Integration of PM/evaluation activities and external reporting:
  • Evidence of solid PM and evaluation information being used in external reporting requirements (DPRs, RPPs, Annual Reports)
Use of PM and evaluation information to support internal decision-making:
  • Clear identification, tracking and annual reporting of immediate outcomes for the agency
  • Evidence of solid PM and evaluation information being used in internal decision-making processes

Assessment of Gaps in Evaluation and Performance Measurement Outcomes
Gaps in the extent to which an agency currently has a sufficient system of evaluation and performance measurement can be assessed against the key outcomes stated above. Emphasis on each of the outcomes will vary from agency to agency depending on a number of both internal and external considerations.


Model B

Blended Information Needs

Component one – Rationale

External Considerations
  • Subject to TBS Evaluation Policy
  • Need information to produce RPPs/DPRs/Annual Reports
  • Primary stakeholders have high expectations for performance
  • Medium levels of public visibility
  • Limited partnerships/joint initiatives with other organizations
Internal Considerations
  • Mission statement or core/organizational values contain statements of performance required
  • Agency objectives contain measurable outcomes at different levels at various time periods
Factors Associated with Information Needs for Management Decision-Making
Number of business lines: small number of business lines or programs (2 or 3)

Number/type of stakeholders: some primary stakeholder groups (between 5 and 10), one or two of which are partners in the delivery of the agency's business lines/programs

Risk associated with decisions: overall there are relatively medium levels of risk associated with decisions

Centrality: 1 to 3 offices in addition to national office

Fluctuations in budget/resources: budget has experienced some fluctuation within the past 3 years (+/- 15% to 30%)

Fluctuations in workloads/demand: the workload or demand for services is somewhat predictable; usually can forecast 6-12 months in advance

Size: between 50 and 150 FTEs

Proportion of budget as grants and contributions: small proportion of budget allocated to grants and contributions (5% to 25%)

Greater emphasis on process or impacts: equal emphasis within agency on knowing processes are efficient and follow a prescribed outline, and on knowing the impact of organization's activities on primary stakeholders

Legislation governing activities or agency: organization has some flexibility in performing mandated activities according to legislation. Organization is not necessarily mandated to undergo an external review process

Assessment of Information Needs
The information needs for management decision-making range between straightforward and complex. The types of evaluation and performance measurement activities undertaken by these types of agencies should match this need by being ready to respond to the types of information that managers will require in an environment that can be at times straightforward and at times complex.

Blended Information Needs

Component two – Design and Delivery

External Considerations
  • Skill sets and competition for potential new employees are increasing, which may make it difficult for small agencies to attract and retain new employees with the proper skill sets to work in a results-based environment.
  • There will likely be a need to have some reliance on external expertise and resources when designing and implementing these types of activities.
Internal Considerations
  • Current staff will need to understand the concepts of PM and evaluation and how they apply to their work.
  • Senior level support for the activities is required in order for the activities to be consistently undertaken and the resulting information used in decision-making.
  • While external resources can be used for design and implementation, it is important to have at least some capacity in the organization to monitor the activities, ensure the design and implementation is appropriate for the organization, etc. This capacity is likely across a few individuals in the organization who contribute to these activities as a portion of their overall time.
Design and Delivery Requirements for PM/Evaluation Activities
Evaluation planning:
  • Evaluation planning that links into strategic planning for the agency (may be done on a cycle or annually)
  • Planned periodic evaluation studies that focus on both impacts and process
  • Some limited ability to respond to small ad hoc evaluation requests is required
Logic model development:
  • Linking of activities, outputs, outcomes on an overall agency perspective
  • Linking of activities, outputs, outcomes for each individual business line/program that can be rolled into the agency perspective

Indicator development: three to four levels of indicators according to different levels of need for information (e.g. program managers, directors, management committee, executive committee)

Development of performance measurement system:
  • Clearly defined roles and responsibilities for different aspects of information collection and compilation across a small group of people, a few units/branches and a few partner organizations
  • Solid, planned information system that feeds a roll-up of information on a periodic basis according to various levels of indicators
Internal capacity for activities:
  • Internal capacity to design, plan and manage evaluation studies focussed on either impacts or process
  • Internal capacity to develop and monitor activities associated with the performance measurement strategy
Use of external resources:
  • Periodic external resources required to implement and conduct focussed evaluation studies
  • Periodic external resources required to provide expertise in development and adjustment of frameworks and measurement strategies

Integration of activities with other management activities: identified, planned process for periodic integration of performance measurement and evaluation information with other management activities

Assessment of Gaps in Design and Delivery of Evaluation and Performance Measurement Activities
Gaps in the extent to which an agency currently has a sufficient system of evaluation and performance measurement can be assessed against the requirements stated above. Emphasis on each of the requirements will vary from agency to agency depending on a number of both internal and external considerations.

Blended Information Needs

Component three - Outcomes

External Considerations
  • External reporting requirements with regard to schedule, timing and content
  • Requirements for reporting to stakeholder groups
  • Coordination and alignment of reporting with partner organizations
Internal Considerations
  • Reporting according to different levels and needs (senior management, Board)
  • Need for integration of performance measurement activities with evaluation activities
  • Need for integration of information from PM/evaluation activities with other management information
Key Outcomes from Evaluation and Performance Measurement Activities
Frequency of evaluation studies:
  • Cycle or multi-year plan for evaluation activities
  • Some limited ability to respond to small ad hoc needs for evaluation studies
  • Evaluation reports are relevant and timely in meeting the information needs of management
Frequency of internal reporting and integration with other management activities:
  • Evaluation plan is linked to strategic planning for organization
  • Performance measurement information fed on a periodic basis (quarterly) into the management activities on various levels

Integration of PM/evaluation activities and external reporting: evidence of solid PM and evaluation information being used in external reporting requirements (DPRs, RPPs, Annual Reports)

Use of PM and evaluation information to support internal decision-making:
  • Evidence of solid PM and evaluation information being used and integrated into decision-making processes
  • Clear identification, tracking and reporting of immediate outcomes on a periodic basis for the agency

Assessment of Gaps in Evaluation and Performance Measurement Outcomes
Gaps in the extent to which an agency currently has a sufficient system of evaluation and performance measurement can be assessed against the key outcomes stated above. Emphasis on each of the outcomes will vary from agency to agency depending on a number of both internal and external considerations.


Model C

Complex Information Needs

Component one – Rationale

External Considerations
  • Subject to TBS Evaluation Policy
  • Need information to produce RPPs/DPRs/Annual Reports
  • Primary stakeholders have high expectations for performance
  • Higher levels of public visibility
  • Likely have a number of partnerships/joint initiatives with other organizations
Internal Considerations
  • Mission statement or core/organizational values contain statements of performance required
  • Agency objectives contain measurable outcomes at different levels at various time periods
Factors Associated with Information Needs for Management Decision-Making
Number of business lines: multiple business lines or programs (4 or more)

Number/type of stakeholders: multiple primary stakeholder groups (more than 10), more than two of which are partners in the delivery of the agency's business lines/programs

Risk associated with decisions: overall there are relatively high levels of risk associated with decisions

Centrality: more than 3 offices in different settings

Fluctuations in budget/resources: budget has fluctuated substantially within the past 3 years (more than +/- 30%)

Fluctuations in workloads/demand: the workload or demand for services is not predictable; difficult to forecast further than 3 to 4 months ahead

Size: more than 150 FTEs

Proportion of budget as grants and contributions: more than 25% of budget is allocated to grants and contributions

Greater emphasis on process or impacts: the main emphasis within agency is on knowing the impact of the organization's activities on primary stakeholders

Legislation governing activities or agency: organization is not necessarily mandated by legislation to perform certain activities. Organization is not necessarily mandated to undergo an external review process

Assessment of Information Needs
The information needs for management decision-making are relatively complex. The types of evaluation and performance measurement activities undertaken by these types of agencies should match this need by being ready to respond to the types of information that managers will require when working in a complex environment.


Complex Information Needs

Component two – Design and Delivery

External Considerations
  • Skill sets and competition for potential new employees are increasing, which may make it difficult for small agencies to attract and retain new employees with the proper skill sets to work in a results-based environment.
  • There will likely be a need to have some reliance on external expertise and resources when designing and implementing these types of activities.
Internal Considerations
  • Current staff will need to understand the concepts of PM and evaluation and how they apply to their work.
  • Senior level support for the activities is required in order for the activities to be consistently undertaken and the resulting information used in decision-making.
  • While external resources can be used for design, the organization with complex information needs will have at least one individual in the organization who is knowledgeable and who is dedicated full-time to monitoring, planning and implementing the activities, ensuring the design is appropriate for the organization, etc.
Design and Delivery Requirements for PM/Evaluation Activities
Evaluation planning:
  • Annual evaluation planning that links into strategic planning for the agency
  • Planned evaluation studies that focus on both impacts and process
  • Ability to respond to ad hoc evaluation requests as required
Logic model development:
  • Linking of activities, outputs, outcomes on an overall agency perspective
  • Linking of activities, outputs, outcomes for each individual business line/program that can be rolled into the agency perspective

Indicator development: multiple levels of indicators according to different levels of need for information (e.g. program managers, directors, management committee, executive committee)

Development of performance measurement system:
  • Clearly defined roles and responsibilities for different aspects of information collection and compilation across numerous people, units/branches and various partners
  • Relatively complex information system that feeds a roll-up of information on an ongoing basis according to various levels of indicators
Internal capacity for activities:
  • Internal capacity to design, plan and deliver/manage evaluation studies focussed on either impacts or process
  • Internal capacity to develop and monitor activities, and adjust performance measurement strategy
Use of external resources:
  • Periodic external resources required to implement and conduct evaluation studies
  • Periodic external resources required to provide expertise in development and adjustment of frameworks and measurement strategies

Integration of activities with other management activities: identified formal process for ongoing integration of performance measurement and evaluation with other management activities

Assessment of Gaps in Design and Delivery of Evaluation and Performance Measurement Activities
Gaps in the extent to which an agency currently has a sufficient system of evaluation and performance measurement can be assessed against the requirements stated above. Emphasis on each of the requirements will vary from agency to agency depending on a number of both internal and external considerations.


Complex Information Needs

Component three - Outcomes

External Considerations
  • External reporting requirements with regard to schedule, timing and content
  • Requirements for reporting to stakeholder groups
  • Coordination and alignment of reporting with partner organizations
Internal Considerations
  • Reporting according to different levels and needs (senior management, Board)
  • Need for integration of performance measurement activities with evaluation activities
  • Need for integration of information from PM/evaluation activities with other management information
Key Outcomes from Evaluation and Performance Measurement Activities
Frequency of Evaluation Studies:
  • Annual plan for evaluation activities
  • Readiness to respond to ad hoc evaluation requests
  • Evaluation reports that are timely and relevant in meeting the information needs of management
Frequency of internal reporting and integration with other management activities:
  • Annual evaluation plan is linked to strategic planning for organization
  • Performance measurement information is fed on an ongoing basis (at least monthly) into management activities at various levels
Integration of PM/evaluation activities and external reporting:
  • Evidence of solid PM and evaluation information being used in external reporting requirements (DPRs, RPPs, Annual Reports)
Use of PM and evaluation information to support internal decision-making:
  • Evidence of solid PM and evaluation information being used and integrated into internal decision-making processes
  • Clear identification, tracking and ongoing reporting of immediate outcomes for the agency
  • Evidence of use of PM/evaluation for decisions about resource allocation

Assessment of Gaps in Evaluation and Performance Measurement Outcomes
Gaps in the extent to which an agency currently has a sufficient system of evaluation and performance measurement can be assessed against the key outcomes stated above. Emphasis on each of the outcomes will vary from agency to agency depending on a number of both internal and external considerations.


Appendix A

List of Key Informant Interviews

Agency:
Office of the Commissioner of Official Languages
Louise Guertin
Director General, Corporate Services

Agency:
Canadian Forces Grievance Board
Annette Ducharme
Consultant

Agency:
Canadian Human Rights Tribunal
Michael Glynn
Registrar

Agency:
Patented Medicine Prices Review Board
Robert Sauvé
Director, Corporate Services

Agency:
Canadian Centre for Management Development
Guy Savard
Manager

Agency:
Treasury Board Secretariat
Randy Platt
Portfolio Manager
Comptrollership Modernization

Agency:
Treasury Board Secretariat
Robert Lahey
Senior Director, Centre of Excellence for Evaluation
Results-based Management Directorate

Agency:
Treasury Board Secretariat
Yolande Andrews
Senior Analyst, Centre of Excellence for Evaluation
Results-based Management Directorate

Agency:
Treasury Board Secretariat
Elaine MacKay
Coordinator, Small Agencies
GOS – Assistant Secretary's Office

Agency:
Transportation Safety Board of Canada
Jean Laporte
Director, Corporate Services
Corporate Services Branch
 

Case Studies

The following 15 agencies participated in the case studies.

National Parole Board (NPB) – With a staff of approximately 283 FTEs, the agency serves a quasi-judicial function. The Board has an established evaluation function and coordinates program delivery with the Correctional Service of Canada (CSC) and the RCMP. The agency submits annual performance monitoring reports to Parliament and makes these reports available to the public on the NPB Web site.

National Round Table on the Environment and the Economy – Legislated by Parliament in 1994, the National Round Table on the Environment and the Economy (NRTEE) is an independent advisory body that explains and promotes sustainable development, providing decision makers, opinion leaders and the Canadian public with advice and recommendations. Working with stakeholders across Canada, the NRTEE identifies key issues with both environmental and economic implications, examines these implications, and suggests how to balance economic prosperity with environmental preservation. The agency employs 28 FTEs working in the field of policy development.

Office of the Commissioner of Official Languages – The Commissioner of Official Languages is the spokesperson for the Office of the Commissioner. As an Officer of Parliament, she plays several key roles in promoting and achieving the objectives of the Official Languages Act. These include ensuring that federal institutions comply with the Act, upholding the language rights of Canadians and promoting linguistic duality and bilingualism. The Office operates with approximately 162 FTEs.

Office of the Correctional Investigator of Canada – The Correctional Investigator is mandated by Part III of the Corrections and Conditional Release Act as an ombudsman for federal offenders. The primary function of the Office is to investigate and bring resolution to individual offender complaints. The Office also reviews and makes recommendations on the Correctional Service of Canada's policies and procedures associated with individual complaints, to ensure that systemic areas of concern are identified and appropriately addressed. The Office works with approximately 27 FTEs.

Office of the Tax Court of Canada – The Tax Court of Canada is a court of law. The Court was established in 1983 pursuant to the Tax Court of Canada Act (Section 3) with a view to dispensing justice in tax matters. The Court is independent of the Canada Customs and Revenue Agency and all other departments of the Government of Canada. The Tax Court of Canada is a superior court to which individuals and companies may appeal to settle disagreements with the Government of Canada on matters arising under legislation over which the Court has exclusive original jurisdiction. Most of the appeals made to the Court relate to income tax, the goods and services tax or employment insurance. There are approximately 124 FTEs employed by the Office of the Tax Court of Canada.

Patented Medicine Prices Review Board – The PMPRB is an independent quasi-judicial body, created in 1987 under the Patent Act to protect consumer interests in light of increased patent protection for pharmaceuticals. With a staff of approximately 35 FTEs, the PMPRB contributes to Canadian health care by ensuring that the prices of patented medicines are not excessive.

Social Sciences and Humanities Research Council – The Social Sciences and Humanities Research Council of Canada (SSHRC) is an arm's-length federal agency that promotes and supports university-based research and training in the social sciences and humanities. Created by an Act of Parliament in 1977, SSHRC is governed by a 22-member council that reports to Parliament through the Minister of Industry. The Council has approximately 160 FTEs.


Models/Approaches for Meeting Accountability and Performance-Reporting for Small Agencies

Guide for Initial Committee Interviews

The purpose of this initial set of interviews is to gain additional information for the background to the study, clarify the expectations for the study, and potentially provide information on key criteria to be considered in selection of case studies and identification of best practices.

Background and Expectations for Study

1. What do you see as the main objective(s) of the current study? Why is this study being commissioned at this time?

2. From your perspective, how will you determine if the study has been successful in meeting its objectives?

3. How will the results of the study be used by the Committee and/or by your organization:
  1. In the short term (e.g., within 12 months)?
  2. In the medium term (e.g., beyond 12 months)?

Challenges for Small Agencies

4. What are the specific challenges small agencies currently contend with in meeting accountability and performance reporting requirements? How do these challenges differ from those of medium/large departments? Which requirements are most challenging to your agency?

Choice of Case Studies

5. What should be the primary set(s) of criteria in choosing case studies for the current study?
  1. Categorization work by Committee to date
  2. Best practices
  3. Challenging examples
  4. Capacity issues
  5. Assessment of risk
  6. Governance/reporting structure
  7. Other considerations
6. Which agencies would you recommend we consider for case studies? Why?

Information for Research Paper

7. Are you aware of any work that has been previously conducted examining models of accountability in smaller agencies? Approaches to performance measurement? Approaches to reporting results?

Feedback on Proposed Approach

8. Having examined our proposed approach (refer to the proposal or the attached overview diagram), do you have any specific feedback or guidance to offer the team?

9. From your perspective, what will be the most challenging aspects of the proposed study?
 

Models/Approaches for Meeting Accountability and Performance-Reporting for Small Agencies

Interview Guide for Case Studies

The purpose of these interviews is to provide the study team with information about your agency that will then be used to develop individual case studies (4-5 pages in length). The findings from the individual case studies will then be summarized in an overall document that will be used to develop appropriate models of how evaluation and performance measurement functions could be developed in various types of small agencies.

The interviews will be combined with a document review for each agency. If answers to some of the questions can be easily found in the documents your agency provides for the document review, please make a note of this and we will skip those questions during the interview. Not all questions will be appropriate for all interviewees or for all agencies.

Organizational Profile

1. How would you describe your organization according to the following categories?
  1. Judicial
  2. Quasi-judicial
  3. Regulatory
  4. Policy development
  5. Investigative
  6. Parliamentary
  7. Other
2. What is the current number of FTEs employed at your agency?

3. Please describe the following processes at your agency according to who is responsible, what type of information is available during the process, and the overall approach used for each process:
  1. Planning processes
  2. Decision-making processes
  3. Results reporting processes
4. To what extent are the three groups of processes linked or related?

5. To what extent have management processes changed within the past 3 to 4 years with regard to the following:
  1. Performance measurement/Reporting results
  2. Evaluation of programs, policies or initiatives
  3. Accountability of managers
  4. Conceptualization or characterization of risk
6. Why have these changes occurred? What have been the primary incentives for change?

7. From your perspective, what have been the main challenges in making these changes?

8. What have been the most successful solutions in addressing these challenges?

9. With regard to changes in management processes related to evaluation and performance measurement, where do you see the most likely changes occurring within the next two to three years?

10. What characteristics of your organization present the most challenge in meeting accountability and performance reporting requirements? How do these challenges differ from those of medium/large departments? Which requirements are most challenging to your organization?

11. Do you have relationships or partnerships with other agencies? Other departments? What restrictions are currently placed on your organization with regard to partnerships with other organizations? (e.g., legislation)

Current Status of Implementation of Evaluation Function and Performance Measurement

12. How does the management of your organization know whether or not it is meeting its overall objectives? Operational objectives?

13. Does the agency currently allocate any resources towards performance measurement? Evaluation? If yes, what level of resources is available?

14. Which of the following best describes how the evaluation function is currently positioned in your organization:
  1. We have not conducted any evaluation work to date; we do not have any evaluation work planned for the upcoming fiscal year;
  2. We have not conducted any evaluation work to date; we do have evaluation work planned for the upcoming fiscal year;
  3. We conduct evaluation work when there is an external requirement;
  4. We conduct evaluation work in varying ways – occasionally integrated into a measurement strategy;
  5. Evaluation is integrated with the rest of measurement strategy; accepted as a management aid;
  6. Regular evaluation is an integral part of policy and program management; or
  7. Evaluation is a fully recognized part of policy and program activities.
15. To what extent is your agency aware of the TB Evaluation Policy? Where would you situate your organization in terms of the implementation of the TB Evaluation Policy? If there are gaps, what are they?

16. Does the organization currently have a performance measurement strategy? Does the organization collect any information on results in a systematic fashion? How do managers use the information that is collected? What gaps in information currently exist?

Key Risks and Challenges

17. Has the organization recently conducted a risk analysis? If yes, what were determined to be the highest risk areas? Has a plan been developed to address these areas? Has the plan been implemented?

18. What have been the primary challenges in developing and implementing a plan to address risk areas?

 

Capacity Issues

19. Do you currently have sufficient capacity to meet the accountability and performance reporting requirements for your organization? In which areas are the greatest capacity gaps?

20. What have you attempted to date to address these capacity issues? To what extent has this worked? Not worked?

21. How do you perceive that these capacity issues could best be addressed by the small agency community? Other organizations?
 

Appendix B

Validation Exercise Package

Models of Evaluation and Performance Measurement in Small Agencies Background

Recently, a group of small agencies proposed to the Modern Comptrollership Office of TBS that they receive funds for a project focused on developing models of how evaluation and performance measurement activities could be developed and conducted within the small agency context. It was assumed that while some of the agencies may be able to easily adapt approaches used in the medium/large departments, many agencies, given their special contexts, may find this adaptation approach less appropriate. The current project attempted to develop more appropriate models by:

  • Taking into account some of the unique characteristics of small agencies;
  • Understanding the actual information needed by managers within different types of small agencies; and
  • Involving small agencies throughout the model development process (e.g., case studies, key informant interviews, and validation exercise).

Purpose of Validation Exercise

The current model validation phase of the study involves presenting draft models to the members of the small agency community to collect feedback on:
  • The extent to which the models reflect the special characteristics of small agencies;
  • The appropriateness of model components; and
  • The usefulness of having models in determining the most appropriate pathways for the development of evaluation and performance measurement activities within specific agencies.

The study team and Steering Committee felt that one of the most appropriate ways to validate the draft models was to ask the community to review them and actually assess their individual agencies using the models.

This practical application quickly moves the models out of the realm of ideas and into the realm of useful tools. The study team will integrate the feedback received through the validation exercise into the final version of the models.

Description of Model Components

The draft models all contain three main components:
  • Rationale for Evaluation and Performance Measurement Activities
  • Design and Delivery of Evaluation and Performance Measurement Activities
  • Outcomes from Evaluation and Performance Measurement Activities
Component 1:
Rationale for Evaluation and Performance Measurement Activities

Why would an agency invest in evaluation and performance measurement activities? What is the rationale or raison d'être for their presence within an agency? The quick answer is that if these activities are designed and delivered appropriately, they provide management with information needed to make decisions.

As a result, the two underlying questions that were consistently asked during the development of the models of evaluation and performance measurement were:
  • What types of decision-making must managers in agencies perform?
  • What types of information do they need to make those decisions?

Useful evaluation and performance measurement activities are those that can address the information needs of managers in a results-based environment. In order to understand the type of information that is required from these activities, an agency needs to clearly delineate the types of decision-making that managers must perform. Some types of decision-making may require very complex information, while other decision-making requires relatively straightforward information. The draft models are based on the level of complexity of information required from the evaluation and performance measurement activities within an organization.

Through the case studies during the initial phase of the project, the study team found ten main characteristics of agencies that were related to the complexity of information that they required for decision-making. These included:
  • Number of business lines or programs that the organization managed;
  • The number and types of stakeholders that were involved with the organization, such as partner agency/departments, delivery organizations, clients, etc.;
  • Level of risk associated with the various decisions and/or activities of the organization;
  • Centrality of the organization measured through the number of regional/national offices involved;
  • Fluctuations in budget and/or resources available;
  • Predictability of workload or demand for services of the organization;
  • Agency size (e.g., FTEs);
  • Proportion of budget allocated to grants and contributions;
  • Balance of emphasis on process within the agency or impacts of agency's activities; and
  • Nature of legislation associated with the agency (e.g., mandated external reviews, mandated activities/rationale, etc.).

By systematically assessing each of these characteristics, an agency should be able to determine to what extent it has, overall, complex information needs, straightforward information needs, or information needs positioned somewhere in between; one way of recording such an assessment is sketched below. Depending on its information needs, the agency will then need to design and deliver its evaluation and performance measurement activities accordingly.
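
Purely as an illustration (this sketch is ours and not part of the original study), the following Python fragment records hypothetical ratings on the twelve Step One questions found later in this package, using "A" for the most straightforward response option, "B" for the middle option and "C" for the most complex. The characteristic names, the example ratings and the mapping of the final yes/no question to a letter are all assumptions made for the example.

    # Hypothetical record of an agency's self-assessment on the twelve
    # Step One questions. "A" = most straightforward response option,
    # "B" = middle option, "C" = most complex. All values are illustrative.
    ratings = {
        "business_lines": "A",            # one business line
        "number_of_stakeholders": "B",    # 5 to 10 primary stakeholder groups
        "types_of_stakeholders": "A",     # no delivery/management partners
        "risk_level": "B",                # medium overall risk
        "centrality": "A",                # single national office, no regions
        "budget_fluctuation": "A",        # within +/- 15% over three years
        "workload_predictability": "B",   # forecastable 6 to 12 months out
        "agency_size": "B",               # 50 to 150 FTEs
        "grants_and_contributions": "A",  # less than 5% of budget
        "process_vs_impact": "B",         # equal emphasis on process and impacts
        "legislated_mandate": "A",        # activities fully prescribed in legislation
        "legislated_external_review": "A" # mandated external review (mapping assumed)
    }
    assert len(ratings) == 12  # one response per Step One question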

Component 2:
Design and Delivery of Evaluation and Performance Measurement Activities

Once an organization has determined the type of information it requires (e.g., complex, straightforward), it can then determine how its evaluation and performance measurement activities should be designed and delivered. Some areas of consideration include:

  • Evaluation planning;
  • Logic model development;
  • Indicator development;
  • Development of a performance measurement system;
  • Internal capacity for activities;
  • Use of external resources; and
  • Integration of evaluation and performance measurement with other management activities.

How each of these areas is addressed should be linked directly to the actual type of information required from these activities. For example, indicator development could range from multiple level indicators with various levels of roll-up for complex information needs, to the systematic collection of data on three or four good indicators for relatively straightforward information needs.

Component 3:
Outcomes of Evaluation and Performance Measurement Activities

The final component of the models is the actual outcomes from the evaluation and performance measurement activities. These should relate directly to the original rationale for the activities (Component 1), and the assessment of management's information needs for decision-making. The main areas to be considered in this component are:

  • Frequency of evaluation studies;
  • Frequency of internal reporting and integration with planning;
  • Integration of activities with external reporting; and
  • Using performance measurement and evaluation information to support decision making.

Validation Exercise

The validation exercise for the three models of performance measurement and evaluation activities in small agencies has been broken down into four main steps. We anticipate that it will take you approximately 30 to 45 minutes to walk through the exercise and record your responses directly on the document. Once completed, please return it to the study team in the envelope provided.

Steps Required for Validation Exercise

The main steps required in completing the validation exercise are:
  • STEP ONE: Complete the attached questionnaire to describe and understand your organization's information needs for management decision-making.
  • STEP TWO: Choose the most appropriate model based on the assessment of the complexity of your information needs.
  • STEP THREE: Analyze the gap between your organization's current evaluation and performance measurement activities and those that would be considered fully developed according to the chosen model.
  • STEP FOUR: Provide feedback on utility of the exercise, suggested improvements, and preferred capacity-building options.
Please complete the following information:

Name: ___________________________________________________
Position: _________________________________________________
Organization: ______________________________________________
Number of years with current organization: ______________________

Self-rated level of knowledge of Performance Measurement and/or Evaluation:
(circle most appropriate)
 
No knowledge / Novice / Intermediate / Advanced


STEP ONE: Assessing the Complexity of Information Needs for Management Decision-Making

Each of the questions below is based on the dimensions that the study team found were related to the complexity of information needed by management to make decisions. Please check the most appropriate response for your specific agency. If none of the responses is appropriate, please write your own response next to the listed response categories.

Number of business lines or programs
1. How many business lines or programs does your organization manage?
  1. 1
  2. 2 to 3
  3. 4 or more
Number of primary stakeholders
2. Approximately how many different groups of primary stakeholders does your organization have?
  1. less than 5
  2. 5 to 10
  3. more than 10
Types of primary stakeholders
3. Do you have partnerships with any of your primary stakeholders? That is, do you have agreements with other organizations or groups to deliver/manage any aspect of your service or organization's activities (e.g., shared databases, 3rd party delivery, co-management of program)?
  1. No partners
  2. 1-2 partners
  3. more than 2 partners
Level of risk associated with organization's activities and/or decisions
4. What is the level of risk associated with your organization's activities and/or decisions (may have been assessed in Capacity Assessment by TBS)?
  1. Low risk levels overall
  2. Medium risk levels overall
  3. High risk levels overall
Centrality of organization
5. How many regional offices are there in your organization?
  1. No regional offices – 1 national office
  2. 1-3 regional offices in addition to national office
  3. more than 3 regional offices in addition to national office
Fluctuations in budget and/or resources available
6. To what extent has the organization's budget fluctuated within the past three years?
  1. less than +/- 15%
  2. +/- 15 to 30%
  3. more than +/- 30%
Predictability of workload or demand for services
7. To what extent is the organization's workload or demand for services predictable?
  1. Workload/demand is very predictable; easily forecast 12 months+
  2. Workload/demand is somewhat predictable; usually can forecast 6-12 months
  3. Workload/demand is not predictable; difficult to forecast
Agency size
8. What is the current size of your agency according to FTEs?
  1. less than 50 FTEs
  2. 50 to 150 FTEs
  3. more than 150 FTEs
Proportion of budget that is allocated to grants and contributions
9. What proportion of your current budget is allocated to grants and contributions?
  1. less than 5%
  2. 5% to 25%
  3. more than 25%
Balance of emphasis on process within the agency or impact of agency's activities
10. Which of the following best describes your organization's main emphasis?
  1. Our main emphasis is on knowing that our processes within the organization are efficient and follow a prescribed outline;
  2. We place equal emphasis on knowing that our processes within the organization are efficient and follow a prescribed outline, and on knowing the impact of our organization's activities on our primary stakeholders; or
  3. Our main emphasis is on knowing the impact of our organization's activities on our primary stakeholders.
Nature of legislation
11. Is your organization currently mandated by legislation to perform certain activities that make up your main business line(s)?
  1. Yes, completely with very little or no flexibility
  2. Yes, somewhat with some flexibility
  3. No
12. Is your organization currently mandated by legislation to be subject to an external review process?
  1. Yes
  2. No


STEP TWO: Choosing a Model

The assessment of complexity of information needs will assist in determining the model of activities that will likely be most useful to your organization. Using the table below, count the number of times in Step One you indicated each type of response; the first response option listed for each question corresponds to "A", the second to "B" and the third to "C". Choose the model for the category where most of your responses occur, then move on to the model indicated for Step Three.

Response Type | # Indicated | Description | For Step Three, go to
"A" Responses | ____ | It is likely that your organization has relatively straightforward information needs for decision-making. | Model A: Straightforward Information Needs
"B" Responses | ____ | It is likely that some of your information needs are relatively straightforward, while there are some characteristics of your organization that require more complex types of information. | Model B: Blend of Straightforward and Complex Information Needs
"C" Responses | ____ | It is likely that your organization has characteristics that make its information needs relatively complex. | Model C: Complex Information Needs
Total | 12 | |

If your scores tended to fall relatively evenly across the three categories, or were evenly split between two categories, it will be important to review all the models in Step Three to determine which is the most likely fit for your agency.
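
To make the counting rule concrete, here is a small hypothetical Python helper (ours, not part of the exercise, which is completed on paper) that tallies the recorded responses and suggests a model, flagging an even split for review of all three models:

    from collections import Counter

    def choose_model(ratings):
        """Tally the A/B/C responses from Step One and suggest a model."""
        counts = Counter(ratings.values())
        ranked = counts.most_common()  # (category, count) pairs, highest first
        # A tie between the top categories means no single model clearly
        # fits, so the exercise asks you to review all three models.
        if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
            return "Split result: review all three models in Step Three"
        models = {
            "A": "Model A: Straightforward Information Needs",
            "B": "Model B: Blend of Straightforward and Complex Information Needs",
            "C": "Model C: Complex Information Needs",
        }
        return models[ranked[0][0]]

Applied to the illustrative ratings shown under Component 1 (seven "A" responses and five "B" responses), this helper would suggest Model A.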


 

STEP THREE: Assessing the Gap Between Information Needs and Current Activities

MODEL A – Straightforward Information Needs

For each area, indicate whether your organization is currently at the level described, whether the area is currently under consideration, or whether there are no current plans to include that area in the organization's evaluation and performance measurement (PM) activities.

Response columns (check one for each area): Current Situation | Under Consideration, with Plans for Full Development Within 12 Months / 24 Months / 36 Months | Not Under Consideration | Comments
Design and Delivery
Evaluation planning that links into strategic planning for the agency (may be done on a cycle or annually)            
Planned periodic evaluation studies that focus on both impacts and processes            
Some limited ability to respond to small ad hoc evaluation requests as required            
Linking of activities, outputs, outcomes on an overall agency perspective (results chain, logic model)            
Linking of activities, outputs, outcomes for each individual program/business line that can be rolled up into the agency perspective (results chain, logic model)
Clearly defined roles and responsibilities for different aspects of information collection and compilation across a small group of people, a few units/branches and a few partner organizations            
Three to four levels of indicators according to different levels of need for information (e.g., program managers, directors, management committee, etc.)            
Solid, planned information system that feeds a roll-up of information on a periodic basis according to various levels of indicators            
Internal capacity to design, plan and manage evaluation studies focused on either impacts or processes            
Internal capacity to develop and monitor activities associated with the performance measurement strategy             
Periodic external resources required to implement and conduct evaluation studies            
Periodic external resources required to provide expertise in development and adjustment of frameworks and measurement strategies            
Identified, planned process for periodic integration of performance measurement and evaluation information with other management activities             
Outcomes
Cycle or multi-year plan for evaluation activities            
Some limited ability to respond to small ad hoc evaluation requests              
Evaluation reports are timely and relevant in meeting the information needs of management            
Evaluation plan is linked to strategic planning for organization            
Performance measurement information is fed on a periodic basis (at least quarterly) into the management activities on various levels            
Evidence of solid performance measurement and evaluation information being used in external reporting requirements (DPRs, RPPs, Annual Reports)            
Evidence of solid PM and evaluation information being used in internal decision-making processes            
Clear identification, tracking and reporting of immediate outcomes on a periodic basis for the organization            

 

STEP THREE: Assessing the Gap Between Information Needs and Current Activities

MODEL B – Blend of Straightforward and Complex Information Needs

For each area, indicate whether your organization is currently at the level described, whether the area is currently under consideration, or whether there are no current plans to include that area in the organization's evaluation and performance measurement (PM) activities.

Response columns (check one for each area): Current Situation | Under Consideration, with Plans for Full Development Within 12 Months / 24 Months / 36 Months | Not Under Consideration | Comments
Design and Delivery
Evaluation planning that links into strategic planning for the agency (may be done on a cycle or annually)            
Planned periodic evaluation studies that focus on both impacts and processes            
Some limited ability to respond to small ad hoc evaluation requests as required            
Linking of activities, outputs, outcomes on an overall agency perspective (results chain, logic model)            
Linking of activities, outputs, outcomes for each individual program/business line that can be rolled up into the agency perspective (results chain, logic model)
Clearly defined roles and responsibilities for different aspects of information collection and compilation across a small group of people, a few units/branches and a few partner organizations            
Three to four levels of indicators according to different levels of need for information (e.g., program managers, directors, management committee, etc.)            
Solid, planned information system that feeds a roll-up of information on a periodic basis according to various levels of indicators            
Internal capacity to design, plan and manage evaluation studies focused on either impacts or processes            
Internal capacity to develop and monitor activities associated with the performance measurement strategy             
Periodic external resources required to implement and conduct evaluation studies            
Periodic external resources required to provide expertise in development and adjustment of frameworks and measurement strategies            
Identified, planned process for periodic integration of performance measurement and evaluation information with other management activities             
Outcomes
Cycle or multi-year plan for evaluation activities            
Some limited ability to respond to small ad hoc evaluation requests              
Evaluation reports are timely and relevant in meeting the information needs of management            
Evaluation plan is linked to strategic planning for organization            
Performance measurement information is fed on a periodic basis (at least quarterly) into the management activities on various levels            
Evidence of solid performance measurement and evaluation information being used in external reporting requirements (DPRs, RPPs, Annual Reports)            
Evidence of solid PM and evaluation information being used in internal decision-making processes            
Clear identification, tracking and reporting of immediate outcomes on a periodic basis for the organization            

 

STEP THREE: Assessing the Gap Between Information Needs and Current Activities

MODEL C – Complex Information Needs

For each area, indicate whether your organization is currently at the level described, whether the area is currently under consideration, or whether there are no current plans to include that area in the organization's evaluation and performance measurement (PM) activities.

Response columns (check one for each area): Current Situation | Under Consideration, with Plans for Full Development Within 12 Months / 24 Months / 36 Months | Not Under Consideration | Comments
Design and Delivery
Annual evaluation planning that links into strategic planning for the organization            
Planned periodic evaluation studies that focus on both impacts and processes            
Ability to respond to most ad hoc evaluation requests as required            
Linking of activities, outputs, outcomes on an overall agency perspective (results chain, logic model)            
Linking of activities, outputs, outcomes for each individual program/business line that can be rolled up into the agency perspective (results chain, logic model)
Clearly defined roles and responsibilities for different aspects of information collection and compilation across numerous people, units/branches and various partner organizations
Multiple levels of indicators according to different levels of need for information (e.g., program managers, directors, management committee, etc.)            
Relatively complex information system that feeds a roll-up of information on an ongoing basis according to various levels of indicators            
Internal capacity to design, plan and manage evaluation studies focused on either impacts or processes            
Internal capacity to develop and monitor activities associated with the performance measurement strategy             
Periodic external resources required to implement and conduct evaluation studies            
Periodic external resources required to provide expertise in development and adjustment of frameworks and measurement strategies             
Identified, planned process for periodic integration of performance measurement and evaluation information with other management activities            
Outcomes
Annual plan for evaluation activities            
Readily able to respond to small ad hoc evaluation requests              
Evaluation reports are timely and relevant in meeting the information needs of management            
Annual evaluation plan is linked to strategic planning for organization            
Performance measurement information is fed on a periodic basis (at least quarterly) into the management activities on various levels            
Evidence of solid performance measurement and evaluation information being used in external reporting requirements (DPRs, RPPs, Annual Reports)            
Evidence of solid PM and evaluation information being used and integrated into internal decision-making processes
Clear identification, tracking and ongoing reporting of immediate outcomes for the organization            
Evidence of use of PM/evaluation information for decisions about resource allocation            

 

STEP FOUR: Feedback on Exercise and Models

Please provide your comments to the questions below.

1. To what extent did the exercise assist you in any of the following:

  1. Identifying the complexity of your organization's information needs.
  2. Identifying appropriate evaluation and performance measurement activities for your organization.
  3. Identifying gaps in your organization's current evaluation and performance measurement activities.

2. Taking into account the wide diversity within the small agency community, do the models proposed accurately reflect the activities in evaluation and performance measurement that could be carried out in different types of small agencies?

3. What are your suggestions for changes to either the exercise or the models themselves to make them more useful for the small agency community?

4. How could TBS best assist you within the upcoming 12-18 months in meeting the evaluation/performance measurement needs of your organization?

Please check off from the list below the three capacity building options that you feel would most benefit your organization at this time.

  • General tools and a toolkit that could include: manuals, templates for commonly used forms, training and workshops for staff, etc.
  • "Demonstration" project that could have community-wide benefit and would be transferable to other agencies. Small agencies could also "partner" on a specific project to share costs, common goals/needs, etc. This could involve clustering "like" agencies.
  • Projects to help an individual small agency to better understand its needs and put in place an action plan for the implementation of a strategy for performance measurement and evaluation.
  • Assistance from TBS: resource from TBS who could provide guidance with respect to preparing Requests for Proposals (RFP), contract definitions, etc. related to evaluation/ performance measurement.
  • Medium/large departments could assist small agencies in building their evaluation skills. (This does not necessarily mean shared projects).
  • Shared service solution: a shared resource that is mutually beneficial or necessary, but not monetarily feasible for one individual small agency.
  • Other. Please specify:
Other comments:



 

Thank you very much for your participation.

Please place the completed validation exercise in the envelope provided and call the consultant group at 230-5577 for pick-up service.



 

Appendix C

Detailed Data Tables from Validation Exercise

  • Self-rated knowledge of PM/evaluation

    Level #(%)
    No knowledge 0 (0%)
    Novice 4 (18%)
    Intermediate 13 (59%)
    Advanced 5 (23%)

  • Number of Business Lines

    # of lines #(%)
    One 13 (59%)
    Two or Three 6 (27%)
    Four or more 3 (14%)

  • Associated level of risk

    Risk Level #(%)
    Low 12 (55%)
    Medium 9 (40%)
    High 1 (5%)

  • Predictability of workload

    Predictability #(%)
    Very 2 (9%)
    Somewhat 13 (59%)
    Not at all 7 (32%)

  • Size of agency

    Number of FTEs #(%)
    Less than 50 9 (40%)
    50 to 150 7 (32%)
    More than 150 6 (27%)

  • Percentage of Grants and Contributions

    Percentage Gs&Cs #(%)
    Less than 15% 19 (86%)
    15% to 30% 1 (5%)
    More than 30% 2 (9%)

  • Balance of emphasis on process vs. impacts

    Emphasis #(%)
    More emphasis on process 2 (9%)
    Equal emphasis on process and impacts 16 (73%)
    More emphasis on impacts 4 (18%)

  • Typology of Respondents According to Models

    Model #(%)
    Model A: Straightforward Information Needs 8 (36%)
    Model B: Blend of Straightforward and Complex Information Needs 13 (59%)
    Model C: Complex Information Needs 1 (5%)

  • Gap Analysis – All Agencies

    Gaps #(%)
    Few or no gaps (85%-100% of elements at current situation) 3 (14%)
    Some gaps (50%-84% of elements at current situation) 5 (23%)
    Many gaps (less than 50% of elements at current situation) 14 (64%)
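
As an aside for anyone tabulating completed exercises, the gap categories above can be computed mechanically; the helper below is our own hypothetical sketch applying the same percentage thresholds, not part of the study's method:

    def gap_category(n_current, n_elements):
        """Classify overall gaps using the thresholds from the table above.

        n_current  -- number of checklist elements marked "Current Situation"
        n_elements -- total number of elements in the chosen model's checklist
        """
        share = n_current / n_elements
        if share >= 0.85:
            return "Few or no gaps"
        if share >= 0.50:
            return "Some gaps"
        return "Many gaps"

For example, an agency currently meeting 12 of the 21 elements in the Model A checklist (about 57%) would fall into the "Some gaps" category.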


     
  • Potential solutions

    Capacity Building Options #(%)
    General tools and toolkit 9 (41%)
    Demonstration projects 5 (23%)
    Individual agency projects 8 (36%)
    TBS assistance: preparation of RFPs, contracts, etc. 10 (45%)
    Assistance from medium/large departments 2 (14%)
    Shared service solution 7 (32%)


 