
Guide on the Program Evaluation Function

Program Evaluation Branch

May 1981


TABLE OF CONTENTS

  Page

Foreword iv
Acknowledgement v

Preface 1

Chapter 1 THE TREASURY BOARD POLICY ON EVALUATION OF PROGRAMS 3
1.0 Guidelines 3
1.1 Introduction 3
1.1.1 The Need for Program Evaluation 3
1.1.2 General Approach to Evaluation 4
1.2 The Policy and The Basic Program Evaluation Issues 5
1.2.1 The Policy 5
1.2.2 The Basic Program Evaluation Issues 6
1.3 Main Features of the Policy Circular 9

Chapter 2 AN OVERVIEW OF PROGRAM EVALUATION IN DEPARTMENTS 11
2.0 Guidelines 11
2.1 Program Evaluation and the Management Process 11
2.1.1 The Departmental Management Process 11
2.1.2 Relation to the Policy and Expenditure Management System 12
2.2 Program Evaluation: Concepts and Basic Terms 14
2.3 The Program Evaluation Function: An Overview 17
2.4 Program Evaluation, Internal Audit and Other Management Controls 18

Chapter 3 DEPARTMENTAL PROGRAM EVALUATION POLICY AND ORGANIZATION CONSIDERATIONS 23
3.0 Guidelines 23
3.1 Departmental Policy on Program Evaluation 24
3.2 Departmental Responsibilities for Program Evaluation 26
3.2.1 Responsibilities of the Deputy Head 26
3.2.2 Responsibilities for Managing the Program Evaluation Function 26
3.2.3 Responsibilities for Follow-Through Action 26

i


3.3 The Role of Line Management in Program Evaluation 29
3.4 Departmental Organization for Program Evaluation 30
3.4.1 Managing the Program Evaluation Function 30
3.4.2 Reporting Relationships to the Deputy Head 31
3.4.3 Committees for Program Evaluation 33
3.4.4 Organizational Structures for Conducting Program Evaluations 33
3.4.5 Relationships to Other Corporate Management Functions 34

Chapter 4 DEPARTMENTAL PROGRAM EVALUATION PLAN 37
4.0 Guidelines 37
4.1 An Overview and General Principles 38
4.2 Program Evaluation Components 40
4.2.1 What is a Program Evaluation Component? 40
4.2.2 The Program Evaluation Component Profile 42
4.3 Establishing Priorities for Program Evaluations 45
4.4 Departmental Program Evaluation Plans 46
4.4.1 The Long-Term Program Evaluation Plan 46
4.4.2 The Annual Program Evaluation Plan 49
4.5 Updating Departmental Program Evaluation Plan 50
4.5.1 Revisions Due to a New Year 50
4.5.2 Revisions Due to Changing Circumstances 50

Chapter 5 PERSONNEL FOR CARRYING OUT EVALUATION WORK 53
5.0 Guidelines 53
5.1 Professional Skills for Evaluation 53
5.1.1 The Unique Nature of Each Evaluation Study 54
5.1.2 Independence 54
5.1.3 Subject-Matter Knowledge 55
5.1.4 Analytical Skills 55
5.1.5 Interpersonal Skills 56
5.1.6 Project Management Skills 56
5.2 The Use of Consultants 56

ii


Chapter 6 CONDUCTING PROGRAM EVALUATIONS 59
6.0 Guidelines 59
6.1 Introduction and Overview 60
6.2 Evaluation Assessment 62
6.2.1 The Evaluation Assessment Study 62
6.2.2 The Evaluation Study Terms of Reference 64
6.2.3 Other Outcomes from Evaluation Assessment 64
6.3 The Program Evaluation Study 64
6.4 Evaluation Reporting 68
6.5 Taking Decisions Based on Program Evaluations 69

Chapter 7 EVALUATION REQUIREMENTS FOR NEW PROGRAMS 71
7.0 Guidelines 71
7.1 Requirements for Evaluating New Programs 71
7.1.1 New Program Evaluation Components 72
7.1.2 A Component Profile 72
7.1.3 An Evaluation Framework 73
7.2 Developing Evaluation Requirements 73

Chapter 8 THE ROLE OF THE OFFICE OF THE COMPTROLLER GENERAL 77
8.1 OCG Responsibilities for Program Evaluation 77
8.1.1 The OCG Relationship with Departments and Agencies 78
8.1.2 OCG Assistance 78
8.1.3 OCG Comments 79
8.2 Treasury Board Expectations for Program Evaluation 80

GLOSSARY OF TERMS 83

PROGRAM EVALUATION TERMS: ENGLISH-FRENCH 85

iii


FOREWORD

The program evaluation function being established in federal departments and agencies is an essential part of the government's initiative to improve management practices and controls. Program evaluation allows the rationale for each government program to be questioned on a periodic basis. It involves the systematic gathering of verifiable information on a program and demonstrable evidence on its results and cost-effectiveness in order to provide more and better information for decision-making.

With the introduction of the new Policy and Expenditure Management System, a significant step was made towards increasing expenditure control and accountability within government as well as improving policy-making and priority setting by government. Key to the success of this initiative is the information base on which such decisions are made. Officials and Ministers must have reliable, relevant and objective information available if real improvements in the decision-making process are to be achieved. Program evaluation is one important source of this information.

This Guide, which was recently approved by the Treasury Board of Canada, describes the systems and procedures departments and agencies in the federal government are to have in place to ensure a useful and relevant program evaluation function, and explains the general approaches and principles to be used in carrying out evaluations of programs. The successful implementation of these guidelines by departments and agencies is necessary in order to provide better information for decision-making in government.

The Guide is addressed to deputy heads of departments and agencies and to those other officials responsible for program evaluation. I hope it will also prove useful to other levels of government and other organizations interested in the evaluation of government programs.

Donald J. Johnston

President of the Treasury Board of Canada

iv


ACKNOWLEDGEMENT

This Guide is the result of close collaboration with many departments and agencies. An early draft was reviewed in detail by a working group of representative departments, and a subsequent revised draft was reviewed by the general evaluation community in the federal government, as well as by several outside evaluation experts. We found this process very useful and to all those involved in this effort I wish to express my appreciation.

The final consultation on the development of this Guide was with deputy heads, who are responsible for program evaluation in their departments and agencies. The Guide was sent for review to the deputy heads of the 29 largest departments, who provided comments that were very useful in preparing the final draft. Their support for the program evaluation function and for the Guide is most gratifying. Without their support we could not have proceeded, and I would like to express to them my special appreciation.

H.G. Rogers

Comptroller General of Canada

v


PREFACE

Scope and Coverage of the Guide

      This guide covers the establishment and ongoing operation of a program evaluation function in all departments and agencies of the Government of Canada whose programs are subject to review by the Treasury Board, as required by the Treasury Board Policy (1977-47) on "Evaluation of Programs by Departments and Agencies". In addition, the principles underlying the guide should prove useful for any other evaluations of programs being carried out, as well as in the evaluation of policies, projects and other non-program evaluations which may not be covered under the Treasury Board Policy (1977-47).

      This guide will be updated as needed to reflect changing circumstances, to include additional subjects which are in need of elaboration and to incorporate experience gained in the conduct and use of program evaluations in departments and agencies.

      A companion document, entitled Principles for the Evaluation of Programs by Federal Departments and Agencies, deals with the process of conducting an evaluation, offering suggestions in more detail on how evaluations might be carried out.

Purpose of the Guide

      This guide serves several purposes. First, it provides an explanation and elaboration of the Treasury Board Policy (1977-47). When the policy was issued in 1977, little formal program evaluation, as envisaged in the policy, was being carried out by departments. Since then, the Office of the Comptroller General (OCG) has been created to see to the improvement of management practices and controls, including the establishment of a program evaluation function in each department and agency. The experience gained during this time, coupled with the evolution of related review and monitoring functions, such as internal audit, has resulted in refinement in the interpretation of the policy. All these factors are reflected in this guide, which is built on the experience gained to date in departments and agencies in the area of program evaluation.

      A second purpose of this guide is to provide departments and agencies with assistance in establishing and maintaining a program evaluation function. As such, this guide serves as a general reference document on the program evaluation function in departments and agencies.

1


      Third, this guide should be considered as a statement of the expectations of the Treasury Board of Canada and the OCG in the area of program evaluation, expectations which are tempered by the realities faced by each department. While the OCG will be looking to see if the basic principles outlined here are in fact being followed in departments and agencies, the explicit form of the processes and procedures followed by each department and agency will, of necessity, be influenced by its particular situation and its state of development in the area of program evaluation.

      Lastly, this guide outlines the responsibilities of the OCG in the area of evaluations of programs.

Format of The Guide

      The guide consists of a number of chapters which cover various aspects of the program evaluation function. Each chapter, except the last, begins with a statement of specific guidelines, followed by an explanation and discussion of them and their underlying principles. The guidelines summarized at the beginning of each chapter are numbered to correspond to the section of that chapter where they are described. The guidelines are in fact short summaries of the text. Reading the text, however, is essential to a clear understanding of the guidelines.

2


CHAPTER 1

      THE TREASURY BOARD POLICY ON THE EVALUATION OF PROGRAMS

1.0 GUIDELINES

1.1 Program evaluation in federal departments and agencies should involve the systematic gathering of verifiable information on a program and demonstrable evidence on its results and cost-effectiveness. Its purpose should be to periodically produce credible, timely, useful and objective findings on programs appropriate for resource allocation, program improvement and accountability.

1.2 The evaluation of programs should be concerned with all of the basic program evaluation issues (Table 1.1).

1.1 INTRODUCTION

1.1.1 The Need for Program Evaluation

      The review and evaluation of existing programs has always been a part of managing in government. However, the nature and content of the reviews and evaluations have evolved from focusing mainly on the resources used by programs (the dollars spent and numbers of people employed) to examining how the resources are used, the purposes of programs and their impacts and effects on society.

      This change of emphasis has been necessary. The government has been called upon over the past three decades to provide an increasing array of goods and services to the public. Many current government programs reflect significant and sophisticated attempts to improve social and economic conditions in an increasingly complex society. As government expenditures have grown, the objectives and results of such programs have come under more public scrutiny as the increased use of public funds is questioned.

      This growth in both the number and complexity of government programs has meant an increased need for relevant and objective information on program results in order to improve policy decisions. This is especially true in the present environment of restraint. Decision-makers are increasing their demands for information on the actual, as opposed to the expected, achievements of government programs. While not

3


providing all the answers, program evaluation can be an important source of more and better information on what is being achieved through public expenditures and government regulation.

      The benefits from program evaluation are many. Its wider implementation will lead to a better understanding of the achievements of programs, thereby enhancing the ability of the government - departments, central agencies, and the Cabinet - to allocate resources in a more effective manner.

1.1.2 General Approach to Evaluation

      Program evaluation has been an identifiable activity for the last 15-20 years, with much of the early effort being carried out in the United States. The experience gained there, elsewhere, and in Canada has been valuable in developing the approach outlined in these guidelines. 1 Nevertheless, the approach being encouraged in the Canadian federal government is unique, differing substantially in several ways from many other endeavours in this area.

      Much early work in program evaluation considered the activity to be akin to scientific research, as an undertaking designed to unambiguously identify and measure the results of government interventions in society. 2 Controlled experimental designs were common in an attempt to determine scientifically whether or not the program had a particular effect. These designs were accepted as the model for program evaluation.

      Program evaluation, as it is developing in the federal government of Canada, has a different approach in mind. It is viewed as an aid to decision making and management; that is, as a source of information for resource allocation, program improvement and accountability in government. As such it involves the systematic gathering of both verifiable information on a program and demonstrable evidence on its results and cost-effectiveness. Program evaluation is one means of providing relevant, timely, and objective findings - information, evidence and conclusions - and recommendations on the performance of government programs, thereby improving the information base on which decisions are taken. In this view, program evaluation, as part of this decision making and management process, should not be seen as an exercise in scientific research aimed at producing definitive "scientific" conclusions about programs and their results. 3 Rather, it should be seen as input to the complex, interactive process that is government decision making, with the aim of producing objective but not necessarily conclusive evidence on the results of programs. While credible analysis is

4


always required in program evaluation, a strict research model for evaluation is often inappropriate because of timing constraints and an inability to adequately take into account the multiple information needs of the client and users of the evaluation.

      Program evaluation in the federal government recognizes the many factors that enter into management in government and recognizes the need for a variety of kinds of information depending on the particular situation. Judgement by decision-makers on the relevance and interpretation of program evaluation findings and recommendations is always required. The general approach taken is a flexible and responsive one, structured to the needs of the deputy head as both the client of the evaluation and the senior official responsible for the overall management of and accountability for his or her programs, while still producing objective, demonstrable evidence and information, and credible conclusions on programs and their results. This guide offers principles, practices and procedures for successful implementation of program evaluation.

1.2 THE POLICY AND THE BASIC PROGRAM EVALUATION ISSUES

1.2.1 The Policy

      During the 1970s there was an increase in evaluation activity throughout the government across a broad spectrum. This ran from program forecast and review, and operational auditing, to in-depth, formal, quantitative evaluations of the effectiveness of specific programs. These latter formal evaluations were carried out on a highly selective basis, normally in response to major policy planning priorities or issues, rather than as part of the ongoing examination of existing programs by departmental management. Thus while management in government has been carrying out evaluation studies for some time, formal, systematic, and regular evaluation of the full range of government programs was initiated in 1977 with the issuing of the Treasury Board Policy Circular 1977-47, "Evaluation of Programs by Departments and Agencies". To a considerable extent this policy was built upon the best practices already in existence in several departments and agencies. The general statement of the policy is that:

Departments and agencies of the federal government will periodically review their programs to evaluate their effectiveness in meeting their objectives and the efficiency with which they are being administered.

5


      More recently, the importance to the government of program evaluation has been reinforced by the close links it has with the new Policy and Expenditure Management System. Departmental program evaluation plans are seen as elements of departmental and agency strategic and operational plans, and should reflect the priorities determined by the Policy Committees and the Treasury Board of Canada in addition to the priorities of the departments and agencies (see Section 2.1.2).

      The purpose of program evaluation in the federal government undertaken pursuant to the Treasury Board policy is to assist in ensuring that deputy heads of departments and agencies have the appropriate information on the results of their programs in order to be able:

- to make more informed decisions on the management and resourcing of their programs;

- to be accountable for the programs for which they are responsible; and

- to provide quality advice to Ministers.

While the focus for program evaluation is with the deputy head, who is seen as the client of the evaluations, program evaluation will provide an opportunity for line managers to obtain more in-depth information on their programs and to explore more fundamental evaluation issues of interest to them than is possible during day-to-day management.

1.2.2 The Basic Program Evaluation Issues

The focus of the policy is on the evaluation of programs, as opposed to, in particular, the evaluation of systems and procedures which are, in general, examined in internal audit (see Section 2.4 for a discussion of the distinction between program evaluation and internal audit). As envisaged in the policy, program evaluation is considered to cover a number of basic program evaluation issues:

- Program Rationale: Does the program make sense?

- Impacts and Effects: What has happened as a result of the program?

- Objectives Achievement: Has the program achieved what was expected?

- Alternatives: Are there better ways of achieving the results?

6


Table 1.1 lists these four general classes of evaluation issues, along with seven more specific basic evaluation questions which should be considered in a program evaluation. These questions define program evaluation and can serve as a general guide to the kinds of questions which should be considered in the evaluation of a program.

Table 1.1

BASIC PROGRAM EVALUATION ISSUES

Classes of Evaluation Issues, with their Basic Evaluation Questions

PROGRAM RATIONALE (Does the program make sense?)
- To what extent are the objectives and mandate of the program still relevant?
- Are the activities and outputs of the program consistent with its mandate and plausibly linked to the attainment of the objectives and the intended impacts and effects?

IMPACTS AND EFFECTS (What has happened as a result of the program?)
- What impacts and effects, both intended and unintended, resulted from carrying out the program?
- In what manner and to what extent does the program complement, duplicate, overlap or work at cross-purposes with other programs?

OBJECTIVES ACHIEVEMENT (Has the program achieved what was expected?)
- In what manner and to what extent were appropriate program objectives achieved as a result of the program?

ALTERNATIVES (Are there better ways of achieving the results?)
- Are there more cost-effective alternative programs which might achieve the objectives and intended impacts and effects?
- Are there more cost-effective ways of delivering the existing program?

7


      Program rationale issues focus on the continued relevance of the program in light of present social and economic conditions and government policy. Here the very existence of the program is to be questioned by asking (a) if the program is still needed for current government policy, even assuming it is producing as expected; (b) whether the program continues to be accurately focused on the problem or issue it is addressing; and (c) whether the mandate and objectives are adequately stated. The focus here is on the program's rationale, not the rationale of the policy from which the program evolved. Program evaluation must take some level of policy, such as the department's long-term objectives, as given in order to have a basis on which to compare the program.

      A good understanding of the rationale of the program should be developed, by comparing the current program activities with the mandated activities and examining the continued plausibility of the links between the program's outputs and both its objectives and intended impacts and effects. (These terms are elaborated upon in Section 4.2.2 and in particular Table 4.1, as well as in the Glossary.)

      Consideration of the impacts and effects of a program implies a broad view, an attempt to determine what has happened as a result of the program. It is concerned with all the results that are attributed to the program, both intended and - often of more interest - unintended, regardless of the stated or claimed objectives of the program. This includes impacts and effects on and by other related programs. This point is often of primary interest when the objectives of the program are unclear or there is little agreement on what the precise objectives should be.

      Objectives achievement issues are concerned with determining the manner and the extent to which appropriate objectives are achieved as a result of the program. This point has often been narrowly taken as the only focus for program evaluation. Determining the achievement of objectives would normally involve investigating a number of the impacts and effects of the program.

      Finally, the consideration of alternatives is a yardstick for assessing the relative worth of the program. The objectives may have been met and there may have been no negative impacts or effects, but there may be better ways of achieving the objectives or intended impacts and effects: achieving the same for less cost, obtaining more or better results for the same cost, or achieving proportionally more or better results with increased costs. The program would then be more

8


effective and be delivered more efficiently. Better alternatives could include, as appropriate, both alternative ways of delivering the program and alternative programs or types of programs (for example, a regulatory or tax expenditure program as opposed to an expenditure program) to achieve the objectives and intended impacts and effects. The level of detail undertaken in the consideration of alternatives would vary, but it is not expected that in-depth thorough analysis of alternatives would normally be part of an evaluation study. Typically, the analysis carried out would indicate promising alternatives which could be further examined by an appropriate planning group.

1.3 MAIN FEATURES OF THE POLICY CIRCULAR

      The Treasury Board Policy Circular 1977-47 refers to a number of features of the program evaluation function which is to be established in all departments and agencies. The main features are outlined below. This guide discusses and elaborates on these and provides additional guidelines on program evaluation in departments and agencies.

Deputy head responsibility : The deputy head of each department and agency is responsible for establishing the program evaluation function in his or her organization, for ensuring that appropriate program evaluation studies are being carried out in an objective manner, the findings of which are communicated to the deputy head and other relevant levels of management, and for taking appropriate decisions as a result of the findings. He or she is the client of the evaluations.

Coverage : All programs in each department and agency should be evaluated. For certain administrative support functions in a department, internal audit may provide all or most of the pertinent evaluation information and evidence, in which case the program evaluation unit may not have to carry out an additional study. (See Section 2.4 for more on this issue.)

Cyclic evaluations : Programs should be evaluated on a periodic basis. While the policy suggests a 3-5 year cycle, it is recognized that in some cases a longer time frame may be required. In each department an appropriate cycle should be established, depending on the nature and maturity of the programs.

Objectivity : Program evaluation studies should be designed and carried out in an objective manner. The need for objectivity and, in particular, being seen to be objective is the main reason behind

9


the requirement that responsibility for the program evaluation function be independent of line management. Objectivity is also enhanced by the explicit delineation of appropriate reporting levels and by the development and authorization of terms of reference for individual evaluations.

Comprehensiveness : Each individual evaluation should provide for a thorough review of the program and its results. All the basic evaluation issues should be seriously considered. These are discussed in Section 1.2.

Departmental evaluation plan : Departments and agencies should prepare a plan for evaluating their programs. Appropriate evaluation plans are discussed in Chapter 4.

Appropriateness : The evaluation processes set up, the personnel used, and the evaluation studies undertaken, should all be appropriate to the individual situation. Appropriate processes, personnel and studies are discussed in Chapters 3, 5 and 6 respectively.

Identification of evaluation requirements in new programs : Future evaluation requirements should be identified for all new programs and existing programs where appropriate, in order that subsequent evaluation studies can be adequately carried out. Appropriate evaluation frameworks are discussed in Chapter 7.

Notes to Chapter 1

1. For a discussion of some of this experience, see "Program Evaluation: An Introduction", Office of the Comptroller General of Canada, Ottawa, February, 1981.

2. Most books and articles on program evaluation present elements of this viewpoint. One early book is E.A. Suchman's Evaluative Research. New York: Sage, 1967.

3. For two discussions of this viewpoint see M. Guttentag, "Evaluation and Society", in M. Guttentag and S. Saar (eds.), Evaluation Studies Review Annual, vol. 2. Beverly Hills: Sage, 1977, pp. 52-56, and C.E. Lindblom and D.K. Cohen, Usable Knowledge. New Haven: Yale University Press, 1979.

10


CHAPTER 2

AN OVERVIEW OF PROGRAM EVALUATION IN DEPARTMENTS

2.0 GUIDELINES

2.1 Program evaluation should be an integral part of the management review and monitoring function in departments and agencies, providing input into planning and budgeting.

2.1 PROGRAM EVALUATION AND THE MANAGEMENT PROCESS

2.1.1 The Departmental Management Process

Management in departments and agencies can be discussed in terms of three interrelated activities:

- planning and budgeting (decision-making);
- implementing (directing); and
- reviewing and monitoring (evaluating).

      Planning and budgeting involves setting goals and objectives, developing general strategies and operational plans for achieving them in light of past results, and committing resources to these ends. Implementing involves carrying out these plans, and the ongoing direction of the resulting operations.

Reviewing and monitoring involves determining the performance and results of operations against expectations, objectives and plans.

      Reviewing and monitoring provide the necessary feedback between intentions and actual results, linking results with planning and directing. The review and monitoring function of a deputy head involves at least three complementary processes (discussed later in this chapter):

- program evaluation;
- internal audit; and
- other management review and information processes (including financial reporting, performance measurement, management review and quality review).

11


      Program evaluation is an integral part of this review and monitoring function, providing the deputy head with independent, objective information and evidence on the results of his or her programs. This information provides feedback which can be used both to improve current operations and to provide a basis for future strategic planning. Program evaluation completes the package of formal review and monitoring mechanisms which are essential for good management today. Figure 2.1 illustrates this feedback and compares it to that provided by the other aspects of the review and monitoring function. (Section 2.4 of this chapter discusses the similarities and differences among these various aspects of the review and monitoring function.)

      The important point to note is that the more established review and monitoring processes - internal audit and management review processes - tend to concentrate heavily on feedback between operational plans, operations and operational outputs. Program evaluation, on the other hand, extends beyond this to look in a systematic way at both the results of programs in the external environment and the basic rationale of the program, and to use this information in strategic planning and other management processes.

2.1.2 Relation to the Policy and Expenditure Management System

      Program evaluation is also an important element in the government-wide management systems of concern to central agencies. The Policy and Expenditure Management System now being implemented by the government represents a significant step towards improving the government's control over the allocation of resources. With the introduction of policy envelopes, policy decisions in the future should be made with a better knowledge of the resources and opportunity cost required to implement initiatives, and will be taken with a better understanding of the constraints on resource availability. The Policy and Expenditure Management System calls for and relies on program evaluation in two ways: informally, through its emphasis on objectives, on the contribution of programs to objectives, and on the results of programs; and formally, through its call for summaries of evaluation findings and departmental program evaluation plans to be submitted to the appropriate policy committee and to the Treasury Board.

The Guide to the Policy and Expenditure Management System 1 clearly emphasizes the need for departments and central agencies to plan in terms of objectives, how to achieve them and what the alternatives are, to know what the results have been of existing programs, and to ensure that new programs can be adequately evaluated. This

12


Figure 2.1: Management Review and Monitoring Functions


kind of information is critical to successful management. Program evaluation can play an important role since this type of information is what should be forthcoming from program evaluation studies. It can be expected that, as the Policy and Expenditure Management System becomes increasingly operational, the demand for the findings of evaluations will increase.

      More specifically, program evaluations and departmental program evaluation plans are to be formally part of three of the principal instruments in the new process: the departmental Strategic Overview, the Multi-Year Operational Plans and the Budget-Year Operational Plans. 2

      The Strategic Overview, submitted annually by March 31, should contain, in part, "a summary of the findings of program evaluations and the changes proposed as a result of these findings." This should include findings from both evaluation studies and, where appropriate, evaluation assessments (see Section 2.2).

      Departmental program evaluation plans are to be developed by departments and agencies in consultation with the Policy Committees' secretariats and the Treasury Board of Canada (see Chapter 4). The departmental long-term program evaluation plan (see Section 4.4.1) is to be submitted along with the Multi-Year Operational Plan by March 31, and the departmental annual program evaluation plan (see Section 4.4.2) is to be submitted along with the Budget-Year Operational Plan by October 31. The Policy Committees, assisted by the Treasury Board of Canada, will review these plans and may direct departments and agencies with respect to any changes which may be desired. Through these means, program evaluation becomes an integral part of expenditure management in the government and hence will be better able to contribute to improved management in government.

2.2 PROGRAM EVALUATION: CONCEPTS AND BASIC TERMS

There is no widely accepted terminology in the field of program evaluation. As a result, much of the confusion surrounding program evaluation is due to different people using similar terms to mean different things or different terms to mean similar things. There are no "correct" definitions but there is a great need for a commonly accepted terminology. The terms defined below have been found useful in discussing the various concepts and aspects of program evaluation which have been developed at the Office of the Comptroller General, in conjunction with departments and other central agencies, in order to implement the Treasury Board policy on the evaluation of programs.

14


      The term program is used to describe any group of resources and activities, 3 and their related direct outputs, undertaken pursuant to a given objective or set of related objectives and administered by a department or agency of the government. Activities are taken here to include any related powers or functions, for example, those with direct outputs in the form of regulations or provisions in tax legislation. An Estimates Program is a program found in the government's annual Estimates. A program evaluation component is a group of activities of a department - usually a part of an Estimates program - with a common objective (or set of related objectives) which is suitable to the department for evaluation purposes (see Section 4.2.2). While evaluations called for by the Treasury Board policy are typically carried out on program evaluation components rather than on the larger Estimates programs, the (generic) term "program" is used throughout this Guide interchangeably with the term "program evaluation component". Furthermore, the term "department", when used alone, is meant in this Guide to cover both departments and agencies of the federal government.

      Figure 2.2 illustrates a useful way to view the structure of a program. Resources are used to provide for the activities undertaken by program personnel, and these activities produce direct outputs, which in turn result in impacts and effects by which the objectives can be achieved. The term objective is used to refer to a purpose statement which indicates what is to be accomplished in terms of impacts and effects, not outputs or work-processes. This latter type of purpose statement might better be referred to as a goal. The objectives of interest in program evaluation are these impacts- and effects-oriented objectives. By describing the various elements of any program structure, in particular, the activities, the outputs and the impacts and effects, as well as the linkages among them (how, for example, specific outputs result in certain impacts and effects), and the objectives to be achieved, a complete description of the program for evaluation purposes is provided.

      The general area of concern in this guide is that of assessing government programs. Both existing and future or planned programs can be assessed and the term evaluation, as used in this guide, refers to assessing ongoing, existing programs. 4 The terms "appraisal" or "analysis" are often used to refer to the assessment of future planned programs.

      Program evaluation, as a management function, is the formal assessment of programs and their results and involves investigating and analysing some or all of the basic program evaluation issues (see

15


Figure 2.2: Basic Program Structure


Table 1.1). A program evaluation , as a procedure, is the evaluation of an individual program and normally consists of both an evaluation assessment study and a program evaluation study. The program evaluation process entails the conducting of an evaluation assessment and an evaluation study, as well as the taking of decisions based on the findings and recommendations of the study.

      Evaluation assessment is the front-end planning part of program evaluation. It involves analysing the program and its environment, identifying the specific evaluation questions to be considered, the nature of these questions, and the extent to which they can and will be addressed in a particular evaluation study. Such factors as the needs of the client, the resources available and possible evaluation methods are considered. Often some of the program rationale issues are investigated, at least tentatively, at this stage. The output from the evaluation assessment process is the terms of reference for an evaluation study, or documented reasons for not doing such a study at this time (see Section 6.2).

      An evaluation assessment study is an analysis of the nature and extent to which evaluation issues can, and perhaps should, be addressed and would typically consider options - different sets of issues and different evaluation methods and procedures - for actually carrying out the ensuing evaluation study. An evaluation assessment report is a report documenting the findings of the evaluation assessment stage.

      A program evaluation study is a study of a particular program which formally examines specified evaluation issues. A program evaluation report is a report documenting the program evaluation study and presenting the findings and conclusions of the evaluation of the program as addressed by the study.

The carrying out of evaluations - both the assessment studies and evaluation studies - is covered in Chapter 6.

2.3 THE PROGRAM EVALUATION FUNCTION: AN OVERVIEW

      In order to evaluate their programs, each department or agency should establish a program evaluation function. The Office of the Comptroller General provides, as needed, advice and assistance in this work (see Chapter 8). While it may be desirable for different organizations to set up this function differently, the general features of each should be similar and all should reflect certain principles. Subsequent chapters discuss these features in more detail and provide specific guidelines.

17


      In order to establish a program evaluation function, departments and agencies should formulate and promulgate their own policies on program evaluation, tailored to their own specific situations. Chapter 3 discusses the important features of such policies.

      The program evaluation function in each department and agency is concerned with three main activities: periodic evaluation of existing programs, responding to other demands identified by, or placed on, the deputy head for evaluation information, and development of evaluation frameworks for new programs.

      The main activity of the program evaluation function is to ensure that all programs are formally evaluated by the deputy head over a given period of time in such a manner that all evaluation issues are seriously considered. To do this, departments and agencies should develop departmental program evaluation policies (Chapter 3), develop and maintain formal plans for evaluating their programs (Chapter 4), acquire appropriate staff (Chapter 5), and carry out evaluations and take action on evaluation findings and recommendations (Chapter 6).

      A second area of activity is to respond to demands for evaluation information on departmental or agency programs identified by the deputy head or placed on the deputy head from the Cabinet or Parliament. To the extent possible such evaluation work should be integrated into the formal periodic evaluation of programs. Furthermore, the deputy head may, on occasion, identify the need for additional evaluation information. Such ad hoc evaluation activity, however, would not be expected to excessively interfere with the regular evaluation of programs.

      The third area of concern of the function is the development of an appropriate evaluation framework for all new or renewed programs, so that a future evaluation of the program can be properly carried out. This activity is discussed and guidelines given in Chapter 7.

2.4 PROGRAM EVALUATION, INTERNAL AUDIT AND OTHER MANAGEMENT CONTROLS

      Program evaluation provides senior management with information to improve programs, to provide quality advice to Ministers on resource allocation and policy matters, and to justify public moneys spent. Program evaluation involves an in-depth and thorough assessment of what a program is accomplishing. Other management systems and reviews complement the program evaluation function and, as shown in

18


Figure 2.1, are part of the general review and monitoring function: internal audit, performance measurement, management reviews, quality reviews, etc.

      Of these review and monitoring functions, internal audit 5 is the one most closely related to program evaluation. To clarify the distinction between these two activities, the Office of the Comptroller General issued in August, 1979, a paper entitled "Internal Audit and Program Evaluation in the Government of Canada: A Clarification of Roles, Responsibilities and Relationships", which stated that:

Program evaluation is the periodic, independent and objective review and assessment of a program to determine, in light of present circumstance, the adequacy of its objectives, its design and its results both intended and unintended. Evaluations will call into question the very existence of the program. Matters such as the rationale for the program, its impact on the public, and its cost effectiveness as compared with alternative means of program delivery are reviewed.

Internal audit is the systematic, independent review and appraisal of all departmental operations, including administrative activities, for purposes of advising management as to the efficiency, economy and effectiveness of the internal management practices and controls.

      Internal audit is similar to program evaluation in that it is the responsibility of the deputy head, must be and appear to be objective, and is done on a periodic basis. It differs from program evaluation in its subject matter: program systems and management controls as opposed to program structure and results. Internal audit includes assessing the effectiveness of organizational structures and relationships, operating procedures and systems, and personnel requirements and utilization. 6 In terms of Figure 2.2, internal audit is focused internally on the program, examining the resources, activities and outputs and the relationships among these elements. Program evaluation, on the other hand, focuses on the structure of the program as a whole, on its impacts and effects and on the relationships between the impacts and effects and both the outputs produced and the resources used.

      Nevertheless there can be an overlap between the two functions especially on questions concerning the outputs of the program, and it may not always be possible or desirable to draw a line between them. This will tend to be the case, for example, with many administrative support functions, and programs which are procedural or repetitive in

19


nature, producing a well-specified good or service. When in such cases the program has little direct impact on society, or its operations are mandated by law, a well developed internal audit function may, in some departments, address most of the issues for that program that would normally be part of a program evaluation, such as level and quality of service issues. That is, internal audit would consider all questions which were appropriate to it, and in some cases these would cover many of the "evaluation" questions sensibly asked of the program. However, internal audit would not normally cover the evaluation questions of alternative delivery mechanisms and, of course, any question as to the continued existence of the program.

      On the other hand, where internal audit coverage of issues which are of interest to the program evaluation function has been inadequate from an evaluation point of view, the program evaluation unit may have to conduct further study. Each case will have to be decided on its merits, depending on the nature of the internal audit function in a department, the specific nature of audit and evaluation issues that should be addressed, the appropriate grouping of activities for audit versus evaluation purposes, and the nature of the program under consideration. It is expected that in each department and agency the division between the activities of the internal audit and program evaluation units will become clear over time.

      The program evaluation function, as mentioned earlier, is responsible for ensuring that all relevant program evaluation issues have been seriously considered for all programs, even though in selected cases, as discussed above, findings of internal audit studies may provide sufficient information for the relevant program evaluation issues.

      In addition to internal audit and the regular financial reporting mechanisms, there are three other management review and information processes which complement program evaluation:

Performance measurement is a term used to describe a manager's routine measurement of the ongoing performance of program operations in terms of the economy with which resources are acquired, the efficiency with which operations are conducted, the quality of the products or services provided and the extent to which operational objectives are achieved. Measurement can involve trends, standards and comparisons of indicators or other performance information on a program's operations.

20


Management Review is a term used to describe an element of a manager's job which may involve a wide range of monitoring and problem-solving activities designed to ensure that operations are functioning satisfactorily. In many cases, it constitutes a vital element in the control process of the manager which reflects his or her personal style and may be part of any formal management control system.

Quality Review is a term which covers a variety of monitoring, inspecting and investigating activities that may exist in an organization. Their primary purpose is to ensure reasonable adherence to a specific set of norms or standards and as such they form an integral part of the control system of the organization. The intent is to identify on a timely basis deviations which may result in immediate corrective action or indicate a need for in-depth follow-up.

      Each of these can provide valuable information for a program evaluation (as well as, of course, for an internal audit). Performance measurement, because it involves the ongoing collection of information on the operations of programs, can be particularly useful to program evaluation when the processes of program delivery are being analyzed. Furthermore, with appropriate planning, a performance measurement system should include information on the impacts and effects of programs where it is reasonable to collect this information on an ongoing basis.

      In summary, the distinctive features of program evaluation are that this function:

- focuses on the impacts and effects of programs, not on the ongoing operations of programs; and

- does not take the program as given but questions its very existence and considers alternatives.

      Like internal audit but unlike other management review and information processes, program evaluation:

- is carried out primarily for the deputy head, not line management; and

- is carried out on a periodic basis, not on an ad hoc, as-needed or continual basis.

21


Notes to Chapter 2

1. Government of Canada, Guide to the Policy and Expenditure Management System. (Supply and Services, Ottawa, 1980.)

2. The overall Fiscal Plan completes the package of principal instruments associated with the Policy and Expenditure Management System.

3. The terms "function", "activity", "output" and "process" are used here as ordinary English words, not as they are formally defined by the government in, for example, the Planning Programming and Budgetary Guide.

4. This includes the evaluation of pilot programs where the evaluation is an integral part of and the main reason for the program.

5. Financial audit, management audit, operational audit, composite audit and comprehensive audit are all either elements of, or synonymous with, the term "internal audit".

6. Guidelines for internal audit, entitled Standards for Internal Financial Audit, were issued by the Office of the Comptroller General in 1978, and are currently being revised to reflect a broader view of the internal audit mandate, encompassing all departmental operations rather than solely the financial function.
 

22


CHAPTER 3

DEPARTMENTAL PROGRAM EVALUATION POLICY AND ORGANIZATION CONSIDERATIONS

3.0 GUIDELINES

3.1 Each department and agency should have a program evaluation policy, compatible with Treasury Board Policy (1977-47), which sets out the department's approach to program evaluation, outlining the evaluation organization, and the roles, responsibilities, authorities and accountability of those involved. The policy should also outline procedures for carrying out evaluations and for acting on findings, and indicate the general level of resources which will ensure the maintenance of a program evaluation function useful to the deputy head. Departmental and agency program evaluation policies should be submitted to the OCG for review.

3.2 The responsibilities of the deputy head (Table 3.1) and of the management of the program evaluation function (Table 3.2) should be clearly delineated in the departmental program evaluation policy. The policy should also indicate how the findings and recommendations of program evaluations are to be considered and acted upon by senior management of the department or agency.

3.3 The process and procedures for carrying out program evaluations should be established and operated in such a manner as to ensure adequate participation of appropriate program or line managers.

3.4 In organizing for program evaluation, the responsibilities for managing the program evaluation function should be assumed by one, or at most two, persons. One of these persons, the departmental manager for program evaluation, should report either directly to the deputy head or to the next senior departmental officer accountable for program evaluation - the second person in this case - who in turn reports directly to the deputy head. Furthermore, a departmental program evaluation unit should be established to support the departmental manager for program evaluation and to carry out the corporate program evaluation activities. In addition, if a decentralized approach is used, branch

23


program evaluation units should be established as appropriate in order to further support the function and to carry out program evaluations not undertaken by the departmental unit.

The departmental program evaluation unit should have a close working relationship with internal audit and corporate planning units to ensure effective coordination among these activities, but a separate organizational unit for program evaluation should be formed.

3.1 DEPARTMENTAL POLICY ON PROGRAM EVALUATION

      A formal program evaluation function should be established in each department and agency by the deputy head. This will normally require a departmental policy to be issued which translates the Treasury Board policy on evaluation into a meaningful and workable policy in the departmental context.

      While each department may express its policy in its own way, certain principles, outlined below, should be common to all such policies. (The names of the committees, organizations and positions used below are meant to be generic in nature. They are described in more detail in ensuing sections of this chapter and defined in the Glossary. Departments and agencies should adopt terms suitable to their own situation.)

An appropriate departmental policy should:

(i) indicate that the department is regularly to carry out program evaluations as part of its ongoing management process;

(ii) point out the deputy head's responsibility for the program evaluation function and his or her role as the client for program evaluations;

(iii) indicate the role and responsibilities of line management in program evaluation;

(iv) describe the mandate and role program evaluation will play in the management of the department as well as its purpose, and distinguish program evaluation from internal audit,

24


corporate planning, performance measurement, management review and other types of review and monitoring carried out in the department;

(v) outline the organizational structure established, or to be established, for carrying out the required activities and identify the participants in the program evaluation process including line management. In addition to designating a departmental manager of program evaluation, it will frequently be desirable to establish a senior management evaluation committee, and program evaluation advisory committees for individual studies;

(vi) describe the roles, responsibilities, and authorities of each participant in the program evaluation process;

(vii) outline the procedures that will be followed in the department in implementing the program evaluation function. These should cover the development and maintenance of evaluation plans, the conduct of evaluations, and the procedures by which senior management will consider and act upon the findings and recommendations of program evaluations;

(viii) outline the flow and distribution of evaluation documents - terms of reference, drafts, final reports, recommendations and implementation plans resulting from evaluations - both within and outside the department;

(ix) assign responsibility and outline procedures for ensuring that all new or renewed programs include an appropriate evaluation framework;

(x) provide for the assignment of resources - both dollars and persons - for the program evaluation unit and a mechanism for funding or resourcing individual studies; and

(xi) indicate any other important feature of the program evaluation function in the department.

      Departments and agencies should submit their evaluation policies to the Office of the Comptroller General for review. As discussed in Chapter 8, the OCG will comment upon, and provide advice and assistance in the development of, program evaluation policies.

25


3.2 DEPARTMENTAL RESPONSIBILITIES FOR PROGRAM EVALUATION

      In developing the program evaluation function, a number of responsibilities can be identified. Of particular concern are the responsibilities of the deputy head, the responsibilities for the management of the program evaluation function, and the responsibilities of the department or agency for actions taken as a result of decisions based on program evaluation findings and recommendations.

3.2.1 Responsibilities of the Deputy Head

      The deputy head is the pivotal figure for the success of the program evaluation function. His or her main responsibilities in this area are outlined in Table 3.1.

      In point 3 in Table 3.1, it is recognized that the deputy head may only have time to be involved once during the evaluation assessment phase. If, for example, he or she is significantly involved in the selection of an evaluation option, then his or her approval of the terms of reference based on the selected option may be quite perfunctory. On the other hand, if he or she has not been significantly involved in the selection process, then it would be expected that the deputy head carefully consider the terms of reference and be fully aware of the kind of study he or she is being asked to approve.

3.2.2 Responsibilities for Managing the Program Evaluation Function

      In order to develop and maintain a successful program evaluation function in departments and agencies, the management of the function must exercise a number of responsibilities. These responsibilities, which are in addition to any others assigned by the deputy head, are outlined in Table 3.2.

3.2.3 Responsibilities for Follow-Through Action

      A key aspect associated with the evaluation process is the follow-through action taken as a result of senior management decisions based on a consideration of the findings and recommendations of an evaluation. This is the link to the ongoing management process and is essential if program evaluation is to make useful contributions to the management and planning of government programs. There are numerous ways a department or agency may wish to implement senior management decisions. Follow-through action from decisions based entirely or in part on evaluation findings is no different from action resulting from any senior management decision. However carried out, the basic responsibilities of the department in this area remain the same: to prepare, when required, an implementation plan; and to monitor and report to the deputy head on the completion of the implementation.

26


Table 3.1

PROGRAM EVALUATION RESPONSIBILITIES OF THE DEPUTY HEAD

1. Establish and issue the departmental program evaluation policy.

2. Approve the departmental program evaluation plans and amendments.

3. Be significantly involved in the evaluation assessment phase of each evaluation by:
   - considering evaluation study options and approving the selection of the specific issues to be examined and general approaches to be used in each program evaluation study; and
   - approving the terms of reference for each program evaluation study.

4. At the end of each evaluation study:
   - assess program evaluation study findings and recommendations and decide on appropriate action;
   - approve the implementation actions to be taken as a result of decisions based on findings and recommendations of individual studies; and
   - approve the program evaluation reports for release.

5. Approve evaluation frameworks for new programs.

6. Be accountable for the effects of implementing the program evaluation function.

27


Table 3.2

RESPONSIBILITIES FOR MANAGING THE PROGRAM EVALUATION FUNCTION

1. Prepare, update and recommend the departmental program evaluation plans.

2. Approve terms of reference for all evaluation assessment studies.

3. Attest to the quality of all evaluation assessment studies.

4. Identify and address areas of concern of outside parties (Treasury Board, Policy Committees, Parliament, etc.).

5. Recommend terms of reference for all program evaluation studies.

6. Attest to the quality of all program evaluation studies and reports.

7. Endorse or comment upon recommendations based on program evaluations, and ordinarily make additional recommendations reflecting a departmental perspective.

8. Keep informed on the implementation actions resulting from decisions based on program evaluation studies.

9. Develop evaluation frameworks for new programs in consultation with the program designers, or comment upon such frameworks proposed by others.

10. Exercise functional authority with respect to program evaluation.

11. Manage the departmental program evaluation unit.

28


      The departmental manager of program evaluation should keep informed on the progress and keep track of the results of follow-through action based on evaluation findings and recommendations.

3.3 THE ROLE OF LINE MANAGEMENT IN PROGRAM EVALUATION

      The involvement of line managers is critical to ensuring that the realities of the program operations and environment are incorporated in the program evaluation process. Line managers can and should contribute to all phases of the program evaluation process. A program evaluation advisory committee for specific studies may be a useful vehicle to facilitate this participation (Section 3.4.3). In particular, line managers may:

(i) assist the departmental program evaluation unit in the development and updating of the departmental program evaluation plans and in the development of evaluation frameworks;

(ii) provide members for the evaluation teams when appropriate;

(iii) identify, during the evaluation assessment, program evaluation issues of importance to them;

(iv) assist in the gathering of information;

(v) review evaluation assessment reports, study terms of reference and evaluation findings;

(vi) comment upon recommendations;

(vii) develop, where required, an implementation plan based on decisions taken as a result of evaluation studies; and

(viii) implement any action required as a result of evaluation studies.

      Such involvement of line managers will both bring to the evaluation activity appropriate program experience and provide these managers with a good understanding of the evaluation function in their department or agency. It will also allow line managers to obtain, from program evaluation, information of interest to them on their programs.

29


      Program evaluation should clearly be organizationally independent from line management functions. Indeed this is essential for the objectivity of the function. It is also essential to establish linkages to such line management processes and functions as performance measurement, planning, management review, and quality review. In particular, these other functions often provide valuable information and data for program evaluation studies and often facilitate the identification of issues - especially from the line manager's viewpoint - for program evaluation.

3.4 DEPARTMENTAL ORGANIZATION FOR PROGRAM EVALUATION

      In deciding on the most appropriate way to organize for the evaluation function, departments should consider a number of factors. On the one hand, the existing organization structure of the department will play a role: the program evaluation organization ordinarily should seek to be as compatible as possible with existing structures and reporting relationships to the deputy head and with his or her personal management style. On the other hand, the organizational structure of the function will significantly influence the quality of the resulting evaluation process and evaluation results. Thus in selecting among various organizational structures, a number of factors should be kept in mind. The organizational structure chosen in a department or agency should:

(i) enhance the objectivity of the function, both real and perceived (this is primarily achieved through establishing organizational independence from line management);

(ii) provide for the deputy head to have significant input to the evaluation process;

(iii) provide for the coordination of the evaluation activities; and

(iv) ensure that the program evaluation function is an integral part of the ongoing management system.
 

3.4.1 Managing the Program Evaluation Function

      In each department or agency, the responsibilities outlined in Section 3.2.2 for managing the program evaluation function should be assumed by one, or at the most two, persons. One of these persons should be the focal point for the departmental or agency program evaluation activity, should assume all or many of the responsibilities listed in Table 3.2 and should devote a significant part of his or her time to the program evaluation function. For the purposes of this Guide, this person is called the departmental (or agency) manager of program evaluation (DMPE). If the management of the function is to be shared by two people, the second person would either report directly to the DMPE or be the person to whom the DMPE directly reports, and would assume those program evaluation management responsibilities not assumed by the DMPE. The level of the DMPE would depend on the particular organizational structure chosen by the department or agency. Figure 3.1 illustrates three possible situations. In (a) the DMPE assumes all the management responsibilities. In (b) and (c) these responsibilities are shared: in (b) with the head of the program evaluation unit and in (c) with the responsible corporate assistant deputy minister (ADM) or equivalent.

      A departmental program evaluation unit should be established to support the management of the program evaluation function. This is the organizational entity which carries out the corporate program evaluation activities of the department or agency. This unit is operationally managed either by the DMPE (cases (a) and (b)) or by a person reporting directly to the DMPE (case (c)).

3.4.2 Reporting Relationships to the Deputy Head

      It is essential that the departmental manager of program evaluation (DMPE) have ready access to the deputy head, and that the deputy head provide reasonable direction and strong personal support to the program evaluation function. Figure 3.1 illustrates possible reporting relationships to the deputy head.

      A direct reporting relationship of the DMPE to the deputy head is preferable (cases (a) and (b)), where the DMPE is the senior departmental officer for program evaluation. In large departments this may not be possible because of organizational hierarchies and span of control considerations, in which case the DMPE should report to a senior staff manager who, in turn, reports to the deputy head (case (c)). In this case the senior departmental officer for evaluation is the person to whom the DMPE reports. To place the DMPE elsewhere in the organizational hierarchy compromises his or her effectiveness and makes it too difficult to obtain the deputy's direction and support. The full potential of program evaluation is not likely to be realized unless a close working relationship develops between the deputy head and the DMPE.

31


Figure 3.1: Managing the Program Evaluation Function: Alternative Approaches


3.4.3 Committees for Program Evaluation

      In order to assist the deputy head with his or her program evaluation responsibilities, departments and agencies may wish to establish a senior program evaluation committee. Such a committee could meet regularly or as needed to discuss program evaluation matters, either exclusively or perhaps in conjunction with other corporate matters. The specific functions of each such committee would vary depending on the role the deputy head sees for the committee. The existence of such a committee emphasizes the importance of program evaluation in the department. The DMPE should sit on the senior management evaluation committee where such a committee exists. This would be one means of enhancing his or her reporting relationship to the deputy head.

      In addition to this ongoing committee, departments and agencies may wish to establish, for each or for most program evaluations, a program evaluation advisory committee to assist in the managing of the particular program evaluation and to provide a forum for discussion by all those concerned with the evaluation of the program. The DMPE should be a member of the committee, probably the chairman. The head of the program evaluation team conducting the evaluation and line managers whose programs may be affected by the evaluation would also normally be members of such a committee. Such a study-specific committee would provide a useful means through which the concerns and interests of line managers can be expressed and formally taken into account. Where appropriate, this committee could include members from outside the government, such as client group representatives or, when the program under study is part of a federal-provincial agreement, representatives from provincial governments.

3.4.4 Organizational Structures for Conducting Program Evaluations

      The preferred organizational structure is to have program evaluations conducted centrally, with the departmental program evaluation unit responsible for the evaluations of all program components of the department. Members of the departmental program evaluation unit would direct evaluation teams for each program evaluation study. Team make-up would vary from study to study. Use could be made of a seconded person from the program being evaluated, of contracted personnel, of other departmental personnel, and of other members of the departmental program evaluation unit. Care must, of course, be taken to ensure the objectivity of the team (see Section 5.1.2). This organizational structure enhances the objectivity of the exercise, and should enhance its credibility with line management.

33


      It has been argued that such a centralized approach may not always be possible, particularly where responsibility in the department or agency is decentralized, and that in such cases a decentralized approach is required. This would involve, in addition to the departmental program evaluation unit, the establishment of branch program evaluation units, which would report administratively to the head of each branch (e.g. regional or branch ADMs) but would be under the functional direction of the DMPE. The management of the program evaluation function - the DMPE plus perhaps one other person - would still retain all the responsibilities outlined in Section 3.2.2. A further decentralization of the evaluation function would inhibit its influence as a senior management function and give rise to questions about its objectivity. The difference in the decentralized approach would be that branch program evaluation units, not the departmental program evaluation unit, would carry out many of the evaluations.

3.4.5 Relationship to Other Corporate Management Functions

      Internal audit and corporate planning (strategic, operational and program planning) are other corporate management functions that interface with program evaluation.

      Internal audit and program evaluation are complementary functions (see Section 2.4) and as such should enjoy close working relationships. The audit and evaluation plans for each function should be developed so as to facilitate overall planning and program component identification in the audit and evaluation areas. This will minimize both the duplication of efforts and the demand on line managers for information.

      Despite the need for this close working relationship, internal audit and program evaluation should be separate organizational units producing separate reports and findings. Otherwise, there may be a tendency for the better-established internal audit function to absorb resources that should be going into program evaluation efforts. In addition, as outlined in Chapter 5, the skills and experience required for the two functions differ, and without an identifiable separation between the two, both may suffer.

      As Figure 2.1 illustrated, program evaluation should provide input into strategic planning. Program evaluation should be organizationally independent but have a good working relationship with this function. Program evaluations will likely relate program objectives to departmental objectives established by the strategic planning function, and identify difficulties in establishing linkages between program and departmental objectives. Thus program evaluation reports should be made available to the strategic planning function so that the information and analysis gained from a program evaluation may be disseminated through the department to those persons with a need to know such material. Conversely, program evaluation should take into account strategic planning initiatives and published reports, and also be cognizant of changes in the corporate program planning process.

35




CHAPTER 4

DEPARTMENTAL PROGRAM EVALUATION PLANS

4.0 GUIDELINES

4.1 Departments and agencies should develop formal plans for evaluating all of their programs on a periodic basis and update them regularly.

4.2 As a necessary step in developing departmental program evaluation plans, the activities of the department should be grouped into program evaluation components, namely groups of activities considered suitable for evaluation purposes. The deputy head should be involved in this grouping and should approve the resulting component structure. Appropriate profiles (Table 4.1) of these components should be developed.

4.3 The program evaluation components should be set in order of priority as to their importance for evaluation, taking into account the concerns and interests of the department, its minister, and central agencies, as well as the capabilities of the departmental evaluation unit, the resources required and the technical difficulties that may be encountered. The deputy head must ultimately set the priorities for evaluation, for both the long-term and the annual plans.

4.4 The departmental program evaluation plan should include two parts: the departmental long-term program evaluation plan consisting of the departmental program evaluation profile (the individual component profiles and an explanation of the overall departmental component structure) and the program evaluation schedule indicating when each component is to be evaluated over the evaluation cycle; and the annual program evaluation plan indicating in more detail the specific evaluation activities to be undertaken in the next 12-18 months. The reasons for excluding any component from the long-term plan should be documented. Departmental program evaluation plans should be considered and approved by the deputy head and should reflect the priorities and concerns of the department, its minister and of the central agencies. Departmental program evaluation plans should be submitted to the OCG for review.

37


4.5 Departmental program evaluation plans should be kept up to date to reflect the passage of time, changing evaluation priorities of the deputy head, slippage or ahead-of-schedule performance, and the work completed to date.

4.1 AN OVERVIEW AND GENERAL PRINCIPLES

      Planning is essential to the successful establishment and the continuing operation of the program evaluation function. The Treasury Board policy calls for all programs in a department or agency to be evaluated periodically. In order to ensure such coverage, departmental evaluation plans are required showing which activities are planned to be evaluated and when.

      Given the fact that in a department or agency there are many, often heterogeneous, activities being carried out and that the evaluation of these activities will be spread over a period of several years, preparation of a plan requires at least two main tasks to be carried out:

- the programs of the department or agency will have to be arranged into appropriate groupings for evaluation, namely program evaluation components; and

- the evaluations of these components will have to be set in order of priority so that an evaluation schedule over an appropriate evaluation cycle can be developed.

In carrying out both tasks, the deputy head should be involved. As the client of the evaluation studies, he or she should determine what is to be evaluated and when.

      Figure 4.1 illustrates a representative model of the process that a department or agency might go through in developing initial plans. The figure is only an example of the steps that could be followed and it is recognized that different departments and agencies might proceed differently by following a different sequence or by carrying out additional steps. Indeed, a department or agency may decide for strategic reasons to conduct an evaluation or two while developing a comprehensive long-term plan in order to demonstrate what can be produced through program evaluation. Nevertheless, each of the general steps shown will undoubtedly have to be carried out at some time in order to produce appropriate departmental evaluation plans. The diamond-shaped decision points in the figure are the major decision points in the process where the deputy head will normally play a key role, namely in agreeing to the list of program components and to the evaluation plans. The overall departmental program evaluation plan is composed of two separate plans: a long-term plan which presents both the departmental program evaluation profile - an explanation of the department's component structure and a description of each individual component - and the program evaluation schedule, and an annual plan. Each of these is discussed in ensuing sections of this chapter.

Figure 4.1: Developing an Initial Departmental Program Evaluation Plan

      This ongoing planning process to produce a revised long-term plan (updated component profiles and a revised schedule) and a revised annual plan is discussed in Section 4.5.

4.2 PROGRAM EVALUATION COMPONENTS

4.2.1 What is a Program Evaluation Component?

      A program evaluation component is a group of resources and activities, and their related outputs, which is suitable to the department or agency for evaluation purposes. It is usually a subset of an Estimates Program. The group of activities within a program component:

- has a common objective (or set of related objectives) established at the level of concern of the deputy head;

- is of appropriate size or importance to be a focus of and support for program decision making at the deputy ministerial level; and

- is a logical part of an overall departmental program evaluation component structure, contributing to the department's long-term objectives.

      In any major program, a complete hierarchy of objectives may be defined from improving the well-being of Canadians to establishing specific work standards. The objective of a program evaluation component should be an objective of direct concern to deputy heads, rather than an objective or a goal which may be of more concern to line managers. Associated with any objective is a group of resources, activities and outputs which contribute to the achievement of the objective. A program evaluation component comprises those resources, activities and outputs contributing to the achievement of an objective of concern to the deputy head.

40


      This higher level focus, however, may have to be weighed against the criterion of being an appropriate grouping for actual decision making on programs in the department. Deputy heads will be concerned with quite broad departmental objectives which cover a wide group of departmental activities, but may still make decisions on smaller groupings of activities which are aimed at achieving a narrower objective. Program evaluation components should relate to these relatively lower level groupings on which managerial decisions are normally made.

      Finally, the collection of program evaluation components should reflect and outline the department's various strategies for achieving the department-wide long-term objectives. The components should fit together in a logical, consistent and comprehensive manner and each component should be shown contributing to the department's long-term objectives.

      Program evaluation components are thus seen to be impacts- and effects-oriented, typically built around the intended effects of the department's or agency's activities. They may not be identical either to existing organizational structures or to existing Program Activity Structures, but it is desirable to be able to reconcile information from the Program Activity Structure with the program evaluation components. This may entail changing the Program Activity Structure to be compatible with the department's program component structure.[1] The raison d'être of the components is to provide an appropriate focus for evaluating the results of departmental activities in relation to the government's objectives and to the external environment, and not to evaluate, in particular, existing organizations. The interest is in program efficacy rather than managerial accountability. In developing program evaluation components, departments and agencies should identify the intended major results their activities are trying to accomplish, and delineate those activities which are directed towards each result or group of related results. Such a collection of activities should be a suitable program evaluation component.

      It should be obvious that there is no unique way to divide a department or agency into evaluation components. Any one division of the department into components is likely to miss certain possibly important aspects for evaluation. This suggests that components should not be seen as rigid. While the component structure would form the basis for program evaluation studies in departments and agencies, this does not exclude other units of evaluation being used from time to time for particular studies. That is, a study might be conducted that cuts across several components or combines several or parts thereof (see Section 4.4.1). Furthermore, it is expected that over time the composition of evaluation components will change, due both to the experience gained in dealing with a department's or agency's activities and decision-making process and to changes in the government's policies and objectives.

4.2.2 The Program Evaluation Component Profile

      Having identified suitable program evaluation components, a department or agency should prepare a description of each component in terms of what the component is supposed to do, how it is delivered and the resources devoted to it. Such a program component profile could be written with varying degrees of detail, but should include relevant information on the component for evaluation purposes. In particular, it should include a description of the program component structure (see Section 2.2). The basics of such a profile are shown in Table 4.1. Component profiles will allow others in the department, in central agencies or elsewhere to gain some understanding of what elements of the departmental programs are going to be evaluated.

      The profile in this table is composed of two parts. Part A is the basic description of the component and may be all that is needed in order to develop the initial departmental program evaluation plans, although an initial specification of the component's elements would seem to be required to fully identify the component. Part B describes the component's structure and involves a more in-depth analysis of each component, resulting in a diagram of the component's structure - a program model - which captures the interrelationships among a component's activities, outputs and results. Part B may often be completed only during the evaluation assessment phase of an evaluation, but departments may wish to develop Part B of their component profiles as an independent exercise. The information in Part B would certainly help in defining the components and setting their order of priority for evaluation. Over time, and as plans are revised, complete up-to-date component profiles should be available for all components and be part of the departmental long-term program evaluation plan.

Departments may of course wish to compile more information than is shown in Table 4.1 on each component. In effect they may wish to prepare a Part C. Such information as the evaluation history of the component, likely evaluation issues, expected difficulties in doing the evaluation, known relationships to other programs, data that is available, etc., would all be quite useful - if available - as background information for future planning and conducting of evaluations. Given the suggested 3-5 year cycle for evaluation and the normal turnover of personnel that can be expected, whatever useful information has been developed on the component would otherwise likely be lost. However, such information would not be expected as part of the formal departmental evaluation plans.

      A few of the elements and terms in the profile shown in Table 4.1 require some elaboration. The component description envisaged here is a short description normally involving several paragraphs rather than several pages. It should describe in a narrative fashion what the component involves, how it operates, whom it serves, and what it tries to do, so that someone not familiar with the component could have a basic understanding of what is being evaluated. Departments and agencies may wish to develop more detailed descriptions of each component but a short summary would be all that is expected in the formal plans.

      Similarly, the statement of the component resources should not necessitate a lengthy analysis. When components and Program Activity Structures are compatible, this information should be available as part of the normal strategic and operational planning. When components are not compatible with the Program Activity Structure, precise resource information may be difficult to obtain. In this case, estimates of the resources devoted to the component would be adequate. Over time it is expected that the Program Activity Structure will evolve to become compatible with the program component structure. The resources part of the profile serves to indicate the size of the component in terms of inputs and is not intended as a financial accounting statement.

      The term impacts and effects is meant to cover any relevant good or service (other than the direct program outputs) or behavioural or institutional change that results from the component. No strong distinction is made between "impacts" and "effects", although these terms are often distinguished on a temporal basis or as to the diffuseness of the particular result. Typically, several levels of impacts and effects can be usefully identified between the outputs and the ultimate results. In particular, it is useful for evaluation purposes to distinguish between the direct impacts and effects - those comprising the next level of results beyond the outputs - and other impacts and effects. Direct impacts and effects are more readily associated with the component in terms of cause and effect than are other impacts and effects.

43


Table 4.1

BASICS OF A PROGRAM EVALUATION COMPONENT PROFILE

Part A: Background

1. Component Mandate: A statement of both the legal basis of the component and of what the component must and may do.

2. Component Objective: A statement of what impacts and effects the component is specifically designed to accomplish or contribute to.

3. Component Description: A short narrative explaining what the component involves: how it is delivered; the environment it operates in; the population served; and what it is to accomplish.

4. Relation to Estimates Program: The Estimates program or programs from which the component is funded should be identified and the relationship between the component's objective and that of the Estimates program explained.

5. Component Resources:

   (i) Fiscal Expenditures - The operating, capital and grants and contributions costs of the component, as well as the authorized person-years devoted to the component.

   (ii) Capital Assets - An identification of the facilities and equipment other than office space devoted to the component.

Part B: Elements and Structure

1. Component Elements:

   (i) Activities - A list of the major work tasks and any powers or functions that characterize a given component and which are performed or administered by the component personnel.

   (ii) Outputs - A list of the goods and services which are produced or directly controlled by the component personnel and distributed outside the component organization, as well as any regulations or provisions in tax legislation produced by or monitored by component personnel.

   (iii) Expected Impacts and Effects - These are the further goods, services and regulations (if any) produced by others as a result of the program's outputs and the consequent expected chain of outcomes which occur outside the program on society or part thereof.

2. The Component's Structure: A description and chart showing the linkages among the component elements; i.e. a program model.

44


      The component's structure is a description and diagram - a model - of this chain of results, i.e. of how the component is supposed to work: how the activities are expected to produce certain outputs, which result in direct impacts and effects, which in turn typically cause other impacts and effects. A basic structure for a component was shown in Figure 2.2.

      The program evaluation component profile serves several important purposes: as an explanation - especially for organizations or persons outside the particular department or agency - of what each component involves; as a mechanism for reviewing the rationale of the component; as a basis for planning for evaluation; and as a way of ensuring that appropriate components have been identified.
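
      For illustration only, the elements of Table 4.1 lend themselves to a simple structured record. The sketch below, written in Python, simply mirrors the field names of the table; neither the representation nor the names are prescribed by the policy, and a department may keep its profiles in whatever form suits its own planning systems.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ComponentResources:
        # Part A, item 5: indicates the size of the component in terms of
        # inputs; not intended as a financial accounting statement.
        operating_costs: float = 0.0            # dollars
        capital_costs: float = 0.0              # dollars
        grants_and_contributions: float = 0.0   # dollars
        person_years: float = 0.0               # authorized person-years
        capital_assets: List[str] = field(default_factory=list)

    @dataclass
    class ComponentStructure:
        # Part B: the program model linking activities, outputs and results.
        activities: List[str] = field(default_factory=list)
        outputs: List[str] = field(default_factory=list)
        direct_impacts_and_effects: List[str] = field(default_factory=list)
        other_impacts_and_effects: List[str] = field(default_factory=list)

    @dataclass
    class ComponentProfile:
        # Part A: background information on the program evaluation component.
        name: str
        mandate: str               # legal basis; what the component must and may do
        objective: str             # intended impacts and effects
        description: str           # delivery, environment, population served
        estimates_program: str     # Estimates program(s) funding the component
        resources: ComponentResources
        structure: Optional[ComponentStructure] = None  # Part B may be added later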

4.3 ESTABLISHING PRIORITIES FOR PROGRAM EVALUATIONS

      Ranking the evaluations of programs will allow plans to be developed showing when each component will be evaluated. This ranking will be required for both long-term and current year planning. The major factors to be taken into account in setting priorities should be:

- the importance of evaluating the component in terms of departmental priorities; and

- the concerns and priorities of cabinet committees.

      With the review of departmental program evaluation plans being part of the Policy and Expenditure Management System (see Section 2.1.2), the Policy Committees and the Treasury Board of Canada will have a formal way, as well as any informal means which may develop at the officials' level, to express their priorities and concerns on program evaluation to departments and agencies. Nevertheless, the deputy head remains the client of these studies and as such determines the final priorities for program evaluation in his or her department or agency.

In addition, a number of technical factors should be considered in the ranking of components such as the size of the component in terms of expenditures and/or impacts on society, the anticipated cost of the evaluation, the expected difficulties and lead time required in doing the evaluation in relation to the experience and capabilities of the program evaluation unit, the timing of the internal audit of the component, the time since the last evaluation, and the maturity of the component.
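
      Purely as an illustration of the bookkeeping involved, the sketch below (in Python) shows one way such factors might be recorded and combined into a first-cut ordering for discussion. The factor names, weights and 1-5 scores are assumptions made for the example and have no standing in the policy; as noted above, the deputy head determines the final priorities.

    # Illustrative only: record ranking factors for each component and
    # produce a first-cut ordering for discussion. The weights and the
    # 1-5 scores are assumptions for the example, not part of the policy.

    FACTORS = {
        "departmental_priority": 3.0,      # importance in departmental terms
        "cabinet_committee_concern": 3.0,  # concerns and priorities of cabinet committees
        "component_size": 2.0,             # expenditures and/or impacts on society
        "time_since_last_evaluation": 1.0,
        "component_maturity": 1.0,
        "feasibility": 1.0,                # inverse of expected difficulty and lead time
    }

    def first_cut_ranking(components):
        """Order components by a weighted sum of 1-5 factor scores."""
        def score(name):
            return sum(FACTORS[f] * components[name].get(f, 0) for f in FACTORS)
        return sorted(components, key=score, reverse=True)

    # Hypothetical components and scores, for illustration only.
    ranking = first_cut_ranking({
        "Industrial incentives": {"departmental_priority": 5, "component_size": 4, "feasibility": 3},
        "Inspection services": {"departmental_priority": 3, "time_since_last_evaluation": 5, "feasibility": 4},
    })
    print(ranking)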

45


4.4 DEPARTMENTAL PROGRAM EVALUATION PLANS

A departmental program evaluation plan should contain two parts:

1. A Long-Term Program Evaluation Plan - which comprises:

   (i) A Departmental Program Evaluation Profile - a description and explanation of the program evaluation component structure and the individual component profiles.

   (ii) A Program Evaluation Schedule - a listing of target dates by which all components are planned to be evaluated.

2. An Annual Program Evaluation Plan - an operational plan indicating what will be carried out in the 12-18 months following the date of the plan.
 

      Departmental evaluation plans, and any subsequent revisions to them, should be submitted to the Office of the Comptroller General for review. Chapter 8 discusses the role the OCG can play in assisting departments in the development of their plans. As mentioned in Section 2.1.2, departmental long-term program evaluation plans are to be submitted by March 31 and departmental annual program evaluation plans by October 31 to the appropriate Policy Committee and to the Treasury Board.

      Departmental program evaluation plans should be developed by the departmental program evaluation unit and should be coordinated with the departmental internal audit plans to ensure that both the timing and coverage of common areas of interest are satisfactory to both groups (see Sections 2.4 and 3.4.5).

4.4.1 The Long-Term Program Evaluation Plan

      The long-term evaluation plan provides a description of what is to be evaluated, and why, and comprises the departmental program evaluation profile and schedule.

(a) The Departmental Program Evaluation Profile

      The departmental program evaluation profile should include:

   (i) an explanation of the reasons for the specific program evaluation component structure being used; and

   (ii) the individual program evaluation component profiles.
 

46


Because a given component structure in a department or agency is not unique (see Section 4.2.1), the first part of the departmental program evaluation profile should be a short discussion of the reasons for choosing the particular component structure. The program evaluation component profiles were discussed in Section 4.2.2. For an initial departmental plan, Part A of the profile may suffice, but over time complete profiles should be developed.

      For the most part, evaluation plans will consist of studies based on the identified program evaluation components. Situations may occur, however, where the deputy head requires an evaluation on certain parts of components which do not coincide with any existing departmental component. At least two such cases can be expected to occur: studies which cut across or combine components and interdepartmental studies.

      The former case was mentioned in Section 4.2.1. For example, a program evaluation component structure in an industrial incentives area may be based on the type of aid given to produce certain effects, such as increased exports or increased jobs. While such components should be evaluated, it may at some point be useful to divide up the same activities by sector - aerospace, electronics, etc. - and evaluate the impact of all assistance on each sector. Nothing in the Treasury Board policy or this guide prevents such evaluation studies which may focus on several, or parts of several, "evaluation-plan" components. Indeed such a rethinking of what to evaluate should routinely be done and could ultimately lead to changes in the formal component structure, if such a change would better reflect how decisions are being made.

      The other case that can arise is when the evaluation of certain components clearly should include consideration of closely related components in other departments. In this case, the evaluation might focus on two or more components and require interdepartmental coordination and cooperation. During the evaluation assessment phase, a requirement for such interdepartmental studies can be determined (see Section 6.2.3). The Treasury Board policy encourages this interdepartmental cooperation.

      In both cases, such evaluations should be part of the departmental evaluation plan, since they are part of the deputy head's fulfillment of the responsibility to have his or her programs evaluated. Basing the plan on a component structure is a convenient way to ensure that all departmental programs are covered.

47


(b) The Program Evaluation Schedule

      The program evaluation schedule can be developed when the components have been identified, when they have been ranked and when there is a reasonable idea of the resources necessary or available to the evaluation function. The overall program evaluation resource level in any given situation determines the length of the evaluation cycle, namely, the time period in which all departmental programs can be evaluated. The Policy suggests a 3-5 year cycle. In addition, a rough estimate of the costs of evaluating each component, in terms of both dollars and persons, may have to be made in order to know how many evaluations could be carried out in each year of the cycle. The program evaluation schedule should:

(i) clearly identify the targeted departmental evaluation cycle;

(ii) state the overall level and type of resources to be devoted to program evaluation;

(iii) indicate the reasons for the priorities; and

(iv) identify a set of evaluations based on the department's program evaluation component structure, and their expected start and completion dates, which covers all of the department's activities over the evaluation cycle.

      The program evaluation schedule should be approved by the deputy head. This indicates commitment to the evaluation function and to the completeness of the evaluation coverage of the department's or agency's activities. While departmental priorities will always result in specific evaluations being done at certain times, the Treasury Board policy on evaluation requires all components to be evaluated over a reasonable time frame. Exceptions - components that are not in the schedule and hence are not going to be evaluated - must be documented as to why they have been excluded. Such documentation should be part of the schedule.
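
      The rough arithmetic referred to above - estimating the cost of evaluating each component in order to see how many evaluations fit into each year of the cycle - can be illustrated with a short sketch (in Python). The cost figures and unit capacity below are invented for the example; the point is simply the comparison of total estimated evaluation effort against the resources available over the chosen cycle.

    # Illustrative arithmetic only: assumed effort estimates per component
    # (person-years per evaluation) against the unit's annual capacity,
    # used to check whether a planned cycle length is realistic.

    component_costs = {        # assumed estimates, person-years per evaluation
        "Component A": 1.5,
        "Component B": 0.5,
        "Component C": 2.0,
        "Component D": 1.0,
    }

    annual_capacity_py = 2.0   # assumed person-years available for studies each year
    cycle_years = 4            # within the 3-5 year cycle suggested by the Policy

    total_effort = sum(component_costs.values())
    years_needed = total_effort / annual_capacity_py
    evaluations_per_year = len(component_costs) / cycle_years

    print("Total estimated effort: %.1f person-years" % total_effort)
    print("Years needed at current capacity: %.1f" % years_needed)
    print("Evaluations to start per year over a %d-year cycle: %.1f"
          % (cycle_years, evaluations_per_year))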

      It is fully expected that the long-term program evaluation plan will be altered over time as priorities change and experience is gained. Section 4.5 discusses updating the long-term program evaluation plan. The long-term plan represents the department's or agency's general strategy for evaluation and, as such, forms part of its Multi-Year Operational Plan (see Section 2.1.2).

48


4.4.2 The Annual Program Evaluation Plan

      The long-term program evaluation plan is, of necessity, general in nature. In order to determine in sufficient detail what will be done in evaluation in the following 12-18 months, an annual evaluation work plan is required. The annual plan spells out in some detail the specific evaluation activities which will be carried out and should reflect current year evaluation resource levels and the capabilities and past evaluation experience of the department. It should also incorporate the current year interests and concerns of the department and cabinet committees. The annual plan should include:

(i) identification of which components will be evaluated and why;

(ii) the timing of the evaluations, including dates of important milestones, such as the end of an evaluation assessment;

(iii) a preliminary identification of the people and dollar resources that will be committed to the evaluation;

(iv) an indication of who will be carrying out the work; and

(v) up-to-date profiles for those components in the annual plan.
 

      Annual program evaluation plans should be approved by the deputy head. Approval is both authority to proceed with the stated evaluations and general commitment of funds and person-years. The annual program evaluation plan represents the department's short-term operational plan in the area of evaluation and, as such, forms part of its Budget-Year Operational Plan (see Section 2.1.2).

      It may be noted that the annual program evaluation plan would not, in general, be the same as the program evaluation unit's annual work plan. The latter could include, in addition to the work outlined in the evaluation plan, such activities as preparing program evaluation guides and the plans themselves, responding to demands placed on the deputy head for evaluation information that could not be incorporated into the annual plan, and developing evaluation frameworks for new programs. Such organizational work plans are not considered part of the department's annual program evaluation plan.

49


4.5 UPDATING DEPARTMENTAL PROGRAM EVALUATION PLANS

      Departmental program evaluation plans will have to be revised as required and at least annually to reflect both the passage of time and changes in content as a result of changing circumstances. Updated long-term plans are required by March 31 and updated annual plans by October 31 (Section 2.1.2).

4.5.1 Revisions Due to a New Year

      Even assuming there is no need to make changes in the departmental program evaluation plan because of changing circumstances - an unlikely event - it will still be necessary each year to develop new annual plans and to add one more year to the long-term plan. This is the normal updating of plans required over time and for the annual determination of next year's operational plans. This aspect of updating the departmental evaluation plans would be done without consultation with central agencies.

4.5.2 Revisions Due to Changing Circumstances

      In addition to revisions to departmental evaluation plans required by the passage of time, in most cases such plans will have to be revised due to changing circumstances. Such factors as:

- new or changed departmental priorities
- new or changed governmental priorities
- delays in carrying out evaluation studies
- new experience gained in evaluation, and
- new departmental structures and programs

all would mean that the existing evaluation plans were outdated. Indeed one of the main responsibilities of the departmental manager of program evaluation is to remain abreast of such changing circumstances.

      Each long-term and annual plan might be in need of revision, and both the set of program components and their prioritizing could be altered. In addition, departments or cabinet committees may wish that certain evaluation studies be carried out that combine or cut across program components and/or departments. Up-to-date evaluation plans should reflect the current priorities in program evaluation in each department or agency.

50


Notes to Chapter 4

1. Recently, the concept of a planning element has been introduced to refer to sub-groupings of departmental Estimates Programs which are to be used for budgeting purposes in the Policy and Expenditure Management System, specifically in the Operational Plans. Planning elements are, like program components, results-oriented groupings of activities and hence should, as they are developed over time, be compatible with program components. That is, one should be either identical with or an aggregate of the other.
 

51


CHAPTER 5

PERSONNEL FOR CARRYING OUT EVALUATION WORK

5.0 GUIDELINES

5.1 The personnel carrying out evaluations of programs should have an appropriate mix of analytic, methodological and project management skills, techniques and experience. Each individual evaluator should have basic analytical skills and some knowledge of the program area, be able to take an independent view of the program, be able to establish credibility with senior management as well as line managers and be acceptable to the departmental manager of program evaluation.

5.2 Consultants should be used in evaluation work as appropriate but should be closely managed. When consultants are used, terms of reference for studies and resulting recommendations are still the responsibility of the departmental manager of program evaluation.

5.1 PROFESSIONAL SKILLS FOR EVALUATION

      Different departments organize differently for evaluation (see Chapter 3) and, as a result, make use of different groups of people:

- people who work full time in the evaluation function;
- people seconded from programs or elsewhere; and
- outside consultants.

      In each case, personnel working in evaluation, while bringing to the task differing experiences and differing perspectives on evaluation, should share certain common characteristics. In the case of seconded personnel, this may imply the need for an orientation course on program evaluation. In any event, the personnel used should be acceptable to the departmental manager of program evaluation.

      Evaluation involves a systematic gathering of demonstrable evidence on the performance and results of a program. Difficult issues typically arise on how best to collect such evidence so that the findings - the evidence and conclusions drawn from it - are objective and credible. Sometimes this requires knowledge of special techniques. A general requirement will be the ability to appreciate the technical problems involved, to approach the task from an independent point of view, to have some knowledge and understanding of the subject-matter under study, to be able to work closely with program personnel, and to manage the study as necessary. These skills, outlined below, imply the need for many of the evaluators in a unit to have attained the senior officer level.

5.1.1 The Unique Nature of Each Evaluation Study

      The conduct of evaluation studies does not involve routine repetitive tasks. There are no detailed step-by-step procedures to be carried out in each case, but only general principles to be considered. The need for and emphasis on evaluation assessment implies that each study must be carefully designed. In evaluation studies, the issues addressed and the approaches used may be quite different from study to study. The implication for evaluators is that they must be able to work without well established systems, procedures and standards having been previously developed. They must be creative in deciding how to carry out the study, must be able to pin-point the main evaluation issues and they must be able to quickly develop a credible understanding of what the program is supposed to be doing.

      Each program represents some process for achieving certain ends. Each is unique. The evaluator must be able to think conceptually about the program and to develop - often from incomplete or conflicting data - one or more conceptual models of the process underlying the program.

5.1.2 Independence

      The independence of the evaluators is essential to the production of objective and credible evaluation work. But independence is not simply achieved by organizational separation, although this is usually a prerequisite.

Independence also requires evaluators to be able to stand back from the everyday concerns of a program's operation and to look at what is going on in a detached, but not uninformed, way. The evaluator must be able to identify, articulate, and question program assumptions at several levels. The evaluator should not let personal biases influence his or her view, and yet should be aware of the environment and constraints under which the program operates. While the evaluator may be convinced of the validity of certain evidence or conclusions, he or she must remember that the evaluator's task is to systematically (i.e. not selectively) collect evidence to inform others.

54


The evaluator must be able to separate argument from evidence. Conclusions must be based on evidence which others will have to accept, including those who may not like the findings produced. By being constantly conscious of the need for independence, evaluators will enhance their own credibility and the credibility of the findings.

      In cases where evaluation teams are composed in part of seconded personnel, it may be unrealistic to expect each individual member to take a completely independent view. In such cases, independence must be assured through the judicious selection of other members of the team.

5.1.3 Subject-Matter Knowledge

      Evaluators will unquestionably produce better evaluations when they are familiar with the subject-matter under investigation. Knowledge and experience in the area will allow the evaluator to more quickly determine key aspects of the program and its results. Equally important, such knowledge is almost a prerequisite for establishing credibility with line management. The depth of subject-matter knowledge needed will vary from area to area and will be greater when the program being evaluated is more technical or professional in nature.

5.1.4 Analytical Skills

      Most evaluation work involves analytical tasks and fairly sophisticated analytical approaches are not uncommon. Evaluators typically bring to their job a mixed collection of skills and experience. This is in keeping with the variety of possible considerations and tasks that may be undertaken in any given study. What is essential is that evaluators have an appreciation of the potential use of a variety of analytical approaches, what special techniques can do, when they may be used, where to get the skills, and what the limitations are. What is needed is the capability for analytic thinking rather than, necessarily, an in-depth knowledge of any single given analytical technique. An understanding of and experience with, for example, experimental designs, sampling methods, or economic impact models is not essential for any one evaluator to have, although a program evaluation unit as a group should possess or have ready access to such skills and experience.

Evaluators should know when analytical techniques and/or experts are required. They should know, for example, that designing good questionnaires is a specialized task and that an effective questionnaire often requires expert assistance. They should know that sampling for data and for surveys is a sophisticated technique and cannot be adequately done without some knowledge of sampling theory. They should know that statistical analysis, cost-benefit analysis, etc. all are based on certain assumptions and conditions which, in practice, are often hard to meet, and hence lessen the credibility of the resulting findings. Evaluators must have had sufficient exposure to a variety of analytic approaches - without necessarily acquiring an in-depth knowledge - to have a substantial appreciation of the usefulness and limitations of these techniques.

5.1.5 Interpersonal Skills

      Evaluators do not work in isolation. A high level of interpersonal skills is essential. Typically, there is a high level of contact with, in particular, senior management. The ability to deal with others in a cooperative, sensitive way is often the key to acquiring the assistance of program personnel. Without their assistance, little credible information can be obtained.

Evaluators may be viewed as a threat. Therefore, they must be able to satisfactorily explain the evaluator's role, be able to demonstrate a willingness to listen and incorporate the views of program personnel, and be able to develop personal credibility with senior management and program personnel by displaying a knowledge and understanding of the program and its environment. The evaluator should be able to take a management perspective on a program. Without appropriate interpersonal skills, the evaluator is unlikely to be effective.

5.1.6 Project Management Skills

      The role of the evaluator within a departmental program evaluation unit will often be one of managing a program evaluation study team consisting of consultants, seconded staff, departmental program evaluation staff or a mix of all three. This requires that the evaluator concerned have the appropriate management skills to ensure that the study is delivered within budget, on schedule and to appropriate quality standards. He or she must therefore be familiar with project management principles and be aware of the procedures for contracting for services if consultants are to be used. He or she must be able to direct staff and must also understand the decision-making process in the department, to ensure that the appropriate information and support are forthcoming as part of the study process.

5.2 THE USE OF CONSULTANTS

      Many departments and agencies can effectively supplement their evaluation resources through the use of outside consultants. Consultants may be used for a number of reasons:

56


- when specialized expertise is needed;

- when adequate departmental personnel are not available; and

- in situations where a third party is essential, for example when any evaluation carried out by departmental or agency personnel, no matter how good, will not be seen as objective.

      Good consultants can bring required expertise and experience to an evaluation assessment or evaluation study, but they should be used for well-defined tasks and in well-controlled situations. Frequent progress reports and close management are required to ensure that problems are identified early and that the work being carried out is on track and in keeping with what the department or agency had in mind. Because they are not part of the evaluating organization and are not subject to the daily monitoring of departmental resources, consultants need to work within well-developed work plans and terms of reference.

      Finally, it must be pointed out that consultants cannot assume the responsibilities assigned to departmental personnel (see Chapter 3). For example, departmental personnel should be heavily involved in the evaluation assessment phase since this is when the critical decisions are made on what exactly will be evaluated and how. When consultants are being used, terms of reference for their work constitute a legal contract and should, of course, be prepared by departmental personnel. Consultants may very well be asked to prepare recommendations, but the recommendations that go to the deputy head must be those of the departmental evaluation staff.

      If these precautions are followed, consultants can be an effective additional means of bringing expertise, experience and credibility to the evaluation process.

57


CHAPTER 6

CONDUCTING PROGRAM EVALUATIONS

6.0 GUIDELINES

6.1 The conduct of program evaluations should comprise three distinct phases: the evaluation assessment (pre-evaluation planning); the evaluation study itself (data collection, analysis and reporting); and the decision-making based on findings and recommendations of the study.

6.2 An adequate evaluation assessment, with appropriate deputy head involvement, which seriously considers all the basic evaluation issues, should be undertaken prior to any evaluation study, in order to determine the appropriate focus and approach to be taken. This should result in an updated component profile, an evaluation assessment report presenting costed evaluation options and specific terms of reference (Table 6.1) agreed to by the deputy head, including a detailed work plan, for the subsequent evaluation study. When an evaluation assessment does not result in an evaluation study, the reasons for such an outcome should be documented in the assessment report and the departmental evaluation plans appropriately amended.

6.3 Any evaluation study undertaken should produce objective and credible findings, i.e. evidence and conclusions, on each evaluation issue specified in the terms of reference of the study.

6.4 A final report should be prepared for each evaluation assessment study and evaluation study undertaken. Reports should be credible and useful and keep separate the evidence, conclusions and recommendations. Reports should be reviewed by all concerned parties.

6.5 Once decisions are taken as a result of program evaluation findings and recommendations, departments and agencies should ensure that appropriate follow-up action is taken and reported on to the deputy head.

59


6.1 INTRODUCTION AND OVERVIEW

      Central to the program evaluation function is the planning and conduct of program evaluations and the subsequent taking of decisions based on the findings and recommendations. Efforts in establishing evaluation policies, organizations and processes will come to naught if credible, timely, useful and objective evaluations are not produced and used. In this chapter, general guidelines on the program evaluation process are presented. More detailed principles and suggestions on carrying out evaluations are presented in the companion OCG document, Principles for the Evaluation of Programs by Federal Departments and Agencies.

The evaluation process is viewed as comprising three phases:

- pre-evaluation planning (evaluation assessment);

- conducting and reporting on the evaluation study; and

- decision-making based on findings and recommendations.

That is, the evaluation of any program involves some planning for the work including designing the evaluation, the actual study (data collection, analysis, formulation of findings and recommendations), and the taking of decisions as a result of the study.

      Figure 6.1 illustrates a representative evaluation process. The figure indicates that in the area of implementing decisions, the program evaluation function merges with the regular program management function (shown by dotted lines). At this point other management information is brought to bear on decisions taken as a result of the evaluation study. The major decision points in the process, where the deputy head will normally play a key role, are:

- in deciding on an appropriate focus for the evaluation study through the selection of an evaluation option;

- in agreeing to detailed terms of reference for the evaluation study; and

- in making decisions as to the action to be taken on the findings and recommendations.

      The responsibilities of the deputy head in this area were discussed in Section 3.2.1.

      Each phase in this evaluation process will be discussed in this chapter.

60


Figure 6.1 - The Program Evaluation Process: A Representative Model


6.2 EVALUATION ASSESSMENT

      It is essential that adequate planning for evaluation studies take place. Evaluation studies can be costly in terms of both financial and human resources, and experience has shown that many past studies have turned out to be neither used nor useful. At least part of the reason for the non-use of evaluation studies has been a lack of adequate consideration, before commencing the study, of what is needed, what can be done and what shall be done. This Guide aims at avoiding such pitfalls by strongly encouraging an adequate planning phase - termed evaluation assessment - for all evaluation studies.

      Evaluation assessment is a critical part of the program evaluation process. It provides the client of the study - the deputy head - with a way to ensure an appropriate focus for the ensuing evaluation study and is a means for indicating to the client and other interested parties the kind of information which will be produced. As such, evaluation assessment should provide a control on the spending of resources for evaluation studies which do not answer the relevant questions or which are otherwise found, after completion, to be of little or no use to the study's client.

      Figure 6.1 shows the main elements of the evaluation assessment phase. The physical outputs are typically terms of reference for the assessment, an evaluation assessment report, an updated component profile and terms of reference for the evaluation study itself.

      The evaluation assessment process may be more or less formal, should involve program personnel and will usually be iterative in nature, as issues are selected from among the many possible issues that could be addressed, and subjected to further questioning, consideration and costing. Terms of reference for the assessment may not always be needed and an evaluation assessment report may be limited to a short memorandum in straightforward cases. It is expected, however, that any substantial evaluation study will be preceded by a thorough evaluation assessment and an appropriate report.

6.2.1 The Evaluation Assessment Study

      An evaluation assessment study involves an identification of the program-specific evaluation issues to be considered in the assessment and an analysis of whether and to what extent these evaluation issues can, and perhaps should, be examined in the subsequent evaluation study. The assessment study will typically consider evaluation options for carrying out the evaluation study. It is essential that program personnel be involved in these considerations in order that the

62


realities of the program environment are appreciated and that their concerns are incorporated into the evaluation design. In particular, of course, the deputy head should play a significant role.

      The evaluation assessment study should be comprehensive, relevant, credible and cost-justified. 1 It should:

(i) Develop an understanding of the rationale and structure of the program and the environment in which it is operating.

(ii) Identify the expected use of the evaluation study (e.g. for program improvement, for input to policy development, for accountability, etc.).

(iii) Review previous evaluation work carried out on the program as well as other relevant published material.

(iv) Determine the program-specific evaluation issues which could be examined in the evaluation study. (Table 1.1 lists the basic generic evaluation issues that should be considered.)

(v) Explain the reasons for excluding any of the basic evaluation issues from investigation in the evaluation study.

(vi) Determine, analyse and cost the evaluation options - the different sets of issues and different evaluation methods and procedures, including data collection - available for carrying out an evaluation study based on the identified issues and, if known, the decisions that have to be made as a result of the study.

(vii) Recommend an appropriate evaluation approach.

      The resulting evaluation assessment report may or may not contain explicit terms of reference for the evaluation study. This will depend, assuming a study will be undertaken, on the extent to which the deputy head (the client) has had input to the evaluation assessment process. If, during the assessment, the deputy head has decided on the particular focus of the study, then recommended terms of reference could be included. On the other hand, the report could present the most appropriate options for the evaluation study, with terms of reference developed in a subsequent phase after a choice among the options is made. When appropriate, summaries of findings from evaluation assessments should be submitted as part of the department's or agency's Strategic Overview (see Section 2.1.2).

63


6.2.2 The Evaluation Study Terms of Reference

      Terms of reference are essential for all evaluation studies. They provide a formal record of agreement between the client and the evaluators as to what will be done. When the terms of reference are agreed to by the deputy head, they represent a senior management commitment to the study, and authorize the execution of the study and expenditure of resources thereon. They also outline the obligations of all participants in the study - line managers plus evaluation personnel. Appropriate terms of reference are particularly important when outside consultants are to be used.

      Coming at the end of an evaluation assessment phase, the terms of reference give specific detail as to what is expected in the study. Being specific in the terms of reference will help to reduce the likelihood of misunderstanding during the study and will provide a useful reference on which to base the final report. On the other hand, terms of reference should not be followed blindly. Changing circumstances and enhanced knowledge of the program, obtained during the study, may necessitate alternative approaches. Terms of reference should serve as clear guidelines on what is expected of the evaluation study.

      An outline of typical terms of reference is shown in Table 6.1. As the expression is used in this guide, terms of reference are considered to include the detailed study work plan (item 2 in Table 6.1). Where this is not the case, a separate work plan should still be prepared before the evaluation study commences. 2

6.2.3 Other Outcomes from Evaluation Assessment

      While the typical outcome of an evaluation assessment is agreement by the deputy head on terms of reference for the evaluation study, other outcomes are possible. It may, for example, be decided that an evaluation is not appropriate at that time or that certain important evaluation issues are best addressed in another context.

      An evaluation study may not be appropriate at a point in time for a number of reasons, for example:

- the program may be too new for any significant results to have occurred;

- a recent externally conducted study has answered most evaluation questions;

64


Table 6.1

CONTENTS OF TERMS OF REFERENCE

FOR A

PROGRAM EVALUATION STUDY

1. A statement and discussion of the specific issues to be addressed in the study.

2. A detailed work plan* of the study design indicating:

   - how each issue is to be addressed, that is, an explanation of the evaluation approach and design to be used and the tasks to be done;

   - what are the criteria for measuring the attainment of the program's objectives and its impacts or effects;

   - what information is to be collected and how;

   - who is going to do each task and the responsibilities of all participants;

   - when each and all tasks will be completed (a timetable);

   - what reports will be produced and at what frequency; and

   - who are the recipients of such reports.

3. A description of the organizational and reporting relationships for the study.

4. A clear statement of the authority (if needed) to do the study.

5. A specification of the resources and other costs to be committed in the study.

6. An outline of the procedures for amending the study work plan.

*In certain cases the work plan may be a separate document.

65


- the program may be found to be undergoing significant review or restructuring;

- the priorities for evaluating have changed; or

- the available data may be so inadequate that an evaluation framework, followed by appropriate data collection, may be required before an evaluation that answers the client's needs can be adequately carried out.

Where data are found to be a problem for certain aspects of the evaluation, an evaluation study may, of course, still proceed and include, as one of its tasks, an identification of what evaluation data, if any, it would be reasonable to collect in the future on an ongoing basis as part of the program's performance measurement system (see Section 2.4). In any event, whenever an evaluation is rescheduled to a later date, this may be an opportune time to develop an appropriate evaluation framework for the future evaluation, since the component profile will have just been completed or updated.

      An evaluation or part thereof may be better carried out in another context when the evaluation assessment finds that, for example,

- certain basic evaluation issues have been identified which cut across several program evaluation components or several departments, suggesting the need for an integrated evaluation extending beyond the particular component under consideration; or

- important evaluation issues are found to be better addressed through a different component structure, suggesting a need for a rethinking of the components being used for evaluation.

      These possibilities are the exceptions to the usual outcome of an evaluation assessment, namely the evaluation study terms of reference followed by the evaluation study. As such, they should be documented when they occur, and the evaluation plans appropriately updated.

6.3 THE PROGRAM EVALUATION STUDY

      The program evaluation study itself will typically involve the greatest proportion of resources and time in the evaluation process. This is where the data are collected, the analysis is carried out, and the conclusions and recommendations are formulated. If an adequate evaluation assessment phase has been carried out, then the steps involved in the evaluation study should be reasonably clear.

66


A wide range of approaches and techniques is available for carrying out an evaluation study, from a controlled experimental design to an analysis of reported past results of the program, from a random sample to case studies. No one approach is best in all cases and each has its advantages and disadvantages. Each produces a different kind of product. All are aimed not only at measuring the impacts and effects which have taken place, but also at gathering reliable and credible evidence that the impacts and effects took place because of the program and not because of some other set of conditions or factors. This question of incrementality - determining what would have taken place without the program - is the most difficult and problematic methodological aspect of the evaluation.
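      To make the incrementality question concrete, the following small calculation, in which every figure is hypothetical and purely illustrative, nets out the change that a comparison group experienced anyway, so that only the remaining change is attributed to the program.

# Hypothetical illustration of incrementality (all numbers invented).
# Employment rates, in per cent, before and after a training program.
participants_before, participants_after = 40.0, 55.0
comparison_before, comparison_after = 42.0, 48.0

raw_change = participants_after - participants_before        # 15 points
background_change = comparison_after - comparison_before     # 6 points

# The incremental (net) effect credits the program only with the change
# beyond what a similar, non-participating group experienced anyway.
incremental_effect = raw_change - background_change          # 9 points

print(f"Gross change for participants: {raw_change:.1f} points")
print(f"Change for comparison group:   {background_change:.1f} points")
print(f"Estimated incremental effect:  {incremental_effect:.1f} points")

Such a comparison is, of course, only as credible as the assumption that the comparison group would have behaved like the participants in the absence of the program, which is precisely why incrementality is so difficult to establish.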

      At the beginning of this Guide (Section 1.1.2), a contrast was made between the more classical approach to evaluation, which views evaluation mainly as a scientific research activity, and the approach being encouraged in the federal government, which views evaluation as an aid to decision-making and management in government. It was suggested that evaluation should not be viewed as a scientific exercise aimed at trying to produce definitive conclusions, but rather as one aimed at producing objective though not necessarily conclusive evidence. This viewpoint has significant implications as to the choice of an appropriate methodology for an evaluation. It often means, for example, that evaluators must aggregate inferences obtained in a variety of ways, as opposed to seeking a definitive answer to a particular question through a single more rigorous method. It means that program evaluation typically seeks to determine reliable relationships between a program's activities and its results rather than definitive explanations of why the program caused certain results.

Program evaluation should, of course, involve the systematic gathering of demonstrable information and evidence on a program and its results. It must be objective. Objectivity means that the evidence and conclusions must be capable of being verified by persons other than the original authors. This further implies that evidence contrary to expectations should not be suppressed. In other words, the evaluation information and data should be collected, analysed and presented in a manner such that if others conducted the evaluation and used the same basic assumptions, they would reach similar findings.

      In certain cases, systematic information gathering and objectivity may very well imply the need for classical evaluation research based on the experimental design model. But objective evidence can be gathered through other methods as well. 3 The relevant criteria for selecting among evaluation methods should be that credible, objective and timely

67


information is produced which is appropriate for decision-making and management. What may be lost in scientific authoritativeness can be gained in relevance, timeliness, and acceptance of findings by decision-makers, without the loss of objectivity. That is, while objectivity and reliability - the "rules of evidence" - should not be compromised, there is usually a very real and often difficult trade-off to be made between the timeliness, available resources and relevance of the evidence gathered, and the conclusiveness of the evidence - the degree of certainty with which definitive conclusions can be drawn without the need for substantial judgement. More frequently, what might be called conditional conclusions are made, namely conclusions that are not absolute in nature but are conditional on certain assumptions, points of view or conditions. This is in keeping with the view of evaluation as producing relevant information for decision-making rather than as replacing decision-making by producing definitive conclusions.

      Evaluation studies should gather objective and credible information and evidence on each of the evaluation issues specified in the terms of reference of the study and should, as a minimum, produce conclusions on each of these issues. The conclusions of the study should be the answers to the questions posed in the terms of reference.

      A more detailed discussion of the quality of evaluation studies is given in the OCG companion document, Principles for the Evaluation of Programs by Federal Departments and Agencies.

6.4 EVALUATION REPORTING

For both the evaluation assessment and the evaluation phases of a program evaluation, a final report is essential. This records what was done and what was found and should be both credible and useful. 4 A credible report is one which presents the findings in a balanced and complete manner, identifying the assumptions underlying the study and outlining the constraints under which the study was undertaken. A useful report is one which produces relevant, significant and timely findings on the issues addressed, in a clear and understandable manner.

      It is important in any evaluation report to distinguish between evidence, conclusions and recommendations as well as between factual conclusions, judgement and opinions. These distinctions are especially important to keep in mind in light of the general approach being adopted for evaluation (see Section 6.3). It is recognized at the outset that in many cases the evaluation will not produce complete definitive evidence on a program, so that findings often require a certain amount of professional judgement on the part of the evaluator and will often be

68


conditional on other considerations or factors. Further, recommendations, while being based on these findings, will usually be arrived at using the experience of the evaluator and others in the area and will take into account other information on the program. Evaluation reports should justify their selection of information and evidence and its relationship to the report's conclusions and make their assumptions explicit. For these reasons it is good practice in any evaluation assessment or evaluation study report to keep separate the information and evidence, the conclusions and the recommendations.

      The distribution of draft evaluation reports and recommendations should normally include the departmental manager of program evaluation and others, in particular line managers, who have been associated with the study. In this way, such parties can be informed on what the study found. All interested parties should be able to comment on drafts of the report and, where differences of opinion remain, append their comments to the final report and/or the recommendations. The deputy head should approve the distribution and release of final evaluation reports.

6.5 TAKING DECISIONS BASED ON PROGRAM EVALUATIONS

      Once the evaluation study report and recommendations are submitted for senior management consideration, the final phase of evaluation begins, namely the taking of decisions based on the study. As shown in Figure 6.1, such decisions incorporate whatever other information is available on the program being considered, in addition to the findings and recommendations of the evaluation study.

      It is imperative that adequate procedures be in place to ensure that appropriate follow-up actions are taken for any decisions reached by the deputy head at this stage. As indicated in Section 3.2.3, responsibility for keeping track of such action should be clearly spelled out, either in the department's program evaluation policy or as a corollary to the decisions taken. In cases where specific changes to a program are to be made, a plan should be developed to indicate just how and by whom the changes will be made. There should be an implementation report prepared which indicates just what, in fact, happened as a result of the decisions taken by the deputy head. The responsibility for such a report should be clearly assigned by the deputy head.

69


      The management of the program evaluation function should keep informed on any actions taken which are based in whole or in large part on program evaluation studies. Furthermore, summaries of the findings of program evaluations and the changes proposed as a result of these findings should form part of the department's Strategic Overview (see Section 2.1.2).

Notes to Chapter 6

1. The comprehensiveness, relevance, credibility and cost-justification of evaluation assessment studies are discussed in the OCG document, Principles for the Evaluation of Programs by Federal Departments and Agencies, Chapter 2 (The Evaluation Assessment Study).

2. Appropriate terms of reference are discussed in the OCG Principles document, Chapter 4 (Terms of Reference).

3. For one discussion of choosing among evaluation methods see C.S. Reichardt and T.D. Cook, "Beyond Qualitative versus Quantitative Methods", in T.D. Cook and C.S. Reichardt (eds.), Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills: Sage, 1979, pp. 7-32.

4. Credible and useful evaluation reports are discussed in the OCG Principles document, Chapter 5 (Evaluation Reporting).
 

70


CHAPTER 7

EVALUATION REQUIREMENTS

FOR NEW PROGRAMS

7.0 GUIDELINES

7.1 The deputy heads of departments and agencies, through the departmental program evaluation policy, should ensure that all new or renewed programs have, as part of their design, a program profile and an evaluation framework. A preliminary component profile should be available when the program concept is submitted for approval, and an appropriate evaluation framework and completed profile should be developed while the new or renewed program is being designed and implemented.

7.2 Evaluation profiles and frameworks should be developed in close consultation with those involved in planning, designing and implementing the new program and should have the flexibility to cover a range of issues and indicators which could become important in the subsequent evaluation. Each evaluation profile and framework should be approved as part of the normal program design approval process. Upon approval of the new or renewed program, the department evaluation plan should be amended appropriately, if required.

7.1 REQUIREMENTS FOR EVALUATING NEW PROGRAMS

      The Treasury Board policy on the evaluation of programs calls for the identification of future evaluation requirements when designing and implementing new programs. This is a responsibility of the deputy head and the procedures for doing so should be elaborated upon in the departmental program evaluation policy. An additional impetus for the identification of evaluation requirements comes from the Policy and Expenditure Management System. As part of its review of the departmental Multi-Year Operational Plan, Treasury Board Canada will be examining program designs to ensure that the means of delivery are consistent with objectives and facilitate evaluation.

      In order to be able to adequately evaluate a new program at some time in the future, appropriate evaluation requirements should be developed as part of the basic program design and should contain two parts: a profile and an evaluation framework. This will ensure that

71


the purposes of the program are clear and that, when the evaluation is in fact carried out, the results of the program can be determined. An appropriate evaluation profile and framework will greatly reduce the work required during the future evaluation assessment, since much of the analysis required will already be done. By ensuring that relevant information will be available when the evaluation study is carried out, the evaluation framework and its implementation will improve the quality of the findings. In addition, the costs of both the evaluation assessment and evaluation study will be reduced.

      While the development of evaluation requirements will typically be associated with new programs, other cases may arise when such a requirement is needed for an existing program, such as when an evaluation is postponed to a future date or the program receives a renewed mandate or is otherwise redirected. For the purposes of this guide all such programs will be referred to as "new".

7.1.1 New Program Evaluation Components

      A new program will necessitate revisions to the existing program evaluation component structure. In the simplest case the new program will be a new component. In other cases, the new program may be composed of several new components or be composed of parts of several existing components. In all cases, the program evaluation component structure should be reviewed and amended to accommodate the new program before or during the development of the evaluation requirements.

7.1.2 A Component Profile

      A profile of the component (or components) - discussed in Section 4.2.2 - will provide an overview of the new program. Once the component or components have been identified, a preliminary profile should be prepared to include information on the background, objectives and basic rationale of the program. Particular attention should be paid to a description of both the environment into which the program is being introduced and the reasons why the program is being introduced at this time. This description will be useful in the future for an understanding of the original rationale of the program. The information in the profile which is essential for an understanding of the new or renewed program should be available when the program concept is submitted for approval. Once the program concept has been approved, the profile can be completed as the program design is firmed up.

72


7.1.3 An Evaluation Framework

      An evaluation framework is the basis on which a future evaluation is to be built. It outlines what the evaluation is likely to entail and, more critically, describes the information and data that are to be collected prior to and during the evaluation. An evaluation framework should include:

(i) a statement and discussion of the evaluation issues that are likely to be addressed in the subsequent evaluation;

(ii) a list of tentative evaluation indicators which will be used to describe the results of the program and how well the program has performed;

(iii) a description of the information and data requirements needed to investigate and analyze the issues and to measure the indicators, including an identification of which, if any, evaluation data it would be reasonable to collect on an ongoing basis;

(iv) a description of any program design features needed to collect the information and data requirements; and

(v) a tentative plan for evaluating the components, including an estimate of the timing and general resource requirements of the subsequent evaluation.

      The evaluation framework should be developed while the program is being designed and implemented. In the case of a pilot program where the purpose is to evaluate a new program concept on a small scale, the evaluation framework should be available when the program is submitted for approval.
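      As a purely illustrative, modern sketch of how the five elements listed above might be recorded when a framework is drafted, the structure below is offered as an assumption-laden example; every field name and value is hypothetical and carries no standing under this guide.

# Hypothetical structure mirroring items (i) to (v) above; all values invented.
evaluation_framework = {
    "evaluation_issues": [            # (i) issues likely to be addressed
        "continued relevance of the program",
        "achievement of stated objectives",
    ],
    "tentative_indicators": [         # (ii) indicators of results and performance
        "number of clients served per year",
        "change in client outcomes after participation",
    ],
    "data_requirements": {            # (iii) information and data needed
        "ongoing_collection": ["client intake records", "service costs"],
        "collected_at_evaluation": ["client follow-up survey"],
    },
    "program_design_features": [      # (iv) design features needed to collect data
        "unique client identifier assigned at intake",
    ],
    "tentative_evaluation_plan": {    # (v) timing and resource estimate
        "planned_timing": "three years after implementation",
        "estimated_resources": "two person-years",
    },
}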

7.2 DEVELOPING EVALUATION REQUIREMENTS

      The development of evaluation requirements is shown schematically in Figure 7.1. Much of this work should normally be carried out by the departmental program evaluation unit. The profile and framework would be approved as part of the program design approval process. As in the case of the conduct of program evaluations, the concerns and interests of cabinet committees should be considered. Two aspects of the development of the profile and framework are critical: involvement of those participating in the program design and implementation, and flexibility of the evaluation design.

73


Figure 7.1 - Developing Evaluation Requirements: A Representative Model


      It is essential that there be close coordination and cooperation between those actually developing the profile and framework - normally the departmental program evaluation unit - and the group designing and planning the implementation of the new program. An appropriate profile and framework cannot be developed without a thorough knowledge of what the program is supposed to do and how it is to be implemented. Conversely, the evaluation framework requirements may place demands on the program design. For example, certain data may be needed for the evaluation which are not strictly required to deliver the program. This would have program design and resource implications. In developing the framework, evaluation information and data which are reasonable to collect on an ongoing basis should be identified. It would be expected that any extra resources consumed in program management as a result of evaluation requirements would be more than recovered at the time of the future evaluation. Indeed it is expected that an appropriate evaluation profile and framework will itself improve the management and control of the new program. Clearly, cooperation between the evaluation and planning units is required in order that an appropriate profile and framework be developed.

      A second important feature of the evaluation framework is the requirement for flexibility. The framework must be able to accommodate inevitable changes both in the program itself as the program matures, and in the perceived importance of evaluation issues. If the framework is limited so that only certain issues can be effectively addressed, then there is the possibility that, when the evaluation is carried out, the framework will be found inadequate because priorities among the important evaluation issues have changed. Flexibility will usually require that several indicators of program achievement, rather than one, be developed.

      As outlined in Table 3.2, the program evaluation unit has a role to play in establishing evaluation frameworks. Preferably this would involve the actual development of the frameworks in cooperation with those designing the program. Where this is not possible, any framework developed by others should be reviewed by the program evaluation unit and comments prepared for the deputy head.

      Once the program design is approved by the department or agency, the departmental evaluation plan should be updated to reflect the new program and its future evaluation requirement. The profile developed as part of the evaluation requirement will become the profile in the plan and the evaluation can then be fitted into the evaluation plans.

75


CHAPTER 8

      THE ROLE OF THE OFFICE OF THE COMPTROLLER GENERAL

      The Office of the Comptroller General of Canada (OCG) has been established by the Government with broad authority and responsibility for administrative practices and control in the areas of financial and operational management and procedures for program evaluation. This chapter outlines the responsibilities, the role and the expectations of the OCG in the area of program evaluation.

8.1 OCG RESPONSIBILITIES FOR PROGRAM EVALUATION

      The Office of the Comptroller General was constituted in April 1978 by the appointment of the first Comptroller General of Canada and was confirmed in legislation by the granting of royal assent to an amendment to the Financial Administration Act (Chapter 33) on June 30, 1978. At that time the President of the Treasury Board transferred several responsibilities to the new Office from the Treasury Board Secretariat. One of these responsibilities was to implement the Treasury Board Policy 1977-47 on the Evaluation of Programs.

      The responsibility for implementing the Treasury Board Policy in the area of program evaluation is being carried out by the Program Evaluation Branch of the OCG. As a result, the Branch has the responsibility to see to the development and ongoing operation of program evaluation in departments and agencies and to provide Ministers with information on the status of the program evaluation function in departments and agencies and on the quality of individual program evaluations. As well, the Office has a responsibility to coordinate evaluation activities, as required, between departments and other central agencies.

In order to exercise these responsibilities the OCG will:

- develop and promulgate policy and guidelines on program evaluation;

- develop and maintain a close working relationship with departments and agencies;

- advise and assist departments and agencies; and

77


- comment upon the program evaluation function in departments and agencies and the resulting evaluation documents and reports.

8.1.1 The OCG Relationship with Departments and Agencies

      The Program Evaluation Branch has been organized to encourage, develop and maintain a close working relationship with departments and agencies. By working closely with departmental and agency program evaluation units, the OCG will be able to maintain an awareness, for each department and agency, of the status of its program evaluation function and of the decisions and actions taken as a result of its program evaluation activities. In this way the Branch is able to provide more effective advice and assistance to individual departments and agencies. In addition, this should obviate the need for conducting formal compliance reviews as mentioned in the Treasury Board Policy (1977-47).

8.1.2 OCG Assistance

      In order to facilitate the development of program evaluation in the federal government, the OCG will provide departments and agencies with both general and specific advice and assistance. At the general level, the OCG will:

- gather and disseminate information on program evaluation;

- provide training sessions for program evaluation personnel in conjunction with the Public Service Commission and in accord with the Senior Training Committee;

- facilitate consultation, when required, among departments and between departments and central agencies on evaluation related matters; and

- mediate conflicting demands placed on the evaluation resources and capabilities of departments by other departments or central agencies.

      More specifically, the OCG will provide departments and agencies with advice and assistance, as needed:

- on developing departmental program evaluation policies and responsibilities in the area of program evaluation by discussing with departments various policy and responsibility options;

78


- in setting up their evaluation organization, including assisting in personnel classification and staffing, and facilitating dealings with central agencies on personnel and organizational matters;

- in preparing departmental program evaluation plans by explaining the concepts of program evaluation components and plans, and discussing with departments and agencies possible component structures;

- on the planning for and conduct of individual program evaluations, including general and technical advice, participating on advisory and steering committees, and reviewing evaluation work; and

- on developing appropriate evaluation frameworks for new programs.

8.1.3 OCG Comments

      In order to further the quality of program evaluation in the federal government and to be able to fulfill its responsibilities, the OCG will comment upon departmental and agency program evaluation policies and program evaluation plans when they are submitted.

      These comments will be based on the compatibility and appropriateness of the program evaluation function and plans with the Treasury Board policy and guidelines, taking into account the particular departmental or agency setting. Program evaluation plans will be reviewed as to the extent to which the department is likely to be able to carry out the plan with the resources available, the appropriateness of the program evaluation component structure, the extent to which the deputy head has been involved in their development, and the extent to which cabinet committee concerns and interests have been considered.

      The Office will also, on a selective basis, call for and comment upon evaluation assessment reports, evaluation study reports and evaluation frameworks. In addition to commenting upon these evaluation products at the request of departments, the Office will comment upon others on the basis of requests by other central agencies, requests by ministers, the size and importance of the evaluation, and representativeness.

79


      The resulting comments will be on the compatibility of the evaluation work with Treasury Board policy, guidelines and, for the evaluation reports, the principles enunciated in the companion OCG document, Principles for the Evaluation of Programs by Federal Departments and Agencies. Attention will focus on the terms of reference for program evaluation studies, and the findings of these studies.

8.2 TREASURY BOARD EXPECTATIONS FOR PROGRAM EVALUATION

      The Treasury Board Policy on the Evaluation of Programs implies certain expectations of departments and agencies in the area of program evaluation. These guidelines and the companion OCG Principles document reflect these expectations which are summarized below:

1. Each department and agency should have an adequate program evaluation function in place which includes

   (a) an appropriate program evaluation policy outlining roles, responsibilities and procedures;

   (b) an appropriate program evaluation organization;

   (c) an adequate level of resources - dollars and person-years - devoted to program evaluation;

   (d) appropriate descriptions of the programs or program components to be evaluated; and

   (e) appropriate, up-to-date long-term and annual plans for program evaluation;

   and which

   (f) carries out program evaluations as indicated in the plans; and

   (g) develops, as appropriate, evaluation frameworks for new or renewed programs.

2. Program evaluations should be carried out in such a manner that

   (a) adequate planning is undertaken to determine the appropriate focus and approach for each study including a consideration of the basic evaluation issues of continued relevance, objectives achievement, impacts and effects, and cost-effectiveness;

80


   (b) specific terms of reference are prepared for each study;

   (c) procedures to carry out the study are appropriate to the information needs of the deputy head (the client) and adequately ensure the objectivity of the results and the credibility of the conclusions;

   (d) appropriate final reports are prepared;

   (e) findings and recommendations of evaluations are adequately considered by the deputy head; and

   (f) as a result of decisions taken on the findings and recommendations of the evaluation, appropriate follow-through actions are taken.

81


GLOSSARY OF TERMS

Activities - the major work tasks and any powers and functions that characterize a given program and which are performed or administered by the program personnel.

Annual Program Evaluation Plan - an operational plan showing the specific evaluation work - timing, resources, work tasks, and personnel - to be carried out during the 12-18 months following the date of the plan.

Departmental Manager of Program Evaluation - the departmental person reporting to the deputy head or to the senior departmental officer for program evaluation, who is primarily responsible for the management of the program evaluation function.

Departmental program component structure - a presentation and description of the set of program components which comprise the department or agency.

Departmental Program Evaluation Plans - the long-term and annual program evaluation plans of a department or agency.

Departmental Program Evaluation Profile - the collection of program evaluation component profiles in a department along with a description of the departmental program evaluation component structure.

Deputy Head - the senior manager in a department or agency.

Evaluation Cycle - the time period within which all programs of a department will be evaluated.

Evaluation Framework - a description of how it is planned to evaluate a program.

Impacts and Effects - the consequences of a program's outputs encompassing the chain of events which occur between the program's outputs and the ultimate effects of the component on society or any part thereof.

Line Manager - a manager with overall responsibility for an operating program or programs, or parts thereof.

83


Long-Term Program Evaluation Plan - the departmental program evaluation profile and the program evaluation schedule.

Mandate - a statement of the legal basis for a program - what it may do and what it must do.

Objective - a normative statement of what impacts and effects the program is specifically designed to accomplish or contribute to.

Outputs - the goods, services, regulations, or provisions in tax law which are produced or directly controlled by program personnel and distributed outside the program organization.

Program (component) Profile - a description of the background of the program (mandate, objective, what the program does, funding and resources), plus a statement of the elements (activities, outputs, impacts and effects) and a description of the program's structure (linkages among elements).

Program Evaluation Advisory Committee - a committee of departmental personnel formed to advise on the conduct of and recommendations from an individual evaluation study.

Program Evaluation Process - the activities carried out, the decisions taken and the outputs produced during the evaluation of a particular program.

Program Evaluation Schedule - a long-term schedule showing when all evaluation components are to be evaluated over the evaluation cycle.

Program manager - a manager with direct responsibility for the management of an individual program.

Results - the collection of outputs, and impacts and effects associated with a program.

Senior Management - the deputy head and assistant deputy heads.

Senior Program Evaluation Committee - a committee chaired by the deputy head for the purposes of maintaining the function, and of reviewing and taking decisions on the findings and recommendations of evaluation studies.

84


PROGRAM EVALUATION TERMS: ENGLISH-FRENCH

ENGLISH FRENCH

activities - activités

departmental program component structure - structure ministérielle des composantes de programme

departmental program evaluation plan - plan ministériel d'évaluation de programme

effects - effets

evaluation assessment (study) - étude préparatoire à l'évaluation

evaluation cycle - cycle d'évaluation

evaluation design - méthode d'évaluation

evaluation framework - cadre d'évaluation

expected impacts and effects - répercussions et effets attendus

impacts - répercussions

mandate - mandat

outputs - extrants

program component's structure - structure de composantes de programme

program evaluation - évaluation de programme

program evaluation annual plan - plan annuel d'évaluation de programme

program evaluation component profile - profil de composantes de programme

program evaluation components - composantes de programme

85


ENGLISH FRENCH

program evaluation function - fonction de l'évaluation de programme

program evaluation process - processus d'évaluation de programme

program evaluation study - étude d'évaluation de programme

program evaluation schedule - cédule d'évaluation de programme

86

