
Evaluation Guidebook for Small Agencies


Archived information

Archived information is provided for reference, research or recordkeeping purposes. It is not subject to the Government of Canada Web Standards and has not been altered or updated since it was archived. Please contact us to request a format other than those available.

 

Appendix A―Types of Small Agencies

Regulatory: Agencies that grant approvals or licences based on criteria set out in legislation or regulation.

Judicial: Courts presided over by federally appointed judges. (Please note that the administration of the Federal Court and the Tax Court was recently amalgamated under the Courts Administration Service, which leaves only the Supreme Court.)

Quasi-judicial Tribunal: Agencies that hear evidence under oath and render decisions based on that evidence alone, in conjunction with the applicable statutes and precedents, but independent of government policy.

Investigative Agency: Agencies that investigate a complaint or inquiry and report or make recommendations on their findings.

Parliamentary Agency (Agents of Parliament): Agencies that report directly to Parliament (i.e., Information Commissioner and Privacy Commissioner, Commissioner of Official Languages, Auditor General and Chief Electoral Officer).

Policy Development and Advisory: Agencies that develop policy and make recommendations to the government on issues such as health, the economy, or the environment.

Other: Small agencies that do not fall under any of the above categories.


Appendix B―Horizontal Initiatives

Horizontal initiatives are efforts involving the co-ordinated activities of several federal departments and/or agencies focussed on specific objectives of national interest. They must have a Results-based Management and Accountability Framework (RMAF) and associated evaluation plans and strategies to which all partners are expected to contribute information and possibly resources.

These types of initiatives may provide opportunities for small agencies to share and/or develop internal evaluation capacity (e.g., through case studies or peer review steps in an evaluation design, through existing internal information, and benefiting from evaluation expertise or consulting expertise provided or paid for by other partners). Horizontal initiatives also provide opportunities for an appropriate grouping of agencies to develop common indicators and share experience in performance measurement and evaluation.

For some general guidance on developing Results-based Management and Accountability Frameworks, please refer to the TBS Web site at http://www.tbs-sct.gc.ca/cee/pubs/guide/sarmaf-ascgrr-fra.as .

Types of horizontal initiatives include:

  • partnerships with other jurisdictions;
  • health promotion;
  • public safety and anti-terrorism;
  • climate change;
  • youth employment;
  • Government-wide initiatives such as
    • infrastructure;
    • Government On-Line (GOL);
    • official languages; and
    • Aboriginal procurement.

Some challenges for evaluating horizontal initiatives include:

  • the need to minimize the number of performance indicators;
  • the difficulty of collecting information across different databases, agencies, and/or departments; and
  • co-ordination.
 

Appendix C―Seeking External Advice and Support

It is important to have adequate advice and support when attempting to build evaluation capacity. Consider the following possible sources of support.

TBS/CEE Support for Small Agencies

According to the TB Evaluation Policy, the Treasury Board of Canada Secretariat must provide central direction for evaluation; use evaluation results where appropriate in decision-making at the centre; and set standards and monitor capacity in the government.

At TBS, the Centre for Excellence in Evaluation (CEE) was established to

  • provide leadership for the evaluation function within the federal government;
  • take initiative on shared challenges within the community, such as devising a human resources framework for long-term recruiting, training and development needs; and
  • provide support for capacity building, improved practices, and a stronger evaluation community within the Public Service of Canada.

The CEE has also established a Small Agency Portfolio Team. The function of this team is to

  • provide feedback to TBS Program Sectors on evaluations and RMAFs submitted as part of TB Submissions;
  • monitor the evaluation function in small agencies;
  • undertake projects to support the small agency community in its evaluation functions and activities; and
  • provide advice and guidance to small agencies on
    • evaluation function and capacity
    • evaluation plans
    • evaluation studies
    • RMAFs
    • performance measurement activities

Other particularly relevant TBS areas include those involved in the change functions associated with Modern Comptrollership, Internal Audit, Results-based Management, Horizontal Reporting, and Expenditure Reporting. For more information, see the following Web site: http://www.tbs-sct.gc.ca/cee/about-apropos-eng.asp.

Canadian Evaluation Society (CES)

In the field of evaluation, the CES promotes leadership, knowledge, advocacy, and professional development. The CES provides access to a community of evaluators, annual conferences, the Essential Skills Series of courses in evaluation, and reserved resources on the CES Web site (http://www.evaluationcanada.ca/). The CES has various provincial chapters as well as a National Capital Chapter.

Small Agency Administrator's Network (SAAN)

SAAN's mission is to provide opportunities for small agencies to share information and practices as well as to discuss issues of common concern and to provide a common voice to Central and Common Service Agencies with respect to small agency issues. For more information, see the following Web site: http://www.cso-cpo.gc.ca/saan-rapo/charter_e.html .


Appendix D―Expenditure Review Committee's 7 Tests

Program spending will be assessed against the following specific tests:

1.  Public Interest Test – Does the program area or activity continue to serve the public interest?

  • What public policy objectives is the initiative designed to achieve?
  • How does it align with current government priorities and the core mandate of the organization?

2.  Role of Government Test – Is there a legitimate and necessary role for government in this program area or activity?

  • Governance: Who else is involved? Is there overlap or duplication?

3.  Federalism Test – Is the current role of the federal government appropriate, or is the program a candidate for realignment with the provinces?

  • What are the initiative's impacts on other levels of government? Could they play a greater role?

4.   Partnership Test – What activities or programs should or could be transferred in whole or in part to the private or voluntary sector?

  • What are the initiative's impacts on the private and/or voluntary sectors and/or other key stakeholders? Could they play a greater role?

5.  Value for Money Test – Are Canadians getting value for their tax dollars?

  • Results: What is the evidence that the initiative is achieving the stated policy objectives?
  • Is the program citizen-centred?

6.  Efficiency Test – If the program or activity continues, how could its efficiency be improved?

  • Efficiency and Effectiveness: Does the program exploit all options for achieving lower delivery costs through intelligent use of technology, public-private partnership, third-party delivery mechanisms, or non-spending instruments?

7.  Affordability Test – Is the resultant package of programs and activities affordable? If not, which programs or activities could be abandoned?

  • Relativity and Performance: How do program delivery costs compare to those in other jurisdictions and the private sector for similar activities?
  • Sustainability and Stewardship: What actions have been taken to manage future spending pressures? What more can be done?

Appendix E―"How to" Information for Planning and Conducting Evaluations

1.0  How do you build a logic model?

Logic model development involves three steps: (1) preparing for logic model development, which includes determining internal capacity and collecting relevant information; (2) building the logic model; and (3) validating the logic model.

Step 1a:  Preparing for Logic Model Development
Determining Internal Capacity

 

Is your agency ready to embark on building a logic model?

Ask yourself the following questions:

  • Is there sufficient time and commitment to develop the logic model internally?
  • Is there familiarity with respect to logic model development?
  • Are there sufficient planning and communication skills, which are key to building consensus and obtaining commitment?
  • Is there sufficient objectivity and neutrality?
  • Does the program involve only my agency?

If you answered "yes" to these questions, you are probably ready to build a logic model.

If you answered "no" to any of the first four questions, you may wish to contract out the development of the logic model.

If you answered "no" to the last question, then the initiative is considered a "horizontal initiative." There are typically more challenges to developing a logic model for a horizontal initiative since you have to involve many stakeholders with different perspectives and opinions.

For further information on RMAFs, see Preparing and Using Results-based Management and Accountability Frameworks, April 2004.

 

Step 1b:  Preparing for Logic Model Development
Collecting Relevant Information

 

What are the key sources of information for developing a logic model?

Review the following documents:

  • relevant legislation, regulations, and policy
  • performance reports, business plans and other strategic documents
  • monitoring, audit, and evaluation reports
  • narrative descriptions or overview documents
  • documents or information from similar projects

Consult the following people:

  • senior management
  • board members
  • program or policy staff
  • stakeholders

Key Questions to Ask

  • What is the rationale for the program?
  • What key results do you expect from this program?  
  • How should this program be undertaken in order to achieve these results?
  • Who are the clients? Who are the other stakeholders?
  • What activities need to be in place to achieve those results? (A relevant question when developing a logic model for a planned initiative.)

 

The extent to which you consult all of the above groups depends on your information needs and resource constraints. However, perspectives from a variety of stakeholders will give you a better understanding of the program.

Step 2:  Building the Logic Model

There are different strategies for building a logic model. Two options include

  • developing a draft model first and presenting it for discussion at a working session;
  • developing the draft logic model during the working session.

The key advantage of the first option is that sometimes the session proceeds more efficiently if the core elements of the logic model are developed beforehand. If there is limited time to conduct a working session, you may want to consider drafting the logic model prior to the working session.

There are also advantages to the second option, in which stakeholders develop the logic model themselves. It helps to build internal capacity with respect to logic model development and group processes. It can also lead to an enhanced understanding of the initiative. Finally, this approach may strengthen stakeholders' commitment to the process.

When making the decision, consider stakeholder and facilitator preferences and timeframe.

Where do you start? Results or Activities?

While there is no right way to build a logic model, some experts suggest starting with the selection of key activities for an existing program or policy, and with the identification of key results for a planned initiative. Remember that you can start wherever you prefer.

With an existing program you might start by asking: What is it we do? Then you can ask why? For example, why are we aiming for enhanced skills? The next result statement should provide the answer to this question (i.e., so that staff will work more efficiently).

When planning a program, you can start developing the logic model from the results. Once you have identified an appropriate result, you can ask "How do we achieve this ultimate result?" The result statement that precedes it in the chain (i.e., an intermediate result) should provide the answer to this question.



Guidelines for constructing logic models

  • No logic model is ever perfect! It should be a reasonably accurate picture of the program.
  • Keep the logic model focussed.
  • Get feedback from a variety of key stakeholders, including program and/or policy staff.
  • The logic model components and linkages have to make sense. Can you spot any leaps in logic?
  • Link final results to the agency's strategic outcomes as specified in its Program Activity Architecture.
  • Ensure that the logic model demonstrates the "if...then", cause-effect relationship, from activities to outputs through to results.
  • Begin activity statements with an action verb.
  • Keep the number of activities to a minimum. Some activities may be merged with another activity.
  • Do not include administrative activities that are not directly involved in delivering your mandate (e.g., HR, IT, Finance, Corporate Services).
  • "If you control it, then it's an activity or an output, if you can only influence it, then it's an outcome."
  • Question activities with no outputs or results.
  • Result statements are modified by a direction of change (e.g., increase, decrease, improve, maintain).
  • Some programs may have more than one result track.
  • Does it build on, or is it situated in relation to, the business plan or strategic objectives of the department or agency?
  • Results have a "who," a "what," and a "when" (e.g., What change? In whom? By when?)
  • Results demonstrate that you are making a difference.
  • Immediate, intermediate, and ultimate results are presented as a sequence of results, but are not necessarily tied to particular timeframes.
  • You can add the connections after the component boxes are completed. (This applies to flow chart model only.)
  • You can use sticky paper to note activities, outputs, and results. This gives you flexibility to move the components around.
  • Remember that, as you move from immediate to final results, there are decreased levels of control with shared accountability and increased difficulty in evaluating attribution (i.e., the degree to which the program produced the results).
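
To make the chain from activities to outputs through to results concrete, here is a minimal, hypothetical sketch (in Python, and not a TBS tool) that records the chain for an imaginary training program and flags activities with no outputs or results:

    # Hypothetical logic model for a small training program.
    logic_model = {
        "activities": {
            "Deliver training workshops": {
                "outputs": ["Workshops held", "Staff trained"],
                "immediate_results": ["Enhanced staff skills"],
            },
            "Maintain internal HR database": {  # administrative activity, flagged below
                "outputs": [],
                "immediate_results": [],
            },
        },
        "intermediate_results": ["Staff work more efficiently"],
        "ultimate_results": ["Improved service to clients"],
    }

    # Question activities with no outputs or results.
    for activity, links in logic_model["activities"].items():
        if not links["outputs"] and not links["immediate_results"]:
            print(f"Review: '{activity}' has no outputs or results linked to it.")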
 

Step 3:  Validation of Logic Model

Consult with working groups and stakeholders. It is often helpful to solicit the feedback of individuals who are familiar with the program but who were not part of the working session to verify that all necessary elements are represented in the model.

Build awareness of the logic model. The working group or individual can create awareness on an informal basis by referring to the logic model in conversations with staff and stakeholders. Management and program teams can use the model as a consistent reference for all aspects of the management cycle: planning, monitoring, and reporting. Increased awareness will lead to feedback and insights on how to improve the model.

The model will never be perfect. Use feedback from consultations, and from using the model as a management reference, to update it. Remember that as the context of the program changes over time, so will the underlying logic.

Key References

Treasury Board of Canada Secretariat. RBM E-Learning Tool.



2.0  How do you develop evaluation questions?

 



Guidelines for Developing Evaluation Questions

Start with the broad issues

  • Consider standard evaluation questions associated with each of the issues in accordance with the TB Evaluation Policy:
    • Relevance – Does the program continue to be consistent with agency and government-wide priorities and does it realistically address an actual need?
    • Success – Is the program effective in meeting its objectives, within budget, and without unwanted results?
    • Cost-effectiveness – Are the most appropriate and efficient means being used to achieve objectives relative to alternative design and delivery approaches?
  • Consider Expenditure Review Questions – See Appendix D.

Tailor the questions to your program

  • Use your logic model as a guide. Review your outputs as an aid to developing questions relating to efficiency and service delivery. Review the results as an aid to developing questions relevant to effectiveness.
  • Consult with key stakeholders to clarify key evaluation interests.
  • Consider the audience for the report and what action might be taken based on the report.

Prioritize

  • Consider accountability and information requirements.
  • Consider previous evaluation, audit, and monitoring reports.
  • Consider risks.
  • Consider costs and benefits associated with addressing each issue.
  • Separate "nice to know" from "need to know."


3.0  How do you identify the right performance indicators?

Step 1:  Review Logic Model

Go through each row of the logic model (except activities) and determine what specific piece of information or particular data would be required to assess whether each output has been produced or result achieved. A working session is an effective method for brainstorming indicators.

Step 2:  Prioritize

Identify the "need to have" versus the "nice to have" for each component.

 

Once a comprehensive set of performance indicators and associated measurement strategies has been identified, select a smaller set of the best indicators. Check the top-ranked indicators against the selection criteria described below.

 

Step 3:  Check Against Criteria

What are the criteria for selecting indicators?

  • Relevant: Is the indicator meaningful? Is it directly linked to the output or result in question?
  • Reliable: Is it a consistent measure over time?
  • Valid: Does it measure the result?
  • Practical: Will it be easy to collect and analyze? Is it affordable?
  • Comparable: Is it similar to what other organizations, or other areas in your organization, already measure?
  • Useful: Will it be useful for decision making?

Useful tips

  • Begin by developing a few indicators. (Over time, additional indicators can be added, if necessary.)
  • Keep the number of indicators to a minimum.
  • Having only a few indicators is good – but be aware of their limitations.
  • Try to keep a core set of indicators which can be maintained over time to allow for comparison between past and present performance.
  • Consider proxy indicators. Proxy indicators are sometimes used to provide information on results where direct information is not available. For example, the percentage of cases that are upheld at appeal could be a proxy indicator for the quality of decisions.

 

4.0  How do you choose an appropriate evaluation design?

Evaluation design is the process of determining what you want the evaluation to accomplish and how you will go about doing it.


Guidelines for Choosing Appropriate Evaluation Designs

Consider the following:
  • What are the information and decision-making needs of the agency with respect to the evaluation?
  • What type of evaluation would be most appropriate given the life cycle of the program?
  • What considerations should be made with respect to practicality and costs?
  • What would be an appropriate balance between information needs and costs?
  • What level of concern exists with respect to the program to be evaluated (i.e., related to the quality of evidence to be gathered)?
  • What are other internal and external factors that may influence the program? How can the evaluation design minimize these factors?
  • How can the evaluation be designed to target evaluation questions to the most pressing concerns?
  • What sources of information exist for the evaluation? Consider existing data, secondary data, and performance measurement information as potential sources of information for the evaluation.
  • Are there multiple lines of evidence? (More than one line of evidence improves reliability of findings.)
  • To what extent will a rigorous design be required to accept findings and conclusions and implement recommendations?

 

Considerations of threats to validity in choosing evaluation design

When developing an evaluation design, you have to consider whether other factors are affecting the results of the program. Considering these factors, or threats to validity, is particularly important when you are trying to determine impacts or effectiveness. These factors can be due to real changes in the environment or to changes in the participants involved in the program.

  • Changes in the environment that occur at the same time as the program and will change the program results (e.g., the state of the economy could influence the results of a program)
  • Changes within individuals participating in program (e.g., changes due to aging or psychological changes that are not the result of the intervention)
  • The evaluation itself may influence the results (e.g., effects of taking a pre-test on subsequent post-tests, inconsistencies in observers, interviewers, scorers, or measuring instruments).

Overview of Evaluation Designs

 

Type 1: Implicit or Non-experimental Designs

In this type of design, "changes" to the program participants are measured. There is no comparison group of non-participants in the design. Using this design type, it is difficult to determine the extent to which the results can be attributed to the program. However, this design is useful for obtaining information relating to service delivery, extent of reach of the intervention, and progress towards objectives.

The post-test-only design and the pre-test/post-test design are two common types of non-experimental design.

Single group post-test-only design

In this design, beneficiaries or clients of an intervention are measured after the intervention. Participants, for example, can be simply asked about the impact of the intervention.

Single group pre-test/post-test design

This design uses before-and-after measures on a single group. For example, when measuring the impact of a training program, a knowledge test may be administered before and after the training program to help assess the impact of the training.

This design can be used

  • to answer certain types of information requests (e.g., questions about management issues relating to how the program is being implemented or whether risk is being managed, strategies for improvement);
  • when no pre-program measures exist;
  • where there is no obvious control or comparison group available; and
  • where practicality and costs are important considerations.

This type of design can be enhanced by

  • using varied quantitative and qualitative data collection methods and sources of information; and
  • ensuring the collection of "high-quality" data.
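
As a minimal illustration of the single group pre-test/post-test design, the following sketch (in Python, using hypothetical knowledge-test scores and a paired t-test as one common way to assess the before-and-after change) is offered:

    from scipy import stats

    # Hypothetical knowledge-test scores for the same eight participants,
    # measured before and after a training program.
    pre_scores = [55, 62, 48, 70, 66, 59, 73, 50]
    post_scores = [63, 70, 52, 78, 71, 68, 80, 58]

    # Paired t-test: compares each participant's "after" score with
    # their own "before" score.
    t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # A small p-value suggests the scores changed, but without a comparison
    # group the change cannot be attributed to the training alone.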

 

Type 2: Quasi-experimental Designs

The key distinction that separates experimental designs from non- or quasi-experimental designs is the random assignment of subjects to the intervention (treatment) groups and non-intervention (control) groups. Quasi-experimental designs involve comparison groups that are neither randomly selected nor randomly assigned to the intervention. Efforts are usually made to match the comparison and the "treatment" groups as closely as possible according to a predetermined set of characteristics.

Quasi-experiments require analysis techniques that are much more complicated than those for true experiments. High-level statistics (e.g., econometric models) are required to deal with the differences between groups and isolate the effect of the program.

 

Type 3: Experimental Designs

Random assignment of subjects to the intervention (i.e., treatment) and control groups helps ensure that subjects in the groups will be equal before the intervention is introduced. Although experimental designs are considered ideal for measuring impact, they are rarely practical.

 

Both quasi-experimental and experimental designs involve some type of pre-test followed by a post-test. Both design types are appropriate for conducting summative evaluations. However, practicality and costs must also be considered.

Key References

Treasury Board of Canada Secretariat. Program Evaluation Methods: Measurement and Attribution of Program Results, 1998.
http://www.tbs-sct.gc.ca/cee/pubs/meth/pem-mep00-eng.asp


5.0  How do you choose appropriate data collection methods?

To choose an appropriate data collection method, you may consider the following:

  • information and decision-making needs;
  • appropriate uses, pros and cons of the data collection methods;
  • costs and practicality of each method; and
  • the value of a balanced approach, including a mix of quantitative and qualitative methods.

For more detailed information, see the two tables below that compare the quantitative and qualitative methods, as well as describe the specific data collection methods available for evaluations.


A Comparison of Quantitative and Qualitative Methods

 

Use

  Quantitative methods:
  • to numerically measure "who, what, when, where, how much, how many, how often"
  • when you need to generalize findings

  Qualitative methods:
  • to qualitatively analyze "how and why"
  • to clarify issues and discover new issues
  • when you need a better understanding of context

Data Collection Methods

  Quantitative methods:
  • standardized interviews; surveys using closed-ended questions; observation using coded guides
  • administrative data

  Qualitative methods:
  • open and semi-structured interviews; surveys using open-ended questions; observation; interpretation of documents, case studies, and focus groups

Strengths

  Quantitative methods:
  • provides quantitative, accurate, and precise "hard data" to prove that certain problems exist
  • can test statistical relationships between a problem and apparent causes
  • can provide a broad view of a whole population
  • enables comparisons
  • establishes baseline information which can be used for evaluating impact

  Qualitative methods:
  • useful when planning an initiative concerned with social change
  • particularly in formative evaluations, investigators may need to know participant attitudes about a program, their ideas about how it could be improved, or their explanations about why they performed in a particular way
  • provides a thorough understanding of context to aid in interpretation of quantitative data
  • provides insights into attitudes and behaviours of a small sample population
  • establishes baseline information which can be used for evaluating qualitative outcomes
  • useful for getting feedback from stakeholders

Weaknesses

  Quantitative methods:
  • may be precise but may not measure what is intended
  • cannot explain the underlying causes of situations (i.e., it may tell you that the program had no effect, but will not be able to tell you why)

  Qualitative methods:
  • information may not be representative
  • more susceptible to biases of interviewers, observers, and informants
  • time-consuming to collect and analyze data

Source: Adapted from the Program Manager's Monitoring and Evaluation Toolkit Number 5, Part III: Planning and Managing the Evaluation – the Data Collection Process, May 2001 (United Nations Population Fund, Office of Oversight and Evaluation, www.unfpa.org).

 

Overview of Data Collection Methods

External Administrative Systems and Records: use of data collected by other institutions or agencies (e.g., Statistics Canada)

  When to use:
  • need information about context
  • need historical information
  • to compare program/initiative data to comparable data

  Strengths:
  • It is efficient and avoids duplication.

  Challenges:
  • Is the information accurate, applicable, and available?
  • Are we comparing apples to apples?

Internal Administrative Data: program data collected internally for management purposes

  When to use:
  • need information about management, service delivery

  Strengths:
  • It is efficient and can provide information about management activities and outputs.
  • It can be designed to collect performance information related to the program.

  Challenges:
  • Is the information accurate and complete?

Literature Review: review of past research and evaluation on a particular topic

  When to use:
  • to identify additional evaluation questions/issues, and methodologies
  • need information on conceptual and empirical background
  • need information on a specific issue
  • need information about comparable programs, best practices

  Strengths:
  • makes the best use of previous related work
  • identifies best practices
  • may suggest evaluation issues or methodologies for the current study
  • can be a secondary source of data, helping to avoid duplication

  Challenges:
  • Data and information gathered from a literature search may not be relevant to evaluation issues.
  • It can be difficult to determine the accuracy of secondary data in the early stages of a study.

Interview: a discussion covering a list of topics or specific questions, undertaken to gather information or views from an expert, stakeholder, and/or client; can be conducted face to face or by phone

  When to use:
  • complex subject matter
  • busy, high-status respondents
  • sensitive subject matter (in-person interviews)
  • flexible, in-depth approach
  • smaller populations

  Strengths:
  • inexpensive method for collecting contextual and systematic information about a program or service
  • flexible method (can occur either in person or remotely and can be either open-ended or structured)

  Challenges:
  • danger of interviewer bias
  • The response rate to requests for phone and/or electronic interviews is often much lower than for in-person interviews.
  • Travel costs for in-person interviews can be high.

Focus groups: a group of people brought together to discuss a certain issue, guided by a facilitator who notes the interaction and results of the discussion

  When to use:
  • depth of understanding required
  • weighted opinions
  • testing ideas, products, or services
  • where there are a limited number of issues to cover
  • where interaction of participants may stimulate richer responses (people consider their own views in the context of others')

  Strengths:
  • Group processes can be helpful in revealing interactions and relationships within an organization.
  • The discussion may uncover insights on the rationale behind common perceptions and reactions, as well as demonstrate how differences in opinion are resolved.

  Challenges:
  • Focus groups are short-lived, artificial situations.
  • Group situations may not put participants at ease to discuss personal beliefs and attitudes, especially if the people have to relate to each other after leaving the focus group.
  • The data generated in a focus group tend to be quick responses instead of considered answers.

Case studies: a way of collecting and organizing information on people, institutions, events, and beliefs pertaining to an individual situation

  When to use:
  • when detailed information about a program is required
  • to explore the consequences of a program
  • to add sensitivity to the context in which the program actions are taken
  • to identify relevant intervening variables

  Strengths:
  • permits a more holistic analysis and consideration of the inter-relationships among the elements of a particular situation
  • permits an in-depth analysis of a situation
  • provides depth of information

  Challenges:
  • complex method of data organization
  • difficult to draw conclusions that can be applied to other situations

Questionnaire/Survey (paper, on-line, or telephone): a list of questions designed to collect information from respondents on their knowledge and perceptions of a program or service

  When to use:
  • useful for large target audiences
  • can provide both qualitative and quantitative information

  Strengths:
  • tends to be less time- and money-intensive than interviewing large numbers of people
  • questions can cover a range of topics (on-line or mail-out)
  • respondents can take time to consider their answers and look up information
  • provides a breadth of information
  • may allow you to make statistically valid inferences about the entire population

  Challenges:
  • low response rates
  • possibility that those who returned their questionnaire are not typical of the general population being surveyed
  • requires considerable expertise in design, conduct, and interpretation

Expert panels: the considered opinion of a panel of knowledgeable outsiders

  When to use:
  • experts can share lessons learned and best practices
  • where outside validation is required
  • where diversity of opinion is sought on complex issues
  • where there is a need to draw on specialized knowledge and expertise

  Strengths:
  • An expert panel can draw on the knowledge and experience of the panel members to provide opinions and recommendations on a program or approach.
  • efficient, especially if done electronically or by phone

  Challenges:
  • Unless the experts know a great deal about the program and the context within which it operates, their opinion may offer very little useful insight.
  • Experts tend to hold a particular worldview or opinion that may affect their perception of a program or approach.

Comparative studies: a range of studies that collect comparative data (e.g., cohort studies, case-control studies, experimental studies)

  When to use:
  • summative evaluations

  Strengths:
  • a powerful way of collecting data for comparative purposes

  Challenges:
  • finding reasonable comparison groups
  • structuring valid studies
  • analyzing data is time- and money-intensive

Source: Adapted from TBS, RBM E-Learning Tool.

Key References

Treasury Board of Canada Secretariat. RBM E-Learning Tool.

 

6.0  How do you design a survey questionnaire?

 

When you need information about a large group or population, a survey is typically conducted. The sampling strategy should ensure that the information obtained from the sample is representative of the entire population. The more representative the sample, the more confidence you can attach to your findings. Representativeness is generally related to the sample size and the absence of bias.

Step 1:  Sampling Procedures

Clearly define purpose of survey

In order to develop sampling procedures, you need to clearly define the purpose of the survey. The sampling strategy must be designed to answer the evaluation questions.

  • What are the key evaluation questions the survey can answer?
  • What are the priorities for the survey?
  • What are the characteristics of the general population you wish to survey (e.g., gender, age)?

Sample Size

Determining the sample size will help determine which type of survey to use.

Note: A sample is a part of the entire population that possesses the characteristics you wish to study.

 

The following are considerations for determining sample size:

  • budget;
  • the level of precision required and the extent of sub-population comparisons;
  • the expected strength of the effect (a smaller sample can be used if you have reason to expect a strong effect);
  • the expected non-response rate (you can increase the sample size by that factor); and
  • whether respondents will be tracked over time (a much larger sample is required to account for people who no longer wish to participate in the survey).

If you wish to draw conclusions about the entire population, you will require a certain sample size that is based on statistical parameters. For example, when polling firms say a sample is accurate within a 5 per cent margin of error, 90 per cent of the time, this claim is based on a specific sample size. The required sample size is related to the size of your population, the confidence level required, and the allowable margin of error. For example, if you have a population of 1,000 people from which you need a representative sample and you require a 95 per cent confidence level with a 5 per cent margin of error, then you will require a sample of approximately 278. In social science research, a 95 per cent confidence level and a 5 per cent allowable margin of error are typically specified. You may need to consult with a statistical expert regarding appropriate sample size.
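
The arithmetic behind the figure of 278 can be sketched as follows. This is a minimal, hypothetical helper (in Python, not part of any TBS tool), assuming the standard formula for estimating a proportion with the worst-case proportion of 0.5 and a finite population correction:

    import math

    def required_sample_size(population, z=1.96, margin=0.05, p=0.5):
        """Required sample size for a proportion, with finite population correction."""
        n0 = (z ** 2) * p * (1 - p) / margin ** 2  # sample size for a very large population
        return math.ceil(n0 / (1 + (n0 - 1) / population))  # adjust for the finite population

    # Population of 1,000, 95 per cent confidence (z = 1.96), 5 per cent margin of error:
    print(required_sample_size(1000))  # 278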

Sampling Techniques

There are a number of different methods that can be used to select a sample. Ideally, you want the sample to represent the whole group so that you can generalize the findings to the program's entire population. Types of sampling include simple random sampling, stratified random sampling, systematic sampling, and cluster sampling.

With simple random sampling a list of all people (i.e., the survey population) is made and then individuals are selected randomly for inclusion in the sample. Random sampling means that everyone in the target group has an equal chance of being included in the study. One challenge with this approach is obtaining a complete list of the group.

The steps for sampling procedures are as follows:

1.  obtain a sampling frame;

2.  check for bias;

3.  assess the potential sampling source in advance; and

4.  apply sampling procedure.

Sampling Frames

Sampling frames are listings of people that represent or approximate the population of interest. Sampling frames should be comprehensive and representative. The list should be unbiased.

Check for Biases

Some common biases include:

  • lists of only approved applicants and not rejected applicants (where the sampling frame is a list of applicants); and
  • lists that are out of date.

Assess the potential sampling sources in advance

  • Verify if contact information is available (e.g., for all provinces/cities/services).
  • Verify whether there is information about when the service was received.
  • Minimize recall bias by conducting the survey as close to the service event as possible.
  • If measuring satisfaction with a product, you need to allow enough time for client(s) to use it.

What if contact information doesn't exist? Options include

  • on-site surveys; and
  • asking clients at the end of a transaction if they would be willing to participate in a client satisfaction survey.

What if information about potential respondents is limited?

  • Sample during the survey.
  • Ask what services they received from where and when.
  • Ask them to target their answers to a specific time period, or channel, etc.

Examples of Random Sampling Techniques

  • Systematic sampling – Select every nth entry on the list. Make sure there are no hidden patterns in the population list.
  • Random digit dialling – Used for telephone surveys; it enables interviewers to call unlisted, new, and recently changed numbers.
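
As a minimal sketch of two of these techniques (in Python, using a hypothetical client list rather than a real sampling frame), simple random sampling and systematic sampling with a random start might look like this:

    import random

    # Hypothetical sampling frame: a list of 1,000 clients.
    clients = [f"client-{i:04d}" for i in range(1, 1001)]

    # Simple random sampling: every client has an equal chance of selection.
    simple_sample = random.sample(clients, k=100)

    # Systematic sampling: select every nth client after a random start.
    interval = len(clients) // 100          # n = 10 for a sample of about 100
    start = random.randrange(interval)      # random starting point between 0 and 9
    systematic_sample = clients[start::interval]

    print(len(simple_sample), len(systematic_sample))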

Step 2:  Determining the Survey Format

Considerations for determining the appropriate survey format (on-line, mail, telephone, face-to-face) include the following:

  • type of information the respondent is expected to provide;
  • budget – telephone interviewing can be more expensive, while mail surveys are more economical if you have a large sample or if your sample spans a large geographical area;
  • sample size;
  • speed – on-line and telephone surveys are the most timely;
  • length of survey – for lengthy surveys (over one hour), consider in-person interviews; and
  • subject matter – if questions are personal or require thought, consider a self-administered survey (mail, on-line).

Step 3:  Develop Survey Questionnaire

A key consideration in developing your questionnaire is determining what types of questions to use. While open-ended questions can provide detailed information, they are time-consuming to record and analyze. If you have a large group to survey, the questionnaire should consist largely of closed-ended questions.

You should also prepare a script and instructions for the interviewers (if the survey is conducted by telephone or in person). The script should cover how to greet the respondent, how to invite them to participate, how to respond to their answers, how to keep the respondent on the line, how to thank the respondent, and how to code each survey (e.g., completed, no answer).

Types of Survey Questions

Open-ended questions provide no structured answers. These types of questions are time-consuming to record and analyze. They should be kept to a minimum in survey research where there are a large number of respondents. Skilled interviewers are required to adequately probe and record these questions. They allow you to probe more deeply into issues of interest being raised. Open-ended questions are useful for exploring issues and providing more detailed information as to why and how.

Closed-ended – Scaled response. These questions list alternative responses that increase or decrease in intensity along a continuum (e.g., very dissatisfied/dissatisfied/neither satisfied nor dissatisfied/satisfied/very satisfied; strongly agree/agree). A 5-point scale is common and allows respondents to take a neutral position (neither agree nor disagree). Try to include all possible answers among the answer categories (e.g., don't know and not applicable).

Closed-ended – Fixed response. These questions involve choosing one or more options from a list. A category of "other" should be included so that the respondent is not forced to select an inappropriate answer. Ensure that categories are mutually exclusive (e.g., 0-9, 10-19). Avoid long lists of categories.

 

Checklist for Developing a Survey Questionnaire

1.  Relevance to research questions

2.  Consider your population (e.g., age, gender).

3.  Sample size will help to determine which type of survey to use.

4.  Budget (mail and Web surveys are cost-effective).

5.  Subject matter (e.g., sensitive topics or questions that require thought)

6.  Appropriate length (length influences response rate). As a general rule, phone interviews should last between 10 and 20 minutes; 10 minutes is ideal.

7.  Consider the type of questions (open-ended; closed-ended – scaled or fixed response).

8.  Consider what kind of scale to use (3-point, 5-point, 7-point, 10-point, or 100-point) and what it measures (e.g., level of satisfaction).

9.  Keep questions short, simple, and clear.

10.  Keep questions as specific as possible.

11.  Avoid the use of double negatives.

12.  Avoid double-barrelled questions. These are single questions that ask for responses about two or more different things. For example: To what extent are you satisfied with the telephone and in-person service?

13.  Establish a relevant time frame for questions. When asking about past events it is important to establish an appropriate time frame, since respondents can often recall only general information. Examples: Over the last seven days, how often have you exercised? In the past six months, how often have you gone to your doctor?

14.  Consider whether respondents have the knowledge, opinions, or experience necessary to answer the question.

15.  Make every effort to be consistent (e.g., one scale, one wording choice).

16.  Use social conversation as a guide to organizing the questionnaire (i.e., introduction, building up to the main topic, main topic, closing).

17.  Develop a script and instructions.

Step 4:  Pre-testing the Questionnaire

Surveys can be pre-tested and then adjusted. The purpose of pilot testing is to identify and resolve problems and deficiencies in the information collection methods or in the form and usefulness of the information gathered. Based on the results of the pre-test, you can modify the survey accordingly. Conducting a small pre-test of about 10 interviews can lead to higher quality survey results.

You should determine what questions your pre-test will answer. For example, your pre-test might want to determine the following:

  • Can the respondents answer the questions?
  • How long did the interview last? Is this within your budget?
  • Did the respondent have any problems interpreting or understanding the questions?
  • When you analyzed the preliminary findings, did the results make sense?

Step 5:  Implement the Survey

During implementation:

  • track progress periodically; and
  • monitor your response rate (a higher response rate yields higher quality results, since self-selection and other biases are minimized by following up with the original clients who were sampled).

If you expect your results to be comparable with existing results for benchmarking purposes, the research design, questionnaire, and administration need to be very similar, if not identical. It is acceptable to make some improvements over the previous iteration, but the bigger the change, the less comparable the results.

You should ensure there are procedures in place for following up on non-response and compensating for lower response rates. Changes in the original composition of a sample are usually inevitable during the course of an evaluation study. Individuals may drop out of the sample and others may provide incomplete information. These changes may bias the study if they are not addressed.

Key References

Goss Gilroy Inc., Designing Effective Client Satisfaction Surveys , Strategic Management Conference, Montreal, 2003.

SPSS BI Survey Tips, A Handy Guide to Help You Save Time and Money as You Plan, Develop and Execute Your Surveys. http://www.spss.com/uk/SurveyTips booklet.pdf


7.0  How do you analyze data?

Step 1: Start with the evaluation objectives

When analyzing any type of data, review the purpose of the evaluation. This will help you organize your data and focus your analysis.

Step 2: Review for accuracy, completeness and consistency

Step 3: Summarize and organize data

  • Describing and counting – These are two of the most common analytic techniques and are often required as the basis or context for further data analysis. All types of qualitative and quantitative data at the input, output, and result stages can be described and counted. Data are gathered from various data sources using previously described data collection methods. Qualitative data can be described in narrative form or counted and analyzed using a variety of statistical techniques. Quantitative data can also be used to describe a program or purpose, and are easily counted and coded for analysis.
  • Aggregating and disaggregating – Aggregating is the process of grouping (or clustering) data by identifying characteristics or patterns that seem to link them. Disaggregating means breaking down (or factoring) information into smaller units. The reason for aggregating data is to determine whether relationships exist among different variables based on a pre-existing theory (hypothesis) or patterns seen in the data. Disaggregated data can be examined in different ways (e.g., over time, across different populations, between two comparison groups).
  • Comparison – Comparison covers a range of methods that can be used to draw conclusions about the relationship among data and make generalizations to a larger population. Comparison involves contrasting a person or population against itself, another comparison group, or a standard, typically after an event or the implementation of a program.
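
As a minimal sketch of aggregating and disaggregating (in Python, with hypothetical survey records; the pandas library is one of many tools that could be used), the same satisfaction measure can be summarized overall and then broken down by sub-population:

    import pandas as pd

    # Hypothetical client satisfaction records (5-point scale).
    data = pd.DataFrame({
        "region": ["East", "East", "West", "West", "West"],
        "channel": ["phone", "in-person", "phone", "phone", "in-person"],
        "satisfaction": [4, 5, 3, 2, 4],
    })

    # Aggregating: one overall average across all records.
    overall = data["satisfaction"].mean()

    # Disaggregating: the same measure broken down by region and by service channel.
    by_region = data.groupby("region")["satisfaction"].mean()
    by_channel = data.groupby("channel")["satisfaction"].mean()

    print(overall, by_region, by_channel, sep="\n")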

Generalizing the Findings

The only valid way of generalizing findings to an entire or target population (where you cannot survey or study everyone) is to use findings from a random sample of the population you wish to study. Caution must therefore be exercised when analyzing data from non-randomized samples.

Qualitative and Quantitative Analysis

Analyzing qualitative data requires effective synthesis and interpretative skills. Qualitative information can be used to provide contextual information, explain how a program works, or to identify barriers to implementation. Qualitative data can be analyzed for patterns and themes that may be relevant to the evaluation questions. Qualitative material can be organized using categories and/or tables making it easier to find patterns, discrepancies and themes.

Quantitative data analysis provides numerical values to information. It can range from simple descriptive statistics (e.g., frequency, range, percentile, mean or average) to more complicated statistical analysis (e.g., t-test, analysis of variance). Computer software packages such as the Statistical Package for the Social Sciences (SPSS), Minitab, and Mystat can be used for more complicated analysis. Quantitative data analysis also requires interpretation skills. Quantitative findings should be considered within the context of the program.

About quantitative data

Frequencies, range, percentile, and standard deviation are used for descriptive statistics. Measures of central tendency – mean, median, or mode – are also calculated.

Examples of inferential statistics:

  • Correlation coefficients are used to determine the strength of the relationship between two variables.
  • T-tests are used to determine differences in average scores between two groups.
  • ANOVA (analysis of variance) determines differences in average scores of three or more groups.
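
As a minimal sketch of these inferential statistics (in Python, with hypothetical scores; scipy is only one of several packages that could be used):

    from scipy import stats

    # Hypothetical scores for three groups of participants.
    group_a = [72, 85, 78, 90, 66, 81]
    group_b = [68, 74, 71, 80, 65, 70]
    group_c = [60, 62, 70, 58, 64, 61]

    # Correlation coefficient: strength of the relationship between two variables.
    r, r_p = stats.pearsonr(group_a, group_b)

    # T-test: difference in average scores between two groups.
    t, t_p = stats.ttest_ind(group_a, group_b)

    # ANOVA: differences in average scores across three or more groups.
    f, f_p = stats.f_oneway(group_a, group_b, group_c)

    print(f"r = {r:.2f}, t = {t:.2f}, F = {f:.2f}")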

Key References

Treasury Board of Canada Secretariat. RBM E-Learning Tool.


Appendix F―Terms of Reference Template

 

Main Elements of the Terms of Reference

Project Background
  • Project context and rationale
  • Identification of key stakeholders, clients, and partners
  • Project description

Reasons for the Evaluation
  • Statement of the purpose of the study
  • Expected value-added
  • Intended use of results

Scope and Focus
  • Broad issues to be addressed/specific evaluation questions
  • Type of analysis to be used/level of detail
  • Specify who the audience(s) will be for the reports and findings

Statement of Work
  • How the purposes of the study are to be achieved
  • Describe approaches
  • Describe data collection methods
  • Outline the tasks required to undertake the study
  • State what groups will be consulted
  • List expectations with respect to communications and ongoing progress reports

Evaluation Team
  • Required professional qualifications/expertise/experience
  • Roles and responsibilities of the evaluation team; role of the agency (program and/or evaluation managers)

Timetable
  • Approximate timetable to guide the preparation of the work plan

Budget
  • A specification of the estimated resources to be committed to the study and its different parts

Deliverables
  • Identification of key deliverables (e.g., work plan or methodology report, draft evaluation report, final evaluation report)

 

Toolkit: Templates
Agency Audit and Evaluation Plan Template

Introduction and Context

  • Identify management client and stakeholder information needs
  • Outline how audit (where applicable), risk management and evaluation will be used in the agency
  • Link evaluation to strategic concerns (PAA strategic outcomes, program inventory and performance measures)
  • Refer to TB Evaluation Policy and Policy on Internal Audit

Methodology/Approach

  • Methodology used for determining projects
  • Take into account priority setting and risk management approach
  • Link to agency service, business lines and strategic priorities

Rationale

  • Indicate scope and coverage for evaluation, audit, risk management plan
  • Rationale for including study in the plan – factors considered in selecting audit and evaluation projects
  • Give an appreciation of the proportion of the agency's evaluation universe that the current year's projects represent
  • If applicable, consider cross-jurisdictional evaluations

Evaluation and Audit Plan Summary

  • Identify planned projects for fiscal year
  • Estimate costs for completing each project and/or planned expenditures in the current fiscal year
  • Total expenditure on evaluation, funding received in addition to A-base funding for evaluation and audit

Detailed Evaluation and Audit Plan

  • Indicate project title, objective, client, status of project (e.g., planned, in progress)
  • Identify project teams and schedules
  • Identify key assumptions in order to achieve deliverables as per plan
  • Consider TBS standards during development of plan

Appendices

  • May include draft TORs, statements of work for proposed projects or expenditures
 

Appendix G―Glossary

Accountability ( Responsabilisation) – The obligation to demonstrate and take responsibility for performance in light of agreed expectations. There is a difference between responsibility and accountability: responsibility includes the obligation to act whereas accountability includes the obligation to answer for an action

Activity ( Activit̩ ) РAn operation or work process internal to an organization, intended to produce specific outputs (e.g., products or services). Activities are the primary link in the chain through which outcomes are achieved.

Attribution ( Attribution ) – The assertion that certain events or conditions were, to some extent, caused or influenced by other events or conditions. This means a reasonable connection can be made between a specific outcome and the actions and outputs of a government policy, program, or initiative.

Departmental Performance Reports (DPR) ( Rapport minist̩riel sur le rendement (RMR) ) РDepartmental Performance Reports, tabled in the fall of each year by the President of the Treasury Board on behalf of all federal departments and agencies named in Schedule I, I.1 and II of the Financial Administration Act , are part of the Estimates and Supply process. The reports explain what the government has accomplished with the resources and authorities provided by Parliament. The performance information in the reports is intended to help members of Parliament advise the government on resource allocation in advance of the annual budget and Supply process in the spring.

Effectiveness ( Efficacit̩ ) РThe extent to which an organization, policy, program, or initiative is meeting its planned results. (A related term is Cost Effectiveness РThe extent to which an organization, policy, program, or initiative is producing its planned outcomes in relation to expenditure of resources.)

Efficiency ( Efficience ) – The extent to which an organization, policy, program, or initiative is producing its planned outputs in relation to expenditure of resources.

Evaluation ( Évaluation ) – The systematic collection and analysis of information on the performance of a policy, program, or initiative to make judgements about relevance, progress, or success and cost-effectiveness and/or to inform future programming decisions about design and implementation.

Expenditure Management Information System (EMIS) ( Syst̬me d'information sur la gestion des d̩penses (SIGD) ) РThis is a common information framework that supports the Expenditure Review Committee and departmental assessments related to the Management Accountability Framework. Program Activity Architecture and EMIS are to become the basis for the following:
  • Annual Reference Level Update (ARLU)
  • Estimates
  • Reports on Plans and Priorities (RPP)
  • Departmental Performance Reports (DPR)

Final Outcome ( R̩sultat final ) РThese are generally outcomes that take a longer period to be realized, are subject to influences beyond the policy, program, or initiative, and can also be at a more strategic level.

Goal ( But ) – A general statement of desired outcome to be achieved over a specified period of time. The term goal is roughly equivalent to Strategic Outcome. For technical precision, the Treasury Board of Canada Secretariat recommends that Strategic Outcome be used instead of goal. See also Objective.

Horizontal Result ( R̩sultat horizontal ) РAn outcome that is produced through the contributions of two or more departments or agencies, jurisdictions, or non-governmental organizations.

Impact ( Impact ) – Impact is a synonym for outcome, although an impact is somewhat more direct than an effect. Both terms are commonly used, but neither is a technical term. The Treasury Board of Canada Secretariat recommends that result be used instead of impact.

Indicator ( Indicateur ) – A statistic or parameter that provides information on trends in the condition of a phenomenon and has significance extending beyond that associated with the properties of the statistic itself.

Input ( Intrant ) – Resources (e.g., human, material, financial) used to carry out activities, produce outputs, and/or accomplish results.

Logic Model ( Mod̬le logique ) Р(also referred to as Results-based Logic Model) An illustration of the results chain or how the activities of a policy, program, or initiative are expected to lead to the achievement of the final results. Usually displayed as a flow chart. See also Results Chain.

Management Accountability Framework (MAF) ( Cadre de responsabilisation de la gestion (CRG) ) – It is a set of expectations for modern public service management. Its purpose is to provide a clear list of management expectations within an overall framework for high organizational performance.

Management of Resources and Results Structure (MRRS) ( Structure des ressources et des r̩sultats de gestion (SRRG) ) РThe MRRS replaces the Planning, Reporting, and Accountability Structure (PRAS) policy as the new reporting regime. The MRRS

  • clearly defines appropriate strategic outcomes;
  • is a complete program inventory that links all departmental programs and program activities so that they are aligned with strategic outcomes;
  • sets performance measures for each level of the department's architecture; and
  • ensures that a departmental governance structure that defines decision-making and accountability by strategic outcome and by program is in place.

Mission Statement ( Énoncé de mission ) – A formal, public statement of an organization's purpose. It is used by departmental management to set direction and values.

Objective ( Objectif ) – The high-level, enduring benefit towards which effort is directed.

Outcome ( Résultat ) – An external consequence attributed to an organization, policy, program, or initiative that is considered significant in relation to its commitments. Outcomes may be described as immediate, intermediate, or final; direct or indirect; intended or unintended. See also Result.

Output ( Extrant ) – Direct products or services stemming from the activities of a policy, program, or initiative, and delivered to a target group or population.

Performance (Rendement) – How well an organization, policy, program, or initiative is achieving its planned results, measured against targets, standards, or criteria. In results-based management, performance is measured, assessed, reported, and used as a basis for management decision-making.

Performance Measurement Strategy ( Stratégie de mesure du rendement ) – Selection, development, and ongoing use of performance measures to guide corporate decision-making. The range of information in a performance measurement strategy could include reach; outputs and results; performance indicators; data sources; methodology; and costs.

Performance Measures ( Mesures du rendement ) – Indicators that provide information (either qualitative or quantitative) on the extent to which a policy, program, or initiative is achieving its results.

Performance Monitoring ( Suivi du rendement ) – The ongoing process of collecting information in order to assess progress in meeting Strategic Outcomes and, if necessary, to provide warning when progress does not meet expectations.

Performance Reporting ( Rapport sur le rendement ) – The process of communicating evidence-based performance information. Performance reporting supports decision-making, serves to meet accountability requirements, and provides a basis for citizen engagement and a performance dialogue with parliamentarians.

Planned Results (Targets) ( Résultats prévus (Cibles) ) – A clear and concrete statement of the results (including outputs and outcomes) to be achieved within the time frame of parliamentary and departmental planning and reporting (1 to 3 years), against which actual results can be compared.

Reach ( Portée ) – The individuals and organizations targeted and directly affected by a policy, program, or initiative.

Reliability ( Fiabilité ) – Refers to the consistency or dependability of the data. The idea is simple: if the same test, questionnaire, or evaluation procedure is used a second time, or by a different research team, would it obtain the same results? If so, the test is reliable. In any evaluation or research design, the data collected are useful only if the measures used are reliable.

Reports on Plans and Priorities (RPP) ( Rapport sur les plans et les priorités (RPP) ) – As part of the Main Estimates, the RPPs provide information on departmental plans and expected performance over a three-year period. These reports are tabled in Parliament each spring, after resource allocation deliberations. They generally include information such as mission or mandate, strategies, Strategic Outcomes, and performance targets.

Result ( Résultat ) – The consequence attributed to the activities of an organization, policy, program, or initiative. Results is a general term that often includes both outputs produced and outcomes achieved by a given organization, policy, program, or initiative. In the government's agenda for results-based management and in the document Results for Canadians: A Management Framework for the Government of Canada, the term result refers exclusively to outcomes.


Results Chain (also results-based logic model, results sequence) ( Enchaînement des résultats (modèle logique axé sur les résultats, séquence de résultats) ) – The causal or logical relationship between the activities and outputs of a given policy, program, or initiative and the outcomes they are intended to produce. Usually displayed as a flow chart.

Results for Canadians: A Management Framework for the Government of Canada ( Des résultats pour les Canadiens et les Canadiennes : un cadre de gestion pour le gouvernement du Canada ) – A document published in early 2000 that describes the management framework for the Government of Canada. This key document outlines the four management commitments for the federal government: citizen focus, values, results, and responsible spending.

Results-based Management ( Gestion axée sur les résultats ) – A comprehensive, life-cycle approach to management that integrates business strategy, people, processes, and measurements to improve decision-making and drive change. The approach focuses on getting the right design early in a process, implementing performance measurement, learning and changing, and reporting performance.

Results-based Management and Accountability Framework (RMAF) (Cadre de gestion et de responsabilisation axé sur les résultats (CGRR)) – A document that sets out the performance monitoring, evaluation, and reporting strategies for a policy, program, or initiative.

Service Commitment ( Engagement en matière de service ) – Service commitments or standards generally set performance objectives for the delivery of government products or services to the public, specifying the quality or level of service that a department or agency commits, or can be expected, to deliver to clients.

Strategic Outcome ( Résultat stratégique ) – A Strategic Outcome is a long-term and enduring benefit to Canadians that stems from a department's mandate, vision, and efforts. This Outcome represents the difference a department wants to make for Canadians and should be measurable. The achievement of or progress towards a strategic outcome will require, and Canadians will expect, the sustained leadership of a federal department or agency, especially in developing partnerships and alliances with other stakeholders and organizations.

Canadians also expect that departments will strive for excellence by establishing challenging outcomes that are within their sphere of control or influence. These outcomes will form the standards by which a department's performance is assessed through departmentally derived measures.

Target Group (Target Population) ( Groupe cible (Population cible) ) – The set of individuals that an activity is intended to influence.

Validity ( Validité ) – The extent to which the questions or procedures actually measure what they claim to measure. In other words, valid data are not only reliable, but are also true and accurate. Measures used to collect data about a variable in an evaluation study must be both reliable and valid if the overall evaluation is to produce useful data.

Source: TBS Guide to RMAFs.
http://www.tbs-sct.gc.ca/cee/tools-outils/rmaf-cgrr/guide00-eng.asp


Appendix H―Evaluation Web Sites

1.  http://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=15024

This is the Treasury Board of Canada Secretariat (TBS) Evaluation Policy.

2.  http://www.oecd.org/dataoecd/29/21/2754804.pdf

This is the OECD's comprehensive glossary of key terms in evaluation and results-based management.

3.  http://www.wkkf.org/Pubs/Tools/Evaluation/Pub770.pdf

This is the W.K. Kellogg Foundation's evaluation handbook.

4.  http://www.evaluationcanada.ca

This is the Canadian Evaluation Society homepage. It contains information on courses and special events, various resources on evaluation, and unpublished documents for evaluators.

5.  http://www.phac-aspc.gc.ca/ncfv-cnivf/familyviolence/html/fvprojevaluation_e.html

This document is entitled Guide to Project Evaluation: A Participatory Approach. It was developed by the Population Health Directorate at Health Canada in 1996. The Guide provides an easy-to-use, comprehensive framework for project evaluation that can be used to strengthen the evaluation skills and knowledge needed to develop and implement effective project evaluations.

6.  http://www.mapnp.org/library/evaluatn/fnl_eval.htm#anchor1585345

This link contains a Basic Guide to Program Evaluation, which provides guidance on planning and implementing an evaluation process for non-profit or for-profit organizations.

7.  http://www11.hrdc-drhc.gc.ca/pls/edd/toolkit.list

This link contains HRDC's Evaluation Tool Kit, a series of publications developed by Evaluation and Data Development (EDD) that provides pertinent information about designing, planning, and conducting an evaluation. Publications include the following:

  • Evaluation Tool Kit Focus Group – A guide to understanding the use of focus groups as an information gathering tool
  • Quasi-Experimental Evaluation – Summarizes the basics of evaluation research focussing on the "quasi-experimental" design
  • User Guide on Contracting HRDC Evaluation Studies – Summarizes the provisions of the Treasury Board of Canada Secretariat's (TBS) Contracting Policy as well as HRDC's contracting guidelines and administrative practices as they apply to services related to evaluation.

Logic Models

8.  http://www.ed.gov/teachtech/logicmodels.doc

Logic Models: A Tool for Telling Your Program's Performance Story describes the Logic Model process in detail and explains how logic models can be used to develop and tell a program's performance story.

9.  http://national.unitedway.org/outcomes/resources/mpo/

The manual Measuring Program Outcomes: A Practical Approach is a good source of information on logic models and performance indicator development.

10.  http://www.wkkf.org/Pubs/Tools/Evaluation/Pub3669.pdf

See the W.K. Kellogg Foundation Logic Model Development Guide for a good overview of logic model development. It also provides information on variations and types of logic models.

11.  http://www.impactalliance.org/file_download.php/prevent+1.pdf?URL_ID=2744&filename=10196046740prevent_1.pdf&filetype=application%2Fpdf&filesize=646378&name=prevent+1.pdf&location=user-S/

Prevention Works! A Practitioner's Guide to Achieving Outcomes

12.  http://www.insites.org/documents/logmod.htm

Everything you wanted to know about logic models but were afraid to ask.

13.  http://www.calib.com/home/work_samples/files/logicmdl.pdf

This paper provides a description of logic models and discusses their uses in treatment services planning and evaluation.

14.  http://www.uwex.edu/ces/lmcourse/

Enhancing Program Performance with Logic Models, an online self-study course from the University of Wisconsin, is an excellent resource.

15.  http://www.gse.harvard.edu/hfrp/content/pubs/onlinepubs/rrb/learning.pdf

Learning from logic models: an example of a family/school partnership.

RMAF Links

16.  http://www.tbs-sct.gc.ca/cee/tools-outils/rmaf-cgrr/guide00-eng.asp

This is the August 2001 Guide for the Development of Results-based Management and Accountability Frameworks. It contains guidelines for developing the Profile, Logic Model, Performance Measurement Strategy, Evaluation Strategy, and Reporting Strategy.

17.  http://www.tbs-sct.gc.ca/cee/tools-outils/comp-acc00-eng.asp

This is a companion guide for the development of RMAFs for horizontal initiatives. Horizontal initiatives often need to integrate vertical and horizontal accountabilities, various resource pools, as well as a variety of departmental mandates, performance measurement strategies, and reporting structures. This guide is designed to complement the Guide for the Development of Results-based Management and Accountability Frameworks by addressing the unique challenges encountered when diverse organizations work together to achieve common objectives. While it does not provide answers to every question, it does provide guidance based on the most important lessons learned to date.

18.  http://www.tbs-sct.gc.ca/cee/pubs/guide/sarmaf-ascgrr-eng.asp

This document, Guidance for Strategic Approach to RMAFs, complements the August 2001 Guide for the Development of Results-based Management and Accountability Frameworks. Its purpose is to help managers tailor the development of the RMAF to specific circumstances, taking into account such factors as overall risk, program complexity, and reporting requirements, so that RMAFs remain responsive to evolving needs.

19.  http://www.tbs-sct.gc.ca/res_can/rc_e.html

This document is Results for Canadians: A Management Framework for the Government of Canada. It outlines what public service managers are expected to do to improve the efficiency and effectiveness of their programs. The RMAF is an important management tool in meeting the four main objectives of Results for Canadians: a citizen focus in all government activities; emphasis on values; achievement of results; and responsible use of public funds.

20.  http://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=12257

This is the Treasury Board of Canada (TB) Policy on Transfer Payments, which formalizes the requirement for an RMAF as part of a TB submission involving transfer payments.


[1]. See the TBS Web site at http://www.tbs-sct.gc.ca/cee/tools-outils/model-eng.asp.

[2]. Treasury Board of Canada Secretariat. Models for Evaluation and Performance Measurement for Small Agencies, 2003.

Treasury Board of Canada Secretariat. Interim Evaluation of the Evaluation Policy, 2002.

[3]. Treasury Board of Canada Secretariat. Models for Evaluation and Performance Measurement for Small Agencies, 2003.

[4]. Treasury Board of Canada Secretariat. Preparing and Using Results-based Management and Accountability Frameworks, April 2004.

[5]. Refer to the following Web site: http://www.tbs-sct.gc.ca/cee/.

[6]. Treasury Board of Canada Secretariat. Models for Evaluation and Performance Measurement for Small Agencies, 2003.

[7]. For more information on the organizations mentioned, please refer to Appendix C.

[8]. Smaller agencies may have only one level of management.

[9]. Many small agencies submit an Annual Audit and Evaluation Plan. The template incorporates both audit and evaluation. Internal audit plans are required where internal audit priorities have been identified.

 


