
Government of Canada / Gouvernement du Canada

Directive on Automated Decision-Making

1. Effective date

  • 1.1 This directive takes effect on April 1, 2019, with compliance required no later than April 1, 2020.
  • 1.2 This directive applies to all automated decision systems developed or procured after April 1, 2020.
    • 1.2.1 Existing automated decision systems developed or procured prior to June 24, 2025, will have until June 24, 2026, to comply with the new or updated requirements.
    • 1.2.2 Agents of Parliament will have until June 24, 2026, to comply with the requirements.
  • 1.3 This directive will be reviewed every two years, and as determined by the Chief Information Officer of Canada.

2. Authorities

3. Definitions

  • 3.1 Definitions to be used in the interpretation of this directive are listed in Appendix A.

4. Objectives and expected results

  • 4.1 The objective of this directive is to ensure that automated decision systems are used in a manner that reduces risks to clients, departments and Canadian society, and leads to more efficient, accurate, consistent and interpretable decisions made pursuant to Canadian law.
  • 4.2 The expected results of this directive as it applies to automated decision systems are as follows:
    • 4.2.1 Decisions made by departments are data-driven and responsible, and comply with procedural fairness and due process requirements.
    • 4.2.2 Impacts on administrative decisions are assessed and negative outcomes are reduced.
    • 4.2.3 Data and information on the use of automated decision systems in departments are made available to the public, while protecting privacy, security and intellectual property.

5. Scope

  • 5.1 This directive applies to any automated decision system in production used to make an administrative decision or a related assessment about a client.
  • 5.2 This directive excludes automated decision systems used solely for research and experimentation purposes, and those operating in test environments.

6. Requirements

The assistant deputy minister responsible for the program using the automated decision system, or any other senior official named by the deputy head, is responsible for:

6.1 Algorithmic impact assessment

  • 6.1.1 Completing, approving and publishing the final results of an algorithmic impact assessment in an accessible format on the Open Government Portal prior to the production of any automated decision system.
  • 6.1.2 Applying the relevant requirements prescribed in Appendix C, as determined by the algorithmic impact assessment.
  • 6.1.3 Reviewing, approving, and updating the published algorithmic impact assessment on a scheduled basis, including when the functionality or scope of the automated decision system changes.

6.2 Transparency

Providing notice before decisions

  • 6.2.1 Providing notice through all service delivery channels in use that the decision will be made or assisted by an automated decision system, as prescribed in Appendix C.
  • 6.2.2 Providing notices prominently and in plain language.

Providing explanations after decisions

  • 6.2.3 Providing a meaningful explanation to clients of how and why the decision was made, as prescribed in Appendix C.

Access to components

  • 6.2.4 Determining the appropriate licence for software components, including consideration of open source software.
  • 6.2.5 Obtaining and safeguarding all released versions of software components used for automated decision systems.
  • 6.2.6 If using a proprietary licence, ensuring that:
    • 6.2.6.1 The department responsible for the automated decision system retains the right to access, test and monitor the automated decision system, including all released versions of proprietary software components, in case it is necessary for a specific audit, investigation, inspection, examination, enforcement action, or judicial proceeding, subject to safeguards against unauthorized disclosure.
    • 6.2.6.2 As part of this access, the department responsible for the automated decision system retains the right to authorize external parties to review, test, monitor, and audit these components as necessary.

Documenting decisions

  • 6.2.7 Documenting the decisions and assessments made or assisted by automated decision systems in accordance with the Directive on Service and Digital, and in support of the testing (6.3.1), monitoring (6.3.2), data governance (6.3.6) and reporting requirements (6.5.1 and 6.5.2).

6.3 Quality assurance

Testing and monitoring outcomes

  • 6.3.1 Before an automated decision system is in production, testing the data, information, and underlying model for accuracy, unintended biases and any other factors that may unintentionally or unfairly impact the outcomes or violate human rights and freedoms.
  • 6.3.2 Monitoring the outcomes of the automated decision system on a scheduled basis to safeguard against unintentional or unfair outcomes and to verify compliance with human rights obligations, departmental and program legislation, and this directive.
  • 6.3.3 Ensuring that testing and monitoring assess human rights impacts and are consistent with applicable legislation such as the Canadian Charter of Rights and Freedoms, the Canadian Human Rights Act, and the United Nations Declaration on the Rights of Indigenous Peoples Act.
  • 6.3.4 Documenting client feedback, unexpected impacts, human overrides of the decision or assessment made by the system, and other system failures as appropriate.
    • 6.3.4.1 Using findings from outcome monitoring and documented feedback, unexpected impacts and human overrides to identify issues and take corrective actions.
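The testing and monitoring obligations above (6.3.1 and 6.3.2) can be illustrated with a minimal outcome-parity screen. This is only a sketch: the directive does not prescribe any particular metric, and the parity measure, the 0.1 tolerance, and the group labels below are assumptions for this example.

```python
# Illustrative only: one way a program team might screen decision outcomes
# for unintended bias across groups. The parity metric and the 0.1 tolerance
# are assumptions for this sketch, not requirements of the directive.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> per-group approval rate."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# A scheduled monitoring run (6.3.2): flag the system for human review and
# corrective action (6.3.4.1) if the gap exceeds the team's chosen tolerance.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap = parity_gap(sample)          # 2/3 - 1/3, about 0.33
needs_review = gap > 0.1          # True for this sample
```

In practice a department would compute such metrics over documented production outcomes (6.2.7) on a recurring schedule, alongside accuracy and override-rate checks.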

Data quality

  • 6.3.5 Validating that the data used to train the automated decision system and the data input into the system are relevant, accurate and up to date, in accordance with the Policy on Service and Digital and the Privacy Act.
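As a sketch of the kind of automated check that could support 6.3.5, the snippet below validates that an input record is complete and recent before it reaches the system. The field names and the 365-day freshness window are invented for this example; a real program would derive them from its own data standards.

```python
# Hypothetical input-data validation in the spirit of 6.3.5. REQUIRED_FIELDS
# and the freshness window are assumptions, not prescribed by the directive.
from datetime import date, timedelta

REQUIRED_FIELDS = {"client_id", "application_date", "income"}

def validate_record(record, today=None, max_age_days=365):
    """Return a list of problems; an empty list means the record passes."""
    today = today or date.today()
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    app_date = record.get("application_date")
    if isinstance(app_date, date) and today - app_date > timedelta(days=max_age_days):
        problems.append("data not up to date")
    return problems
```

Records that fail such a check would be corrected or excluded before the system produces an assessment, and the failures logged to support the documentation requirements in 6.2.7.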

Data governance

Peer review

  • 6.3.7 Consulting the appropriate qualified experts to review the automated decision system, algorithmic impact assessment and supporting documentation, and publishing the complete review or a plain language summary prior to the automated decision system’s production, as prescribed in Appendix C.

Gender-based Analysis Plus

  • 6.3.8 Completing a Gender-based Analysis Plus during the development or modification of the automated decision system, as prescribed in Appendix C.

Employee training

  • 6.3.9 Providing training to each employee involved in any aspect of the development, use or management of automated decision systems, as prescribed in Appendix C.

Security

  • 6.3.10 Conducting risk assessments during the development and maintenance of the automated decision system and implementing appropriate information management and information technology security protections, in accordance with the Policy on Government Security and the Policy on Service and Digital.
  • 6.3.11 Implementing measures to secure data and model integrity to prevent tampering and unauthorized modifications, ensuring alignment with Government of Canada guidance, standards, and industry best practices, as prescribed by Treasury Board of Canada Secretariat.

Legal

  • 6.3.12 Consulting with the department’s legal services from the concept stage of an automation project to ensure that the use of the automated decision system is compliant with applicable legal requirements.

Ensuring human involvement

  • 6.3.13 Ensuring that the automated decision system allows for human involvement, as prescribed in Appendix C.
  • 6.3.14 Obtaining the appropriate level of approvals prior to the production of an automated decision system, as prescribed in Appendix C.

6.4 Recourse

  • 6.4.1 Informing clients of recourse options to challenge the administrative decision.
    • 6.4.1.1 Ensuring that recourse options are timely, effective, and easy to access.

6.5 Reporting

  • 6.5.1 Publishing information on the effectiveness and efficiency of the automated decision system in meeting program objectives on the Open Government Portal.
  • 6.5.2 Publishing information on the Open Government Portal on how the use of the automated decision system is fair and transparent and does not violate human rights and freedoms.

7. Roles and responsibilities of Treasury Board of Canada Secretariat

The Treasury Board of Canada Secretariat is responsible for:

  • 7.1 Providing government-wide guidance on the use of automated decision systems.
  • 7.2 Developing and maintaining the Algorithmic Impact Assessment tool and any supporting documentation.
  • 7.3 Communicating and engaging government-wide and with partners in other jurisdictions and sectors to develop common strategies, approaches, and processes to support the responsible use of automated decision systems.
  • 7.4 Raising with the relevant deputy head, as appropriate, any compliance issues that arise with this directive.
  • 7.5 Supporting policy implementation by working with departments, as appropriate, to ensure that automated decision systems are fair, effective and transparent.

8. Application

  • 8.1 This directive applies to all institutions subject to the Policy on Service and Digital.
  • 8.2 Other departments or separate agencies that are not subject to the Policy on Service and Digital are encouraged to meet the requirements of this directive as good practice.
  • 8.3 Agents of Parliament
    • 8.3.1 The following organizations are considered agents of Parliament for the purposes of this directive:
      • Office of the Auditor General of Canada
      • Office of the Chief Electoral Officer
      • Office of the Commissioner of Lobbying of Canada
      • Office of the Commissioner of Official Languages
      • Office of the Information Commissioner of Canada
      • Office of the Privacy Commissioner of Canada
      • Office of the Public Sector Integrity Commissioner of Canada
    • 8.3.2 The heads of agents of Parliament are solely responsible for monitoring and ensuring compliance with this directive within their organizations, as well as for responding to cases of non-compliance in accordance with any Treasury Board instruments that address the management of compliance.
    • 8.3.3 Subsections 6.2.2.1 and 7.4 do not apply to agents of Parliament.
    • 8.3.4 Agents of Parliament are not required to publish the algorithmic impact assessment (6.1.1) or reporting information on the Open Government Portal (6.5.1 and 6.5.2).
    • 8.3.5 The heads of agents of Parliament are responsible for approving level 4 systems to operate (6.3.14, Appendix C).

9. References

10. Enquiries


Appendix A - Definitions

administrative decision
Any decision that is made by an authorized official of a department as identified in section 8 of this directive pursuant to powers conferred by an Act of Parliament or an order made pursuant to a prerogative of the Crown that affects legal rights, privileges or interests.
algorithmic impact assessment
A framework to help departments better understand and reduce the risks associated with automated decision systems and to provide the appropriate requirements that best match the type of system being designed.
automated decision system
Any technology that either assists or replaces the judgment of human decision makers. These systems draw from fields like statistics, linguistics and computer science, and use techniques such as rules-based systems, regression, predictive analytics, machine learning, deep learning, and neural networks.
human rights
The rights to which persons are inherently entitled because they are human beings. Human rights are protected in the Canadian Charter of Rights and Freedoms, the Canadian Human Rights Act, and the United Nations Declaration on the Rights of Indigenous Peoples Act.
production
An automated decision system is in production when it is in use and has impacts on real clients. This can include when it is in beta or user testing and producing outputs that impact clients.
proprietary
Refers to systems, algorithms or software owned by an entity, such as a company or government. These systems are often closed source, meaning the source code is not publicly available.
test environment
An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test. A system in a test environment may mimic a production environment but does not impact real clients. Test environments may include exploration zones and sandboxes.

Appendix B - Impact Assessment Levels

Level I

The context in which the system is operating likely has low levels of risk associated with it. This may be because of:
  • the identity factors of the clients that may be impacted
  • the line of business and the decision that the system is supporting
  • the type of technology being used
The decision will likely have little to no, easily reversible, and brief impacts on some of:
  • the rights of individuals or communities
  • the equality, dignity, privacy, and autonomy of individuals
  • the health or well-being of individuals or communities
  • the economic interests of individuals, entities, or communities
  • the ongoing sustainability of an ecosystem
The data used by the system likely presents low levels of risk. This may be because of:
  • the sensitivity of the data (such as the use of non-personal or unclassified information)
  • the use of structured data
  • the data collection approach
Level II

The context in which the system is operating likely has moderate levels of risk associated with it. This may be because of:
  • the identity factors of the clients that may be impacted
  • the line of business and the decision that the system is supporting
  • the type of technology being used
The decision will likely have moderate, likely reversible and short-term impacts on some or all of:
  • the rights of individuals or communities
  • the equality, dignity, privacy, and autonomy of individuals
  • the health or well-being of individuals or communities
  • the economic interests of individuals, entities, or communities
  • the ongoing sustainability of an ecosystem
The data used by the system likely presents moderate levels of risk. This may be because of:
  • the sensitivity of the data (such as the use of personal, non-personal, unclassified or protected information)
  • the use of structured or unstructured data
  • the data collection approach
Level III

The context in which the system is operating likely has high levels of risk associated with it. This may be because of:
  • the identity factors of the clients that may be impacted
  • the line of business and the decision that the system is supporting
  • the type of technology being used
The decision will likely have high, difficult-to-reverse and potentially ongoing impacts on some or all of:
  • the rights of individuals or communities
  • the equality, dignity, privacy, and autonomy of individuals
  • the health or well-being of individuals or communities
  • the economic interests of individuals, entities, or communities
  • the ongoing sustainability of an ecosystem
The data used by the system likely presents high levels of risk. This may be because of:
  • the sensitivity of the data (such as the use of personal or protected information)
  • the use of unstructured data
  • the data collection approach
Level IV

The context in which the system is operating likely has very high levels of risk associated with it. This may be because of:
  • the identity factors of the clients that will be impacted
  • the line of business and the decision that the system is supporting
  • the type of technology being used
The decision will likely have very high, irreversible and perpetual impacts on some or all of:
  • the rights of individuals or communities
  • the equality, dignity, privacy, and autonomy of individuals
  • the health or well-being of individuals or communities
  • the economic interests of individuals, entities, or communities
  • the ongoing sustainability of an ecosystem
The data used by the system likely presents very high levels of risk. This may be because of:
  • the sensitivity of the data (such as the use of personal, protected, or classified information)
  • the use of unstructured or incomplete data
  • the data collection approach
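The four tiers above form an ordered scale: the Algorithmic Impact Assessment questionnaire produces a score, and higher scores map to higher impact levels. The mapping below is a hypothetical illustration of that tiered structure only; the numeric thresholds are invented for this sketch, and the real Algorithmic Impact Assessment tool defines its own scoring.

```python
# Hypothetical illustration of Appendix B's four-tier structure.
# The 25/50/75 percent thresholds are invented for this sketch and are
# not the scoring rules of the actual Algorithmic Impact Assessment tool.

def impact_level(score, max_score):
    """Map a raw assessment score to an impact level, I through IV."""
    ratio = score / max_score
    if ratio < 0.25:
        return "I"
    if ratio < 0.50:
        return "II"
    if ratio < 0.75:
        return "III"
    return "IV"
```

The resulting level then selects the applicable column of requirements in Appendix C (notice, explanation, peer review, and so on).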

Appendix C - Impact Level Requirements

Notice
(sections 6.2.1–6.2.2)

  • Level I: Plain language notice posted through all service delivery channels in use (Internet, in person, mail or telephone).
  • Level II: Same as level I.
  • Level III: Plain language notice posted through all service delivery channels in use (Internet, in person, mail or telephone). In addition, the notice must direct clients to the published explanation required under Explanation, level I.
  • Level IV: Same as level III.
Explanation
(section 6.2.3)

  • Level I: In addition to any applicable legal requirement, ensure that a meaningful explanation is published on how the system works in general. The explanation must be in plain language and include information about:
    • the role of the system in the decision-making process
    • input data, its source and method of collection
    • the criteria used to evaluate input data and the operations applied to process it
    • results of any reviews or audits
    • the output produced by the automated decision system and any relevant information needed to interpret it in the context of the administrative decision
    • the principal factors behind a decision
    This explanation must be made available on a discoverable departmental website and linked in the algorithmic impact assessment. Explanations must also inform clients of relevant recourse options, where appropriate.
  • Level II: The explanation from level I is published. In addition, a more detailed, meaningful, plain language explanation is provided to the client with any decision that results in the denial of a benefit or service, or involves a regulatory action. This explanation must inform the client of the reason or justification for the administrative decision. This involves a clear and client-focused description of how the automated decision system came to the output it did, including:
    • the principal factors that led to it, such as, where appropriate, a description of the decision tree, scoring or weights of certain factors, and
    • how the automated decision system output was used by human officers
    The client must also be provided with the link to the published level I explanation.
  • Level III: Same as level II.
  • Level IV: Same as level II.
Peer review
(section 6.3.7)

  • Level I: None.
  • Level II: Consult at least one of the following qualified experts and publish the complete review or a plain language summary on a Government of Canada website:
    • experts from a federal, provincial, territorial or municipal government institution
    • faculty members of a post-secondary institution
    • researchers from a relevant non-governmental organization
    • a contracted third-party vendor with a relevant specialization
    • a data and automation advisory board specified by Treasury Board of Canada Secretariat
  • Level III: Same as level II.
  • Level IV: Consult at least two of the following qualified experts and publish the complete review or a plain language summary on a Government of Canada website:
    • experts from the National Research Council of Canada, Statistics Canada, the Communications Security Establishment, or Shared Services Canada
    • faculty members of a post-secondary institution
    • researchers from a relevant non-governmental organization
    • a contracted third-party vendor with a relevant specialization
    • a data and automation advisory board specified by Treasury Board of Canada Secretariat
Gender-based Analysis Plus
(section 6.3.8)

  • Level I: None.
  • Level II: Ensure that the Gender-based Analysis Plus has been developed in consultation with your department’s diversity and inclusion experts and includes:
    • an assessment of how the automation project might impact different population groups. This includes consideration of the impacts of the automated decision system and the data used in the project, as well as the likely impact of the final decision. Where possible, cite the data used to assess the impacts. It is recommended that the data be gender-disaggregated and include other intersecting identity factors such as age, disability and race. If the data is unavailable, identify where the data gaps exist.
    • details of planned or existing measures to address risks identified through the Gender-based Analysis Plus or other assessments
  • Level III: Same as level II.
  • Level IV: Same as level II.
Training
(section 6.3.9)

  • Level I: Role-based training on how to appropriately use and explain the functionalities of the automated decision system, at a high level.
  • Level II: Same as level I.
  • Level III: Role-based training on how to appropriately manage, secure, and use the functionalities and explain the capabilities of the automated decision system, including:
    • the technical aspects of the automated decision system, to ensure an up-to-date understanding of how the system works
    • potential impacts of the automated decision system on privacy, fairness, and human rights, and how to detect them
    • how to evaluate and, if necessary, override decisions
    Training must be recurring.
  • Level IV: Same as level III.
Ensuring human involvement
(section 6.3.13)

  • Level I: The system may make decisions and assessments without direct human involvement. Humans are to be involved in system quality assurance and can intervene in the making of decisions by the system where appropriate.
  • Level II: Same as level I.
  • Level III: The final decision must be made by a human. Decisions cannot be made without clearly defined human involvement during the decision-making process. Humans review the decisions or recommendations made by the system for accuracy and appropriateness, are involved in ongoing quality assurance, and can intervene in the making of decisions and assessments made by the system.
  • Level IV: Same as level III.
Approval for the system to operate
(section 6.3.14)

  • Level I: Assistant deputy minister responsible for the program
  • Level II: Same as level I
  • Level III: Deputy head
  • Level IV: Treasury Board

© His Majesty the King in right of Canada, represented by the President of the Treasury Board, 2021,
ISBN: 9780660389394
