
CPPA: problems and criticisms – automated decision making

Canada is planning to revamp its comprehensive privacy law by repealing PIPEDA and enacting Bill C-27, the Digital Charter Implementation Act (“DCIA”), which would enact the Consumer Privacy Protection Act (CPPA), the Personal Information and Data Protection Tribunal Act (PIDTA), and the Artificial Intelligence and Data Act (AIDA). Bill C-27 replaced Bill C-11 (the former draft of the CPPA and PIDTA). While the DCIA attempts to rectify some of the criticisms of Bill C-11, many of the problems remain and new problems have emerged in the new Bill. This blog series addresses some of the more important problems with the DCIA, including issues in the CPPA, PIDTA and AIDA. Prior posts focused on Bill C-27’s preamble and purposes and how the bill fails to meet them; the problems with the “appropriate purposes” override section; the service provider provisions; and the amendments that deal with anonymization and pseudonymization of personal information. This post focuses on the amendments dealing with automated decision making.

After Bill C-11 passed first reading, Adam Goldenberg, Charles Morgan and I did an extensive post titled Using privacy laws to regulate automated decision making summarizing Bill C-11’s provisions and noting certain problems associated with the draft provisions.

Bill C-27 corrects some of the drafting problems with Bill C-11’s provisions dealing with automated decision making. However, problems remain, and new ones have been introduced in the new Bill.

Background and Overview

Our prior blog analyzed the changes to the law dealing with automated decision making proposed in Bill C-11. The following is a summary of that analysis with updates to cover more recent developments. For those interested in further details, I refer you to the prior blog on the topic.

AI technology, while still at a nascent stage, is already pervasive. Its use will undoubtedly spread to touch all manner of products and services. As one of the components driving the Fourth Industrial Revolution, one can expect its ubiquity to grow exponentially. This will impact every sector of society, domestic and world economies, individuals, organizations of all types and governments. Countries that successfully develop and adopt AI based technologies will be winners in the global economy. Countries that lag, including by making poor policy choices, will be the losers.

Regulation of automated decision making under privacy laws

Member states of the European Union have implemented the GDPR provisions that regulate certain automated decision making using AI. Under Article 22(1) of GDPR, data subjects “have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

Recital 71 of the GDPR provides guidance as to what is intended by the phrase “legal effects concerning him or her or similarly significantly affects him or her” by giving the examples “such as automatic refusal of an online credit application or e-recruiting practices without any human intervention” and “to analyse or predict aspects concerning the data subject’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements, where it produces legal effects concerning him or her or similarly significantly affects him or her”.

According to the Article 29 Data Protection Guideline on automated decision making prepared by the EU Working Party (now called the European Data Protection Board or EDPB) which is made up of the EU data protection authorities (the regulators that enforce the GDPR), for processing to significantly affect someone “the effects of the processing must be sufficiently great or important to be worthy of attention. In other words, the decision must have the potential to: significantly affect the circumstances, behaviour or choices of the individuals concerned; have a prolonged or permanent impact on the data subject; or at its most extreme, lead to the exclusion or discrimination of individuals.”

The U.K. Information Commissioner’s Office (ICO) describes the intent in a similar manner:

A decision producing a legal effect is something that affects a person’s legal status or their legal rights. For example when a person, in view of their profile, is entitled to a particular social benefit conferred by law, such as housing benefit.

A decision that has a similarly significant effect is something that has an equivalent impact on an individual’s circumstances, behaviour or choices.

In extreme cases, it might exclude or discriminate against individuals. Decisions that might have little impact generally could have a significant effect for more vulnerable individuals, such as children.

Although the wording of the GDPR limits the obligation to decisions made solely by automated means, it has also been interpreted to apply to situations where there is no meaningful oversight by human beings.

Articles 13 and 14 of the GDPR provide that, where these disclosures are required, data subjects are to be provided with “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”, as “necessary to ensure fair and transparent processing in respect of the data subject”. The disclosure requirement is about intended or future processing, and how the automated decision-making might affect the data subject.

Article 15 of the GDPR, which deals with access to personal data, gives the data subject the “right to obtain from the controller confirmation as to whether or not personal data concerning him or her are being processed, and, where that is the case, access to the personal data and, inter alia, the following information”, including “the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.” Recital 71, which like all recitals in the GDPR is non-binding, states that decisions made solely by automated means “should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision.”

Articles 13, 14 and 15 of the GDPR provide for disclosures of the existence of automated decision-making and for access to information related to such decisions, but limit these disclosures to those referred to in Article 22. While some commentators have suggested that these obligations may also apply to any automated decisions made about data subjects, the wording is quite clear that it applies only to those referred to in Article 22, namely only those decisions made solely by automated means (which may also apply where there is no meaningful human oversight).

There is a heated debate as to whether the access (or explainability) requirement in the GDPR is restricted to a “right of information” (a requirement to provide information explaining how algorithms make decisions affecting individuals) or is a “right of explanation” (a right to information about individual decisions made by algorithms).

Some scholars argue the access provision in the GDPR requires that individuals must be given enough information to be able to understand what they are agreeing to when companies make automated decisions, or to exercise their right to opt out of such decision making.[i] Other scholars suggest a more contextual interpretation, emphasizing a broad transparency and accountability obligation and a systemic governance regime, overseen by data protection authorities, that focuses on multi-layered explanations that must be disclosed to individuals or regulators. While this is sometimes also referred to as a right of explanation, the term is frequently explained as requiring that automated decisions with significant effects must be made ‘legible’ to individuals, in the sense that individuals must be able to understand enough about the decision-making process to be able to invoke their other rights under the GDPR, including the right to contest a decision.[ii]

The Article 29 Data Protection Guideline on automated decision making, prepared by the EU Working Party (now the EDPB), describes the requirement as a transparency obligation that extends to providing general information on the factors used in the decision-making process and their weight on an aggregate level, not a full right to an explanation of the logic involved in specific decisions concerning individuals:

Article 15(1)(h) says that the controller should provide the data subject with information about the envisaged consequences of the processing, rather than an explanation of a particular decision. Recital 63 clarifies this by stating that every data subject should have the right of access to obtain ‘communication’ about automatic data processing, including the logic involved, and at least when based on profiling, the consequences of such processing.

By exercising their Article 15 rights, the data subject can become aware of a decision made concerning him or her, including one based on profiling.

The controller should provide the data subject with general information (notably, on factors taken into account for the decision-making process, and on their respective ‘weight’ on an aggregate level) which is also useful for him or her to challenge the decision.

According to the Working Party Guideline, the level of disclosure or access is to provide meaningful information about the logic involved, not necessarily a complex explanation of the algorithms used or disclosure of the full algorithm.

The U.K. ICO describes the disclosure and access rights under the GDPR as follows:

Does data protection law require that we explain AI-assisted decisions to individuals?

As above, the GDPR has specific requirements around the provision of information about, and an explanation of, an AI-assisted decision where:

  • it is made by a process without any human involvement; and

  • it produces legal or similarly significant effects on an individual (something affecting an individual’s legal status/ rights, or that has equivalent impact on an individual’s circumstances, behaviour or opportunities, eg a decision about welfare, or a loan).

In these cases, the GDPR requires that you:

  • are proactive in “…[giving individuals] meaningful information about the logic involved, as well as the significance and envisaged consequences…” (Articles 13 and 14);

  • “… [give individuals] at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.” (Article 22); and

  • “… [give individuals] the right to obtain… meaningful information about the logic involved, as well as the significance and envisaged consequences…” (Article 15) “…[including] an explanation of the decision reached after such assessment…” (Recital 71).

    The GDPR’s recitals are not legally binding, but they do clarify the meaning and intention of its articles. So, the reference to an explanation of an automated decision after it has been made in Recital 71 makes clear that such a right is implicit in Articles 15 and 22. You need to be able to give an individual an explanation of a fully automated decision to enable their rights to obtain meaningful information, express their point of view and contest the decision.

The GDPR (in Recital 63) also expressly recognizes that the data subject’s access rights “should not adversely affect the rights or freedoms of others, including trade secrets or intellectual property and in particular the copyright protecting the software”.

Most countries internationally have not yet enacted specific obligations under their privacy laws to require transparency or explainability of decisions made using AI.

Under s.12.1 of Quebec’s Bill 64, any person carrying on an enterprise who uses personal information to render a decision based exclusively on automated processing must, at the time of or before the decision, inform the person concerned accordingly. The person must also inform the person concerned, at the latter’s request, (1) of the personal information used to render the decision; (2) of the reasons and the principal factors and parameters that led to the decision; and (3) of the right of the person concerned to have the personal information used to render the decision corrected. The person concerned must also be given the opportunity to submit observations to a member of the personnel of the enterprise who is in a position to review the decision.

Bill C-11 (CPPA v1)

Bill C-11 defined an “automated decision system” as:

“any technology that assists or replaces the judgement of human decision-makers using techniques such as rules-based systems, regression analysis, predictive analytics, machine learning, deep learning and neural nets. (système décisionnel automatisé)”

Under s.62(1), organizations would have been required to make readily available, in plain language, information that explains their policies and practices put in place to fulfil their obligations under the Act. This includes “a general account of the organization’s use of any automated decision system to make predictions, recommendations or decisions about individuals that could have significant impacts on them”.

In addition to this disclosure obligation, Bill C-11 included a new explainability obligation for automated decisions. Under s.63(3), “If the organization has used an automated decision system to make a prediction, recommendation or decision about the individual, the organization must, on request by the individual, provide them with an explanation of the prediction, recommendation or decision and of how the personal information that was used to make the prediction, recommendation or decision was obtained.”

The problems with Bill C-11’s provisions related to automated decision making included the following.

The CPPA definition of “automated decision system” was too broad. As stated in the prior blog, “it is not limited by classes of technology, the products, services, or sectors of the economy that will use it, the nature of the automated decisions that the systems could implicate, the classes of individuals that could be affected by decisions, or nature or the significance of the impacts of the decisions to individuals.” It also pertained to technology where there is a person “in the loop”, even if the human contribution dominated or the oversight was otherwise meaningful.

The (disclosure) transparency and explainability (access) obligations were also broader than those under the GDPR. They were not limited to decisions that “produce legal effects” or “similarly significantly affect” individuals. As noted above, the transparency obligation would apply where “predictions, recommendations or decisions about individuals could have significant impacts on them”. However, the explainability obligation had no such limitation.

The other significant issue was the appropriateness of using privacy laws to essentially regulate the use of automated decision systems. The automated decision system provisions were not truly directed at privacy-related mischief. The goal was primarily to regulate the use of artificial intelligence systems, such as by shedding light on, and giving individuals a means of rectifying, possible algorithmic discrimination, bias, inaccuracy, or other undesirable outcomes, regardless of the degree to which personal information was used. As the prior blog noted, this horizontal regulation of AI could potentially overlap with numerous existing legal and regulatory frameworks, creating the possibility of duplicative compliance burdens imposed by different regulators at different levels of government.

The regulation of automated decision making by the Federal government also raises division of powers constitutional issues.

Bill C-27 (CPPA v2)

Bill C-27 would make several changes to what had been proposed in Bill C-11. While it helps to overcome some problems, it does not go far enough and actually makes the provisions more problematic than those in Bill C-11.

The definition of “automated decision system” was amended. As revised by Bill C-27, it reads:

automated decision system means any technology that assists or replaces the judgment of human decision-makers through the use of a rules-based system, regression analysis, predictive analytics, machine learning, deep learning, a neural network or other technique. (système décisionnel automatisé)

As can be seen (compare the Bill C-11 definition quoted above), the definition of automated decision system has not significantly changed. Notably, it is still incredibly broad and is not limited to systems where decisions are made only by the automated system. Nor does the degree of human oversight play any role in whether the system meets the definition. The use of a 20-year-old spreadsheet algorithm or other tool that even marginally contributes to a decision would be enough to make the tool an automated decision system because of the inclusion of the words “or other technique” in the definition. It is so broad that it would be virtually impossible for many organizations to comply with the new rules. They likely would not even know where to start.
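To make the breadth concrete, consider a minimal, hypothetical sketch (the field names, thresholds and scoring rules below are invented for illustration and do not reflect any real organization’s process). Even a trivial, spreadsheet-style triage rule that merely assists a human reviewer would appear to fall within the definition, since it “assists or replaces the judgment of human decision-makers through the use of a rules-based system … or other technique”:

# Hypothetical illustration only: a trivial, spreadsheet-style scoring rule.
# The thresholds and field names are invented. A human officer makes the final
# call, yet the tool still "assists the judgment of human decision-makers
# through the use of a rules-based system ... or other technique" and so would
# appear to meet the CPPA definition as drafted.

def preliminary_triage(applicant: dict) -> str:
    """Return a rough triage label that a human reviewer then considers."""
    score = 0
    if applicant.get("annual_income", 0) > 50_000:
        score += 2
    if applicant.get("years_at_current_job", 0) >= 3:
        score += 1
    if applicant.get("missed_payments_last_year", 0) == 0:
        score += 2
    return "refer for approval" if score >= 4 else "refer for closer review"

if __name__ == "__main__":
    print(preliminary_triage({"annual_income": 62_000,
                              "years_at_current_job": 5,
                              "missed_payments_last_year": 0}))
    # -> "refer for approval" (still subject to human judgment)

Nothing in the revised definition appears to turn on how sophisticated the technique is, or on how much weight the human reviewer ultimately gives it.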

If the government is truly trying to put practical disclosure and explainability rules in place for AI systems, it could limit the definition to technologies with no human in the loop, or at least revise it to be consistent with the interpretation given to Art. 22 of the GDPR, which would include situations where there is no meaningful human oversight.

There was no appreciable change to the requirement to disclose that an automated decision system is being used.

(c) a general account of the organization’s use of any automated decision system to make predictions, recommendations or decisions about individuals that could have a significant impact on them;

This disclosure requirement applies much more broadly than under the GDPR and Quebec’s Bill 64 because it covers “predictions, recommendations or decisions” (and not only decisions) and because its scope is much broader, targeting those that “could have a significant impact” on individuals (and not, as in the EU, only those that have legal or similarly significant effects on them).

Bill C-27 makes some changes in the provisions dealing with explainability.

Automated decision system

(3) If the organization has used an automated decision system to make a prediction, recommendation or decision about the individual that could have a significant impact on them, the organization must, on request by the individual, provide them with an explanation of the prediction, recommendation or decision and of how the personal information that was used to make the prediction, recommendation or decision was obtained.

Explanation

(4) The explanation must indicate the type of personal information that was used to make the prediction, recommendation or decision, the source of the information and the reasons or principal factors that led to the prediction, recommendation or decision.

The changes to these provisions are significant.

First, the access or explainability obligation would now apply only to a prediction, recommendation or decision about the individual “that could have a significant impact on” them. Even with this change, the proposed law, as with the disclosure obligation, is more onerous than under the GDPR because it would potentially apply to a much broader range of technologies (those that assist, as opposed to purely automated ones), to predictions, recommendations or decisions (and not just decisions, or the narrower subset of circumstances caught by the GDPR), and to those that have significant impacts, without the GDPR’s limitation confining the scope of this obligation to decisions with legal or similar effects on individuals.

Second, the access or explainability obligation has been broadened. Under C-11, the explanation was tied to an explanation of how personal information was used, something that attempted to tether the explainability obligation to privacy interests.

C-27 would require an explanation of the type of personal information that was used and the source of the information. But Bill C-27’s explainability obligation now also includes “the reasons or principal factors that led to the prediction, recommendation or decision”. This is focused on why a particular decision, recommendation or prediction was made. In that respect it also goes much further than the GDPR obligation (as interpreted by the Working Party) to provide only an explanation of the functionality of the automated decision technology and the aggregate factors that are generally weighed to achieve a result.
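The practical difference between the two standards can be illustrated with a minimal sketch, assuming (purely for illustration) a simple linear scoring model; the feature names and weights below are invented, and real AI systems are rarely this simple, which is part of why per-decision explanations are hard to produce precisely. The first function returns the kind of aggregate-level disclosure the Working Party describes (the factors considered and their relative weight, the same for every individual); the second produces the per-decision “reasons or principal factors” that Bill C-27 appears to contemplate:

# Hypothetical sketch contrasting the two levels of explanation. The feature
# names and weights are invented for illustration only.

MODEL_WEIGHTS = {
    "annual_income": 0.5,
    "years_at_current_job": 0.2,
    "missed_payments_last_year": -0.8,
}

def aggregate_disclosure() -> dict:
    """GDPR/Working Party style: the general factors and their weight,
    on an aggregate level; identical for every individual."""
    return dict(MODEL_WEIGHTS)

def per_decision_explanation(applicant: dict) -> list:
    """Bill C-27 style: each factor's contribution to this particular
    prediction, ranked by magnitude, i.e. the 'principal factors' behind
    one specific decision."""
    contributions = {name: weight * applicant.get(name, 0.0)
                     for name, weight in MODEL_WEIGHTS.items()}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

if __name__ == "__main__":
    applicant = {"annual_income": 0.4,            # normalized, invented values
                 "years_at_current_job": 0.9,
                 "missed_payments_last_year": 1.0}
    print(aggregate_disclosure())                 # same answer for everyone
    print(per_decision_explanation(applicant))    # specific to this individual

Even in this toy case the two outputs differ; with opaque models the gap between what can be disclosed in aggregate and what can be meaningfully explained about a single decision only widens.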

The government may believe that the departures from the GDPR are mere clarifications or incremental, but that belief would be unfounded. The changes, individually and in combination, are significant.

The difference in scope from the GDPR access requirement is substantial. As noted above, the GDPR deals with legal or similarly significant effects. There is no such limitation in Bill C-27. In theory, it could therefore apply to a range of other technologies that make decisions that could significantly affect individuals, including decisions made by technologies that control or are used in automated vehicles, robots, fraud detection, cybersecurity, medical devices, technologies that automate legal practices such as e-discovery tools, insurance and loan underwriting, smart home technologies, and stock market trading platforms, to name just a few.

What is also clear is that the categories of technologies and decisions that could be made by them are open ended. This will leave organizations with no easy way to assess the situations in which the disclosure and explainability obligations apply. Each of the decisions, predictions, or recommendations made by these types of AI systems could produce significant effects far beyond “legal” effects or those that are significant and similar to legal effects. Some could have significant, direct or indirect, health, safety, economic, and human rights effects, as examples.

It makes little sense for these broad horizontal disclosure and explainability requirements to apply across the board without any recognition of the nuances associated with these particular industries, sectors, or domains. This horizontal approach to regulating decisions made by automated systems could lead to an overlap and lack of coordination between different regulatory bodies. It could lead to over-regulation and reduce competitiveness critical to the effective commercialization of AI products. This problem was identified in the recent U.K. Department for Digital, Culture, Media and Sport publication, Establishing a pro-innovation approach to regulating AI: An overview of the UK’s emerging approach:

The proliferation of activity; voluntary, regulatory and quasi-regulatory, introduces new challenges that we must take action to address. Examples include:

  • A lack of clarity. Stakeholders often highlight the ambiguity of the UK’s legal frameworks and application of regulatory bodies to AI, given these have not been developed specifically with AI technologies and its applications in mind. The extent to which UK laws apply to AI is often a matter of interpretation, making them hard to navigate. This is particularly an issue for smaller businesses who may not have any legal support.

  • Overlaps. Stakeholders also note the risk that laws and regulators’ remits may regulate the same issue for the same reason and this can exacerbate this lack of clarity. This could lead to unnecessary, contradictory or confusing layers of regulation when multiple regulators oversee an organisation’s use of the same AI for the same purpose.

  • Inconsistency. There are differences between the powers of regulators to address the use of AI within their remit as well as the extent to which they have started to do so. AI technologies used in different sectors are therefore subject to different controls. While in some instances there will be a clear rationale for this, it can further compound an overall lack of clarity.

Under the proposed CPPA provisions, organizations could face overlapping regulation with different disclosure and explainability standards including disparate standards set by provinces or federal bodies. By way of example only, autonomous vehicles and alternative trading systems are regulated under provincial and territorial laws. Medical devices are regulated federally. Decisions made by autonomous systems acting alone or in combination may trigger Competition Act violations that could have significant effects on individuals and could be investigated by the Competition Bureau. Significant decisions that result in discrimination could also be subject to regulation under provincial or federal human rights laws. These existing regimes often impose very specific requirements for the protection of the public based on carefully calibrated rules designed for the industry sector or interest being regulated.

The departure from the GDPR requirement (under which information is provided about how decisions are made generally, rather than about how any particular decision is actually made) will also be challenging to comply with. It is well known that decisions made using AI are often the result of complex logic and processing that cannot, or cannot easily, be meaningfully or intelligently explained precisely. Regulation that requires this, to the extent it can be accomplished at all, favours larger firms that can make the up-front investments to try. This can lead to slower progress and growth and fewer hometown success stories. The government should be extraordinarily cautious about introducing a legal standard that cannot be met, or cannot always be met, or cannot be meaningfully met with precision, or can only be met with up-front investments that only larger firms can afford.

The new CPPA would also depart from privacy interests by requiring an explanation of the reasons or principal factors that led to the prediction, recommendation or decision. This change untethers the provision from its privacy anchor and is a direct attempt to regulate a component of artificial intelligence systems. While there may be benefits to this enhanced requirement, this policy choice should be part of a much larger discussion about how the regulation of AI should be approached.

There are also significant constitutional issues associated with the federal government regulating what amounts to a consumer protection law, something very arguably within provincial jurisdiction.

The scope of what must be disclosed to individuals under the explainability amendment is also unclear, leaving organizations with a risk that information about their proprietary algorithms or computer code might need to be disclosed to individuals or to the Commissioner. Unlike the GDPR, the CPPA contains no guardrails that would prevent the disclosure of trade secrets in proprietary algorithms.

There are also no guardrails that would protect organizations from a requirement to make detailed explanations that could undermine the security or integrity of products. (The potential breadth of the access provisions could also potentially trigger an allegation that this provision violates Article 19.16 (Source Code) of CUSMA.)

There is no doubting the desirability of establishing a clear regulatory framework that facilitates disclosures to members of the public of when an AI system is being used to make significant decisions about them, or at least certain types of decisions. However, the rules should be clear (especially taking into account the steep penalties and new order making powers under the CPPA) and be consistent with those of our trading partners. Unclear rules and rules that depart in a material way from those of our major trading partners (including the U.S., EU, and Japan) and rules that result in overlapping and conflicting regulation will likely inhibit investment and innovation in this country.

The rules for regulating AI (including automated decision making) should not inhibit innovation or the adoption of AI technologies. Regulation of AI technologies per se can impede innovation, particularly in the shorter term, by increasing the cost of entry into markets and distorting competition. Unnecessary and burdensome regulations not only create barriers to entry, they also limit the ability of firms to innovate and to capture the social benefits of AI. Regulation that requires companies to invest time and resources in order to comply takes away from the resources devoted to innovation. Companies may also be hesitant to make investments in AI given the risk of compliance failure.

But, this is exactly what the automated decision making provisions in Bill C-27 do.

It is unclear why the government’s policy towards privacy law amendments in Bill C-27 (including the provisions dealing with automated decision making) favours enacting laws more stringent than those of any of our trading partners. It may rest on the assumption that if strong privacy laws are good then even stronger privacy laws are better, and that the strongest privacy laws promote innovation and involve no zero-sum trade-offs. But, even if privacy is viewed as a fundamental right, this assumption is incorrect. As in other areas, as the Privacy Commissioner recently pointed out in a speech, “we can and must also have privacy while fostering the public interest and innovation” and, “as in so many things, we must reject extremes in either direction”.

The automated decision making provisions in Bill C-27 are yet further examples of choices being made to enact a privacy law for Canada that is at odds not only with international standards, but even with the most onerous standards set by the GDPR. If the government is firmly intent on regulating disclosure and explainability for automated decisions, a more prudent approach would be to align Canadian standards with the disclosure and access standards required in the EU. If, and to the extent, these standards evolve, Canada could re-amend the law to keep it consistent with them. Such an approach would have the added benefit of strengthening the OPC’s audit and enforcement jurisdiction by enabling it to assess compliance with disclosures made generally to individuals, which would also be consistent with the approach under the GDPR. Alternatively, Canada and the provinces (which have strong claims to constitutional jurisdiction over many aspects of the regulation of AI) could establish a coordinated approach to this issue, much as they did when enacting e-commerce and electronic evidence rules over 20 years ago.

I recommend that the Bill C-27 provisions dealing with automated decision making be amended as follows, if the government is intent on keeping these provisions in the Bill rather than moving them to AIDA to be considered as part of a larger study on the regulation of AI.

Recommend: That the definition of “automated decision system” be amended to apply to technology that replaces the judgment of human decision-makers and those where the human oversight is not meaningful.

Recommend: That the disclosure and explainability obligations be confined to decisions with legal or similarly significant effects. As an alternative, the obligations could be limited to decisions with legal effects and other significant similar or other effects prescribed by regulation.

Recommend: That the disclosure and explainability obligations be limited to explaining how the individual’s personal information was used to make the decision. The provisions should remain tied to privacy law. Alternatively, the obligation should be limited to providing the individual with information on factors taken into account for the decision-making process, and on their respective ‘weight’ on an aggregate level.

Recommend: The disclosure and explainability obligations should be subject to the reasonable protection of trade secrets and other confidential information in algorithms and the source code of algorithms. S.20(1) (third party information) of the Access to Information Act may be a useful starting point for any such restrictions.

This article was first posted on www.barrysookman.com

_______________

[i] See, for example, Andrew Selbst et al (Meaningful information and the right to explanation)

The ‘right to explanation’ debate has, in part, so captured imaginations because it is knotty, complex, and a non-trivial technical challenge to harness the full power of ML or AI systems while operating with logic interpretable to humans. This issue has drawn immense interest from the technical community. There is also rapidly increasing interest from a legal perspective, with a number of scholars beginning to explore both the importance of explanation as a normative value within the ML or AI context, as well as whether there is a requirement for explanation as a matter of positive law.

The legal debate so far has concerned a conception of the right oddly divorced from the legislative text that best seems to support it. The most prominent contributions are two explosive papers out of Oxford, which immediately shaped the public debate. The first paper, by Bryce Goodman and Seth Flaxman, asserts that the GDPR creates a ‘right to explanation’, but does not elaborate much beyond that point. The second paper, from Sandra Wachter, Brent Mittelstadt, and Luciano Floridi, asserts that no such right presently exists. It does so by unnecessarily constraining the idea of the ‘right to explanation’, while conceiving of a different ‘right to be informed’ that amounts to a right to a particular type of explanation.

The Selbst paper suggests that the GDPR provides “rights to ‘meaningful information about the logic involved’ in automated decisions”, but does not come to a conclusion as to whether the logic must explain how the AI system works or specifically the logic of how the system treats the data subject.

Also, Bart Custers (The right of access in automated decision-making: The scope of article 15(1)(h) GDPR in theory and practice)

These provisions in the GDPR have triggered heavy debate amongst legal experts. Several scholars have interpreted these provisions (chiefly Article 22 GDPR) as a ‘right to explanation’, arguing that these provisions effectively create a right for data subjects to ask for an explanation of an algorithmic decision that concerned them. Others have argued that these provisions are actually quite limited, concluding that there is no such right to explanation. Finally, some scholars suggested a more contextual interpretation, suggesting that these provisions can actually provide data subjects with more transparency and accountability. The WP29 also seems to take this latter view in its guidelines on profiling and automated decision-making.

He also does not suggest that the GDPR provides a right of explanation concluding that it “seems to point to providing useful information, i.e., useful for data subjects in enabling them to properly assess whether they want to exercise their data subject rights”.

[ii] See, for example, Professor Kaminski et al Algorithmic impact assessments under the GDPR: producing multi-layered explanations

“The core debate has primarily focused on whether or not Article 22 creates an ex post right to explanation of an individual decision made by an automated system. Our view, discussed at length by each of us elsewhere, is that it does. Automated decisions with significant effects must be made ‘legible’ to individuals, in the sense that individuals must be able to understand enough about the decision-making process to be able to invoke their other rights under the GDPR, including the right to contest a decision.”

Also, Professor Kaminski The Right to Explanation, Explained

“Therefore, while individuals need not be provided with source code, they should be given far more than a one-sentence overview of how an algorithmic decision-making system works. They need to be given enough information to be able to understand what they are agreeing to…And it must provide enough information that an individual can act on it—to contest a decision, or to correct inaccuracies, or to request erasure.”

“Individuals should be told both the categories of data used in an algorithmic decision-making process and an explanation of why these categories are considered relevant. Moreover, they should be told the “factors taken into account for the decision making process, and . . . their respective ‘weight’ on an aggregate level . . . .” They should be told how a profile used in algorithmic decision-making is built, “including any statistics used in the analysis[,]” and the sources of the data in the profile. Lastly, companies should provide individuals an explanation of why a profile is relevant to the decision-making process and how it is used for a decision.”

“The GDPR sets up a system of “qualified transparency” over algorithmic decision-making that gives individuals one kind of information, and experts and regulators another. This multi-pronged approach to transparency should not be dismissed as lightly as some have done. There is an individual right to explanation. It is deeper than counterfactuals or a shallow and broad systemic overview, and it is coupled with other transparency measures that go towards providing both third-party and regulatory oversight over algorithmic decision making. These transparency provisions are just one way in which the GDPR’s system of algorithmic accountability is potentially broader, deeper, and stronger than the previous EU regime.”

 

