Analysing the Right to Explanation under Article 86 of the EU AI Act

Keketso Kgomosotho

Text of Article 86 of the Act

“1. Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III, with the exception of systems listed under point 2 thereof, and which produces legal effects or similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights shall have the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.

2. Paragraph 1 shall not apply to the use of AI systems for which exceptions from, or restrictions to, the obligation under that paragraph follow from Union or national law in compliance with Union law.

3. This Article shall apply only to the extent that the right referred to in paragraph 1 is not otherwise provided for under Union law.”

 

Relevant Recitals 

  1. Recital 93 emphasises the role of deployers in mitigating risks associated with high-risk AI systems, particularly because of their “more precise knowledge of the context of use, the persons or groups of persons likely to be affected, including vulnerable groups.” They are best placed to identify and address potential risks unforeseen by providers during development. This recital also underscores the deployer’s critical role in informing individuals about the use of high-risk AI systems and their right to an explanation, particularly in law enforcement contexts: “The deployer should also inform the natural persons about their right to an explanation provided under this Regulation. With regard to high-risk AI systems used for law enforcement purposes, that obligation should be implemented in accordance with Article 13 of Directive (EU) 2016/680.”
  2. Recital 171 provides that “Affected persons should have the right to obtain an explanation where a deployer’s decision is based mainly upon the output from certain high-risk AI systems that fall within the scope of this Regulation and where that decision produces legal effects or similarly significantly affects those persons in a way that they consider to have an adverse impact on their health, safety or fundamental rights. That explanation should be clear and meaningful and should provide a basis on which the affected persons are able to exercise their rights. The right to obtain an explanation should not apply to the use of AI systems for which exceptions or restrictions follow from Union or national law and should apply only to the extent this right is not already provided for under Union law.”

 

Comparable Provisions in Other International Instruments

  • Article 13(2)(f), and Article 22(1) and (4) of the GDPR
 
 Introduction

Article 86 of the Act establishes the “Right to Explanation,” aimed at addressing several fundamental concerns associated with the deployment of high-risk artificial intelligence (AI) systems. These concerns include the opacity of their decision-making processes and the need for transparency and accountability in algorithmic decision-making. This “crucial”[1] right is a cornerstone of the Act’s approach to ensuring transparency and accountability in AI systems, empowering individuals to understand and even challenge algorithmic decisions that produce a legal or similarly significant impact. As the UN Special Rapporteur on Privacy makes clear, explainability “makes it possible for [individuals] to exercise their rights, such as the right to due process and to a defence when faced with decisions made using artificial intelligence tools or technologies.”[2] It is designed to ensure that individuals subject to decisions based on high-risk AI systems can obtain a clear and meaningful explanation of how the AI system contributed to the decision and the key factors that led to the outcome. Further, as Kim and Routledge argue, “a right to (ex post) explanation uniquely helps to complete informed consent” in the context of algorithmic decision-making, “especially for secondary, noncontextual, and unpredictable uses.”[3]

The right is not absolute. It is subject to limitations and exceptions left open for AI systems used in the contexts of law enforcement and national security. Moreover, as we will explore further, the right sits alongside the equally important obligation to respect confidentiality in the implementation of the Act, which is designed to protect businesses’ intellectual property and trade secrets. This chapter follows the structure of the text of Article 86. It provides a close reading of the deployer’s obligation to provide an explanation and describes the elements of such an explanation. Finally, and to provide context, the right to explanation is considered in its relationship to confidentiality and intellectual property protections.

Relationship between Article 86 and other Principles and Provisions of the Act

The right to explanation in the AI Act is grounded in existing EU legal frameworks and in the principles of transparency and accountability in EU and human rights law[4] generally. Explainability is concerned with the capacity to “explain both the technical processes of an [artificial intelligence] system and the related human decisions (e.g. application areas of a system).”[5]

The United Nations Educational, Scientific and Cultural Organization (UNESCO) clarifies in its Recommendation on the Ethics of Artificial Intelligence that “transparency and explainability relate closely to adequate responsibility and accountability measures, as well as to the trustworthiness of artificial intelligence systems.”[6] The EU High-Level Expert Group on AI reiterates this, noting that “transparency is closely linked with the principle of explicability” in the context of AI.

The right at Article 86 of the Act evolved most directly from the GDPR,[7] which, at Article 13(2)(f), requires that data subjects be provided with “meaningful information about the logic involved” in automated decision-making. That provision also requires that data subjects be informed about “the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.”[8] This requirement that “meaningful information about the logic involved” be made available to data subjects has led to it being characterised as establishing different rights, including a “right to information,” a “right to be informed,”[9] or, more commonly, a “right to explanation.”[10] At the same time, the discourse is divided on whether these requirements cumulatively produce a right to explanation, as opposed to the narrower right to be informed. Some authors, however, hold the view that the distinction is immaterial and rests on mere semantics. Article 86 of the Act uses the language of a right to explanation, while remaining substantively similar to the requirements under the GDPR. To that end, recourse must be had to the interpretation of the right to an explanation under the GDPR. The AI Act seems merely to transplant the right to explanation from the GDPR into the specific context of high-risk AI algorithmic decision systems, while retaining its substantive scope.[11]

As in other regulatory areas, regulators of AI systems face the delicate task of striking a balance between multiple competing and conflicting rights and interests. On the one hand, the Act seeks to foster the advancement and uptake of AI technologies and to promote trustworthy innovation. On the other hand, the Act also seeks to protect fundamental rights. While these two objectives are not always mutually exclusive, the context of AI introduces significant trade-offs, with high stakes on either side.

Specifically, the AI Act must balance the interplay between transparency and confidentiality. This commentary considers the tension between Article 86, which enshrines the right to an explanation for decisions made by high-risk AI systems, and Article 78, which imposes a broad obligation to respect confidentiality. The right to explanation is a welcome progression in the legal governance of AI and a cornerstone of the Act’s commitment to transparency and individual autonomy. As clarified by the EU High-Level Expert Group on AI, it empowers persons affected by a decision based on the outputs of a high-risk AI system to understand and seek redress for decisions that negatively affect them.[12]

I submit that confidentiality as formulated at Article 78 of the Act limits the scope and effectiveness of the right to explanation at Article 86 of the AI Act. As we explore in the commentary to Article 78 of the Act, its emphasis on protecting confidential business information, as evidenced by the provisions of Article 78 and other related articles, creates a tension between transparency and the interests of businesses. While these protections are necessary to foster innovation, they should not come at the expense of individuals’ right to an explanation. The Act attempts to balance these competing interests by allowing for exceptions to confidentiality obligations in certain circumstances, such as when disclosure is necessary to protect fundamental rights or reveal misconduct. However, the effectiveness of this approach in practice remains to be seen.

The right to an explanation takes transparency a step further, not only by making the decision-making process accessible and visible, but also by ensuring that the rationale behind specific decisions can be communicated in terms understandable to non-expert affected persons.[13]

Right to explanation under Article 86 of the EU AI Act

Article 86 of the AI Act establishes a right to an explanation in the context of decisions taken on the basis of the outputs of high-risk AI systems. At Article 86(1), it provides that “any affected person” who has been subject to a “decision which is taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III” has “the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.”

The right to an explanation for individual decision-making is made necessary by the opacity and complexity of AI systems.[14] One of the major technical and legal challenges at the heart of AI discourse is the inscrutable, opaque manner in which such algorithms function.[15] This is referred to as the “black box” problem,[16] whereby the intricate workings and decisional logic of complex AI systems remain obscured even as they make increasingly consequential decisions in society.[17]

This technical opacity is a multifaceted issue, reflecting the complexity of the algorithms involved, the strategic choices made by their developers and the imperative to protect trade secrets.[18] As regards the latter, algorithmic opacity is occasioned by intellectual property protections against disclosure of proprietary code, reinforced by confidentiality obligations. Companies often consider the specific design and functioning of their AI decision systems a protectable competitive advantage, leading to a reluctance to disclose detailed information about how these algorithms operate.[19] While confidentiality and intellectual property protections are understandable from an AI business perspective, they significantly limit the ability to provide meaningful explanations to the subjects of decisions, and curtail effective oversight, external scrutiny and validation, further entrenching the opaqueness of these technologies beyond their technical opacity.[20]

To the point of this contribution, this opacity reveals an inherent tension between the competing principles of transparency and explainability on the one hand, and confidentiality and the protection of intellectual property on the other. The two must be reconciled through compromise. The question, then, is whether the AI Act strikes an equitable balance between these competing interests and principles, so as to achieve its stated objective of promoting “the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, …”[21] A careful examination of the language of the Act provides a useful starting point.

Article 86(1)

Article 86(1) of the AI Act provides that “any affected person” who has been subject to a “decision which is taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III” has “the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.” The list of high-risk AI systems at Annex III is sufficiently broad to cover contexts where AI decision systems might result in risk or harm. It includes systems used in the administration of justice and democratic processes; migration, asylum and border control management; law enforcement; access to and enjoyment of essential private services and essential public services and benefits; employment, workers’ management and access to self-employment; education and vocational training; and biometrics, to the extent that their uses are permitted under relevant Union or national law.[22]

  1. “affected person”

This right vests in any “affected person.” While the Act does not define “affected person,” it is clear from the text that the term refers only to natural persons, to the exclusion of legal persons or entities. The use of “individual” in the Article’s heading, as well as the requirement that the decision impact the person “in a way that they consider to have an adverse impact on their health, safety or fundamental rights,” suggests that the right vests only in natural persons, who are capable of holding fundamental rights and interests in health and safety as understood in the context of the Act.[23] Therefore, an individual denied social benefits or a job applicant rejected by a high-risk AI-powered recruitment tool, for example, would be entitled to an explanation under the Act. Conversely, legal entities that have been subject to a decision taken on the basis of a high-risk decision support system will not be entitled to an explanation, even where the decision impacts their legal rights in a way they consider adverse to their business interests.

  2. “A decision”

The right to an explanation applies only where there is a “decision.” While the Act does not define what constitutes a “decision,” the context suggests that it refers to an algorithmic decision, that is, a decision taken by the deployer “on the basis of the output from a high-risk AI system.” I will refer to this decision-making mechanism as algorithmic decision-making (ADM), which is understood in academic discourse and soft law literature as referring to decisions produced by AI algorithms or based on algorithms.[24] The ADM process, in my view, comprises three elements: (1) a decision; (2) taken by AI algorithmic means (i.e. with or without any human involved in the decision-making process); (3) which produces legal effects or similarly significant effects on the subject/individual (more on this later).

The European Law Institute defines ADM as “a (computational) process, including AI techniques and approaches, that, fed by inputs and data received or collected from the environment, can generate, given a set of pre-defined objectives, outputs in a wide variety of forms (content, ratings, recommendations, decisions, predictions, etc).”[25] Others define it as a “process involving algorithms, or sequences of logical, mathematical operations, to implement policies by software.”[26] Understood in this way, Article 86(1) of the Act applies only to those decisions, that is, those determinations, conclusions or resolutions, which are taken on the basis of the outputs of an algorithmic decision-making system, in other words, in an algorithmic decision-making process involving AI algorithms or sequences of logical, mathematical operations to implement policies by software.

I must pause here to distinguish the broader concept of algorithmic decision-making, which is the subject of Article 86, from automated decision-making and profiling as addressed under the GDPR. As noted above, ADM refers to the process of using algorithms to analyse data and make decisions, with varying levels of human involvement.[27]

Automated decision-making, on the other hand, which is often used interchangeably with ADM, specifically denotes the absence of human intervention in the decision-making process. Essentially, it comprises three elements: “(1) a decision; (2) taken solely by automated means (i.e. without any human involved in the decision-making process); (3) which produces legal effects or similarly significant effects on the data subject.”[28] As such, automated decision-making is but one form of ADM, to the extent that it entails the use of algorithms and computer systems to make decisions with limited or no human input after the system has been designed and deployed. Article 22 of the GDPR refers to this category of decisions as “a decision based solely on automated processing.” The key distinction lies in the level of human involvement. While algorithmic decision-making under the AI Act encompasses a wide range of human intervention levels, automated decision-making under the GDPR specifically refers to AI decision systems that operate with minimal or no human intervention after their initial setup and deployment. Accordingly, Article 22 of the GDPR creates a right for data subjects not to be “subject to a decision based solely on automated processing, including profiling.”

Profiling is often part of both ADM and automated decision-making processes; as such, it can be seen as a subset or a component of both.[29] The GDPR defines it as “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.”[30]

It entails the analysis and categorisation of individuals based on collected and generated data, which can then inform broader decision-making processes, regardless of the system’s level of human involvement.[31] Profiling does not itself make any decisions; rather, it is the insights derived from profiling that inform both algorithmic and automated decision-making.[32] Consider the example of a bank using a computer system to determine eligibility for a loan. In algorithmic decision-making, the bank sets up a series of rules or algorithms based on financial data, such as income, debt and savings, to assess applications. This process might still involve bankers manually reviewing outputs or adjusting parameters. Automated decision-making occurs when the bank’s system autonomously evaluates loan applications against these algorithms, without human intervention, to approve or reject loans instantly on the basis of the data. Profiling comes into play when the bank analyses an individual’s spending habits, transaction history and even social media information to create a detailed financial profile. This profile could then inform both the algorithmic setup and the automated system’s decisions, affecting the likelihood of loan approval.
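
To make the distinction more tangible, the bank example above can be restated as a short, purely illustrative sketch in Python. All of the data, factors, thresholds and function names below are hypothetical and deliberately simplified; the sketch describes no real system and no requirement of the Act, but it shows where profiling ends, where the algorithmic output begins, and how the presence or absence of meaningful human review separates broader algorithmic decision-making from fully automated decision-making.

```python
# Purely illustrative sketch of the bank example; all names, data and thresholds
# are hypothetical and simplified, not a description of any real system.
from dataclasses import dataclass

@dataclass
class Profile:
    applicant: str
    avg_monthly_income: float
    debt_ratio: float
    risky_spending_flag: bool

def build_profile(applicant: str, transactions: list[dict]) -> Profile:
    """Profiling: evaluating personal aspects (income, debt, spending) from data.
    The profile itself decides nothing; it only feeds later decisions."""
    income = sum(t["amount"] for t in transactions if t["type"] == "salary") / 12
    debt = sum(t["amount"] for t in transactions if t["type"] == "loan_repayment")
    risky = any(t["category"] == "gambling" for t in transactions)
    return Profile(applicant, income, debt / max(income, 1), risky)

def score(profile: Profile) -> float:
    """The algorithmic component: a pre-defined rule turning the profile into an output."""
    s = 0.6 * min(profile.avg_monthly_income / 5000, 1.0)
    s -= 0.3 * profile.debt_ratio
    s -= 0.2 if profile.risky_spending_flag else 0.0
    return s

def automated_decision(profile: Profile) -> str:
    """Automated decision-making: the output alone determines the outcome,
    with no human involvement after deployment."""
    return "approve" if score(profile) > 0.4 else "reject"

def algorithmic_decision(profile: Profile, officer_override: str | None = None) -> str:
    """Algorithmic decision-making more broadly: the system recommends,
    but a human officer may meaningfully review and override."""
    recommendation = automated_decision(profile)
    return officer_override if officer_override is not None else recommendation

if __name__ == "__main__":
    txs = [{"type": "salary", "amount": 42000, "category": "income"},
           {"type": "loan_repayment", "amount": 900, "category": "debt"},
           {"type": "purchase", "amount": 60, "category": "gambling"}]
    p = build_profile("applicant-001", txs)
    print("profile:", p)
    print("fully automated outcome:", automated_decision(p))
    print("outcome with human review:", algorithmic_decision(p, officer_override="approve"))
```

On this sketch, the fully automated path turns the score directly into an outcome, whereas the broader algorithmic path leaves room for the credit officer to depart from the recommendation; it is that difference in human involvement, rather than the underlying profile or score, that distinguishes the two categories.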

A decision in the context of Article 86, then, is the outcome of a process in which data and algorithmic systems evaluate information to arrive at a conclusion, with or without direct human intervention. The decision is significant in that it encapsulates not only the final action taken by the system, such as approving a loan application, identifying a potential security threat or personalising a digital advertisement, but also the process by which this conclusion is reached. This includes the selection of data, the design of the algorithm and the criteria set for making the decision.

  3. “By the deployer”

The decision must be taken by the deployer. This part is rather straightforward. A deployer, as defined at Article 3 of the Act, is “a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.”[33] This definition encompasses a wide range of actors who utilise AI systems within their operations, including businesses, government agencies and other organisations. However, it excludes individuals using AI systems for personal, non-professional purposes. Notably, however, while the deployer may be liable to satisfy requests for explanations, the allocation of legal responsibility becomes less clear when the matter graduates to a claim.

  4. “On the basis of”

The decision must be taken “on the basis of” the output from a high-risk AI system. This raises the question: when is a decision taken “on the basis of” a high-risk AI decision support system? The determination of whether a decision is a human or an algorithmic decision is a factual question, based on the varying significance or materiality of AI in a given decision-making process. In other words, it depends on the extent to which AI algorithms are employed in a given decision-making process.[34] In practice, this takes various forms and arrangements, ranging from systems that merely support the human decision-maker by making a recommendation, to fully automated decision systems that can autonomously make or implement a decision on behalf of an organisation or institution, with limited or no human involvement.[35] Of course, there are instances where the significance of AI in the decision-making process is manifestly evident, as is the case where all key aspects of the process are automated, or where human involvement in the process is merely token, without meaningful interaction with or influence on the decision-making.[36]

In other circumstances, the materiality of AI might be more difficult to determine, as it is influenced by the specific context in which the decision-making process occurs.[37] Consider, for instance, the use of AI to produce a recommendation upon which a human decision-maker then bases their final decision. Generally, this would constitute a significant use of AI, in contrast, for example, to a human decision-maker employing a sophisticated word-processing program developed with AI merely to document their decision. Although each entails the use of sophisticated AI algorithms, the materiality of these algorithms to the final decision varies.

As such, the determination of whether a decision is based on the output of an AI algorithmic system must be made on a case-by-case basis and will depend on the extent to which the AI algorithms play a material role in reaching the particular decision, i.e., the extent to which the decision-making process has been automated. For instance, a decision will nonetheless remain an algorithmic decision where a human is in the loop only nominally, without meaningful impact or influence on the final decision. In practice, this will no doubt be a complex factual question to answer. Given the Act’s lack of specificity on when this threshold is met, the courts or the Commission will likely need to provide much-needed guidance on how to interpret this requirement in practice.

  5. “legal effects or similarly significantly affects that person”

Algorithms are employed in various domains to make decisions that carry varying impacts and produce varying effects on individuals.[38] On the lower end, algorithms are used to make and support low-impact decisions, such as the best route to take to a destination, which emails are spam, which Netflix show to watch, or what product to buy next. On the opposite end of the spectrum, AI algorithms are also used to make decisions with a ‘legal or similarly significant effect’ for individuals, such as in the contexts listed at Annex III. Accordingly, the scope of Article 86 is limited to those decisions that produce legal effects or similarly significantly affect the person concerned. The AI Act adopts the term ‘legal or similarly significant effect’ from Article 22 of the GDPR[39] as a benchmark to distinguish algorithmic decisions whose impact is trivial from those that bear on an individual’s legal rights and entitlements.

a.     Legal effect

According to the Article 29 Data Protection Working Party’s Guidelines on Automated Individual Decision-Making and Profiling, this phrase refers to “a processing activity that has an impact on someone’s legal rights, such as the freedom to associate with others, vote in an election, or take legal action.”[40] A legal effect may also encompass any outcome that alters an individual’s legal status or contractual rights. This interpretation is confirmed by the UK Information Commissioner’s Office (ICO), which similarly understands a decision under the UK GDPR as having a “legal effect” if it influences an individual’s legal status or rights, such as eligibility for social security benefits.[41]

b.     Similarly significant effect

A decision with a ‘similarly significant effect’ has a comparable impact on an individual’s circumstances, behaviour, or choices.[42] In essence, this means that even in situations where no specific legal (statutory or contractual) rights or obligations are directly impacted, the individuals concerned — the data subjects — may still experience a significant enough impact to necessitate the application of protective measures.

The challenge here lies in defining the threshold for significance. For instance, under Article 22 of the GDPR, each of the following credit decisions would be considered as having “similarly significant effects,” yet they differ significantly in their impact on the individuals involved: (1) renting a city bike during a holiday abroad for two hours, (2) purchasing a kitchen appliance or a television set on credit, and (3) obtaining a mortgage for the purchase of a first home. These examples highlight the varying degrees of personal impact from decisions that have substantially different implications for the individuals concerned. This variation underscores the complexity of setting a clear and universally applicable threshold for what constitutes a significant decision.

  6. Clear and meaningful explanations of the role of the AI system

The Act entitles individuals to “clear and meaningful explanations of the role of the AI system in the decision-making procedure,” in other words, of how and to what extent the AI system contributed to the decision in question. Although the Act offers no further guidance, the use of the words “clear and meaningful” suggests a standard that will require contextual interpretation. “Clear,” on the one hand, suggests that the explanations provided to affected persons must be presented in a way that is easy to comprehend, presumably avoiding technical jargon or overly complex language.[43]

According to the National Institute of Standards and Technology, an explanation is “meaningful” when it is “understandable to the intended consumer.”[44] In other words, to be “meaningful,” explanations must go beyond mere technical descriptions: they must provide relevant and useful information, allowing affected persons to comprehend the role of the AI system in the decision and its potential impact on their rights and interests, and indicate the further steps available to them.[45] Moreover, what is meaningful will change depending on the purpose of the explanation.[46] The UN Special Rapporteur on Privacy adds that explanations must be “timely and adapted to the expertise of the stakeholder concerned (e.g. layperson, regulator or researcher).”[47]

Understanding the audience’s needs, level of expertise and overall psychological differences will be required for a meaningful explanation. A meaningful explanation should also address the specific concerns and questions that the affected person may have about the decision, including the factors or data points considered, the reasoning behind the outcome, and the potential impact on their rights and interests. The challenge, however, is that a meaningful explanation risks exposing confidential or sensitive business information, intellectual property or even system vulnerabilities.[48]

  7. The main elements of the decision taken

In addition to clear and meaningful explanations, the Act also entitles individuals to explanations of the “main elements of the decision taken.” The Act does not provide guidance on what these elements are, leaving this open to interpretation; it is nonetheless clear that the entitlement goes beyond merely knowing that an AI system was involved. A purposive interpretation suggests that the affected individual should receive an explanation that empowers them to exercise their rights effectively. This implies that the explanation should be detailed enough to enable individuals to understand the decision-making process and, if necessary, challenge the outcome. Depending on the circumstances, this might entail an understanding of the decision logic and reasoning,[49] the specific factors that were considered in the decision-making process,[50] and their weighting and combinations.[51] The UN Special Rapporteur on Privacy adds that the explanations must be in “clear and understandable language” and include details about “the degree to which an [artificial intelligence] system influences and shapes the organisational decision-making process, design choices of the system, and the rationale for deploying it.”[52] This level of detail allows the affected person to understand why they were approved or denied and whether the decision was fair and unbiased.
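
What such an explanation might look like in practice can be illustrated with a simple, hypothetical sketch. The factors, weights, threshold and wording below are invented purely for illustration; the Act prescribes no such template, and many deployed systems will not decompose this neatly. The point is merely that, for an interpretable scoring model, the “main elements” (the factors considered, their weighting and their contribution to the outcome) and the role of the system can in principle be rendered in plain language.

```python
# A hypothetical sketch of surfacing the "main elements" of a decision made with a
# simple, interpretable scoring model; factors, weights and wording are illustrative
# only, not a template prescribed by the Act.

FACTORS = {  # factor -> (weight, value for this applicant, plain-language label)
    "income_stability": (0.45, 0.9, "stability of declared income"),
    "debt_ratio":       (-0.35, 0.6, "share of income already servicing debt"),
    "payment_history":  (0.20, 0.4, "record of past repayments"),
}
THRESHOLD = 0.25

def explain_decision(factors: dict, threshold: float) -> str:
    contributions = {name: w * v for name, (w, v, _) in factors.items()}
    total = sum(contributions.values())
    outcome = "approved" if total >= threshold else "refused"
    lines = [
        "Role of the AI system: it produced a score that the case handler used as "
        "the principal basis for the decision.",
        f"Decision: your application was {outcome} (score {total:.2f}, threshold {threshold}).",
        "Main elements of the decision, ordered by influence:",
    ]
    for name, contrib in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        label = factors[name][2]
        direction = "counted in your favour" if contrib > 0 else "counted against you"
        lines.append(f"  - {label}: {direction} (contribution {contrib:+.2f})")
    lines.append("You may contest this decision or request human review.")
    return "\n".join(lines)

print(explain_decision(FACTORS, THRESHOLD))
```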

  8. Exceptions – Article 86(1), (2) and (3)

The first exception appears in Article 86(1) itself, which excludes high-risk AI systems listed at point 2 of Annex III, namely those used for critical infrastructure, including AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity. This exemption is relatively straightforward and logical; it is likely motivated by the critical nature of these systems and the potential risks and inefficiencies associated with disclosing detailed explanations, such as compromising security or revealing sensitive information. However, it still raises concerns about transparency and accountability in these especially critical sectors, as it carries the potential to limit individuals’ ability to challenge decisions that may significantly impact their lives.

Second, Article 86(2) clarifies that the right to explanation can be limited or excluded in certain situations. Specifically, it allows for exceptions or restrictions to the right to explanation when such limitations are established by Union law or by national law that complies with Union law. In essence, it acknowledges that there may be legitimate reasons to restrict the right to explanation in specific cases, such as protecting national security, safeguarding confidential information,[53] or ensuring the effective functioning of law enforcement or judicial processes. However, it is worth noting that any exceptions or restrictions to the right to explanation must be in line with EU law, including the Charter of Fundamental Rights. This means that any limitations to the right to explanation must be proportionate and justified, and must not unduly infringe on individuals’ rights and freedoms.[54]

Finally, in an attempt to avoid duplication and facilitate the harmonious integration of the right to explanation, Article 86(3) provides that the right to an explanation under the Act shall apply only where it is not otherwise provided for under existing Union law. Thus, Article 86 is not intended to replace or override existing rights to explanation (such as under Article 13 of the GDPR) or similar rights provided for in other EU laws. Article 86 fills a crucial gap by extending the right to explanation to high-risk AI systems that may not be covered by existing data protection laws. This ensures that individuals have a consistent right to explanation across different applications, regardless of whether they involve personal data processing.

Legal Challenges to the Right to Explanation

Lack of detail and specific standards

The AI Act does not explicitly define the level of detail required for explanations under Article 86(1). This lack of specificity may lead to an inconsistent application of the right to explanation across different AI systems and use cases. Further, what constitutes a “clear and meaningful” explanation will vary depending on the complexity of the AI system, the nature of the decision, and the affected person’s level of technical understanding.[55] Similarly, the “main elements” of a decision could be interpreted differently in different contexts. Consider, for example, the more specific, detailed language of Article 13(2)(f) of the GDPR, which requires that data subjects be provided with “meaningful information about the logic involved” in automated decision-making, including “the significance and the envisaged consequences of such processing for the data subject.” Unfortunately, this GDPR provision offers more specific guidance on the content of explanations than the AI Act does.

Although writing primarily in the context of Article 13 of the GDPR, the discourse on explainability has long proposed specific and detailed categories of information that may form part of a meaningful explanation, so as to give effect to the objective behind the obligation. There is clearly a need for more concrete regulatory guidance on what constitutes a “clear and meaningful” explanation and the “main elements” of a decision under the AI Act. In the meantime, as the Act is implemented and interpreted, the courts may save the day by providing much-needed clarification of the scope of the right to explanation.

Limited to high-risk AI systems

Article 86 is limited in its scope of application. Notably, Article 86(1) applies only to AI systems that have been classified as “high-risk.” All other decision-making or decision support systems that are not classified as high-risk are not covered by this provision. This leaves out a wide range of decision support systems which, though not meeting the threshold of “high-risk,” may nonetheless produce adverse legal effects or have similarly significant impacts on individuals’ health, safety or fundamental rights. In such cases, where a decision is taken on the basis of an AI system falling outside the high-risk category, individuals will not have a right to an explanation.

Conclusion

In closing, the right to an explanation is a welcome and celebrated step forward in the governance of algorithmic transparency. It plays a central role as a prerequisite for effective judicial protection, because it enables individuals to understand the basis of an AI-based decision and potentially challenge it in a competent forum. However, Arno Cuypers critiques the right to explanation, highlighting three key issues.

First, he argues that explanations may not be effective, as studies and research show that participants given explanations for black box decisions are less able to detect incorrect predictions due to information overload.[56] Second, he notes that explanations may be inaccurate since there is no guarantee they reflect the actual reasons for decisions, risking the use of plausible but incorrect justifications. Third, he points out that explanations may be incomplete, as they might not consistently lead to the same outcomes or could be overly complex, leading to information overload.[57]

In addition, the right faces both legal and technical feasibility challenges. Legally, the lack of clarity and specificity regarding what constitutes an adequate explanation leaves room for fragmented interpretations and inconsistent application. Striking a balance between the level of detail required in explanations and the protection of sensitive information is also a challenge. Furthermore, the right is applicable only to high-risk AI systems, excluding a significant portion of AI systems, and is subject to limitations under Union law.

Technically, many decision-making systems, particularly those based on complex algorithms such as deep learning or neural networks, are inherently difficult or impossible to interpret, evaluate or explain. As the Special Rapporteur reminds us in her 2023 report to the General Assembly, there are significant trade-offs between enhancing a system’s explainability, which may reduce its statistical accuracy, and increasing its accuracy at the expense of its interpretability. More complex systems are less explainable, relative to simpler algorithms which, while explainable, may not be as accurate.[58] Moreover, providing explanations for every decision can be computationally intensive and resource-demanding, especially for large-scale systems, and ensuring that explanations are interpretable by non-experts remains a challenge. Addressing these legal and technical challenges is crucial for the effective implementation of the right to explanation.
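
The explainability and accuracy trade-off the Special Rapporteur describes can be probed empirically. The short sketch below, using scikit-learn on synthetic data, compares an interpretable linear model, whose weights can be reported more or less directly to an affected person, with a harder-to-interpret ensemble. Whether an accuracy gap appears, and how large it is, depends entirely on the data and the task; the figures it produces are illustrative only and say nothing about any particular deployed system.

```python
# Illustrative comparison of an interpretable model and a harder-to-interpret one,
# on synthetic data; results are task-dependent and carry no evidential weight.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

interpretable = LogisticRegression(max_iter=1000).fit(X_train, y_train)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", accuracy_score(y_test, interpretable.predict(X_test)))
print("random forest accuracy:      ", accuracy_score(y_test, black_box.predict(X_test)))

# The interpretable model's "explanation" is essentially its coefficients: each
# feature's weight in the decision can be reported to an affected person.
for i, coef in enumerate(interpretable.coef_[0]):
    print(f"feature {i}: weight {coef:+.2f}")
# The forest offers no comparably direct account of an individual decision;
# post-hoc explanation methods would be needed, with the risks noted above.
```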

As such, it could be argued that the right to explanation cannot be exercised in a satisfactory manner when black-box models make high-stakes decisions. Black-box models, by their nature, lack transparency, making it difficult for individuals to understand how decisions are made. This opacity hinders meaningful scrutiny of, and challenges to, AI-driven decisions, thereby undermining the very purpose of the right to explanation. Instead, the right to explanation should mandate the use of interpretable models in scenarios involving significant impacts on individuals’ rights and freedoms. Interpretable models, which provide insight into their decision-making processes, allow for a more transparent and accountable AI system, facilitating trust and ensuring that individuals can genuinely contest decisions that affect them. Without this requirement, the right to explanation risks being of little more than symbolic value, as it would fail to provide the necessary clarity and accountability. This limitation not only diminishes the effectiveness of the right but also perpetuates existing inequalities by allowing biased or erroneous decisions to go unchallenged. Therefore, for the right to explanation to be truly effective and not merely symbolic, it must be coupled with the use of interpretable AI models in high-stakes decision-making contexts.

 

References

[1] EU High-Level Expert Group on AI ‘Ethics Guidelines for Trustworthy AI | Shaping Europe’s Digital Future’ (8 April 2019) <https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai> accessed 5 July 2024.

[2] Report of the UN Special Rapporteur on the right to privacy, Ana Brian Nougrères, ‘Principles of Transparency and Explainability in the Processing of Personal Data in Artificial Intelligence’ (30 August 2023) UN Doc A/78/310, available at <https://documents.un.org/doc/undoc/gen/n23/242/35/pdf/n2324235.pdf>.

[3] Tae Wan Kim and Bryan R Routledge, ‘Why a Right to an Explanation of Algorithmic Decision-Making Should Exist: A Trust-Based Approach’ (2022) 32 Business Ethics Quarterly 75.

[4] Regulation (EU) 2016/679 of the European Parliament and of the Council of the European Union of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) [2016] OJ L119/1, Recital 39; Charter of Fundamental Rights of the European Union [2012] OJ C326/391, Article 8; E Bayamlıoğlu, ‘The right to contest automated decision under the General Data Protection Regulation: Beyond the so-called “right to explanation” [2020] Regulation & Governance 1058, 1059. 

[5] ‘A/78/310: Principles of Transparency and Explainability in the Processing of Personal Data in Artificial Intelligence’ (OHCHR) <https://www.ohchr.org/en/documents/thematic-reports/a78310-principles-transparency-and-explainability-processing-personal> accessed 5 July 2024.

[6] UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’ (UNESCO) <https://en.unesco.org/about-us/legal-affairs/recommendation-ethics-artificial-intelligence> accessed 5 January 2024.

[7] Maitrayee Pathak, ‘Data Governance Redefined: The Evolution of EU Data Regulations from the GDPR to the DMA, DSA, DGA, Data Act and AI Act.’ [2024] Social Science Research Network.

[8] Regulation (EU) 2016/679 of the European Parliament and of the Council of the European Union of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) [2016] OJ L119/1, Article 13(2)(f).

[9] See Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7 International Data Privacy Law 76, wherein they contend that “the GDPR does not . . . implement a right to explanation, but rather [a] ‘right to be informed’”, notably a narrower interpretation.

[10] Bryan Casey, Ashkon Farhangi and Roland Vogl, ‘Rethinking Explainable Machines: The GDPR’s Right to Explanation Debate and the Rise of Algorithmic Audits in Enterprise’ <https://lawcat.berkeley.edu/record/1128983> accessed 1 July 2024.

[11] Specifically, Article 86(3) clarifies that the right is applicable only to the extent that the right is not otherwise provided for under Union law.

[12] EU High-Level Expert Group on AI ‘Ethics Guidelines for Trustworthy AI | Shaping Europe’s Digital Future’ (n 1).

[13] GDPR (n 4) Recital 39.

[14] Recital 72, Preamble to the AI Act.

[15] Chloe Xiang, ‘Scientists Increasingly Can’t Explain How AI Works’ (Vice, 1 November 2022) <https://www.vice.com/en/article/y3pezm/scientists-increasingly-cant-explain-how-ai-works> accessed 6 August 2023; ‘How Does AI Make Decisions We Don’t Understand? Why Is It a Problem? | Built In’ <https://builtin.com/artificial-intelligence/ai-right-explanation> accessed 6 August 2023; ‘Even AI Creators Don’t Understand How Complex AI Works’ (Big Think, 17 April 2017) <https://bigthink.com/the-future/black-box-ai/> accessed 6 August 2023; ‘The Dark Secret at the Heart of AI’ (MIT Technology Review) <https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/> accessed 6 August 2023.

[16] E Taylor, ‘Explanation and the Right to Explanation’ [2023] Journal of the American Philosophical Association 1,7.

[17] Taylor (n 16) 2.

[18] ibid.

[19] P.B de Laat, ‘Algorithmic decision-making employing profiling: will trade secrecy protection render the right to explanation toothless?’ [2022] Ethics Info Technol 1,2.

[20] ibid.

[21] Article 1 of the AI Act.

[22] See Annex III.

[23] See Recital 48. This interpretation also reflects the EU’s commitment to a human-centric approach to Ai.

[24] Vincent Chiao, ‘Algorithmic Decision-Making, Statistical Evidence and the Rule of Law’ [2023] Episteme 1; Dirk J Brand, (2020) 12(1) Journal of eDemocracy and Open Government 114; Alina Köchling and Marius Claus Wehner, ‘Discriminated by an Algorithm: A Systematic Review of Discrimination and Fairness by Algorithmic Decision-Making in the Context of HR Recruitment and HR Development’ (2020) 13 Business Research 795; Hao-Fei Cheng and others, ‘Explaining Decision-Making Algorithms through UI: Strategies to Help Non-Expert Stakeholders’, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (ACM 2019) <https://dl.acm.org/doi/10.1145/3290605.3300789> accessed 26 August 2022; Ari Waldman and Kirsten Martin, ‘Governing Algorithmic Decisions: The Role of Decision Importance and Governance on Perceived Legitimacy of Algorithmic Decisions’ (2022) 9 Big Data & Society 20539517221100449.

[25] European Law Institute, ‘Guiding Principles for the Development and Application of Administrative Decision-Making in the European Union’ (European Law Institute, 2020) <https://www.europeanlawinstitute.eu/fileadmin/user_upload/p_eli/Publications/ELI_Innovation_Paper_on_Guiding_Principles_for_ADM_in_the_EU.pdf> accessed 8 February 2024.

[26] Ryan Calo, ‘Artificial Intelligence Policy: A Primer and Roadmap’ (2017) 51 U.C. Davis Law Review 399.

[27] European Law Institute (n 25).

[28] ‘Article 13 GDPR’ (GDPRhub Commentary) 13 <https://gdprhub.eu/index.php?title=Article_13_GDPR> accessed 29 May 2024.

[29] K Wiedemann, ‘Profiling and (automated) decision-making under the GDPR: A two-step approach’ [2022] Computer Law & Security Review 1,2.

[30] Article 4(4) of the GDPR.

[31] ibid.

[32] Profiling, then, entails collecting and analysing various aspects of an individual’s personal data to construct, predict or infer their characteristics, future behaviours, interests, or capabilities. It is often used to generalise or make predictions about individuals, which then functions to inform targeted decision-making processes. In practice however, these processes often overlap and mutually reinforce each other.

[33] See Article 3 of the AI Act.

[34] ‘Final Report: Human Rights and Technology | Australian Human Rights Commission’ (1 March 2021) <https://humanrights.gov.au/our-work/technology-and-human-rights/publications/final-report-human-rights-and-technology> accessed 3 June 2024.

[35] Theo Araujo and others, ‘In AI We Trust? Perceptions about Automated Decision-Making by Artificial Intelligence’ (2020) 35 AI & SOCIETY 611..

[36] ‘Final Report: Human Rights and Technology | Australian Human Rights Commission’ (n 34).

[37] ibid.

[38] David Restrepo Amariles, ‘Algorithmic Decision Systems: Automation and Machine Learning in the Public Administration’ in Woodrow Barfield (ed), The Cambridge Handbook of the Law of Algorithms (Cambridge University Press 2020) <https://www.cambridge.org/core/books/cambridge-handbook-of-the-law-of-algorithms/algorithmic-decision-systems/B5731E525B19EBD98B132CC20A0DD7F6> accessed 27 November 2023; Ari Waldman and Kirsten Martin, ‘Governing Algorithmic Decisions: The Role of Decision Importance and Governance on Perceived Legitimacy of Algorithmic Decisions’ (2022) 9 Big Data & Society 20539517221100449.

[39] See Article 22 of the GDPR.

[40] Article 29 Data Protection Working Party, ‘Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679’ (WP 251, 3 October 2017).

[41] ‘Rights Related to Automated Decision Making Including Profiling’ (6 June 2024) <https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/individual-rights/individual-rights/rights-related-to-automated-decision-making-including-profiling/> accessed 5 July 2024.

[42] Lee A Bygrave, ‘Article 22 Automated Individual Decision-Making, Including Profiling’ in Christopher Kuner and others (eds), The EU General Data Protection Regulation (GDPR): A Commentary (Oxford University Press 2020) 22 <https://doi.org/10.1093/oso/9780198826491.003.0055> accessed 5 July 2024.

[43] ‘A/78/310: Principles of Transparency and Explainability in the Processing of Personal Data in Artificial Intelligence’ (n 5).

[44] P. Jonathon Phillips et al, ‘Four Principles of Explainable Artificial Intelligence’ (NISTIR 8312, National Institute of Standards and Technology 2021) https://doi.org/10.6028/NIST.IR.8312 accessed 29 June 2024.

[45] Andrew D Selbst and Julia Powles, ‘Meaningful Information and the Right to Explanation’ (2017) 7 International Data Privacy Law 233.

[46] Phillips and others (n 44).

[47] ‘A/78/310: Principles of Transparency and Explainability in the Processing of Personal Data in Artificial Intelligence’ (n 5).

[48] Selbst and Powles (n 45); Phillips and others (n 44).

[49] The European Data Protection Board and the European Data Protection Supervisor have clarified that this entails a general explanation of the logic or procedure involved in the algorithmic decision process. See European Data Protection Board and European Data Protection Supervisor, Joint Opinion 5/2021 on the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), 18 June 2021, p. 17 https://edpb.europa.eu/system/files/2021-06/edpb-edps_joint_opinion_ai_regulation_en.pdf accessed 5 July 2024.

[50] ‘A/78/310: Principles of Transparency and Explainability in the Processing of Personal Data in Artificial Intelligence’ (n 5).

[51] Ibero-American Data Protection Network, Specific Guidelines for Compliance with the Principles and Rights that Govern the Protection of Personal Data in Artificial Intelligence Projects, Available at https://www.redipd.org/sites/default/files/2020-02/guide-specific-guidelines-ai-projects.pdf

[52] ‘A/78/310: Principles of Transparency and Explainability in the Processing of Personal Data in Artificial Intelligence’ (n 5).

[53] We focus on this limitation in the next section.

[54] Claudio Novelli and others, ‘How to Evaluate the Risks of Artificial Intelligence: A Proportionality-Based, Risk Model for the AI Act’ [2023] Social Science Research Network.

[55] Phillips and others (n 44).

[56] https://ieeexplore.ieee.org/document/9671745; https://arxiv.org/pdf/1802.07810

[57] https://www.law.kuleuven.be/citip/blog/the-right-to-explanation-in-the-ai-act-a-right-to-interpretable-models/

[58] ‘A/78/310: Principles of Transparency and Explainability in the Processing of Personal Data in Artificial Intelligence’ (n 5).