A Policy Analysis of the Confidentiality Obligation under Article 78 of the EU AI Act

Text of Article 78 of the Act
“1. The Commission, market surveillance authorities and notified bodies and any other natural or legal person involved in the application of this Regulation shall, in accordance with Union or national law, respect the confidentiality of information and data obtained in carrying out their tasks and activities in such a manner as to protect, in particular:
(a) the intellectual property rights and confidential business information or trade secrets of a natural or legal person, including source code, except in the cases referred to in Article 5 of Directive (EU) 2016/943 of the European Parliament and of the Council;
(b) the effective implementation of this Regulation, in particular for the purposes of inspections, investigations or audits;
(c) public and national security interests;
(d) the conduct of criminal or administrative proceedings;
(e) information classified pursuant to Union or national law.
2. The authorities involved in the application of this Regulation pursuant to paragraph 1 shall request only data that is strictly necessary for the assessment of the risk posed by AI systems and for the exercise of their powers in accordance with this Regulation and with Regulation (EU) 2019/1020. They shall put in place adequate and effective cybersecurity measures to protect the security and confidentiality of the information and data obtained, and shall delete the data collected as soon as it is no longer needed for the purpose for which it was obtained, in accordance with applicable Union or national law.
3. Without prejudice to paragraphs 1 and 2, information exchanged on a confidential basis between the national competent authorities or between national competent authorities and the Commission shall not be disclosed without prior consultation of the originating national competent authority and the deployer when high-risk AI systems referred to in point 1, 6 or 7 of Annex III are used by law enforcement, border control, immigration or asylum authorities and when such disclosure would jeopardise public and national security interests. This exchange of information shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities.
When the law enforcement, immigration or asylum authorities are providers of high-risk AI systems referred to in point 1, 6 or 7 of Annex III, the technical documentation referred to in Annex IV shall remain within the premises of those authorities. Those authorities shall ensure that the market surveillance authorities referred to in Article 74(8) and (9), as applicable, can, upon request, immediately access the documentation or obtain a copy thereof. Only staff of the market surveillance authority holding the appropriate level of security clearance shall be allowed to access that documentation or any copy thereof.
4. Paragraphs 1, 2 and 3 shall not affect the rights or obligations of the Commission, Member States and their relevant authorities, as well as those of notified bodies, with regard to the exchange of information and the dissemination of warnings, including in the context of cross-border cooperation, nor shall they affect the obligations of the parties concerned to provide information under criminal law of the Member States.
5. The Commission and Member States may exchange, where necessary and in accordance with relevant provisions of international and trade agreements, confidential information with regulatory authorities of third countries with which they have concluded bilateral or multilateral confidentiality arrangements guaranteeing an adequate level of confidentiality.”
Relevant Recitals
- Recital 107 provides that “[I]n order to increase transparency on the data that is used in the pre-training and training of general-purpose AI models, including text and data protected by copyright law, it is adequate that providers of such models draw up and make publicly available a sufficiently detailed summary of the content used for training the general-purpose AI model. While taking into due account the need to protect trade secrets and confidential business information, this summary should be generally comprehensive in its scope instead of technically detailed to facilitate parties with legitimate interests, including copyright holders, to exercise and enforce their rights under Union law, for example by listing the main data collections or sets that went into training the model, such as large private or public databases or data archives, and by providing a narrative explanation about other data sources used.”
- Recital 167 further provides that “In order to ensure trustful and constructive cooperation of competent authorities on Union and national level, all parties involved in the application of this Regulation should respect the confidentiality of information and data obtained in carrying out their tasks, in accordance with Union or national law. They should carry out their tasks and activities in such a manner as to protect, in particular, intellectual property rights, confidential business information and trade secrets, the effective implementation of this Regulation, public and national security interests, the integrity of criminal and administrative proceedings, and the integrity of classified information.”
Introduction
The European Union (EU) Artificial Intelligence Act (“AI Act” or “Act”) introduces a broad confidentiality obligation designed to protect proprietary business information, secrets and data acquired during its implementation. The obligation is cast with a wide scope of application, to reassure businesses that their proprietary information—including source code, intellectual property, and trade secrets—will be safeguarded throughout the AI governance lifecycle. While this emphasis on confidentiality is understandable, particularly when viewing the Act as a product safety regulation,[1] it requires careful reading, especially given the nature of AI and machine learning technologies and their profound impact on public interests.
The current AI landscape is characterised by a significant power imbalance in favour of private actors. This imbalance, I propose, stems from the largely economic, productive role of AI systems in industry, driven by a profit-oriented motive. AI allows for unprecedented efficiency, profit maximisation, and labour minimisation, enabling private actors to cut costs, accelerate production, and ultimately boost profits. Intellectual property (IP) emerges as a critical aspect of this dynamic. This is because IP protections incentivise private actors in the EU to invest in AI research and development, since they can secure exclusive rights to their inventions and reap significant financial rewards.[2] This proprietary approach is crucial for their survival and growth in a competitive, data- and idea-driven market.
However, the Act’s pronounced emphasis on IP protection and confidentiality, while intended to foster innovation, creates challenges for transparency and accountability. If companies can shield their AI models and algorithms behind broad confidentiality and IP claims, effective oversight and scrutiny by regulators and the public become difficult. This tension between protecting business interests and ensuring transparency is a recurring theme throughout the AI Act. Regulate too strictly in favour of fundamental rights and you risk stifling innovation; swing too far the other way and you create unacceptable risks to fundamental rights, health and safety.
While the Act attempts to balance these competing interests and principles, its efficacy remains to be seen. This commentary looks closely at the Act’s confidentiality provisions, starting with Article 78, examining their implications within and beyond the Act. It further explores the tensions between confidentiality and other core principles of the AI Act, and offers some concluding remarks.
Article 78(1)
The AI Act employs a comprehensive approach to confidentiality, extending protections to various actors and processes involved in the development, deployment, and oversight of AI systems, across various articles of the Act. The primary confidentiality obligation is set out in Article 78(1), which establishes a broad confidentiality obligation for parties involved in the Act’s implementation. It provides that all persons and actors in the AI Act life cycle have a confidentiality obligation; this includes “the Commission, market surveillance authorities and notified bodies and any other natural or legal person involved in the application of this Regulation…”[3] This formulation, extending confidentiality obligations to “any other natural or legal person involved in the application of this Regulation”, is very broad.
The specific content of the obligation is for all natural or legal persons involved in the application of the Act “to respect the confidentiality of information and data obtained in carrying out their tasks and activities.” This must be done in such a manner as to protect “(a) the intellectual property rights and confidential business information or trade secrets of a natural or legal person, including source code, except in the cases referred to in Article 5 of Directive (EU) 2016/943 of the European Parliament and of the Council; (b) the effective implementation of this Regulation, in particular for the purposes of inspections, investigations or audits; (c) public and national security interests; (d) the conduct of criminal or administrative proceedings; (e) information classified [as confidential] pursuant to Union or national law.”[4]
I must note here that, other than the inclusion of source code, the Act does not define what constitutes “confidential business information” or “trade secrets.” Recital 167 of the Act offers some guidance, albeit indirectly. It mentions the need to protect “intellectual property rights, confidential business information and trade secrets,” suggesting that these categories of information are considered confidential. This interpretation aligns with the broader understanding of confidentiality in EU law, which generally encompasses information that is not publicly known, has commercial value, and is subject to reasonable efforts to maintain its secrecy.[5]
In the circumstances, recourse must be had to Directive (EU) 2016/943 on the protection of trade secrets.[6] Article 2(1) thereof defines a trade secret as information that: is secret in the sense that it is not generally known among, or readily accessible to, persons within the circles that normally deal with the kind of information in question; has commercial value because it is secret; and has been subject to reasonable steps under the circumstances, by the person lawfully in control of the information, to keep it secret.[7] This suggests that information, including source code, that is not publicly known, has commercial value due to its secrecy, and has been subject to reasonable efforts to maintain its secrecy would similarly be considered confidential business information under the AI Act.[8]
The challenge is that the material and personal scope of application of Article 78(1) risks being too broad to coexist harmoniously with other, competing obligations. Its overbreadth risks significantly diminishing the transparency and accountability principles that sit at the centre of the EU’s commitment to trustworthy AI and fundamental rights, and that find expression in various provisions of the AI Act. That is to say, this broad formulation seems at odds with other provisions of the Act. I return to this point later.
Article 78(2)
Article 78(2) of the AI Act seeks to balance confidentiality obligations with the necessity of information exchange for regulatory, legal, and safety purposes. It provides that authorities involved in the implementation of the Act, while permitted to request data from providers, should only request data that is “strictly necessary” for assessing the risks posed by AI systems and exercising their powers under the Act and Regulation (EU) 2019/1020 on market surveillance and compliance of products.[9] It also requires them to implement “adequate and effective cybersecurity measures” to protect the security and confidentiality of the information and data obtained, and to delete the collected data once it is no longer needed, in line with applicable Union or national laws.[10]
Presumably, this is aimed at preventing excessive data requests and potential misuse. However, it could inadvertently hinder effective oversight. By limiting the scope of data accessible to authorities, the provision risks impeding the ability to comprehensively assess the risks posed by AI systems, especially those with complex and opaque functionalities.[11] We see this, for example, in Article 74(12) of the Act, which grants market surveillance authorities access to the documentation and data used for developing high-risk AI systems. However, this access is “subject to security safeguards” and must be balanced with the confidentiality obligations outlined in Article 78.
Moreover, the requirement at Article 78(2) to delete collected data “as soon as it is no longer needed” (aimed at data minimisation) reflects a narrow conception of AI oversight, and may as a result hamper oversight efforts that require protracted, less instantaneous timeframes. As we have come to learn, new risks and vulnerabilities emerge over time in the evolving field of AI, and so time emerges as a useful resource in achieving a comprehensive understanding of AI impacts and effects. If data crucial for understanding these evolving risks is deleted prematurely, it could impede future investigations and prevent authorities from adapting their oversight strategies to address new challenges.
Article 78(3)
Article 78(3) creates specific safeguards where high-risk AI systems referred to in point 1, 6 or 7 of Annex III are used, or provided, by law enforcement, border control, immigration or asylum authorities. It stipulates that information exchanged on a confidential basis between national competent authorities, or between national competent authorities and the Commission, cannot be disclosed without prior consultation of the originating authority and the deployer where such disclosure would jeopardise public or national security interests. Additionally, where those authorities are providers of such high-risk AI systems, the technical documentation must remain within their premises, with access granted to market surveillance authorities only upon request and limited to staff with the appropriate level of security clearance.[12]
In this context, Recital 58 of the Preamble to the Act recognises the significant power imbalance in favour of law enforcement agencies. By restricting the flow of information and limiting external scrutiny, the confidentiality obligation here could shield these law enforcement and State agencies from accountability and oversight, potentially allowing the misuse of AI systems or discriminatory practices to go unchecked.
This concern is particularly salient given the historical context of surveillance technologies being disproportionately deployed against marginalised communities. The opacity surrounding AI systems in law enforcement could exacerbate existing biases and inequalities, potentially leading to violations of fundamental rights. For instance, studies have shown that facial recognition algorithms often exhibit higher error rates for individuals with darker skin tones, leading to misidentifications and wrongful arrests. In the context of the AI Act, Article 78(3) could inadvertently create a similar environment of opacity and unaccountability, where the potential for bias and discrimination remains concealed from public view.[13]
Moreover, other provisions within the Act further compound these concerns. Article 26(5) mandates that deployers of high-risk AI systems monitor their operation and report any incidents to the relevant authorities. However, this obligation explicitly excludes “sensitive operational data of deployers which are law enforcement authorities.” This exception, too, prioritises the confidentiality of law enforcement data over transparency and accountability. In the same vein, Article 72(2) mandates post-market monitoring by providers, requiring them to collect and analyse data on the performance of high-risk AI systems. However, this obligation excludes “sensitive operational data of deployers which are law enforcement authorities,” similarly preferring confidentiality over transparency in the law enforcement context. Considered from a different perspective, one might contend that it is precisely in the context of law enforcement that the deployment of high-risk AI systems warrants heightened transparency obligations.[14]
While the protection of public and national security is undoubtedly important, it is crucial to ensure that such protections do not come at the expense of fundamental rights and freedoms. The lack of transparency surrounding AI systems in law enforcement could create a breeding ground for potential abuses and discriminatory practices. It is therefore imperative to strike a delicate balance between safeguarding security interests and ensuring adequate oversight and accountability mechanisms to prevent potential harms and uphold the principles enshrined in the EU Charter of Fundamental Rights.
Confidentiality Obligations across the Act
Beyond Article 78, the confidentiality obligation reverberates across the Act, with several other articles echoing and reinforcing it for various actors and processes, in a complementary relation to the principal obligation in Article 78. The table below offers a matrix of these complementary confidentiality obligations scattered throughout the Act.
Article in AI Act | Confidentiality Obligation |
Article 21(3) | Any information obtained by a competent authority pursuant to this Article shall be treated in accordance with the confidentiality obligations set out in Article 78. |
Article 25(5) | Paragraphs 2 and 3 are without prejudice to the need to observe and protect intellectual property rights, confidential business information and trade secrets in accordance with Union and national law. |
Article 28(6) | Notifying authorities shall safeguard the confidentiality of the information that they obtain, in accordance with Article 78. |
Article 31(7) | Notified bodies shall have documented procedures in place ensuring that their personnel, committees, subsidiaries, subcontractors and any associated body or personnel of external bodies maintain, in accordance with Article 78, the confidentiality of the information which comes into their possession during the performance of conformity assessment activities, except when its disclosure is required by law. The staff of notified bodies shall be bound to observe professional secrecy with regard to all information obtained in carrying out their tasks under this Regulation, except in relation to the notifying authorities of the Member State in which their activities are carried out. |
Article 37(3) | The Commission shall ensure that all sensitive information obtained in the course of its investigations pursuant to this Article is treated confidentially in accordance with Article 78. |
Article 45(4) | Notified bodies shall safeguard the confidentiality of the information that they obtain, in accordance with Article 78. |
Article 52(6) | The Commission shall ensure that a list of general-purpose AI models with systemic risk is published and shall keep that list up to date, without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law. |
Article 53(1)(b) | Providers of general-purpose AI models shall: Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: (i) enable providers of AI systems to have a good understanding of the capabilities and limitations of the general-purpose AI model and to comply with their obligations pursuant to this Regulation; and … |
Article 53(7) | Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated in accordance with the confidentiality obligations set out in Article 78. |
Article 55(3) | Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated in accordance with the confidentiality obligations set out in Article 78. |
Article 57(8) | Subject to the confidentiality provisions in Article 78, and with the agreement of the provider or prospective provider, the Commission and the Board shall be authorised to access the exit reports and shall take them into account, as appropriate, when exercising their tasks under this Regulation… |
Article 68(4) | The experts on the scientific panel shall perform their tasks with impartiality and objectivity, and shall ensure the confidentiality of information and data obtained in carrying out their tasks and activities. |
Article 70(5) | When performing their tasks, the national competent authorities shall act in accordance with the confidentiality obligations set out in Article 78. |
Article 74(14) | Any information or documentation obtained by market surveillance authorities must be treated in accordance with Article 78. |
Article 75(3) | Market surveillance authorities shall safeguard the confidentiality of the information that they obtain in accordance with Article 78 of this Regulation. |
Article 77(4) | Any information or documentation obtained by the national public authorities or bodies referred to in paragraph 1 of this Article pursuant to this Article shall be treated in accordance with the confidentiality obligations set out in Article 78. |
Table 1: Matrix of confidentiality obligations in the EU AI Act, beyond Article 78 of the Act.
What do these obligations have in common? Considered together in their repetitive emphasis and structure, the confidentiality obligations emphasised throughout the Act construct a formidable shield around AI systems in the Act’s ecosystem. The protection of business information and intellectual property is a concern the Act’s drafters clearly prioritised. Such a totalising protective policy towards confidential business information and data relating to AI systems necessitates a counterbalancing mechanism within the Act. To that end, Article 78(1) carves out an exception, excluding from its application the cases referred to in Article 5 of Directive 2016/943 on the protection of trade secrets against their unlawful acquisition, use and disclosure.
This means that confidentiality obligations do not apply where the acquisition, use or disclosure of the trade secret was carried out “(a) for exercising the right to freedom of expression and information as set out in the Charter, including respect for the freedom and pluralism of the media; (b) for revealing misconduct, wrongdoing or illegal activity, provided that the respondent acted for the purpose of protecting the general public interest; (c) disclosure by workers to their representatives as part of the legitimate exercise by those representatives of their functions in accordance with Union or national law, provided that such disclosure was necessary for that exercise; (d) for the purpose of protecting a legitimate interest recognised by Union or national law.”[15]
On paper, this exception significantly restores the balance in favour of transparency and the protection of fundamental rights by suspending confidentiality obligations where necessary to protect the general public interest or uphold fundamental rights. In practice, however, its application will likely be more complex and fraught with practical and legal challenges. Providers and deployers of AI systems (and holders of proprietary information relating to them), incentivised to protect their intellectual property and trade secrets, are likely to resist disclosures that could potentially undermine their competitive advantage or expose them to liability. This could foreseeably lead to protracted legal battles in which the interpretation and scope of the exceptions are resisted and contested.
Further, the process of invoking these exceptions in practice could be cumbersome and resource-intensive, potentially delaying the disclosure of critical information, which in turn can have detrimental effects, especially where timely access to information is crucial for protecting fundamental rights or preventing harm. Therefore, while the exceptions under Article 5 offer a potential avenue for balancing confidentiality with transparency and fundamental rights, their practical implementation is likely to be complex and contested. There remains a need for guidance on balancing the competing interests of confidentiality and transparency.
Relationship between Article 78 and other Principles and Provisions of the Act
The AI Act’s broad confidentiality obligations, while well intended, could come into conflict with other crucial provisions within the Regulation. Tension arises between the protection of confidential business information under Article 78(1), on the one hand, and, on the other, obligations that seek to ensure transparency and access to information about AI systems for oversight.[16] An expansive confidentiality obligation risks impeding the Act’s transparency and accountability goals.
First, the broad confidentiality obligations in Article 78 risk significantly limiting the scope and depth of explanations required under Article 86(1) of the Act, which establishes the right to explanation. If deployers of high-risk AI systems can withhold crucial information about their AI systems’ decision-making processes under the guise of confidentiality and trade secrets when providing explanations to affected persons, the latter may not receive explanations that rise to the level required by Article 86(1), that of “clear and meaningful.”[17] This risks undermining affected persons’ ability to access legal remedies by challenging adverse decisions based on the outputs of AI decision systems.
Secondly, overly broad confidentiality protections may conflict with Article 77, which grants judicial authorities the power to request and access documentation related to high-risk AI systems. Article 78’s broad scope could be interpreted to restrict the flow of information to judicial authorities, hindering their ability to investigate and adjudicate cases involving potential discrimination or other harms caused by AI systems.[18] This could undermine the effectiveness of the Act’s provisions on fundamental rights protection and legal remedies, as it may impede the ability of courts to fully assess the risks and impacts of AI systems. The same follows for oversight and enforcement of obligations under the Act. Article 11(1), for example, requires providers to maintain technical documentation that demonstrates compliance with the Act’s requirements. However, Article 78(1)(a) also protects the confidentiality of this documentation, including the source code, as it is considered confidential business information.[19]
Third, Article 13(3)(d) of the Act requires providers to include information in the instructions for use that facilitates the interpretation of high-risk AI system outputs by deployers. This obligation may foreseeably come into conflict with the protection of confidential business information, since detailed explanations of a system’s functioning might reveal proprietary information.[20]
This conflict between confidentiality and transparency is not unique to the AI Act. It is a recurring tension in regulatory frameworks that seek to balance the protection of sensitive information with the need for public accountability and scrutiny.[21] In the context of the AI Act, this tension is particularly salient due to the complex and often opaque nature of AI systems, as well as the significant public interest often involved.
Confidentiality in the specific context of AI
The traditional understanding of confidentiality, as a static safeguard rooted in controlling access and preventing unauthorised disclosure,[22] faces unique challenges in the dynamic landscape of AI. The Act’s provisions on confidentiality, while seemingly aligned with conventional legal principles, may not fully address the complexities and nuances introduced by AI technologies and the socio-technical impacts produced by AI systems in society. Socio-technical changes refer to the intertwined evolution of society and technology, where technological advancements influence social structures, norms, and behaviours.[23]
First, the potential use of AI by the very actors tasked with upholding the Act’s confidentiality obligations could paradoxically undermine these protections. One way oversight bodies might cope with the volume and velocity of data and information is through AI tools, employed to analyse the vast amounts of data these institutions must handle.[24] However, this same capacity to analyse extensive datasets and uncover hidden patterns could inadvertently expose sensitive or confidential information, even where the Act’s provisions are in place.[25]
Second, the transformative nature of AI distinguishes it from previous technologies due to its profound socio-technical impacts. AI’s unique challenges and risks have significant implications for the public interest, surpassing those of prior technologies.[26] This necessitates a robust counterbalance to protect transparency and fundamental rights. As AI systems reshape society, access to information and data about these systems becomes increasingly critical for safeguarding transparency and fundamental rights. In this context, confidentiality carries unprecedented implications for the public interest and fundamental rights. As such, legal rules on intellectual property more broadly must adapt to confront this reality.
Finally, as the technology evolves, what might be considered confidential at one stage of development or deployment could become less sensitive or even publicly known as the technology advances.[27] For example, an AI model’s initial training data might be considered highly confidential, but as the model is refined and updated, the specific data points become less critical to its operation. This fluidity challenges the static nature of traditional confidentiality protections and calls for a more adaptive approach. The Act missed a unique opportunity to adapt confidentiality to the specific context of AI systems and the socio-technical realities they create.
Conclusion
The EU AI Act’s emphasis on confidentiality, while critical for fostering innovation and protecting business interests, also introduces a complex interplay with the Act’s overarching goals of transparency and fundamental rights protection. While the Act includes exceptions that suspend confidentiality obligations in specific circumstances, their practical application and interpretation remain uncertain. In particular, there is a lack of guidance on what constitutes “confidential business information” in the context of AI, and on the parameters of these exceptions in the context of high-risk AI systems.
Furthermore, the provisions appear to disproportionately favour already-powerful actors, such as law enforcement agencies and providers of AI systems, potentially perpetuating the existing power imbalance in AI development and use, while also limiting public oversight. Striking the right balance between protecting legitimate business interests and ensuring meaningful transparency and accountability is crucial for the successful governance of AI and the responsible development and use of AI technologies in the EU. Moving forward, it is essential to address the ambiguities and potential overreach in the Act’s confidentiality provisions.
The AI Act, while a landmark achievement, is ultimately not a human rights instrument. It is, instead, advanced product safety legislation focused primarily on AI developers and deployers. Its focus on compliance bureaucracy, while important, does not fully address the complex socio-technical challenges posed by AI. As some commentators have noted, the Act risks becoming a superficial “box-ticking” exercise rather than a robust framework for ensuring the responsible and ethical use of AI.[28] The challenge lies in finding a way to meaningfully reconcile the protection of intellectual property and trade secrets with the need for transparency and the protection of individual fundamental rights.
References
[1] Marco Almada and N Petit, ‘The EU AI Act: Between Product Safety and Fundamental Rights’ [2022] Social Science Research Network; Jessica Kelly and others, ‘Navigating the EU AI Act: A Methodological Approach to Compliance for Safety-Critical Products’ [2024] arXiv.org.
[2] Claudio Novelli and others, ‘Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity’ [2024] Social Science Research Network; Fred C Zacharias, ‘Rethinking Confidentiality’ (1988) 74 Iowa Law Review 351.
[3] Own emphasis.
[4] “[t]he Commission, market surveillance authorities and notified bodies and any other natural or legal person involved in the application of this Regulation shall, in accordance with Union or national law, respect the confidentiality of information and data obtained in carrying out their tasks and activities in such a manner as to protect, in particular:
(a) the intellectual property rights and confidential business information or trade secrets of a natural or legal person, including source code, except in the cases referred to in Article 5 of Directive (EU) 2016/943 of the European Parliament and of the Council;
(b) the effective implementation of this Regulation, in particular for the purposes of inspections, investigations or audits;
(c) public and national security interests;
(d) the conduct of criminal or administrative proceedings;
(e) information classified pursuant to Union or national law.”
[5] Tanya Aplin and others, ‘The Role of EU Trade Secrets Law in the Data Economy: An Empirical Analysis’ (2023) 54 IIC – International Review of Intellectual Property and Competition Law 826.
[6] Directive (EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure, available at https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016L0943
[7] See Article 2(1) of Directive (EU) 2016/943 on Trade Secrets.
[8] ‘Information Rights : Law and Practice – Universität Wien’ <https://usearch.univie.ac.at/primo-explore/fulldisplay?docid=UWI_alma51603433330003332&context=L&vid=UWI&lang=de_DE&search_scope=UWI_UBBestand&adaptor=Local%20Search%20Engine&tab=default_tab&query=any,contains,What%20is%20confidential%20information%20in%20the%20eu&offset=0> accessed 8 July 2024.
[9] Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011.
[10] Article 78(2) of the EU AI Act.
[11] Georgios Pavlidis, ‘Unlocking the Black Box: Analysing the EU Artificial Intelligence Act’s Framework for Explainability in AI’ [2024] Law, Innovation and Technology.
[12] Article 78(3) provides that, without prejudice to paragraphs 1 and 2, “information exchanged on a confidential basis between the national competent authorities or between national competent authorities and the Commission shall not be disclosed without prior consultation of the originating national competent authority and the deployer when high-risk AI systems referred to in point 1, 6 or 7 of Annex III are used by law enforcement, border control, immigration or asylum authorities and when such disclosure would jeopardise public and national security interests. This exchange of information shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities.”
[13] ibid; Alessandro Facchini and Alberto Termine, ‘A First Contextual Taxonomy for the Opacity of AI Systems’ (18 December 2021) <http://philsci-archive.pitt.edu/20376/> accessed 19 June 2023.
[14] Borja Sanz-Urquijo, Eduard Fosch-Villaronga, and M. Lopez-Belloso, ‘The Disconnect between the Goals of Trustworthy AI for Law Enforcement and the EU Research Agenda’ [2022] AI and Ethics; ‘Access Now Submission to the Consultation on the European Data Protection Boardʼs Guidelines 05/2022 on the Use of Facial Recognition Technology in the Area of Law Enforcement’; ‘EU’s AI Act Fails to Set Gold Standard for Human Rights’ (European Institutions Office, 3 April 2024) <https://www.amnesty.eu/news/eus-ai-act-fails-to-set-gold-standard-for-human-rights/> accessed 29 June 2024; ‘Packed with Loopholes: Why the AI Act Fails to Protect Civic Space and the Rule of Law | ECNL’ (3 April 2024) <https://ecnl.org/news/packed-loopholes-why-ai-act-fails-protect-civic-space-and-rule-law> accessed 30 June 2024.
[15] Directive (EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure, Article 5.
[16] ‘A/78/310: Principles of Transparency and Explainability in the Processing of Personal Data in Artificial Intelligence’ (n 6); Nagadivya Balasubramaniam and others, ‘Transparency and Explainability of AI Systems: Ethical Guidelines in Practice’ in Vincenzo Gervasi and Andreas Vogelsang (eds), Requirements Engineering: Foundation for Software Quality (Springer International Publishing 2022); Themistoklis Tzimas, ‘Algorithmic Transparency and Explainability under EU Law’ [2023] European Public Law.
[17] See Commentary on Article 86 for discussion on the threshold of “clear and meaningful.”
[18] Marc M Anderson, ‘Some Ethical Reflections on the EU AI Act’; Jacintha Walters and others, ‘Complying with the EU AI Act’ [2023] arXiv.org.
[19] See Articles 11 and 78 of the AI Act.
[20] Furthermore, the confidentiality obligations could also disadvantage researchers and civil society organisations who seek to scrutinise Ai systems for potential biases and discriminatory impacts. Limited access to technical information could impede the ability to conduct independent audits and assessments, potentially allowing harmful or discriminatory Ai practices to go unchecked.
[21] ‘Using AI in Peer Review Is a Breach of Confidentiality – NIH Extramural Nexus’ (23 June 2023) <https://nexus.od.nih.gov/all/2023/06/23/using-ai-in-peer-review-is-a-breach-of-confidentiality/> accessed 11 April 2024.
[22] G Dal Pont, ‘Law of Confidentiality’; Alastair Hudson, ‘Concepts of Property in Intellectual Property Law: Equity, Confidentiality and the Nature of Property’ 94.
[23] Ai Qiang Li and others, ‘Exploring Product–Service Systems in the Digital Era: A Socio-Technical Systems Perspective’ (2020) 32 The TQM Journal 897.
[24] Moira Paterson and Maeve McDonagh, ‘Data Protection in an Era of Big Data: The Challenges Posed by Big Personal Data’ (2018) 44 Monash University law review 1.
[25] Maria Iglesias Portela and others, ‘Intellectual Property and Artificial Intelligence – A Literature Review’.
[26] Adrianna Surmiak and others, ‘Should We Maintain or Break Confidentiality? The Choices Made by Social Researchers in the Context of Law Violation and Harm’ (2020) 18 Journal of Academic Ethics 229.
[27] Stuart Russell, ‘Artificial Intelligence: A Modern Approach, Global Edition’.
[28] ‘EU’s AI Act Fails to Set Gold Standard for Human Rights’.