
Navigating the European Union’s Artificial Intelligence Act

The European Union (“EU”) has made an unprecedented move toward asserting its position as a global leader in technology regulation by crafting comprehensive legislation to regulate Artificial Intelligence (“AI”). On December 8, 2023, after long hours of negotiation and significant deliberation, negotiating parties from the European Parliament (“EP”) and the Council of the European Union (“CEU”) reached a ground-breaking preliminary agreement, featuring over 90 articles, on the EU’s Artificial Intelligence Act (“Act”).[1] This represents a key step toward the adoption of the Act. Yet certain tensions remain within the EU, with member states such as France and Germany lobbying until the last minute to exempt general purpose AI systems from the scope of the Act, citing concerns that the regulation might stifle innovation relative to their peers in the United States.[2]

Although we do not yet have a final draft of the Act at the time of writing, the commitment at the heart of this legislation is to balance the safeguarding of fundamental rights with the promotion of innovation. As Thierry Breton, the European Commissioner for the Internal Market, with responsibility for space, defence, and security, aptly summarized, the spirit of the EU’s approach is to “regulate as little as possible but, also, as much as needed in Europe”.[3] It remains to be seen whether the EU’s ambitious regulatory undertaking will successfully strike this delicate balance in practice.

Emphasis on Fundamental Human Rights and Risk-Based Classification of AI

The EU has taken the approach of differentiating certain prescribed practices based on the potential risk level associated with AI systems (prohibited, high-risk, limited risk, or minimal risk).[4] Prohibited AI systems include those that carry out “cognitive behavioral manipulation, the untargeted scraping of facial images from the internet or CCTV footage, social scoring and biometric categorization systems to infer political, religious, philosophical beliefs, sexual orientation and race”.[5] However, in the course of final negotiations, the EP and CEU introduced narrowly defined exceptions for the use of remote biometric identification (“RBI”) systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorization and limited to strictly defined lists of crimes,[6] as well as for the use of “post-remote” RBI in the targeted search of a person convicted or suspected of having committed a serious crime.[7]

The Act also sets out extensive compliance obligations for high-risk AI systems that could potentially jeopardize health, safety, or fundamental rights.[8] Providers, developers, and users of these high-risk systems are obligated to implement various risk management processes, comply with transparency requirements, and carry out a mandatory fundamental rights impact assessment to ensure that they do not replicate or multiply systemic inequalities.[9] AI systems classified as limited risk (a category that spans chatbots, certain biometric categorization systems, and systems with the capacity to generate deepfakes) will also be required to respect transparency requirements, which include informing users when they are interacting with an AI system and ensuring that synthetic audio, video, and images are marked in a machine-readable format as being artificially generated.

[Figure: pyramid illustrating the four risk levels (Source: European Commission)]

In light of the rapidly evolving AI technological landscape, the EU has made an attempt to “future-proof” the legislation. The Act has built-in updating mechanisms which aim to keep the regulatory framework resilient and adaptive.[10][11]

Regulations of GenAI and Foundational Models

The first draft of the Act was published more than a year prior to the launch of OpenAI’s ChatGPT in late 2022, a development that has since transformed the AI landscape. Although the Act initially took a primarily risk-based regulatory approach, this has now been supplemented by new provisions focusing specifically on generative AI and foundational models (for additional insight into the initial draft of the EU AI Act as first proposed in April 2021, see our earlier blog on this subject: EU’s Proposed Artificial Intelligence Regulation: The GDPR of AI). The Act outlines transparency requirements for all generative AI models before their entry into the EU market.[12] These requirements encompass technical documentation, adherence to the EU’s copyright laws, and comprehensive summaries of the content involved in training these models.[13] Enhanced transparency requirements may lead to increased litigation related to claims of intellectual property infringement, as copyright holders would now have a clearer understanding of the data used in the training of those systems. There have been a number of legislative developments worldwide which attempt to address whether there should be a “text and data mining” exception to copyright, and what the scope and limitations of such an exception should be. For example, the EU’s Directive on Copyright in the Digital Single Market carves out exceptions allowing researchers access to copyrighted works for data mining without infringing copyright, provided it is for scientific research, but this matter will likely remain unsettled in the near term.[14]

Additionally, these heightened obligations would only apply to General Purpose AI Systems and foundational models (such as the large language models that power tools like OpenAI’s ChatGPT) that meet certain computational thresholds (measured in floating-point operations, or FLOPs, a gauge of the total amount of computation used to train a model) and that therefore create systemic risks.[15] For a more detailed discussion of this subject, see our colleague Barry Sookman’s blog EU AIA: agreement on Europe’s new AI regulatory opus from earlier this month.[16]
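The compute-based classification described above can be illustrated with a short sketch. Note that this is an illustration only, not the Act's text: the 10^25 FLOP figure reflects the threshold widely reported for the provisional agreement, and the final text may set a different value or criteria.

```python
# Illustrative sketch of a compute-based systemic-risk classification.
# Assumption: a cumulative training-compute threshold of 10**25 FLOPs,
# as reported for the provisional agreement (the final text may differ).
SYSTEMIC_RISK_FLOP_THRESHOLD = 10**25

def is_presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if cumulative training compute meets the reported threshold."""
    return training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD

# A model trained with ~2.1e25 FLOPs would cross the reported threshold;
# a much smaller model (~3e23 FLOPs) would not.
print(is_presumed_systemic_risk(2.1e25))  # True
print(is_presumed_systemic_risk(3.0e23))  # False
```

The key point the sketch captures is that the tiered obligations turn on cumulative training compute, not on runtime speed: FLOPs here count the total calculations performed during training.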

This tiered strategy of devoting special scrutiny to more powerful models posing significant risk is similar to the approach found in the United States President’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, published at the end of October 2023. However, the EU and US have thus far taken divergent paths to arrive at their respective legal frameworks for responsible AI: while the EU has chosen to deploy a comprehensive and onerous risk-based legal framework, the US has so far relied on executive orders setting guiding principles and on marshalling the vast apparatus of the federal government to develop industry standards and enforce existing laws.

The Act also introduces some novel elements, such as the establishment of a Europe-wide AI office (“Office”).[17] This Office will, among other things, play a crucial coordinating role among member states. The details of the new regulation should be finalized in the coming weeks, with the text being submitted to member states’ representatives for endorsement thereafter.

Timeline and Sanctions

The Act is expected to be adopted early next year, but given its staggered approach to enforcement, it will only become fully applicable two years after its entry into force.[18] Its provisions will come into force in waves, with the provisions on prohibited practices becoming effective 6 months after the entry into force of the Act, and the obligations regarding general purpose AI governance and transparency becoming effective 12 months after the entry into force of the Act.

| Phase | Timeline |
| --- | --- |
| Adoption of EU AI Act | Early 2024 |
| Entry into force of EU AI Act | 20th day following publication in the Official Journal |
| Provisions on prohibited practices come into effect | 6 months after the Act enters into force |
| Provisions on general purpose AI governance and transparency come into effect | 12 months after the Act enters into force |
| EU AI Act becomes fully applicable | 24 months after the Act enters into force |

This methodical process aims to facilitate the establishment of standards and voluntary compliance, helping pave the way towards preparedness.[19] Such preparedness will be key given the massive fines included in the Act. The severity of non-compliance is underscored by substantial fines ranging from 7.5 million euros or 1.5% of turnover to 35 million euros or 7% of worldwide annual turnover,[20] a stark increase from the maximum fines of 20 million euros or 4% of worldwide annual turnover found in the GDPR (as clear a sign as any that the EU considers AI’s specific risks even greater than the privacy risks addressed in the GDPR). For each category of infringement, the applicable cap would be the lower of the two amounts for SMEs and the higher for other companies. Importantly, citizens will retain the right to launch complaints for violations of the Act.[21]

| Category of Infringement | Penalty |
| --- | --- |
| Prohibited practices or non-compliance related to requirements on data | Up to €35m or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher) |
| Non-compliance with any of the other requirements or obligations of the Regulation | Up to €15m or 3% of the total worldwide annual turnover of the preceding financial year |
| Supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request | Up to €7.5m or 1.5% of the total worldwide annual turnover of the preceding financial year |
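To make the penalty mechanics above concrete, here is a minimal sketch of how the caps combine a fixed amount with a share of worldwide annual turnover. The category names and function are illustrative inventions; the amounts and the higher-of-two rule (lower-of-two for SMEs) come from the table and the preceding paragraph.

```python
# Hedged sketch of the Act's penalty caps: each infringement category pairs a
# fixed euro amount with a percentage of worldwide annual turnover. For most
# companies the higher of the two applies; for SMEs, the lower.
CAPS = {
    "prohibited_practices": (35_000_000, 0.07),    # up to €35m or 7%
    "other_obligations": (15_000_000, 0.03),       # up to €15m or 3%
    "incorrect_information": (7_500_000, 0.015),   # up to €7.5m or 1.5%
}

def max_fine(category: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the maximum fine for a category given worldwide annual turnover."""
    fixed, pct = CAPS[category]
    turnover_based = pct * annual_turnover_eur
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# A large company with €1bn turnover facing a prohibited-practices breach:
# 7% of turnover (€70m) exceeds the €35m floor, so the higher amount applies.
print(max_fine("prohibited_practices", 1_000_000_000))  # 70000000.0
```

For an SME with €10m in turnover breaching other obligations, the same function returns the lower of €15m and 3% of turnover, i.e. €300,000, illustrating how the regime scales down for smaller businesses.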

Agreement on the Act marks a significant development in AI regulation, setting a meaningful precedent. As the global community at large and policymakers in other jurisdictions observe closely, the EU has taken the lead in seeking to establish rigorous ethical and legal standards for the AI ecosystem, aspiring towards a future in which innovation may flourish without compromising our most fundamental human rights and values.

Bringing AI systems and practices into compliance with regulatory obligations can be an onerous undertaking; Canadian businesses with a presence in the EU should begin assessing the risks and impacts of their AI systems now, in addition to articulating responsible governance policies to mitigate potential harms. These measures are more pertinent than ever, given that Canadian legislators will almost certainly be taking a cue from the EU AI Act in mapping out the contours of Canada’s very own draft Artificial Intelligence and Data Act, which is currently before a parliamentary committee following second reading (for a more detailed examination of what we know about AIDA thus far, please see our earlier publication, One Step Closer to AI Regulations in Canada: The AIDA Companion Document, as well as our colleague Barry Sookman’s Analyzing AIDA 2.0: the problems with the proposed amendments to AIDA).

 

 

[1] Council of the European Union, Preliminary remarks by Carme ARTIGAS BRUGAL, State Secretary for Digitalization and Artificial Intelligence of Spain, during the press conference following the Artificial Intelligence Act Trilogue on 9 December 2023 in Brussels, 2023, online: https://newsroom.consilium.europa.eu/events/20231206-artificial-intelligence-act-trilogue/142864-1-press-conference-part-1-20231209

[2] David Matthews, “AI Act agreement gets mixed reaction from European tech”, Science/Business (December 12 2023), online: https://sciencebusiness.net/news/ai/ai-act-agreement-gets-mixed-reaction-european-tech.

[3] Council of the European Union, Preliminary remarks by Thierry BRETON, European Commissioner for Internal Market with responsibility for space, defence and security, during the press conference following the Artificial Intelligence Act Trilogue on 9 December 2023 in Brussels, 2023, online: https://newsroom.consilium.europa.eu/events/20231206-artificial-intelligence-act-trilogue/142864-4-press-conference-part-4-20231209

[4] Ibid.

[5] Supra note 3.

[6] Ibid.

[7] Ibid.

[8] Supra note 4.

[9] Ibid.

[10] Supra note 2.

[11] Council of the European Union, Additional remarks by Carme ARTIGAS BRUGAL, State Secretary for Digitalization and Artificial Intelligence of Spain, during the press conference following the Artificial Intelligence Act Trilogue on 9 December 2023 in Brussels, 2023, online: https://newsroom.consilium.europa.eu/events/20231206-artificial-intelligence-act-trilogue/142864-5-press-conference-part-5-20231209

[12] Foo Yun Chee, Martin Coulter and Supantha Mukherjee, “Europe agrees landmark AI regulation deal”, Reuters (December 11, 2023), online: https://www.reuters.com/technology/stalled-eu-ai-act-talks-set-resume-2023-12-08/

[13] Europe, European Parliament, Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI, 2023, online: https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai

[14] European Parliament, The Exception for Text and Data Mining (TDM) in the Proposed Directive on Copyright in the Digital Single Market - Technical Aspects (February 2018), online: https://www.europarl.europa.eu/RegData/etudes/BRIE/2018/604942/IPOL_BRI(2018)604942_EN.pdf

[15] Cat Casey, ”Is the White House's AI Executive Order a FLOP?”, Law.com (November 3, 2023), online: https://www.law.com/legaltechnews/2023/11/03/is-the-white-houses-ai-executive-order-a-flop/?slreturn=20231120224311.

[16] Supra note 3.

[17] Ibid.

[18] Council of the European Union, Questions and answers during the press conference following the Artificial Intelligence Act Trilogue on 9 December 2023 in Brussels, 2023, online: https://newsroom.consilium.europa.eu/events/20231206-artificial-intelligence-act-trilogue/142864-6-press-conference-part-6-q-a-20231209

[19] Ibid.

[20] Ibid.

[21] Supra note 4.
