A Law and Ethics Blog: “AI, EU Ready Kids?”


1. AI’s Global Regulation

The ‘European AI Act’ is fast approaching,¹ and now that the text of the ‘Act’ has been finalised, this article will elucidate its content and explore why it has been awaited with cautious anticipation across the globe.

It is not unusual for the European Union to indirectly set global standards through the unilateral adoption of European law by outside nations.² With a population of around 448 million people,³ the European Union arguably exerts tremendous market pressure and regulatory influence that will induce the Big Tech giants to capitulate to regulation of their AI systems.⁴ This phenomenon, observed across many EU regulations, has been described as the ‘Brussels Effect’.⁵

However, care should be taken before assuming that a ‘Brussels Effect’ will occur for the ‘European AI Act’.⁶ Unlike previous regulations such as the General Data Protection Regulation, the ‘AI Act’ is deliberately interwoven with an extensive list of other regulations designed to operate holistically to achieve ‘Europe’s Digital Decade’.⁷ It is therefore difficult to consider the ‘AI Act’ in isolation without creating statutory interpretation issues.

It is possible that Australia may instead follow the Council of Europe’s newly adopted AI Convention,⁸ or other jurisdictions’ attempts, such as President Biden’s Executive Order concerning AI.⁹ Nonetheless, the ‘AI Act’ remains an important regulation to have at least a rudimentary understanding of.

 — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

2. The ‘European AI Act’ 

Contrary to its colloquial name, the ‘European AI Act’ is not an ‘Act’ at all. It is a regulation of the European Union. In EU law, however, a regulation is hard law, directly binding in the same way that an Australian Act is binding in Australia. Ironically, as is the nature of any law, this ‘AI Act’ is not itself intelligent: AI is undergoing an exceptionally fast rate of innovation and technological change, yet the law cannot easily adapt. This proved problematic during the drafting of this particular regulation, which underwent transformative amendments in June 2023 after the explosion in popularity of generative AI.¹⁰

a) Definition of Artificial Intelligence

The regulation defines what artificial intelligence is. While seemingly simple, this matters because there is no uniform international definition of AI.¹¹ Australia, specifically, has no legislative definition of AI; any stance on a definition can only be pieced together from various government websites and policy reports, each of which offers a different definition.

The finalised definition in the AI regulation is as follows:

‘“AI system” means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’.¹²

This definition adopts the OECD’s newly updated definition of AI.¹³ The OECD is the most influential non-legal international organisation of its kind, frequently making recommendations that have been taken up by jurisdictions across the world. Given the OECD’s widespread influence, propagating this definition further encourages nations to agree upon a uniform definition, a positive step towards eliminating cross-jurisdictional confusion. Australia may wish to support this goal by adopting a similar definition of AI.

b) So, What Is It? 

The regulation is a market-regulating instrument, with many of its rules taking effect over a rollout period of two years from the date of publication.¹⁴ Because of widespread commentary and attention, the regulation has been couched in discussions of protecting human rights and values.¹⁵ This discussion resulted in some injections of human values into the text after the 2021 draft version; however, the regulation more accurately reflects the European Commission’s familiarity with product safety legislation.¹⁶

Ultimately, despite the recitals to the regulation containing references to upholding human rights and fostering AI innovation, its purpose is neither of these. Instead, its purpose is to assure European citizens that the AI system products they use are safe.¹⁷

c) What Is the Content of This Regulation?

In achieving its purpose of facilitating the proper functioning of the European market, the regulation creates harmonised rules on the development of AI.¹⁸ Further showing its alignment with product regulation rather than human rights legislation, it takes a risk-based approach, not a rights-based one.¹⁹ It is important to note that this regulation covers only a tiny sliver of all AI technology, but the AI systems it does cover the European Union has broken down into four categories of varying risk:

  1. unacceptable risk;

  2. high risk;

  3. limited risk; and

  4. minimal risk.²⁰

In doing so, the regulation recognises that some AI systems are innocuous, while others can pose a serious risk of harm. To grasp how this plays out in practice, it is useful to explore examples of AI in the different risk categories.

(i) Unacceptable Risk AI 

One clear example of AI technology in the unacceptable risk category is social scoring technology. This AI attributes a social score to individuals within the population based on key defined social factors such as occupation and place of residence, as well as ingrained social biases.

This score is used to assess an individual’s ‘trustworthiness’ for the purposes of many essential decisions, such as whether a bank lends money or insurance is granted. Given that AI systems are already prone to reproducing biases from discriminatory inputs, this type of system is particularly susceptible to being used as a tool to entrench inequality and injustice. These unacceptable-risk technologies are therefore banned outright.²¹

(ii) High Risk AI

Most of the regulation is centred on controlling this high-risk category.²² High-risk technology describes AI systems with the potential for human rights abuse if left unregulated. For this reason, the regulation sets out varying degrees of requirements for the development of these types of technology.

Annex III lists examples of high-risk AI systems.²³ One of the examples listed is the use of AI for law enforcement. However, it is relevant to note that since an AI system can be used for a wide spectrum of tasks, the regulation is filled with exceptions, and this is especially evident where such technology is used for military or national security purposes.²⁴

(iii) Limited Risk AI

These technologies also have the potential to breach human rights, but they are seen as a step down from high-risk systems in both the threat and likelihood of harm. Chatbots are a common example of AI systems in this category. Limited-risk AI systems are subject to transparency obligations, which can require that users be informed when they are interacting with an AI system.²⁵

(iv) Minimal Risk AI 

Minimal-risk AI systems are so innocuous that voluntary codes of conduct are seen as adequate to ensure the technology is used safely.²⁶ AI used for inventory management or in video games is a clear example of these relatively harmless technologies.

 — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

3. Conclusion

The ‘European AI Act’ is an EU regulation designed to regulate the market for AI technology. It does this by adopting a risk-based approach of product control, rather than a rights-based one centred on European citizens. Much of the regulation has a rollout period of two years, so its effects will be felt years from now.

The regulation primarily focuses on what it classifies as ‘high-risk AI’, providing extensive regulation of this kind of technology. However, the AI systems it covers represent only a niche of all AI technology. Ultimately, the regulation is one piece in a network of EU regulations, pre-existing and yet to come, and it will need to be read in conjunction with them.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — 

¹Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) [2024] Official Journal (‘EU AI Act’).

²Anu Bradford, The Brussels Effect: How the European Union Rules the World (Oxford University Press, 2020) 25–26 (‘The Brussels Effect’).

³‘Facts and Figures on Life in the European Union’, European Union (Webpage) .

⁴The Brussels Effect (n 2) 26–36.

⁵Ibid 2–4.

⁶Chris Marsden and George Christou, ‘What Europe’s AI Regulation Moment Will Mean for the World’ (2 August 2023) 360info, [15]-[17].

⁷Monash Law, ‘EU AI Act A Trustworthy Framework that Respects Human Rights’ (YouTube, 20 March 2024), 00:11:27–00:12:17 .

⁸‘Council of Europe Adopts First International Treaty on Artificial Intelligence’, Council of Europe (Webpage, 17 May 2024) . 

⁹88 FR 75191 — Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (2023) Federal Register 88(210) 75191, 75193.

¹⁰EU AI Act (n 1) ch V.

¹¹Giusella Finocchiaro, ‘The regulation of artificial intelligence’ (2023) AI & Soc, 1–2.

¹²EU AI Act (n 1) art 3.

¹³OECD, ‘Explanatory memorandum on the updated OECD definition of an AI system’ (Research Report No 8, OECD Publishing, 2024), 4.

¹⁴ EU AI Act (n 1) 157 [179]. 

¹⁵ Jean Monnet Centre of Excellence JUST-AI, ‘JUST-AI JMCE ‘Meet the Expert Podcast’ n°2 — ‘The AI Act: Behind the scenes’ with Luca Bertuzzi’ (YouTube, 23 February 2024), 00:22:22–00:22:42 (‘Meet the Expert Podcast’).

¹⁶ Ibid 00:07:50–00:08:45, 00:12:26–00:12:45.

¹⁷ Ibid 00:22:22–00:24:59. 

¹⁸ EU AI Act (n 1) 7 [8]. 

¹⁹ Meet the Expert Podcast (n 15) 00:12:26–00:12:45. 

²⁰ ‘High-Level Summary of the AI Act’, EU Artificial Intelligence Act (Webpage, 27 February 2024) .

²¹ EU AI Act (n 1) ch II art 5.

²² Ibid ch III.

²³ Ibid Annex III.

²⁴ Ibid art 2.

²⁵ Ibid ch IV.

²⁶ Ibid 148 [165].

 — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — 
