Over the past several years, there has been a significant increase in public awareness of the broad applications of artificial intelligence (AI) and machine learning technologies. 

This has led to growing demand for ethical safeguards and transparency around the use of AI-based systems. 

To that end, the European Union announced in December 2023 that it had reached a provisional agreement on the core provisions of the forthcoming Artificial Intelligence Act (AI Act or Act). 

Since its release, the proposed legislation, which is anticipated to enter into force in May or July of 2024, has given interested parties a preview of the AI Act's framework.

The AI Act seeks to develop a "comprehensive legal framework on AI worldwide" aimed at "foster[ing] trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models."

AI Act Basics

First and foremost, the proposed AI Act covers providers and developers of AI systems, whether based in the EU or another nation, if their systems are sold or used within the EU (including free-to-use AI technology).

Companies headquartered in the United States that sell or supply AI-based technology into the European Union therefore risk fines for violating the Act. 

The Act does not specifically address AI systems that process the personal data of EU individuals; instead, it states that existing EU law on the protection of personal data, privacy, and confidentiality applies to the collection and use of any such data by AI-based technologies. Committee Draft of the AI Act, Art. 2(5a).

The AI Act uses a risk-based system to divide AI systems into four groups. 

These categories generally correspond to two factors: 1) the specific AI use case or application, and 2) the sensitivity of the data involved.

AI practices that pose "unacceptable risk" are expressly prohibited under the Act. 

These prohibited practices include marketing, providing, or using AI-based systems that:

  • Deploy manipulative, deceptive, or subliminal techniques that materially distort a person's decision-making in a way that causes, or is likely to cause, that person or others significant harm.
  • Exploit the vulnerabilities of a person or group arising from age, disability, or a particular social or economic situation in order to distort their behaviour in a way that causes, or is likely to cause, significant harm.
  • Use biometric data to categorize people according to their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.
  • Build or expand facial recognition databases through untargeted scraping of facial images from the internet or closed-circuit television (CCTV) footage.

AI practices that pose an unacceptable risk carry the Act's maximum penalties: fines of up to €35 million or 7% of a company's worldwide annual turnover, whichever is higher.

The "high risk" system category is far more expansive than the "unacceptable risk" category, and it likely captures a sizable portion of AI applications currently in use. 

Examples of high-risk AI applications include biometric identification systems, systems for educational or vocational training and assessment, employment evaluation and recruitment systems, and systems used for financial or insurance assessments.

However, the precise boundaries of the high-risk category remain unclear. 

According to the Act, systems that "do not pose a significant risk of harm[] to the health, safety or fundamental rights of natural persons" will not normally be classified as high-risk. 

The Act gives the AI Office and the European Commission eighteen months to develop "practical" guidelines that deployers can use to ensure this requirement is met. 

Committee Draft of the AI Act, Art. 6(2a), (2c). Companies may be able to avoid some of the restrictions that apply to high-risk systems by 1) carrying out an appropriate assessment before the system or service is placed on the market and 2) providing that assessment to national authorities upon request. Id. at Art. 6(2b).

At a minimum, developers and deployers whose technology is classified as high-risk should be prepared to comply with the following AI Act requirements:

  • Register with the EU's central database.
  • Establish a compliant quality management system.
  • Keep adequate records and logs.
  • Conduct relevant conformity assessments.
  • Observe the restrictions on the use of high-risk AI.
  • Maintain regulatory compliance and be prepared to demonstrate it upon request.

The AI Act also establishes limitations on the use of general-purpose AI models and mandates transparency requirements for AI use. 

For instance, the Act mandates that AI systems intended to interact directly with humans be identified as such, unless it is evident from the context.

See Committee Draft of the AI Act, Art. 52. Furthermore, additional obligations may apply to general-purpose AI models with "high impact capabilities," which are defined as general-purpose AI models whose cumulative training compute, measured in floating point operations (FLOPs), exceeds 10²⁵ FLOPs.

Among other things, providers of these models must put in place policies to comply with EU copyright law, maintain up-to-date technical documentation of the model and its training and testing process, and provide the AI Office with a detailed summary of the training content.
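For a rough sense of scale, a model's cumulative training compute can be estimated and compared against the 10²⁵ FLOP threshold. The sketch below uses the common "6 × parameters × training tokens" approximation for transformer training compute; this rule of thumb and the example model sizes are illustrative assumptions, not part of the Act.

```python
# Rough back-of-the-envelope check against the AI Act's compute threshold.
# Uses the common approximation: training FLOPs ~= 6 * N * D,
# where N = parameter count and D = number of training tokens.
# The threshold value comes from the Act; the model sizes are hypothetical.

THRESHOLD_FLOPS = 1e25  # "high impact capabilities" threshold


def training_flops(params: float, tokens: float) -> float:
    """Approximate cumulative training compute as 6 * N * D."""
    return 6 * params * tokens


# Hypothetical model configurations for illustration only.
examples = {
    "7B params, 2T tokens": training_flops(7e9, 2e12),
    "70B params, 15T tokens": training_flops(70e9, 15e12),
}

for name, flops in examples.items():
    status = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{name}: {flops:.2e} FLOPs ({status} threshold)")
```

Under this approximation, both illustrative configurations fall below the threshold, which suggests the Act's definition targets only the very largest training runs.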


This article summarizes the most significant provisions of the EU's forthcoming AI Act, but it by no means covers all of the proposed changes, some of which remain unsettled. 

Whether a company acts as a provider, developer, or deployer of AI technology, the AI Act may have a profound impact on how it operates both within the EU and internationally. 

Leaders would be well advised to allocate sufficient time to understand the potential implications of these regulations and to devise appropriate measures to ensure they meet their obligations under the new legislation.