Artificial Intelligence Act (AI Act)

What is the AI Act?

The Artificial Intelligence Act, also known as the AI Act, is a European regulation designed to govern AI through a risk-based classification approach. At the time of writing, the text is still a proposal and is being amended by the co-legislators, the European Parliament and the Council of the European Union.

The information provided below is subject to change as the text undergoes revisions. The iliomad Team will make efforts to keep it updated; however, some modifications may not be reflected immediately.

Who is affected?

The AI Act mandates compliance from AI System providers. A provider is any natural or legal person, including public authorities, agencies, or other bodies, that develops an AI System or has one developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge. The regulation applies to providers who place AI Systems on the market or put them into service within the Union, regardless of whether they are established in the Union or in a third country. It also applies when the output produced by the system is used within the Union, and when the users of the AI System are located within the Union.

Which AI Systems?

The Act characterizes an AI System as "software created using one or more techniques and approaches outlined in Annex I, capable of producing outputs like content, predictions, recommendations, or decisions that impact their interacting environments, based on predetermined human objectives" (article 3).

The specified techniques encompass machine learning approaches, logic- and knowledge-based approaches, and statistical approaches. The AI Act categorizes AI Systems according to the level of risk associated with their use, dividing them into three categories: unacceptable risk, high risk, and low or minimal risk.

Prohibited practices

Prohibited AI practices are uses of AI deemed unacceptable because they pose a significant risk to the values of the Union, for example by infringing fundamental rights. The document specifies the types of practices to be banned, including those capable of manipulating individuals through subliminal techniques, or of exploiting the vulnerabilities of specific groups such as children or persons with disabilities, in ways that could materially distort their behavior and cause them or others psychological or physical harm.

The explanatory memorandum elaborates that practices impacting adults may be governed by other laws, such as those related to data protection or consumer rights. For instance, regulations like the General Data Protection Regulation (GDPR) ensure that individuals are adequately informed and can make free choices.

High-risk AI Systems

High-risk AI Systems are described in the memorandum as "AI Systems that pose a significant threat to the health, safety, or fundamental rights of individuals." According to the text of the proposal, an AI System is considered high-risk if it meets two criteria. Firstly, the system must be intended for use as a safety component of a product, or be itself a product, covered by the legislation listed in Annex II of the proposal.

Secondly, the product in question (or the AI System itself as a product) must be required to undergo a third-party conformity assessment before being placed on the market or put into service, pursuant to the aforementioned legislation. Additionally, Annex III of the proposal enumerates specific areas in which AI Systems are automatically classified as high-risk, such as biometric identification and categorization of individuals, education and vocational training, and law enforcement. The proposal also sets out specific obligations that high-risk AI Systems must meet.

Low- or minimal-risk AI Systems

Low- or minimal-risk AI Systems are those that fall under neither the high-risk nor the unacceptable-risk category. Referred to as non-high-risk AI Systems in the document, these systems are not subject to the full set of regulatory requirements imposed on high-risk systems. Nonetheless, they are encouraged to adopt and implement codes of conduct containing voluntary commitments.

The impact on Healthtech

Annex III of the proposal delineates various sectors where AI Systems are deemed high-risk. The sole health-related area is identified in the fifth point: "Access to and enjoyment of essential private services and public services and benefits." This section includes AI Systems designed for coordinating or prioritizing emergency response services, including those for firefighters and medical aid.

Furthermore, two health-related regulations listed in Annex II are pertinent. AI Systems that fall under these regulations and undergo third-party conformity assessments are classified as high-risk. These regulations are:

  1. The Medical Devices Regulation (Regulation (EU) 2017/745), which covers general medical devices, and
  2. The In Vitro Diagnostic Medical Devices Regulation (Regulation (EU) 2017/746).

Both regulations divide devices into risk classes, and only certain classes require third-party conformity assessment. Under the Medical Devices Regulation, devices in Classes IIa, IIb, and III require such an assessment, so AI Systems integrated into these devices are considered high-risk. Similarly, under the In Vitro Diagnostic Medical Devices Regulation, devices in Classes B, C, and D are subject to assessment, and the AI Systems associated with these devices are likewise high-risk.

For instance, an AI System that sorts patients for clinical trial recruitment based on medical and biomarker criteria drawn from Electronic Medical Records (EMR) would be low-risk, since such a device falls under Class I of the Medical Devices Regulation. Conversely, an AI System that diagnoses skin lesions would be classified as a Class IIa medical device, rendering it high-risk under the AI Act.
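
To make this mapping concrete, the short Python sketch below encodes the rule from this section. It is an illustrative simplification under stated assumptions, not a compliance tool: the function ai_act_category and its structure are hypothetical, and the class sets simply restate the assessed classes named above (MDR Classes IIa, IIb, and III; IVDR Classes B, C, and D).

    # Illustrative sketch only: encodes the device-class-to-risk mapping
    # described in this article. Not a legal or compliance tool.

    # Classes requiring third-party conformity assessment (per this article):
    MDR_ASSESSED = {"IIa", "IIb", "III"}   # Regulation (EU) 2017/745
    IVDR_ASSESSED = {"B", "C", "D"}        # Regulation (EU) 2017/746

    def ai_act_category(regulation: str, device_class: str) -> str:
        """Return the AI Act risk category for an AI System embedded in a
        device covered by the MDR or IVDR (hypothetical helper)."""
        if regulation == "MDR" and device_class in MDR_ASSESSED:
            return "high risk"
        if regulation == "IVDR" and device_class in IVDR_ASSESSED:
            return "high risk"
        return "low or minimal risk"

    # The two examples from the text:
    print(ai_act_category("MDR", "I"))    # EMR recruitment tool -> low or minimal risk
    print(ai_act_category("MDR", "IIa"))  # skin lesion diagnosis -> high risk

Running the sketch reproduces the two examples above: the Class I recruitment tool falls outside the high-risk category, while the Class IIa diagnostic tool falls within it.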

Obligations of Providers and Users of AI Systems

Providers of high-risk AI Systems are required to adhere to specific mandates, with potential fines for non-compliance. These mandates include:

  • Implementing risk and quality management systems, along with ensuring appropriate levels of accuracy, robustness, and cybersecurity.
  • Creating and maintaining technical documentation for a set period and keeping a log of events.
  • Upholding principles like data governance and management, and ensuring transparency.
  • Informing users and conducting a conformity assessment.

Users of high-risk AI Systems also have their own set of obligations, such as using the system as intended and notifying the provider in case of a serious incident or malfunction.

Additionally, certain AI Systems are subject to a transparency obligation. This particularly applies to AI Systems designed to interact with people. These must be engineered in a manner that makes it clear to individuals that they are engaging with an AI System, unless this is apparent from the context and circumstances of use (as per article 52). For users of generative AI Systems that create deepfakes, there is a requirement to disclose that the content has been artificially generated or manipulated.


Similarities with Data Protection Legislation

The AI Act exhibits certain parallels with data protection legislation such as the GDPR. Key similarities include:

  • Mandatory notification to the competent authority in case of violations of Union laws that safeguard fundamental rights. This is akin to the GDPR's requirement for data controllers to report breaches.
  • In the GDPR, data controllers must keep a detailed record of their data processing activities. Similarly, the AI Act stipulates that providers must maintain records of events, preferably through automatic recording.
  • Both laws mandate informing users and implementing suitable technical and organizational measures to ensure the accuracy, robustness, and cybersecurity of the system.
  • In the Life Sciences sector, principles of data governance and management are particularly crucial under both the GDPR and the AI Act.
  • Both the AI Act and GDPR require a designated representative within the European Union for providers or controllers based outside the EU.


AI regulatory sandboxes

Member States or the European Data Protection Supervisor may offer providers a "regulated setting that supports the development, testing, and validation of novel AI Systems for a limited duration prior to their market introduction or deployment, in accordance with a detailed plan" (article 53). For AI Systems operating within these regulatory sandboxes, distinct rules apply to the processing of personal data.

Seamus Larroque

CDPO / CPIM / ISO 27005 Certified