
Artificial Intelligence: Preparing for the EU AI Act

February 28, 2024

By Kimia Favagehi

On February 2, 2024, EU member states approved the final text of the EU's Artificial Intelligence Act, establishing an official framework for the use of AI. According to the European Parliament, its priority is to "make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly." While other countries, such as the U.S. and China, have established AI frameworks of their own, the EU's AI Act is set to become the world's first-ever AI law.

Risk Categories

The Act classifies different categories of risk present in AI systems, each with its own set of obligations. The final text also provides several examples of the risk categories discussed below. To best prepare, AI system providers and deployers should carefully review these categories to better understand the Act's risk-based framework.

Unacceptable Risk. Systems falling into this category present a threat to individuals and will be banned. Unacceptable-risk AI systems may include those involving biometric identification/categorization of individuals and social scoring.

High Risk. High-risk systems "negatively affect safety or fundamental rights" and include products covered by the EU's product safety legislation. Additionally, the Act states that systems in certain categories (e.g., law enforcement, critical infrastructure, employment, and credit scoring) must be registered in an EU database. Systems classified as high risk must be assessed before being placed on the market.

General Purpose and Generative AI. Under the AI Act, generative AI tools must comply with specified transparency requirements, including:

  • disclosing that the content was generated by AI;
  • designing the model to prevent it from generating illegal content; and
  • publishing summaries of copyrighted data used for training.

Limited Risk. Systems in this category should notify individuals that they are interacting with an AI system, and "should comply with minimal transparency requirements that would allow users to make informed decisions."

Finally, under the AI Act, systems that have minimal or no risk may be freely used.

Enforcement

Violations of the EU AI Act can result in fines of up to €35 million or 7% of annual worldwide turnover (significantly higher than the GDPR's fines of up to €20 million or 4% of a firm's annual worldwide turnover).

Next Steps

Almost three years after it was first introduced in April 2021, the Act is expected to receive a plenary vote on its final text in April 2024. Although the Act will likely not take effect for some time (until at least 2025-2026), companies can start preparing by following these steps:

  • assess AI systems for category of risk;
  • conduct adequate due diligence to ensure quality of data sets, privacy and security safeguards, and overall ethical use; and
  • consult with experts to ensure your company complies with the various legal, business, and ethical considerations associated with AI.

The Paul Hastings Data Privacy and Cybersecurity group continues to monitor the EU AI Act and other developments as we support our clients. If you have any questions, please do not hesitate to contact any member of our team.

Practice Areas

Data Privacy and Cybersecurity


For More Information

Kimia Favagehi

Associate, Litigation Department
