The European Union's groundbreaking artificial intelligence law is set to shake up the tech industry, particularly America's biggest technology companies, as it officially takes effect on Thursday.
The AI Act, a historic regulation, was approved by EU member states, lawmakers, and the European Commission in May. It aims to govern how companies develop, deploy, and use AI, addressing the technology's potential harms and creating a comprehensive, harmonized regulatory framework across the EU.
Primarily targeting the large U.S. technology companies, such as Microsoft, Google, Amazon, Apple, and Meta, that lead in AI development, the law will also affect many other businesses, including non-tech firms. Tanguy Van Overstraeten, head of Linklaters' technology practice in Brussels, describes the AI Act as a world first that is likely to affect the many businesses involved in developing or deploying AI.
The legislation takes a risk-based approach, regulating AI applications according to the level of risk they pose to society. High-risk AI systems, such as autonomous vehicles, medical devices, loan-decision systems, and biometric identification systems, face stringent obligations. These include rigorous risk assessments, risk-mitigation systems, high-quality training datasets to reduce bias, activity logging, and mandatory sharing of detailed documentation with authorities.
The AI Act also bans "unacceptable" AI applications, such as social scoring, predictive policing, and emotion recognition technology in workplaces or schools.
For U.S. tech firms, the AI Act brings significant new scrutiny, especially of their operations in the EU market and their use of EU citizens' data. Charlie Thompson of Appian emphasizes that the Act applies to any organization with operations or an impact in the EU, regardless of where it is headquartered, putting the tech giants under heightened scrutiny.
Meta, for instance, has already restricted the availability of its AI model in Europe over regulatory concerns, although the move was not directly linked to the AI Act. The company cited uncertainty over compliance with the EU's General Data Protection Regulation (GDPR).
Generative AI tools, labeled "general-purpose" AI in the Act, include OpenAI's GPT, Google's Gemini, and Anthropic's Claude. These systems face strict requirements, including adherence to EU copyright law, transparency disclosures, and routine testing. Open-source models, such as Meta's Llama and Stability AI's Stable Diffusion, enjoy some exceptions, provided their parameters are made publicly available and open to access, use, modification, and distribution. Open-source models that pose systemic risks, however, do not qualify for these exceptions.
Breaching the AI Act could bring hefty fines of up to 35 million euros or 7% of global annual revenues, whichever is higher, significantly steeper than the GDPR's maximum of 20 million euros or 4% of global annual turnover. Oversight will fall to the European AI Office, a body established by the Commission in February 2024.
Jamil Jiva of Linedata notes that the EU intends to enforce the law with significant fines to ensure compliance, extending the global regulatory influence it demonstrated with GDPR. However, many provisions of the AI Act, including the rules on general-purpose systems, won't come into effect until at least 2026, giving companies a transition period to comply.
This new law marks a significant step in regulating AI, potentially setting a global precedent and reinforcing the EU's role as a leader in digital regulation.