Introduction
The AI Pact, an initiative supporting organizations in preparing for the EU AI Act, has highlighted the importance of planning ahead for the regulation’s implementation. FemAI, a member of the AI Pact and a participant in the General-Purpose AI Code of Practice process, reaffirmed its commitment to advancing responsible AI practices. In this blog, we outline key factors for staying ahead of the upcoming legislation.
On November 28, 2024, the AI Pact hosted a webinar titled “Exploring the Architecture of the EU AI Act.” Moderated by Arnaud Latil, the session brought together three experts—Laura Jugel, Irina Orssich, and Kilian Gross—who delved into the Act’s objectives, governance, and implications for businesses. The webinar provided a comprehensive overview of the EU AI Act’s key pillars, emphasizing its role in harmonizing AI governance across Europe while safeguarding sensitive areas.
The AI Act’s objectives and architecture
The EU AI Act is built around three primary objectives. First, it aims to harmonize the internal market with a uniform set of regulations, fostering consistency and clarity across member states. Second, it prioritizes the protection of sensitive areas where the risks of AI failures could have severe consequences—think healthcare, critical infrastructure, or education. Finally, it seeks to establish a clear quality standard for AI systems to build trust between users, businesses, and regulators.
Unlike sector-specific legislation, the EU AI Act is intentionally horizontal, meaning it applies across industries. The Act complements other EU frameworks such as the General Data Protection Regulation (GDPR) and copyright law, creating a seamless legislative ecosystem for AI governance.
One of the Act’s defining features is its risk-based approach, which categorizes AI systems into four tiers of risk:
- Unacceptable risk AI systems, such as social scoring or exploitative surveillance, are outright prohibited.
- High-risk AI systems, including medical devices or automated hiring tools, must comply with strict requirements.
- AI systems with transparency risks, like chatbots, require disclosures to inform users they are interacting with AI.
- Minimal-risk AI systems face no specific requirements, ensuring innovation is not stifled where risks are low.
To foster innovation, the Act includes mechanisms such as regulatory sandboxes, where high-risk AI systems can be tested under real-world conditions. Additionally, an “ecosystem of support” and innovation hubs will provide guidance and resources to organizations navigating the regulations. Importantly, the Act is designed to evolve, incorporating updates to remain relevant in a rapidly changing technological landscape.
The Act is guided by principles that balance regulation with progress: ensuring safety, fostering responsible innovation, and maintaining a future-proof approach. Its governance framework includes coordinated oversight between national and EU-level bodies, which will be critical for consistent enforcement.
Practical key takeaways
For businesses, the Act represents both a challenge and an opportunity. Its phased implementation timeline gives organizations time to prepare, but it also requires them to act decisively.
Phased Implementation Timeline:
- February 2025: Prohibitions on unacceptable AI take effect.
- August 2025: Regulations for governance and general-purpose AI come into force.
- August 2026: Rules for self-standing AI systems become binding.
- August 2027: Regulations for embedded AI systems take effect.
To ensure compliance, organizations should carefully review Article 5, which outlines prohibited practices such as manipulating people’s decision-making or predicting a person’s risk of committing a crime. The Act also encourages businesses to participate in the ongoing public consultations, open until December 11, 2024, by submitting use cases or seeking clarifications on specific prohibitions through this online form.
Challenges and feminist perspective
Implementing the EU AI Act, however, is not without challenges. Staying informed about updates, particularly those related to environmental regulations and general-purpose AI models, is a significant hurdle for businesses. Additionally, enforcement will largely occur at the national level, which may strain smaller member states with limited resources or expertise.
The EU AI Act is a landmark regulation aiming to safeguard fundamental rights and foster innovation, but a feminist perspective highlights areas for deeper consideration. Feminist analysis emphasizes inclusivity, justice, and the protection of marginalized groups, urging policymakers to ensure diverse representation in governance. Without active participation from marginalized, underrepresented, and underprivileged people, regulatory outcomes risk perpetuating existing inequalities.
The Act’s risk-based approach, particularly its prohibitions and high-risk categories, must address the disproportionate impact of AI failures on vulnerable communities. While fairness is a key principle, feminist perspectives call for justice, recognizing structural inequalities and ensuring standards account for systemic barriers. Additionally, the Act must address gaps like the relegation of environmental protection and societal sustainability to optional codes of conduct, elevating them to core obligations that reflect their critical importance for a just and equitable future.
Public consultations provide an opportunity to include marginalized voices, but outreach must go beyond well-funded stakeholders to include representatives of affected communities.
As the Act evolves, its adaptability should prioritize identifying and mitigating new risks for the most vulnerable. If you want to deepen your knowledge of the feminist analysis, check out our blog post “A Feminist Vision for the EU AI Act.” By embedding these principles, the EU AI Act can better achieve its vision of a trustworthy, equitable AI ecosystem that benefits all.
Conclusion
The AI Pact webinar underscored the EU AI Act’s potential to transform AI governance, but it also highlighted the importance of careful oversight. Businesses must prepare proactively, adapting to the Act’s evolving requirements. At the same time, policymakers must ensure that the regulations are inclusive and equitable, reflecting the needs of all segments of society.
Regular webinars from the AI Pact offer further opportunities to engage with the topic.