What you need to know about the EU AI Act

The EU’s AI Act, which entered into force on August 1, 2024, is the first comprehensive legislation regulating AI technologies. It aims to create a framework for the ethical and safe use of AI within the EU, setting a precedent for global regulation.

The primary objective of the Act is to mitigate risks associated with AI, making sure these technologies are used responsibly. The EU also seeks to inspire similar regulatory efforts worldwide, potentially becoming a model for other regions to follow in the coming years.

The core elements shaping AI regulations

The AI Act introduces a risk-based classification system for AI technologies, which categorizes them into four distinct levels of risk:

  • Unacceptable risk: AI systems that threaten safety, rights, and livelihoods, such as biometric surveillance and social scoring systems, fall into this category. Applications like these are poised for outright bans due to their potential for significant harm.
  • High-risk: AI systems used in areas such as education, infrastructure, employment, public services, and law enforcement are subjected to rigorous oversight and must undergo strict scrutiny to make sure they do not pose undue risks to society.
  • Limited-risk: AI technologies in this category must comply with specific transparency requirements. Users must be informed when they are interacting with AI systems, promoting informed decision-making and accountability.
  • Minimal-risk: Most AI systems fall into this category, requiring minimal regulatory oversight. These are generally considered safe for widespread use without extensive scrutiny.
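
To make the tiering concrete, the sketch below models the four categories and maps a few example use cases to the obligations they would attract. The use cases, tier assignments, and one-line obligation summaries are illustrative assumptions, not legal classifications, which in practice follow the Act’s annexes and case-by-case legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict pre-deployment obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical use-case-to-tier mapping, for illustration only.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """One-line summary of the obligations attached to each tier."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited from the EU market",
        RiskTier.HIGH: "conformity assessment, risk management, human oversight",
        RiskTier.LIMITED: "disclose to users that they are interacting with AI",
        RiskTier.MINIMAL: "no specific obligations beyond existing law",
    }[tier]

for use_case, tier in USE_CASE_TIERS.items():
    print(f"{use_case}: {tier.value} -> {obligations(tier)}")
```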

How the EU will keep AI in check

The Act mandates the establishment of supervisory authorities within each EU member state to monitor compliance and implementation. Additionally, a European Artificial Intelligence Board (EAIB) will be set up to coordinate these efforts at the EU level.

This dual-layered approach means that AI regulations are enforced uniformly across the EU, creating a cohesive regulatory environment.

Meeting the EU’s AI standards

High-risk AI systems face stringent compliance requirements before they can be deployed. These include:

  • Human oversight: Keeping humans involved in the operation of AI systems to prevent fully autonomous decision-making that could lead to adverse outcomes (illustrated in the sketch after this list).
  • Risk management systems: Implementing comprehensive frameworks to identify, assess, and mitigate risks associated with AI technologies.
  • Data quality and governance: Maintaining high standards for data accuracy, integrity, and security to ensure reliable AI outcomes.
  • Clear information: Providing users with comprehensive information about the AI systems, their functioning, and their implications.
  • Technical documentation: Maintaining detailed records of AI system designs, operations, and assessments to facilitate audits and regulatory reviews.
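
As an illustration of how the human oversight and technical documentation requirements might combine in practice, the sketch below shows a hypothetical review gate where no AI recommendation takes effect without a human verdict, and every verdict is appended to an audit log. All names and fields are assumptions for illustration; the Act does not prescribe an implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    subject_id: str
    recommendation: str             # what the AI system suggests
    confidence: float               # model confidence in [0, 1]
    approved_by: str | None = None  # set only after human sign-off
    audit_log: list[str] = field(default_factory=list)

def human_review_gate(decision: Decision, reviewer: str, approve: bool) -> bool:
    """Record a human reviewer's verdict; nothing takes effect without one."""
    stamp = datetime.now(timezone.utc).isoformat()
    verdict = "approved" if approve else "rejected"
    decision.audit_log.append(f"{stamp} {verdict} by {reviewer}")
    if approve:
        decision.approved_by = reviewer
    return approve

# Usage: a loan recommendation is held until a named human signs off.
loan = Decision(subject_id="applicant-42", recommendation="deny", confidence=0.91)
if human_review_gate(loan, reviewer="j.doe", approve=False):
    print("Decision takes effect:", loan.recommendation)
else:
    print("Overridden by human review; see audit log:", loan.audit_log)
```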

Mandatory transparency rules

Transparency is a foundation of the AI Act, demanding clear and accessible information about AI applications and their results. It lets users understand how AI systems operate and the decisions they make, building trust and accountability.

To complement this, the Act makes sure users have clear pathways for redress, letting them challenge and seek remedies for adverse decisions made by AI systems.
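
For limited-risk systems such as chatbots, the transparency duty can be as simple as an unambiguous disclosure at the start of every interaction. A minimal sketch, with wording that is purely illustrative:

```python
def start_session(user_name: str) -> str:
    # Transparency rules require users to know they are interacting
    # with an AI system; the exact wording here is an assumption.
    return (
        f"Hello {user_name}, you are chatting with an automated AI assistant. "
        "You can ask for a human agent at any time."
    )

print(start_session("Alice"))
```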

What to expect over the next three years

2024-2025: Establishment of the EAIB and national supervisory authorities

During the initial phase from 2024 to 2025, the focus is on establishing the European Artificial Intelligence Board (EAIB) and the supervisory authorities in each EU member state. This foundational step creates the infrastructure needed to enforce the AI Act’s provisions.

The EAIB will coordinate efforts across the EU, providing a centralized body for guidance and consistency. Member states’ supervisory authorities will handle the local implementation and monitoring, ensuring compliance within their jurisdictions.

2025-2026: Implementation of risk management and conformity assessment procedures for high-risk AI systems

From 2025 to 2026, the emphasis shifts to implementing risk management and conformity assessment procedures for high-risk AI systems. Companies using these systems will need to develop robust frameworks to identify, assess, and mitigate risks, which involves rigorous testing and validation to make sure they operate safely and effectively.

Compliance assessments will be thorough, requiring detailed documentation and proof of adherence to the AI Act’s stringent requirements. This period will be key for setting up the processes that make sure high-risk AI systems are used responsibly and safely.

2026-2027: Enforcement of transparency and accountability measures

The final phase from 2026 to 2027 focuses on enforcing transparency and accountability measures. At this stage, all AI systems must meet transparency requirements, including clear communication to users about their interactions with AI. Companies must provide accessible information about the AI systems’ functionality, decision-making processes, and potential impacts.

Accountability measures will be in full effect, giving users clear paths for redress if they are adversely affected by AI decisions. Enforcement through 2026 and 2027 will aim to build trust in AI technologies by making their operations transparent and holding developers and users accountable for their actions.

How the AI Act will change business and technology

The AI Act necessitates a comprehensive reevaluation of business operations to comply with the new regulations. Companies must scrutinize their AI development pipelines to integrate compliance at every stage. This includes revisiting data governance practices to make sure data used in AI systems is of high quality, secure, and ethically sourced.

Risk management frameworks will need to be comprehensive, including risk identification, assessment, and mitigation strategies.
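
One way to picture such a framework is as a living risk register that is scored and reprioritized as systems evolve. Below is a minimal sketch, with hypothetical risks and a simple likelihood-times-impact score; real frameworks under the Act will be far more extensive.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    description: str
    likelihood: int  # 1 (rare) to 5 (frequent)
    impact: int      # 1 (negligible) to 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact prioritization.
        return self.likelihood * self.impact

register = [
    RiskEntry("Biased training data skews hiring recommendations", 3, 5,
              "Audit dataset demographics; retrain on balanced samples"),
    RiskEntry("Model drift degrades accuracy over time", 4, 3,
              "Monitor live metrics; revalidate quarterly"),
]

# Highest-scoring risks are mitigated first.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"[{entry.score:2d}] {entry.description} -> {entry.mitigation}")
```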

Businesses must also establish processes for human oversight in AI operations, ensuring decisions made by AI systems can be monitored and controlled by humans. Adjustments like these are important for aligning business practices with the new regulatory environment.

The financial impact of adhering to the AI Act

Compliance with the AI Act introduces additional costs for businesses, particularly those using high-risk AI systems. Companies must allocate resources for thorough documentation, including detailed records of AI system designs, operations, and compliance assessments. Rigorous risk management frameworks also add to operational costs, requiring investments in tools, technologies, and personnel to manage them effectively.

Compliance assessments, involving both internal audits and external reviews, will further increase expenditures. While these costs may be substantial, they are necessary to meet the stringent requirements of the AI Act and to avoid penalties for non-compliance.

How compliance can boost your business

Adhering to the AI Act can offer a competitive advantage by creating trust and reliability among customers. By complying with its stringent requirements, companies can demonstrate a commitment to ethical and safe AI practices, which builds customer confidence and can differentiate compliant businesses from those that lag in regulatory adherence.

As consumers become more aware of AI’s potential risks, they are likely to prefer companies that prioritize safety and transparency.

Compliance can also open new market opportunities, particularly in regions where similar regulations may be adopted, positioning compliant companies as leaders in ethical AI deployment.

Setting the stage for global AI regulations

The EU AI Act is expected to serve as a benchmark for global AI regulations, influencing other nations to develop similar legislative frameworks. As countries observe the implementation and outcomes of the EU AI Act, they are likely to adopt comparable measures to regulate AI technologies. This will create a more complex global legislative environment, requiring companies to navigate multiple regulatory frameworks.

Staying compliant with the EU AI Act can help businesses prepare for these global changes, making it easier to adapt to new regulations as they emerge. Companies that proactively align with these standards will be better positioned to operate globally with reduced regulatory friction.

Striking the perfect balance

The EU’s AI Act aims to regulate AI technologies ethically while supporting innovation. By setting clear standards for safety, transparency, and accountability, the Act provides a framework that encourages responsible AI development.

This balance is important for promoting trust in AI technologies, ensuring they are developed and used in ways that benefit society while minimizing risks.

The EU’s proactive stance sets a global precedent, encouraging other nations to follow suit and create their own regulatory frameworks. International alignment can drive the development of global standards for AI, facilitating innovation while safeguarding public interests.

Alexander Procter

August 9, 2024
