What to know about AI regulations in the US and EU

Both the US and the EU are setting firm rules to guide AI development, aiming to spur innovation while protecting society from potential harms. Their regulations prioritize transparency, security, and accountability, impacting everyone from developers to end users.

By classifying AI applications based on their risk level, these rules determine how AI systems should be developed, deployed, and managed to prevent misuse and protect fundamental rights.

The US and EU divide AI systems into categories based on their potential to cause harm, shaping the regulatory landscape for AI innovation. Two requirements cut across both regimes:

  • Transparency and security: Both regions require that developers disclose how AI models are trained, the data sources used, and potential risks involved. The aim here is to build trust and facilitate compliance. Security requirements focus on protecting AI systems against unauthorized access, malicious attacks, and data breaches.
  • Security by design and by default: High-risk AI systems must integrate security measures from the very beginning—requiring developers to proactively address security concerns during the AI’s development, rather than attempting to add safeguards after deployment.

Breaking down the latest AI rules and what they mean

How US AI regulations could impact your development

In the US, new regulations set specific roles and responsibilities for federal agencies, aiming to strengthen oversight and governance of AI.

All federal agencies are now required to appoint a Chief AI Officer to oversee AI activities and ensure compliance and responsible use. Agencies must submit annual reports outlining the AI systems in use, associated risks, and strategies to mitigate those risks. The focus here is on risk management, continuous testing, transparency, and oversight to maintain public trust in AI technologies.

Managing the EU’s AI laws for a smooth compliance journey

The EU’s regulatory framework mandates comprehensive security measures and risk management practices for AI systems, especially those deemed high-risk.

“Security by design” is a non-negotiable requirement, particularly for high-risk AI applications. Under Article 15 of the EU AI Act, developers must implement rigorous testing to detect and control risks like data poisoning or model tampering. AI applications are classified into three risk tiers:

  • Unacceptable risk: AI systems that pose major threats to human safety or rights are banned outright, including practices such as social scoring and real-time biometric surveillance (including facial recognition) in public spaces, except in narrowly defined law enforcement cases.
  • High risk: This category includes AI applications that could impact safety or fundamental rights, such as those used in critical infrastructure, education, employment, or law enforcement—requiring strict oversight, testing, and compliance before deployment.
  • Low risk: Most current AI applications fall into this category, such as AI-enabled games or spam filters. While these face minimal regulation, they must still adhere to transparency and basic security standards.

Security by design is now a must for AI developers

Practical steps for building AI with security at its core

Building secure AI systems requires integrating comprehensive security measures at every stage of development. Developers should focus on safeguarding the integrity of code and tools, as well as the data used to train AI models.

The rise of “hallucinated” or “poisoned” software packages, where malicious actors insert flawed or harmful code, has reinforced the need for continuous monitoring and validation of all components.
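A simple first line of defense is to verify any downloaded artifact against a published checksum before loading it. The sketch below is a minimal Python illustration; the file name and expected digest are placeholders you would replace with values from a trusted source such as a lockfile or the publisher’s release notes.

```python
import hashlib
from pathlib import Path

# Placeholder digest: in practice this comes from a lockfile or the
# publisher's signed release notes, never from the download source itself.
EXPECTED_SHA256 = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"

def verify_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to use a package or model file whose hash doesn't match."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"Checksum mismatch for {path}: got {digest}")

verify_artifact(Path("model_weights.bin"), EXPECTED_SHA256)  # illustrative file
```

For Python dependencies specifically, pip’s `--require-hashes` mode performs the same check automatically at install time.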

Developers must implement encryption, controlled access, and automated monitoring to protect sensitive data from threats.
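As a minimal sketch of encryption at rest, the example below uses the widely adopted `cryptography` library to encrypt a sensitive record before it is written to shared storage. In a real system the key would live in a secrets manager behind access controls, not in code.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Illustration only: a production key would come from a secrets manager
# with strict access controls, never be generated inline like this.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a sensitive training record before it leaves the trusted boundary.
record = b'{"patient_id": "12345", "diagnosis": "..."}'
ciphertext = fernet.encrypt(record)

# Only holders of the key can recover the plaintext.
assert fernet.decrypt(ciphertext) == record
```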

Special attention is needed in environments with large-scale data exchanges, such as platforms like Hugging Face, where securing data and code is vital to prevent unauthorized access or manipulation.
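One concrete safeguard on such platforms is to pin downloads to an exact, audited commit rather than a mutable branch. The sketch below uses the `huggingface_hub` client; the repository name and commit hash are placeholders, not a recommendation.

```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Pinning the exact commit hash means a later, possibly tampered-with
# upload to the same repo can never silently replace the file you audited.
path = hf_hub_download(
    repo_id="some-org/some-model",  # placeholder repository
    filename="model.safetensors",
    revision="abc123def4567890abc123def4567890abc12345",  # audited commit
)
```

Preferring `safetensors` weight files over pickle-based formats further reduces the risk of arbitrary code executing when a model is loaded.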

How to keep your AI code safe from threats and attacks

The EU’s Article 15 focuses on preventing malicious code from infiltrating AI systems. Developers should establish comprehensive code review processes, use vulnerability scanning tools, and adhere to secure coding best practices to identify and eliminate threats early.
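A lightweight way to enforce this in practice is a scanning step in the build pipeline. The sketch below assumes the open-source `pip-audit` tool is installed in the build environment and fails the build when a pinned dependency has a known advisory.

```python
import subprocess
import sys

# Minimal CI gate, assuming pip-audit is available
# (pip install pip-audit) and dependencies are pinned in requirements.txt.
result = subprocess.run(["pip-audit", "-r", "requirements.txt"])
if result.returncode != 0:
    sys.exit("Vulnerable dependencies found; see pip-audit output above.")
```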

The US also mandates that government-owned AI models, code, and data be made publicly available unless operational risks exist, adding another layer of transparency and security responsibility for developers.

How to innovate with AI while staying on the right side of the law

What different industries should watch out for in AI regulations

Different industries face unique regulatory challenges due to the nature of their data and the potential impact of AI systems.

Healthcare organizations must comply with stringent regulations protecting data privacy and security due to the sensitive nature of medical data. They need to maintain the highest standards for data integrity and transparency across all AI outputs, in line with laws like GDPR in the EU and HIPAA in the US.

Financial services firms must balance AI-driven benefits, like predictive monitoring, against strict privacy, fairness, and anti-discrimination regulations. The use of AI in these contexts must be carefully managed to avoid introducing bias or violating consumer rights.

Why privacy and transparency matter more than ever in AI

Both the US and EU place a high priority on protecting individual privacy and ensuring transparency in AI applications.

Developers must provide clear, accessible information on data usage, consent mechanisms, and user rights to opt out of AI-driven decisions. Transparency builds trust and is key for regulatory compliance, helping stakeholders understand how AI systems function and their potential impacts.
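In practice, an opt-out right has to be wired into the decision path itself. The sketch below is purely illustrative, with a hypothetical preference schema and stand-in decision functions; it escalates to human review whenever a user has opted out of automated decisions, in the spirit of GDPR Article 22.

```python
from dataclasses import dataclass

@dataclass
class UserPreferences:
    """Hypothetical schema; field names are illustrative."""
    user_id: str
    ai_decisions_opt_out: bool = False

def automated_decision(features: dict) -> str:
    # Stand-in for a real model-driven decision.
    return "approved" if features.get("score", 0) > 0.5 else "denied"

def human_review(features: dict) -> str:
    # Stand-in for routing the case to a human reviewer.
    return "queued_for_human_review"

def route_decision(user: UserPreferences, features: dict) -> str:
    # Respect the user's right not to be subject to a purely automated
    # decision (cf. GDPR Article 22) by escalating to a human.
    if user.ai_decisions_opt_out:
        return human_review(features)
    return automated_decision(features)

print(route_decision(UserPreferences("u1", ai_decisions_opt_out=True), {"score": 0.9}))
```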

How to stay ahead of AI security risks

Identifying weaknesses early is a must to prevent them from becoming larger vulnerabilities. Developers should maintain a secure codebase through regular audits, employ both static and dynamic analysis tools, and follow secure coding practices to counter emerging threats.
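For the static-analysis side, a security linter such as Bandit can run as a routine gate. The snippet below is a minimal wrapper, assuming Bandit is installed and the source lives under a `src` directory (both assumptions).

```python
import subprocess
import sys

# Run Bandit, a security-focused static analyzer for Python
# (pip install bandit), recursively over the source tree.
result = subprocess.run(["bandit", "-r", "src"])
if result.returncode != 0:
    sys.exit("Static analysis flagged potential security issues.")
```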

Using weak or unverified AI libraries can create security gaps that may be exploited. To address these risks, developers should prioritize vetted libraries, keep dependencies up-to-date, and continuously monitor for known vulnerabilities.
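Beyond CLI scanners, known-vulnerability monitoring can also be done programmatically. The sketch below queries the public OSV.dev database for advisories against a pinned PyPI package; the package and version shown are illustrative, and production code would batch queries and handle network failures.

```python
import requests  # pip install requests

def known_vulnerabilities(package: str, version: str) -> list[dict]:
    """Ask OSV.dev for advisories affecting a pinned PyPI dependency."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": package, "ecosystem": "PyPI"},
              "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("vulns", [])

# Illustrative check of one pinned dependency.
for vuln in known_vulnerabilities("pillow", "9.0.0"):
    print(vuln["id"], vuln.get("summary", ""))
```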

Major AI compliance challenges developers need to clear

How developers can master AI compliance

High-risk AI applications face rigorous compliance requirements, including thorough risk management, extensive testing, data governance, human oversight, and cybersecurity protocols.

Compliance for these applications often involves extensive documentation, validation, and continuous monitoring to make sure they meet regulatory standards.

For low-risk applications, the focus is primarily on maintaining transparency and basic security standards. Even these simpler applications may require periodic reporting and adherence to ethical guidelines to build trust and keep pace with evolving best practices.

Acting on AI security flaws can’t wait

Addressing security challenges proactively is now mandatory under new AI regulations. Developers must implement continuous monitoring, regular updates, and robust incident response plans to keep AI systems secure. Immediate action is necessary to align with both current and future regulatory expectations.
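Continuous monitoring can start as simply as instrumenting the inference path. The sketch below wraps a model’s `predict` call (an assumed interface) to log every request and flag latency anomalies for an incident-response workflow; the dummy model exists only so the example runs end to end.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_monitoring")

def monitored_predict(model, features, max_latency_s: float = 1.0):
    """Wrap an inference call with logging and a simple latency alarm."""
    start = time.monotonic()
    prediction = model.predict(features)  # assumed model interface
    latency = time.monotonic() - start
    logger.info("prediction=%s latency=%.3fs", prediction, latency)
    if latency > max_latency_s:
        logger.warning("Latency threshold exceeded; flag for incident review")
    return prediction

class DummyModel:
    """Stand-in model so the sketch is self-contained."""
    def predict(self, features):
        return sum(features)

print(monitored_predict(DummyModel(), [0.2, 0.3]))
```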

US and EU regulatory data developers should know

In the US, all federal agencies must appoint a Chief AI Officer and submit annual reports that identify AI risks and outline mitigation strategies, creating a more consistent oversight mechanism across government bodies.

The EU’s Article 15 mandates stringent measures for risk control, including the prevention of data and model poisoning. High-risk AI applications must meet comprehensive security and transparency standards before deployment.

Final thoughts

As AI regulations continue to evolve, ask yourself: Is your development strategy agile enough to meet these new demands while driving innovation? Compliance isn’t just about avoiding penalties; it’s a chance to build trust, grow your brand’s value, and gain a competitive edge.

Tim Boesen

September 6, 2024
