Why the EU can’t agree on AI and privacy rules

The European Union just hit pause on two major proposals: the AI Liability Directive and the ePrivacy Regulation. Officially, the reason given was “no foreseeable agreement.” Unofficially? The real reason is that EU member states are locked in a battle over AI’s future.

France wants to go all in on AI innovation; President Macron made that clear at the recent AI summit. Germany and several other member states, on the other hand, remain deeply skeptical and are pushing for heavy regulation. The EU’s two economic powerhouses are completely at odds, and when France and Germany don’t agree, nothing moves forward.

Governments fear that if they regulate too soon, they’ll kill their AI industry before it even takes off. But if they wait too long, the risks of unchecked AI (data misuse, biased algorithms, security flaws) could explode. It’s a high-stakes balancing act, and right now, the EU is choosing to delay rather than decide.

This divide is growing. The EU isn’t operating as a unified entity on AI anymore. Instead, we’re seeing individual countries carve out their own approaches, which could lead to a fragmented regulatory landscape across Europe. That’s not great for businesses trying to operate across borders, but it also means companies have some breathing room before the rules tighten.

The problem with overregulating AI too soon

One of the biggest fears about regulating AI is that it could stifle innovation before the industry has a chance to fully develop. The EU has a reputation for being overly bureaucratic, turning simple ideas into endless red tape. There’s a saying in tech circles: The EU can take a two-sentence problem and turn it into a 14.5-page regulation that contradicts itself multiple times.

The challenge here is speed. AI is evolving at an exponential rate, and the last thing Europe wants is to build a regulatory framework that’s outdated the moment it’s implemented. Overregulating now could put European AI companies at a huge disadvantage against competitors in the U.S. and China, where regulatory environments are more flexible.

If you apply heavy AI regulations too early, you risk handicapping your own companies. While the EU debates its approach, AI development will continue elsewhere, likely in places with fewer restrictions. This could leave European businesses struggling to keep up.

But here’s the irony: The EU also doesn’t want to be caught flat-footed. If AI moves too fast without oversight, it could create massive legal and ethical headaches down the line. That’s why we’re seeing this cautious approach: regulators are trying to buy time.

The EU, once known as the world’s strictest tech regulator, is beginning to acknowledge that if it doesn’t loosen up, it will fall further behind in the AI race. That’s a major departure from its usual playbook, and it suggests that global competition is pushing even Europe to rethink its regulatory strategy.

Why the EU’s decision matters beyond Europe

The EU has historically led the charge in setting global regulatory standards; look at GDPR, which reshaped privacy laws worldwide. But pulling back on AI liability means one thing: there’s no clear global leader in AI governance right now.

This matters because when the EU enforces rules, other regions tend to follow. Companies adjust their global policies to stay compliant with European law, which has a ripple effect on how AI is handled elsewhere. Now that the EU is hesitating, it creates two possible outcomes.

First, we could see regulatory fragmentation: different countries implementing different AI rules, making compliance a nightmare for businesses operating internationally. Second, we might see a “race to the bottom,” where governments with the fewest restrictions attract AI development, prioritizing speed over oversight.

Without clear rules from the EU, other countries may hesitate to introduce their own regulations, leading to uncertainty and inconsistent standards. And if companies start flocking to jurisdictions with the weakest oversight, we could end up in a situation where AI advances rapidly but without the necessary safeguards.

This is a pivotal moment: the EU’s decision sets the tone for AI governance worldwide. Right now, we’re in a wait-and-see phase, but it won’t last forever. AI is moving too fast. At some point, regulation will catch up. The only question is when, and who will take the lead.

Why the EU dropped its privacy law and why that’s a good thing

The EU just pulled the plug on a major privacy regulation: the ePrivacy Regulation, often called the “cookie law.” On paper, this might look like a setback for digital privacy. In reality, it’s a smart move.

Here’s why: The original proposal was already outdated. The internet moves fast, and tech giants like Google have changed how user data is handled. Regulators recognized that the law, in its current form, wouldn’t actually improve privacy; it would just add another layer of complexity without meaningful benefits.

The EU has bigger priorities to address, and this law wasn’t keeping up with the modern digital landscape. Instead of pushing forward with an ineffective regulation, the EU is choosing to refocus on what actually matters: AI governance and broader data protection policy.

This decision also reflects a broader shift in regulatory thinking. In the past, the EU has been quick to introduce strict rules, assuming more regulation automatically leads to better outcomes. But now, there’s an acknowledgment that timing and adaptability matter. Regulators don’t want to enforce laws that will be obsolete the moment they’re enacted.

That doesn’t mean privacy isn’t a concern; far from it. But instead of rolling out another rigid framework, lawmakers are waiting to see how the broader EU AI Act plays out before making further decisions. They want to make sure the entire regulatory structure fits together, rather than rushing out fragmented policies.

For businesses, this means fewer unnecessary compliance headaches, for now. But make no mistake: data privacy remains on the agenda. The EU isn’t backing off; it’s just recalibrating.

The unfinished business of AI liability

If there’s one thing the EU’s decision doesn’t fix, it’s the question of who is responsible when AI goes wrong. This is the biggest unresolved issue in AI governance today.

Let’s say an AI system makes a critical mistake: it misdiagnoses a patient, causes financial losses, or even leads to a fatal accident. Who takes the blame? The company that built the AI model? The business that fine-tuned it? Or the end user who interacted with it? Right now, there’s no clear answer.

There is a fundamental problem: GenAI operates in a way that traditional liability laws weren’t designed to handle. AI is stochastic, meaning it doesn’t always give the same answer to the same question. If developers themselves can’t predict exactly how their models will behave in every scenario, how can you hold them legally accountable for AI’s decisions?
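To make that stochasticity concrete, here is a minimal toy sketch in Python. It is not any particular vendor’s API; the vocabulary, logits, and temperature value are all invented for illustration. It shows how temperature sampling, the standard decoding technique behind most generative models, lets identical inputs produce different outputs:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample an index from logits after temperature scaling.

    With temperature > 0, the same logits can yield different
    choices on different calls -- the nondeterminism at the heart
    of the liability debate.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy "model": identical input, three runs, potentially three answers.
vocab = ["benign", "malignant", "inconclusive"]
logits = [2.0, 1.6, 1.2]  # hypothetical scores for one diagnosis prompt

for run in range(3):
    choice = vocab[sample_with_temperature(logits, temperature=0.8)]
    print(f"run {run}: {choice}")
```

Run it a few times and the “diagnosis” can flip between runs even though nothing about the input changed. That is precisely the behavior traditional liability law struggles to assign blame for.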

That’s what the EU’s proposed AI liability rules were meant to solve. The idea was to create a framework that establishes legal responsibility beyond just private contracts. But now that these regulations are on hold, businesses are left with uncertainty.

There is another key issue: AI-generated data logs. One of the scrapped proposals would have required logging every AI interaction, an attempt to create an audit trail for accountability. But implementing such a system at scale is a logistical nightmare. AI interactions generate massive amounts of data, and securely storing and managing that data comes with its own privacy and security challenges.
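For a rough sense of what such an audit trail could look like, here is a hedged sketch; the JSONL format, field names, and hashing choice are assumptions for illustration, not anything the scrapped proposal specified. It appends one record per interaction while storing only hashes of the raw text, one possible way to blunt the privacy problem:

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def log_interaction(path, model_id, prompt, response):
    """Append one AI interaction to a JSONL audit log.

    Hashing the raw text instead of storing it keeps an integrity
    check for disputes while limiting how much sensitive content
    accumulates in the log -- the privacy/volume trade-off above.
    """
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: one log line per model call.
log_interaction("audit.jsonl", "demo-model-v1",
                "Summarise this contract.", "The contract states ...")
```

Even in this stripped-down form, at millions of interactions a day the storage, retention, and access-control questions come right back, which is why the logging mandate was so contentious.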

So where does that leave businesses? For now, AI liability will be handled on a case-by-case basis through private contracts and legal disputes. But that’s not a long-term solution. As AI becomes more embedded in everyday decision-making, the need for clear, enforceable rules will only grow.

The EU isn’t ignoring this issue; it’s just buying time. The real question is whether it can build a legal framework that works before AI advances even further. Because one thing is certain: AI isn’t slowing down. Regulation, on the other hand, still is.

Key takeaways

  • Divergent regulatory views: EU member states are split on AI governance, with France favoring innovation and Germany pushing for strict oversight. Decision-makers should monitor policy developments as regulatory fragmentation may impact cross-border operations.

  • Risks of overregulation: Overly rigid AI regulations could stifle innovation and diminish competitive edge for European companies. Leaders should prioritize a balanced approach that supports technological advancement while ensuring accountability.

  • Global regulatory impact: The EU’s hesitancy in enforcing new AI liability rules could lead to inconsistent international standards or a race to the bottom in regulation. Executives need to prepare for a potentially fragmented global regulatory landscape affecting AI deployment.

  • Unresolved AI liability: The absence of clear guidelines on AI liability creates legal uncertainties, particularly when errors could lead to financial or reputational damage. Stakeholders should implement interim contractual measures and actively engage in shaping forthcoming regulations.

Alexander Procter

February 20, 2025
