Enforcement of the Online Safety Act faces criticism for its perceived lack of vigor
The Online Safety Act was meant to be a game-changer, especially for how we manage risks online and protect vulnerable users. But right now, the enforcement side isn’t keeping up. Ofcom, the UK’s communications regulator, has the right tools: it can fine companies up to 10% of their global revenue or £18 million, whichever is greater. That’s serious leverage. However, up to this point, the use of these powers has felt procedural and overly cautious. Critics, including media expert Iona Silverman from Freeths, say Ofcom’s approach comes across more like ticking boxes than disrupting harmful practices.
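To make the scale of that leverage concrete, the penalty ceiling works out as a simple maximum of two figures. Below is a minimal illustrative sketch; the revenue numbers are hypothetical and used only to show where the £18 million floor gives way to the 10% cap.

```python
def max_osa_penalty(global_revenue_gbp: float) -> float:
    """Online Safety Act penalty ceiling: 10% of global revenue
    or £18 million, whichever is greater."""
    return max(0.10 * global_revenue_gbp, 18_000_000)

# Hypothetical revenues, for illustration only.
print(max_osa_penalty(50_000_000))     # 18000000.0 -> the £18m floor applies
print(max_osa_penalty(2_000_000_000))  # 200000000.0 -> 10% of revenue applies
```

For any platform with global revenue above £180 million, the 10% figure is the binding one, which is why the regime is material even to the largest companies.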
What’s missing here is urgency. Social platforms change fast. Harmful content doesn’t wait. And if regulators can’t, or won’t, act decisively, the risks expand. You leave the door open for ongoing exposure to harmful content, especially for kids and young adults who are the most impacted. If people start feeling like the government can’t safeguard its citizens online, the pressure builds for more extreme policies, like blanket bans for under-16s. That’s already being actively considered in countries like Australia.
Here’s the broader context for any executive watching this unfold: regulation doesn’t hold back innovation; stagnation does. Waiting for platforms to self-correct ignores economic and social realities. The lesson is simple: if we want trust in the digital space, enforcement must be real, immediate, and consistent. C-suite leaders should operate on the assumption that penalties will be ignored unless they have a real impact on revenue or user growth. For compliance to work at scale, enforcement needs to scale with it, and move as fast as the platforms it’s trying to regulate.
Investors care about predictability. Regulators care about public safety. There’s no conflict when both sides commit to clear, actionable, and transparent standards. Delays or diluted enforcement only invite risks, and ultimately erode user trust in both platforms and regulators.
Social media algorithms contribute to minors’ exposure to harmful and inappropriate content
Let’s stop pretending this issue starts and ends with underage users lying about their age. That’s only part of the problem. The bigger concern is the algorithmic systems these platforms run—specifically, how they target and deliver content. These systems aren’t neutral. They’re optimized for engagement, not safety. And when a 12-year-old gets past an age gate, the next interaction is dictated by algorithms that prioritize time-on-platform over protection.
Iona Silverman, media expert at Freeths, made this point clearly: these platforms aren’t just passive tools. Once users are onboard, algorithms shape their entire experience. For underage users, that often means being fed content and ads they should never have seen: ads that push age-restricted products, messages that distort body image, or political messaging that exploits emotion. According to an Advertising Standards Authority study involving 100 children, misrepresenting age is common, and once children are in the ecosystem, access to inappropriate content is almost inevitable.
It’s important for executive teams and product decision-makers to understand this: the problem isn’t just the backdoor access, it’s what happens after that access is granted. Every data point, every swipe, every click feeds back into a system that knows how to keep users locked in, including minors. That feedback loop has huge implications for user safety and regulatory liability.
From a compliance standpoint, waiting for legislation to catch up is shortsighted. The more responsible move is to overhaul recommendation engines and embed accountability into the way platforms serve content.
There’s a clear opportunity right now for platforms to lead with transparency. Show exactly how algorithms are being adjusted to flag high-risk content. Create real-time tools to audit exposure and introduce age-aware content models that gate the front door and everything beyond it. These actions should be standard, not optional. And they need to be driven by leadership, not forced by fines.
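As one illustration of what “age-aware beyond the front door” could mean in practice, here is a minimal sketch of a feed-level filter. Everything in it is hypothetical: the ContentItem fields, the risk scores, and the thresholds are invented for illustration and are not drawn from any platform’s actual recommendation system.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    item_id: str
    age_rating: int      # minimum suitable age assigned at classification time
    risk_score: float    # 0.0 (benign) to 1.0 (high risk), from a safety classifier

def age_aware_filter(items: list[ContentItem],
                     user_age: int,
                     minor_risk_threshold: float = 0.4) -> list[ContentItem]:
    """Drop items rated above the user's age, and apply a stricter
    risk threshold for minors than for adults (thresholds are illustrative)."""
    threshold = minor_risk_threshold if user_age < 18 else 0.8
    return [
        item for item in items
        if item.age_rating <= user_age and item.risk_score < threshold
    ]

# Example: a 12-year-old's candidate feed before ranking.
feed = [
    ContentItem("a1", age_rating=0,  risk_score=0.1),   # kept
    ContentItem("a2", age_rating=18, risk_score=0.2),   # dropped: age-gated
    ContentItem("a3", age_rating=0,  risk_score=0.7),   # dropped: too risky for a minor
]
print([i.item_id for i in age_aware_filter(feed, user_age=12)])  # ['a1']
```

The design point is simply that age and risk checks sit inside the ranking path itself, so they apply to every item served, not just at sign-up.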
Exposure to unregulated content on social media adversely impacts the mental health of minors
The correlation between unfiltered digital exposure and declining mental health among young users is well-established. When minors are pulled into unmoderated environments, ones where harmful content, predatory messaging, or unrealistic social comparisons are pushed to the top of their feeds, the psychological impact is direct and measurable.
We already know children frequently bypass age verification systems to access social media platforms. That’s been confirmed by research from the Advertising Standards Authority, which found that many underage users easily misreport their age to join and engage. Once inside, the system isn’t designed to distinguish user maturity. It operates on behavior-based input and engagement metrics, not ethical consideration. That’s a flaw. And one that product and policy leads inside social media companies need to own.
The result? Youth anxiety spikes. Self-esteem erodes. Social validation becomes addictive. These outcomes have been documented by experts in child psychology, independent watchdog groups, and now governments. Yet platforms still operate as though the risk is fringe or manageable with minimal intervention. That assumption no longer holds up legally, socially, or operationally.
For executives leading strategy, product, or risk teams, this trend should trigger immediate evaluation of content exposure paths. Driving short-term engagement without accounting for long-term user health isn’t sustainable. Investors and regulators are moving in the same direction: greater accountability, especially around harm linked to minors.
The business risk now runs far beyond optics. If platforms fail to address these patterns, they invite aggressive policy responses, higher compliance costs, and long-term reputational damage. A more strategic move is committing resources toward in-platform mental health controls, content filtering tools designed around age-appropriate exposure, real-time reporting mechanisms, and partnerships with independent safety experts.
Social media platforms must assume direct responsibility for moderating harmful content
Relying on users to flag content after harm happens isn’t scalable, and it’s not credible anymore. That model worked when platforms were smaller, content volumes were manageable, and expectations around responsibility were lower. That time is over. Users shouldn’t be treated as free labor for moderation. Social media companies have both the technological capability and the legal obligation to build systems that catch the threats in real time, before they spread.
The Online Safety Act makes this responsibility explicit. Iona Silverman, media expert at Freeths, has been clear: platforms no longer have the option to remain passive. They are now legally bound to prevent online harm proactively, especially for minors. That means automated tools must be paired with human oversight, not buried behind a user reporting system that shifts blame and delays action.
Waiting for a notification to remove hateful, illegal, or damaging content is a guaranteed failure, and regulators know it. Beyond user safety, it creates a delayed-response liability that hits harder the longer companies ignore it. Content spreads fast. Harm happens quickly. Mitigating it after the fact compromises trust and attracts scrutiny.
For leadership teams in tech, legal, or governance roles, the priority must be to integrate prevention mechanisms into every stage of product architecture, not bolt them on as afterthoughts. Machine-learning models should have thresholds tailored to legal standards. Moderator teams should be equipped with intervention protocols, not optional workflows. Content pipelines need to embed classification layers that flag threats automatically without compromising legitimate dialogue.
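A minimal sketch of what an embedded classification layer could look like follows; the policy categories, scores, and thresholds are hypothetical and would in practice be tuned against legal definitions and audited moderation outcomes.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    HUMAN_REVIEW = "human_review"
    BLOCK = "block"

# Illustrative per-category thresholds: (auto-block at, escalate to human at).
THRESHOLDS = {
    "illegal_content": (0.90, 0.50),
    "hate":            (0.95, 0.60),
    "self_harm":       (0.85, 0.40),
}

def triage(scores: dict[str, float]) -> Action:
    """Route a content item based on classifier scores per policy category.
    Anything over the block threshold is stopped before distribution;
    borderline scores go to human moderators rather than being ignored."""
    action = Action.ALLOW
    for category, score in scores.items():
        block_at, review_at = THRESHOLDS.get(category, (1.01, 1.01))
        if score >= block_at:
            return Action.BLOCK
        if score >= review_at:
            action = Action.HUMAN_REVIEW
    return action

# Example: a post that is borderline on self-harm signals.
print(triage({"illegal_content": 0.05, "self_harm": 0.55}))  # Action.HUMAN_REVIEW
```

The key design choice is that borderline items route to human review rather than defaulting to allow, which keeps automated speed without removing human judgment from contested calls.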
For companies with global operations, this also means aligning internal processes with growing regulatory requirements, because those expectations aren’t scaling back. Inaction is no longer an inexpensive choice. Deploying advanced moderation systems, publishing transparency reports, and engaging with regulators early signals control, not compliance fatigue. It’s the base level of operational maturity regulators now expect from any platform managing large audiences.
Acting now preserves optionality. Platforms that lead in proactive moderation retain more freedom in how they operate and innovate. Those that resist find that decisions get made for them.
Ofcom must continuously update its guidelines to remain effective against emerging technologies like AI
Technology doesn’t wait. AI evolves fast, content moves faster, and threat patterns shift daily. Regulatory frameworks built five years ago, or even last year, don’t match the pace of change in today’s digital platforms. That’s why static enforcement models won’t deliver effective results. For the Online Safety Act to have real traction, Ofcom needs to adopt a live, iterative approach to its guidelines.
Iona Silverman from Freeths made this point, calling for Ofcom to remain adaptable and forward-thinking. She’s right. Applying outdated technical assumptions to modern digital systems reduces the impact of regulation and increases the compliance gap. Social media platforms use deep learning, predictive AI, and behavioral modeling to shape user experiences. If regulation trails behind that curve, it’s irrelevant before it’s implemented.
Executives in tech, risk, and governance roles should take this as a clear signal: compliance will not be a single document. It will be an ongoing process that demands the same flexibility and speed typically reserved for product development and security. If platforms want to scale responsibly, they need to engage with regulators as strategic partners, actively contributing to the evolution of rules, rather than waiting for penalties once they’re out of alignment.
There’s also long-term upside in helping shape the regulatory roadmap. Companies that assist regulators in understanding emerging tools like generative AI, real-time content synthesis, and immersive media will be in a stronger position when the next layer of compliance is introduced. They’ll be more prepared and less reactive, with far stronger internal systems already in place.
The goal isn’t to slow tech down, but to guide its trajectory so that innovation doesn’t undercut safety. Future-proofing regulations doesn’t mean forecasting every specific threat; it means building adaptability into the system. A regulator capable of adjusting guidelines as fast as platforms ship features is a competitive asset, not an obstacle. Companies that understand and support that dynamic will scale cleaner, move faster, and avoid expensive overcorrection later.
International concerns over free speech underscore debates around the Online Safety Act’s implications
The conversation about the Online Safety Act isn’t limited to the UK. The United States, through its State Department, has raised concerns that the act could infringe on free speech. That matters. When global democracies start questioning whether protecting users online also restricts expression, it signals a challenge that regulators and tech companies both have to solve directly, not defer.
Iona Silverman, media expert at Freeths, addressed these points, stating that the UK government has been clear: the intent of the act is to combat criminal content, not silence public discourse. That distinction is critical. Governments are trying to stop platforms from enabling crime, abuse, and systemic harm while still keeping open dialogue intact.
C-suite leaders in tech, especially those operating across jurisdictions, should pay attention to how content moderation rules interact with expectations around freedom of expression. It won’t be enough to say you’re in compliance. Stakeholders, including regulators, users, legal observers, and investors, will demand visibility into how those standards are being balanced and enforced across different markets.
This is where strategic clarity matters. The goal isn’t to decide between safety and speech. It’s to design policies, internal and external, that show those rights can coexist. Companies that dismiss the concerns raised by international stakeholders risk being seen as either evasive or negligent. That’s not sustainable.
There’s momentum building globally around controlled digital environments that protect users but still uphold rights. Public trust in platforms relies on showing that protections don’t automatically mean censorship. Silverman pointed to recent media, such as the TV drama “Adolescence”, to demonstrate what happens when regulation is absent: real harm, often targeting younger users, goes unchecked.
Executives need to be part of the solution architecture, not bystanders caught in the middle. That means supporting rules that are narrowly applied to criminal behavior, transparent in execution, and consistently reviewed. The debate over the Online Safety Act isn’t just legal, it’s strategic. And companies that get it right will gain leverage across both compliance and reputation in global markets.
Main highlights
- Weak enforcement limits impact: Ofcom’s current slow and checklist-driven enforcement undermines the Online Safety Act’s intent. Leaders should push for assertive, penalty-backed regulation to drive platform accountability and preempt politically driven overreach.
- Algorithms drive user risk: Platforms are enabling harmful content exposure through algorithmic feeds, especially for underage users. C-suite leaders should invest in age-aware content delivery systems and align algorithmic design with user safety standards.
- Unregulated exposure hits mental health: Inadequate content filters are linked to rising anxiety and self-esteem issues in minors. Executive teams should prioritize mental health safeguards and integrate risk mitigation into platform growth strategies.
- Reactive moderation is no longer viable: Leaving content moderation to users is outdated and ineffective. Leaders must transition to proactive, tech-driven moderation systems backed by clear human oversight to meet both legal and ethical standards.
- Regulations must evolve with tech: Static guidelines can’t keep pace with AI-driven platforms and emergent online threats. Decision-makers should advocate for—and participate in—regulatory co-development that enables real-time adaptability.
- Free speech concerns signal global scrutiny: International reactions suggest regulatory efforts must balance safety and expression effectively. Executives should ensure transparency in content governance and design policies that reinforce both open discourse and harm prevention.