Legacy systems and insufficient infrastructure as major barriers
The conversation around artificial intelligence in government has become operational. But public sector agencies across the UK are still shackled by outdated IT infrastructure. That’s not sustainable. These legacy systems block the practical use of AI where it matters most: regulatory compliance, resource allocation, public service automation, and real-time decision-making. To get serious about AI, you’ve got to start by modernizing the foundations. Otherwise, you’re just running advanced software on obsolete machines. Not smart.
Nicky Furlong at SAS has it right. She leads Public Sector, Health and Life Sciences in Northern Europe, and she’s not being dramatic when she says outdated systems are a major hurdle. According to a recent SAS-backed survey cited by the Public Accounts Committee, 69% of UK government officials confirm that these aging systems are stopping AI from gaining traction. If the base isn’t stable and scalable, everything stacked above it will underperform.
And let’s be clear: the issue isn’t the availability of AI technology. That’s advancing quickly. What’s missing is the integration layer: real-time data flow, API compatibility, dynamic compute environments. These are basic, functional elements that AI needs to operate efficiently. Without them, what you get is a surface-level deployment of AI, often limited to pilot projects or internal assessment tools, with very little public impact. If you’re leading a government or enterprise-scale operation and skipping infrastructure investment while pursuing AI, you’re putting the cart before the horse.
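To make the integration layer less abstract, here is a minimal Python sketch of one slice of it: a thin adapter that normalizes a hypothetical legacy export into a single canonical schema an AI service can consume. The names (LegacyCaseAdapter, CaseRecord, fetch_cases) and the field layout are illustrative assumptions, not taken from any real government system.

```python
# Minimal sketch (illustrative only): a thin adapter that exposes records from a
# hypothetical legacy case-management export as clean, typed objects an AI
# service could consume. All names here are hypothetical.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class CaseRecord:
    case_id: str
    opened_at: datetime
    status: str


class LegacyCaseAdapter:
    """Wraps a legacy flat-file/batch export behind a modern interface."""

    def __init__(self, raw_rows: list[dict]):
        self._raw_rows = raw_rows

    def fetch_cases(self) -> list[CaseRecord]:
        # Normalize inconsistent legacy fields into one canonical schema.
        records = []
        for row in self._raw_rows:
            records.append(
                CaseRecord(
                    case_id=str(row.get("CASE_ID") or row.get("id", "")),
                    opened_at=datetime.fromisoformat(row["opened"]),
                    status=row.get("status", "unknown").lower(),
                )
            )
        return records


# Usage: an AI pipeline reads from the adapter, never from the legacy format.
adapter = LegacyCaseAdapter([{"CASE_ID": "A-101", "opened": "2024-05-01", "status": "OPEN"}])
print(adapter.fetch_cases())
```

The design choice the sketch illustrates is containment: the AI pipeline only ever sees the canonical schema, while the legacy quirks stay isolated in one adapter, which is what makes later moves to real-time data flow and API-level integration tractable.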
C-suite leaders should view infrastructure upgrades not as overhead but as accelerators: critical enablers of operational intelligence, regulatory compliance, and citizen trust. AI can’t just be plugged into old systems and expected to perform miracles. It demands modern architecture designed to support real-time responsiveness, flexibility, and security. With public sector challenges growing increasingly complex, the need is urgent. The choice is clear: modernize now or fall behind fast.
Fragmented data sharing and limited public-private collaboration curtail AI effectiveness
AI is information-driven. If the information is fractured, outdated, or inaccessible, you can’t expect the systems to deliver meaningful outcomes. That’s where the UK government is running into problems: data is fragmented across departments. It’s not connected, and in many cases, not shared at all. That creates silos. When government departments sit on isolated datasets without proper interoperability, AI can’t learn, predict, or act with precision.
Nicky Furlong of SAS calls this issue out directly: poor data sharing is locking out opportunity. Without disciplined data governance and a framework for transparency, AI in the public sector won’t go far. Decision-makers who want results must accept that AI runs on clean, accurate, real-time data, not tribal knowledge stored in outdated systems. This kind of fragmentation makes it hard to spot patterns, manage risk, or optimize public services effectively.
Another missing gear in this system is collaboration with the private sector. Right now, many government AI deployments operate in isolation, missing out on proven tools and faster development cycles that the private sector brings. Bringing private expertise into public implementations doesn’t reduce control; it increases capability. There’s technical maturity and domain-specific innovation in the private space that public sector AI efforts can directly benefit from.
C-suite leaders in government and enterprise positions should see this as a straightforward challenge with a clear fix: enable data-sharing standards and open up more structured partnerships with external AI vendors. The focus should be on trust, clearly defined objectives, and regulatory alignment. If leadership fails to prioritize this, AI development will continue to stall in the bureaucracy. With the right infrastructure and shared intelligence, scale and performance improve fast. Without it, all you have is isolated tools and unrealized goals.
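As a rough illustration of what a data-sharing standard means at the record level, the sketch below validates a record against an agreed cross-department schema before it is exchanged. The schema, field names, and validate_shared_record helper are assumptions made for illustration, not an actual UK government standard.

```python
# Minimal sketch (illustrative assumptions throughout): validating a record
# against a shared, cross-department schema before it is exchanged. The schema
# and field names are hypothetical, not a real government standard.
REQUIRED_FIELDS = {
    "record_id": str,
    "department": str,
    "updated_at": str,   # ISO 8601 timestamp
    "payload": dict,
}


def validate_shared_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"{field} should be {expected_type.__name__}")
    return problems


record = {"record_id": "R-77", "department": "DWP", "updated_at": "2024-06-10T09:00:00", "payload": {}}
assert validate_shared_record(record) == []  # conforms to the shared schema
```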
Necessity of a coherent, citizen-focused data strategy
If AI is going to improve public services in a measurable way, it needs to be linked to outcomes that matter, outcomes that affect people directly. That only happens when you start with a clear data strategy. Hicham Mabchour, UKI Country Leader & Regional Vice President at Dynatrace, made this point in response to recent government investment initiatives. He emphasized that without a data-first approach, AI just becomes another technology project with no guarantee of impact.
The right strategy aligns AI with specific areas where citizen-facing services are underperforming, whether it’s delays in public service delivery, inefficiencies in case management, or gaps in emergency response systems. That kind of alignment doesn’t happen with vague goals or abstract metrics. It requires clearly defined objectives, mapped to data sources that are consistent, accurate, and structured for high-speed analysis.
Mabchour also made it clear that the government shouldn’t expect AI tools to function effectively on broken systems. Deploying AI on top of legacy platforms without fixing the core problems just compounds inefficiencies. What’s needed is a full onboarding process for AI that starts with strategic data gathering, followed by smart integration into relevant workflows.
Leaders need to ask the right questions early: What are we trying to solve? Where are the gaps in service quality? What data do we need to track progress in real time? A data strategy done well ensures that public sector AI is more than just innovation theatre. It’s a practical tool with a defined purpose and measurable return on investment. Without that framework, adoption won’t scale, because it won’t deliver.
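One lightweight way to turn that framework into a reviewable artifact is to write the outcome-to-data mapping down explicitly. The sketch below is hypothetical throughout: the objectives, data sources, KPIs, and targets are placeholders that show the shape of a citizen-focused data strategy, not real programme commitments.

```python
# Minimal sketch of the outcome-first mapping described above. Every entry is a
# placeholder meant to show the structure, not real programme data.
DATA_STRATEGY = {
    "reduce_case_processing_delay": {
        "citizen_outcome": "benefit decisions issued within 10 working days",
        "data_sources": ["case_management_system", "appointment_scheduler"],
        "kpi": "median_days_to_decision",
        "target": 10,
    },
    "improve_emergency_response": {
        "citizen_outcome": "faster response to high-priority calls",
        "data_sources": ["dispatch_logs", "hospital_handover_times"],
        "kpi": "mean_response_minutes",
        "target": 18,
    },
}


def objectives_missing_data(strategy: dict) -> list[str]:
    """Flag objectives that have no mapped data source, so gaps surface early."""
    return [name for name, spec in strategy.items() if not spec["data_sources"]]


print(objectives_missing_data(DATA_STRATEGY))  # [] means every objective is measurable
```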
Importance of a skilled workforce and transparent AI governance
Technology alone doesn’t solve critical problems; people do. Even with substantial AI investment, progress in the public sector will stall if the workforce doesn’t have the right expertise. Hicham Mabchour at Dynatrace laid this out clearly: AI success hinges on the people operating, overseeing, and interpreting those systems. That requires training, fluency, and a deep understanding of both the potential and the limitations of AI models.
Governments can’t rely on a few technical specialists to carry this forward. They need broad, operational adoption supported by upskilling across departments. That means creating teams fluent in AI who understand real-world data flows, model behaviors, ethical frameworks, and output validation. This is especially important in areas like healthcare, justice, and social services, where decisions influenced by AI must meet a high standard of scrutiny.
Transparency is also critical. If agencies use AI to support decisions without clearly communicating how those systems produce results, they risk losing public trust. Transparency comes from making sure the entire AI process is open to review, comprehensible to leadership, and guided by accountable governance structures. Citizens expect that when algorithms support decision-making, those systems are fair, explainable, and accountable.
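A concrete way to support that expectation is an auditable record kept every time an AI system contributes to a decision. The sketch below assumes a hypothetical DecisionRecord schema; the fields are illustrative, not a prescribed governance standard.

```python
# Minimal sketch (assumptions labelled): recording an auditable trace each time
# an AI system contributes to a decision. The fields and the DecisionRecord name
# are hypothetical; a real governance framework would define its own schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    case_id: str
    model_version: str          # which model produced the recommendation
    inputs_summary: dict        # the data the model actually saw
    recommendation: str         # what the system suggested
    explanation: str            # plain-language reason surfaced to reviewers
    human_decision: str         # the final, human-owned outcome
    timestamp: str


def log_decision(record: DecisionRecord) -> str:
    """Serialize the record so it can be stored and reviewed later."""
    return json.dumps(asdict(record), indent=2)


record = DecisionRecord(
    case_id="C-2024-0031",
    model_version="risk-model-0.3",
    inputs_summary={"open_cases": 4, "days_waiting": 21},
    recommendation="prioritize_review",
    explanation="Waiting time exceeds the 14-day service standard.",
    human_decision="review_scheduled",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(log_decision(record))
```

Capturing the model version, the inputs the model actually saw, and the final human-owned outcome in one place is what makes a decision reviewable after the fact rather than explainable only in principle.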
For C-level leaders, the playbook is simple: deliver AI investments alongside workforce strategy and clear governance protocols. Treat skills and trust as non-negotiables, not as optional add-ons. An AI-literate team makes faster, better-informed decisions, and when systems are transparent, those decisions earn public confidence. That’s how institutional momentum builds.
The need for holistic reform to establish leadership in public sector AI
The UK government has taken some early steps toward AI adoption, but isolated improvements won’t produce long-term transformation. If the objective is to be a global leader in public sector AI, then what’s needed is coordinated, large-scale execution. That means aligning infrastructure upgrades, clear policy, strong partnerships, defined data strategies, and workforce development around a single operational vision.
Industry experts like Nicky Furlong of SAS and Hicham Mabchour of Dynatrace agree on this point: AI will fail to scale if it’s treated as a standalone tool rather than a cross-cutting enabler. Legacy systems, disconnected data, and inconsistent policy are not technical issues alone; they’re structural. Keeping AI deployments segmented isn’t a matter of caution; it’s a limitation that prevents scale, accuracy, and measurable outcomes.
There’s already momentum. Initiatives such as the AI Action Plan give the government a framework to build from. But the follow-through is where impact is made. That means funding must match ambition. Programs must deliver results, not just promises. And leadership must be visible, not reactive. This is where C-suite managers and high-level policymakers have to lead decisively, not merely supporting digital transitions but actively driving them across departments and services.
For executives evaluating current readiness, the message is clear: AI maturity requires coordination, not experimentation. There will be no leadership position in global AI governance without investment in the systems, people, and strategies to match. Setting policy isn’t enough. Execution makes the difference. Leaders who commit to integrated reform, and deliver it with speed and accountability, will shape the next generation of public sector performance. Everyone else will be catching up.
Key executive takeaways
- Upgrade core systems to unlock AI: Leaders should prioritize modernizing legacy infrastructure, as 69% of UK officials cite outdated systems as a major barrier to effective AI adoption.
- Break down data silos now: Public sector AI initiatives are falling short due to poor data sharing and weak private sector collaboration—both must improve to enable scalable, high-impact deployments.
- Build strategy around citizen outcomes: AI investment must start with a targeted data strategy that addresses real citizen needs and integrates clean, usable data into service delivery operations.
- Train for impact and transparency: Executives must develop AI-literate teams and embed clear governance to build systems that are both effective and trusted by the public.
- Align reform for sustained leadership: UK leaders aiming to lead in AI must coordinate infrastructure, policy, data, people, and execution—piecemeal fixes will only limit future capability.