The hard-to-kill reality of legacy systems

You’d think by now, with all the technological leaps we’ve made, enterprises would have fully transitioned away from outdated systems. Yet, here we are—banks still running COBOL, hospitals relying on clunky electronic health records, and government agencies operating on tech that predates the internet.

These systems aren’t just old; they’re deeply embedded in the core functions of industries that can’t afford downtime. In many cases, the older tech was built for reliability over flexibility, and as new tools get layered on top, the old systems remain at the heart of operations.

Nick Godfrey, Senior Director at Google Cloud, puts it simply: technology doesn’t vanish when something new comes along. Instead, enterprises end up with a patchwork of old and new, making their IT stacks increasingly complex.

Joel Burleson-Davis, SVP at Imprivata, adds that industries like banking and healthcare still depend on these systems because they form the core of daily operations. The numbers back this up—a 2019 Government Accountability Office report found that critical federal IT systems were anywhere from 8 to 51 years old, costing $337 million annually just to keep running. At the same time, 74% of manufacturing and engineering firms still rely on legacy systems.

“So, while innovation is happening, legacy systems aren’t going anywhere overnight. They’re too deeply integrated, too critical, and too costly to replace in one shot.”

The cost of change vs. the cost of standing still

If money weren’t an issue, companies would have upgraded long ago. But replacing legacy tech isn’t just about the price tag. It’s about downtime, disruption, and making sure the new system actually works. Enterprises operate in a real-time world, and even an hour of downtime can mean millions lost. Austin Allen, Director at Airlock Digital, compares it to changing a tire while driving: technically possible, but incredibly risky.

The financial burden is no joke. In 2023, organizations spent an average of $2.7 million upgrading legacy systems, according to SnapLogic. And that’s just the cost of making the switch—it doesn’t include lost productivity, training staff, or fixing the inevitable glitches that come with new software rollouts.

Sticking with legacy systems isn’t cheap either. Old systems require specialized maintenance, vendors may no longer support them, and when something breaks, fixing it is both expensive and time-consuming. And then there’s the biggest risk of all—security.

Legacy systems and the biggest cybersecurity gamble

Imagine securing your business with a security system designed in the 1980s. That’s essentially what enterprises are doing when they continue to rely on outdated technology. Legacy systems were built in an era when cyber threats were far less advanced. They weren’t designed to handle today’s sophisticated attacks, making them the weakest link in a company’s security posture.

Cybercriminals know this. They actively look for vulnerabilities in outdated systems because they’re easier to exploit. Once inside, they move laterally, gaining access to more critical data. And when a breach happens, it’s a financial and reputational disaster. According to IBM’s 2024 Cost of a Data Breach Report, the average data breach costs $4.88 million.

The challenge is that many organizations don’t even have a clear picture of how their legacy systems are integrated into their broader IT infrastructure. Jen Curry Hendrickson, SVP at DataBank, points out that many businesses lack proper documentation of how their systems interact. Without this visibility, patching vulnerabilities becomes a guessing game.

There’s no magic button to fix this, but companies can take smart steps. Zero-trust access—where no system or user is automatically trusted—is one way to limit exposure. Isolating legacy systems from the rest of the network and disabling unnecessary features also helps. And sometimes, the fix is surprisingly simple—Burleson-Davis notes that many companies run multiple versions of the same virtualization software without realizing it. Just consolidating those versions can significantly reduce security risks.
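
To make the deny-by-default idea concrete, here is a minimal sketch of what a zero-trust style access check in front of a legacy host might look like at the policy layer. It is illustrative only: the system names, ports, and allowlist entries are hypothetical, and a real deployment would enforce this at the network or identity layer rather than in application code.

```python
# Minimal sketch of a deny-by-default (zero-trust style) access policy
# guarding a legacy host. All names, ports, and rules are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class AccessRequest:
    source: str   # calling system or user identity
    target: str   # legacy host being accessed
    port: int     # service port requested


# Explicit allowlist: anything not listed here is denied.
ALLOWED = {
    ("billing-api", "legacy-mainframe", 3270),  # terminal gateway only
    ("etl-batch", "legacy-mainframe", 21),      # nightly file transfer
}


def is_allowed(req: AccessRequest) -> bool:
    """No implicit trust: a request passes only if explicitly allowlisted."""
    return (req.source, req.target, req.port) in ALLOWED


# An unlisted workstation is denied by default; the approved path is not.
print(is_allowed(AccessRequest("dev-workstation", "legacy-mainframe", 22)))  # False
print(is_allowed(AccessRequest("billing-api", "legacy-mainframe", 3270)))    # True
```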

“If a legacy system can’t be replaced immediately, it must be locked down. Otherwise, it’s just a matter of time before it becomes an open door for attackers.”

Know your tech stack or pay the price

You can’t fix what you don’t understand. That’s the core issue when it comes to managing the risk of legacy systems. Every business wants to believe their IT infrastructure is neatly mapped out and well-documented. The reality? It’s usually a mess of overlapping systems, undocumented integrations, and outdated processes that no one remembers setting up in the first place.

The first step to securing legacy systems is understanding exactly what’s running, how data moves between systems, and where the weak spots are. It sounds simple, but as Curry Hendrickson points out, most enterprises don’t have full documentation of their tech stack. That’s a huge blind spot when it comes to risk management.
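
Even a lightweight inventory beats tribal knowledge. The sketch below, with made-up system names, support dates, and integrations, records what each system runs on, whether the vendor still supports it, and what it talks to, so the weak spots are at least written down somewhere other than one engineer’s head.

```python
# Lightweight tech-stack inventory sketch. System names, platforms,
# support dates, and integrations are illustrative, not real data.

from datetime import date

inventory = {
    "core-banking": {
        "platform": "COBOL / z/OS",
        "vendor_support_ends": date(2026, 12, 31),
        "talks_to": ["payments-gateway", "reporting-db"],
    },
    "payments-gateway": {
        "platform": "Java 8",
        "vendor_support_ends": date(2024, 3, 31),  # already lapsed
        "talks_to": ["core-banking"],
    },
    "reporting-db": {
        "platform": "Oracle 11g",
        "vendor_support_ends": None,  # unknown, which is itself a red flag
        "talks_to": [],
    },
}

# Flag systems whose vendor support has lapsed or was never recorded.
today = date.today()
for name, meta in inventory.items():
    ends = meta["vendor_support_ends"]
    if ends is None or ends < today:
        print(f"RISK: {name} ({meta['platform']}): support lapsed or unknown")
```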

Once you’ve mapped out the system, you have to ask the hard questions. Can this system be patched? Is it still supported by the vendor? If an attacker gains access, how easily can they move through the network? Nick Godfrey at Google Cloud notes that legacy tech wasn’t designed for today’s threat landscape, meaning businesses are dealing with a larger and more complex attack surface than ever before.
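
The lateral-movement question can be answered roughly from the same inventory: treat the integrations as a graph and walk it outward from whichever system you assume is compromised. Continuing the hypothetical sketch above:

```python
# Rough "blast radius" check: starting from a compromised system, walk
# the talks_to graph to see what else becomes reachable. Builds on the
# hypothetical inventory above.

from collections import deque


def blast_radius(inventory: dict, compromised: str) -> set[str]:
    """Return every system reachable from the compromised one."""
    seen, queue = {compromised}, deque([compromised])
    while queue:
        current = queue.popleft()
        for neighbor in inventory[current]["talks_to"]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen - {compromised}


print(blast_radius(inventory, "payments-gateway"))
# {'core-banking', 'reporting-db'}: one stale gateway exposes the core.
```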

So what’s the fix? Start with basic security hygiene. Austin Allen at Airlock Digital suggests hardening legacy systems by shutting off unused operating system features—if a function isn’t necessary for business operations, it shouldn’t be running.
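
In practice that starts with simply listing what is enabled. As a minimal sketch, assuming a systemd-based Linux host, the script below compares enabled services against a hypothetical set the business actually needs and prints the rest as candidates to shut off; on other platforms the equivalent would be reviewing Windows features or installed packages.

```python
# Sketch: list enabled systemd services and flag any that are not on the
# (hypothetical) REQUIRED list as candidates to shut off. Review each
# candidate before disabling; this script only reports, it changes nothing.

import subprocess

REQUIRED = {"sshd.service", "postgresql.service", "crond.service"}

result = subprocess.run(
    ["systemctl", "list-unit-files", "--type=service",
     "--state=enabled", "--no-legend"],
    capture_output=True, text=True, check=True,
)

for line in result.stdout.splitlines():
    unit = line.split()[0]  # first column is the unit file name
    if unit not in REQUIRED:
        print(f"Candidate to disable: {unit}")
```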

Joel Burleson-Davis also points out that companies often discover easy wins, like consolidating redundant software versions, which instantly reduces risk. The key is visibility. Without a full understanding of your tech stack, you’re going in blind, and that’s a dangerous place to be.

Replacing legacy systems without breaking everything

Let’s say a company decides to rip the Band-Aid off and replace a legacy system. Theoretically, this is the best move for long-term security and efficiency. But in practice, it’s rarely that simple.

First, there’s the cost. We’ve already established that replacing legacy tech isn’t cheap, but beyond that, enterprises need to consider workforce challenges. Older systems require specialized knowledge—often from employees who’ve been with the company for decades or external consultants who charge a fortune. At the same time, new technologies require a completely different skill set. Finding people who understand both is rare, and training existing employees takes time.

Then there’s the logistical challenge of actually making the switch. A Change Advisory Board (CAB)—a team of experts tasked with overseeing the transition—can help answer critical questions:

  • How long will the transition take?

  • What’s the backup plan if things go wrong?

  • How will this impact data flow across different systems?

Things will go wrong. Austin Allen emphasizes the importance of having a rollback strategy—if the new system fails, there has to be a way to revert to the old one without chaos. That’s where many companies make mistakes, assuming the transition will be smooth.
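
One common way to keep rollback cheap is to put the new system behind a switch, so cutover and reversion become a configuration change rather than a second migration. A minimal, hypothetical sketch of that pattern:

```python
# Minimal rollback sketch: route work to the new system behind a flag and
# fall back to the legacy path on failure. All names are hypothetical
# placeholders for real service clients.

USE_NEW_SYSTEM = True  # the switch: flip to False to roll back instantly


def process_order_legacy(order: dict) -> str:
    return f"legacy-ok:{order['id']}"


def process_order_new(order: dict) -> str:
    return f"new-ok:{order['id']}"


def process_order(order: dict) -> str:
    if USE_NEW_SYSTEM:
        try:
            return process_order_new(order)
        except Exception:
            # New path failed: fall back rather than drop the order, and
            # alert so the team can decide whether to flip the flag.
            print(f"ALERT: new system failed for order {order['id']}; using legacy")
            return process_order_legacy(order)
    return process_order_legacy(order)


print(process_order({"id": 42}))  # "new-ok:42" while the flag is on
```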

Beyond IT teams, the biggest hurdle is often end-user adoption. It doesn’t matter how advanced the new system is—if employees refuse to use it, the whole project is dead on arrival. Joel Burleson-Davis points out that in healthcare, for example, introducing new software without consulting doctors and nurses is a guaranteed failure. People resist change, especially when it disrupts their workflow.

And then there’s the risk of vendor lock-in. Jen Curry Hendrickson warns that companies often get excited about new technology, only to realize too late that they’ve become dependent on a single vendor’s ecosystem. Once you’re locked in, switching again later can be even harder.

“So what’s the best strategy? Move carefully, plan for failure, and make sure the transition is driven by business needs, not just IT upgrades.”

The bigger picture

At the end of the day, replacing legacy systems is more than an IT problem; it’s an organizational transformation. And like any major shift, it requires leadership at every level, from the boardroom to the IT department.

Nick Godfrey at Google Cloud frames it well: modernization is a long-term play, and there will be trade-offs along the way. Businesses need a “north star”—a clear vision of what success looks like and a strategy to get there.

That means getting buy-in from the CISO, CIO, CTO, and business leaders to align on priorities. If security teams push for modernization but executives don’t see the immediate ROI, the project stalls. Likewise, if executives push for fast upgrades without considering the security and operational risks, things break.

More importantly, modernization is about shifting how the organization thinks about agility, security, and resilience. The companies that do this well don’t just swap out old systems for new ones. Instead, they build a culture of continuous improvement, making sure that today’s innovation doesn’t become tomorrow’s legacy problem.

Key executive takeaways

  • Embedded legacy dependence: Many enterprises rely on outdated systems deeply integrated into core operations, making quick replacements impractical. Leaders should assess how these systems impact operations and plan strategic upgrades.

  • High transition costs: Upgrading legacy systems involves significant direct costs and potential downtime that can result in millions in lost revenue. Decision-makers must weigh these costs against long-term security and efficiency gains.

  • Cybersecurity vulnerabilities: Legacy systems lack modern security features, exposing enterprises to costly cyberattacks and data breaches. Prioritize implementing robust controls and patch management to mitigate risks.

  • Strategic modernization imperative: Transitioning from legacy systems requires a comprehensive approach that includes inventorying existing tech, planning for downtime, and securing stakeholder buy-in. Leaders should form cross-functional teams to guide this transformation while managing operational disruptions.

Alexander Procter

February 18, 2025
