A necessary evolution or a price surge?
AI is reshaping cybersecurity, but not without a hefty price tag. IT leaders are watching security tools fill up with AI features, and they’re concerned about the financial impact. The high cost of AI-driven security stems from the computing power, data infrastructure, and specialized personnel required to keep these systems running effectively. And yet, the value of AI in cybersecurity is undeniable.
A survey by Sophos found that 80% of IT security decision-makers believe AI will significantly increase security costs. That aligns with Gartner’s projection that global tech spending will rise by 10% in 2024, largely due to AI-driven upgrades. Microsoft’s 45% price increase for Office 365, thanks to its AI assistant Copilot, is a clear sign of how companies are monetizing AI features.
But the real question is about value, not cost. If AI makes security more proactive, scalable, and accurate, then businesses need to think beyond the sticker shock. The challenge is making sure AI’s added complexity and cost bring real, measurable improvements rather than just inflating vendor pricing.
Will AI’s efficiency gains justify the cost?
For many businesses, AI in cybersecurity is seen as a key investment. AI has the potential to detect threats faster, automate responses, and eliminate time-consuming manual tasks. That means security teams can focus on high-level strategy rather than chasing false alerts. If done right, the efficiency gains could offset the rising costs.
The data backs this up: 87% of security leaders surveyed by Sophos believe that AI’s efficiency improvements will outweigh its costs over time, and 65% of companies have already adopted AI-driven security solutions. With low-cost AI models like DeepSeek R1 emerging, there’s also hope that prices will stabilize as competition increases.
The focus here isn’t on AI replacing human security teams, but on augmenting their capabilities. AI can process massive amounts of security data in real time, spotting patterns that humans would miss. The key is to use AI as a force multiplier, not as yet another tool that adds cost without clear ROI.
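To make “spotting patterns” concrete, here is a minimal, hypothetical sketch of the kind of unsupervised anomaly detection many AI-driven security tools build on: an IsolationForest model (scikit-learn) scores login events so analysts only review the outliers. The features, thresholds, and data are illustrative assumptions, not any vendor’s implementation.

```python
# Minimal sketch (illustrative only): flag anomalous login events so analysts
# review a handful of outliers instead of every record. Requires scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy features per login event: [hour of day, MB transferred, failed attempts]
events = np.array([
    [9, 12, 0], [10, 8, 0], [11, 15, 1], [14, 10, 0], [15, 9, 0],
    [3, 480, 6],   # 3 a.m. login, unusually large transfer, repeated failures
])

model = IsolationForest(contamination=0.2, random_state=42).fit(events)
scores = model.decision_function(events)   # lower score = more anomalous
labels = model.predict(events)             # -1 marks a suspected outlier

for event, score, label in zip(events, scores, labels):
    if label == -1:
        print(f"Review event {event.tolist()} (anomaly score {score:.2f})")
```

The value is in the triage: instead of wading through every alert, the team starts with the events the model considers least normal.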
AI-powered cybersecurity and workforce concerns
AI can make security more efficient, but there’s a side effect that’s making cybersecurity professionals uneasy: workforce reductions. When AI starts automating threat detection, response workflows, and anomaly analysis, companies might see an opportunity to reduce headcount. That’s a big concern for security teams.
Among IT security leaders, 84% fear that expectations around AI’s capabilities will pressure organizations to cut jobs. But there’s another risk: AI isn’t perfect. Some 89% of security professionals worry that flaws in AI-driven security tools could introduce new vulnerabilities rather than eliminate them. False positives, missed threats, and biased decision-making could all create more problems than they solve.
“Instead of manually hunting threats, security teams will need to focus on overseeing AI models, fine-tuning algorithms, and investigating sophisticated attacks that AI can’t handle alone.”
AI in cybercrime is less threatening than expected
AI-powered cybercrime sounds like a sci-fi nightmare, but the reality is far less dramatic. Despite predictions that hackers would quickly adopt AI to launch devastating attacks, most aren’t even using it. Instead, they’re sticking to traditional cybercrime methods like exploiting vulnerabilities, stealing credentials, and trading access to compromised systems.
Sophos researchers dug into cybercrime forums and found fewer than 150 AI-related discussions over the past year. In comparison, there were 1,000+ posts about cryptocurrency scams and 600+ about selling access to hacked networks. Even a Russian-language cybercrime forum that has hosted a dedicated AI section since 2019 contains only 300 discussions, a drop in the ocean compared to the volume of discussion around malware and exploit development.
Why? Because AI isn’t easy to control. Hackers prefer reliable, proven attack methods, and AI models still have too many unknowns. Some even see AI as a security risk to themselves, with one forum user warning that using a GPT-based tool could compromise operational security.
This doesn’t mean AI-driven cybercrime won’t happen; it just means that, for now, hackers trust their own skills more than AI-generated exploits. The real risk lies in the automation of social engineering attacks, not AI-built malware (yet).
A tool for spamming and social engineering, not advanced hacking
Hackers may not be building AI-powered cyber weapons—yet—but they’re certainly using AI for low-level attacks like phishing, spamming, and intelligence gathering. The reason is simple: AI makes it easier, cheaper, and faster to create convincing fake messages at scale.
Phishing attacks used to require manual effort—crafting emails, mimicking writing styles, and avoiding detection. AI has automated that process. Cybercriminals can now generate endless variations of phishing emails in seconds, making them harder to detect with traditional filters. AI also helps scrape public data for Open-Source Intelligence (OSINT), letting attackers personalize scams and make them more believable.
The numbers tell the story. Vipre detected a 20% increase in business email compromise (BEC) attacks in Q2 2024 compared to the same period in 2023, and AI was behind 40% of them. In other words, roughly two in five modern BEC attempts are now AI-assisted.
“AI is making low-effort, high-volume attacks much more effective. Businesses need to adapt by training employees to recognize AI-generated deception and strengthening authentication protocols.”
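As one concrete way to “strengthen authentication protocols,” security teams can act on email authentication results before a message ever reaches an inbox. The sketch below is a simplified, assumption-laden example: it presumes the mail gateway already stamps an Authentication-Results header (as most providers do) and flags inbound messages that fail SPF, DKIM, or DMARC; the sample message and addresses are invented for illustration.

```python
# Minimal sketch (illustrative only): surface inbound mail that fails
# SPF/DKIM/DMARC checks, one practical control against AI-scaled BEC attempts.
from email import message_from_string

def auth_failures(raw_message: str) -> list[str]:
    """Return which checks (spf/dkim/dmarc) did not record a pass."""
    msg = message_from_string(raw_message)
    results = " ".join(msg.get_all("Authentication-Results", []) or []).lower()
    return [check for check in ("spf", "dkim", "dmarc")
            if f"{check}=pass" not in results]

# Invented example: a lookalike-domain message whose gateway recorded a DMARC failure
sample = (
    "Authentication-Results: mx.example.com; spf=pass; dkim=pass; dmarc=fail\n"
    "From: ceo@examp1e.com\n"
    "Subject: Urgent wire transfer\n"
    "\n"
    "Please process this payment today."
)
print(auth_failures(sample))  # ['dmarc'] -> quarantine or flag for human review
```

Whether a failure triggers quarantine, a warning banner, or outright rejection is a policy decision, but making these checks visible is a low-cost complement to the employee training the quote calls for.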
AI-generated malware is still in its infancy
There’s a lot of hype around AI-powered malware, but the reality is underwhelming. While some cybercriminals have experimented with using AI to generate malicious code, most attempts have been primitive and ineffective. Even in hacker forums, AI-generated malware is often mocked for being low quality.
Sophos researchers found very few successful attempts at using AI to create real attack tools. Hackers still prefer handwritten exploits because AI-generated scripts tend to be full of errors, inefficient, or easily detectable by modern security software. In fact, HP intercepted an AI-generated malware campaign in June 2024, and the script was so poorly written that it was immediately flagged and blocked.
Cybercriminal forums reflect this skepticism. One user sarcastically responded to an AI-generated script by asking, “Did ChatGPT write this? Because it definitely doesn’t work.” That’s the general sentiment—AI malware is currently a tool for amateurs, not professionals.
That said, it would be a mistake to assume AI-generated malware won’t evolve. Right now, it’s unpolished and unreliable, but as AI models improve, attackers could eventually use them to develop more effective exploits. The cybersecurity industry needs to stay ahead of that curve before it becomes a real threat.
The future of AI in cybercrime is a matter of “when,” not “if”
Hackers might not be fully embracing AI today, but many of them see the potential. Some forum posts discuss the idea of AI-powered autonomous attack tools, even if they admit they don’t have the capability to build them yet.
One post titled “The world’s first AI-powered autonomous C2” (a command-and-control system) made the intent clear: the capability doesn’t exist yet, but some hackers are already planning for the day it does.
AI is already being used for small-scale automation. Some cybercriminals rely on it for basic tasks like scanning for vulnerabilities or automating reconnaissance. While AI isn’t driving cybercrime today, its eventual impact is inevitable. The question is how fast the technology will advance and whether cybersecurity defenses can evolve quickly enough to keep pace.
For now, AI is more of a tool for defenders than attackers. But that won’t last forever. Businesses need to future-proof their security strategies, invest in AI-driven threat detection, and train security teams to counter AI-assisted attacks before they become mainstream.
Key executive takeaways
- AI-driven cybersecurity costs are rising: IT leaders are seeing security expenses surge due to AI-powered tools, with 80% expecting significant cost increases. Decision-makers must assess whether AI investments deliver measurable protection improvements to justify the spend.
- Efficiency gains may offset AI expenses: While AI raises costs, 87% of security leaders believe its automation benefits outweigh the investment. Organizations should focus on AI solutions that reduce manual workloads and enhance real-time threat detection to maximize ROI.
- Cybercriminals are not widely using AI yet: Despite concerns, hackers prefer traditional attack methods over AI-generated exploits, citing reliability issues. Security teams should stay ahead by preparing for AI-assisted threats before they reach mainstream adoption.
- AI is a growing tool for phishing and social engineering: While AI-generated malware remains primitive, hackers are successfully using AI to scale phishing and business email compromise (BEC) attacks. Businesses should strengthen authentication protocols and employee training to counter AI-driven deception.