AI coding tools are changing the game

AI-powered coding assistants such as GitHub Copilot are transforming software development, fundamentally changing how code gets written. Since late 2022, these tools have gone mainstream. Microsoft reports that GitHub Copilot usage has surged 50% over the past two years, with GitHub now home to 150 million developers.

The numbers tell the story. Apiiro reports a 70% surge in pull requests (PRs) since Q3 2022, far outpacing repository growth (30%) and developer count increases (20%). More code is being written, faster than ever. This is a major efficiency boost, but it also raises questions. Are organizations ready to handle the rapid influx of AI-generated code?

Faster coding is great, but speed without control creates risk. The challenge is ensuring that quality and security don’t fall behind. Businesses that harness AI-driven development without a solid plan for oversight are moving fast—but they might not be moving in the right direction.

Security risks are growing with AI-generated code

AI is writing more code than ever, but security teams can’t keep up. The sheer volume of AI-generated code is overwhelming traditional security review processes. Sensitive API endpoints have nearly doubled, exposing valuable data to risks that many companies aren’t fully prepared for.

AI doesn’t recognize security policies the way human developers do. It doesn’t automatically account for compliance rules or organizational risk factors. That’s why insecure code is slipping through—faster than security teams can catch it.

Gartner confirms what many security leaders already know: manual security workflows are bottlenecks in an AI-driven development world. Without automation and new security models, businesses will keep falling behind. Companies need security solutions that scale as fast as their AI-driven code output.

AI is exposing more customer data than ever

AI-generated code is moving sensitive data around in ways that businesses can’t afford to ignore. Since Q2 2023, there’s been a threefold increase in repositories containing Personally Identifiable Information (PII) and payment details, according to Apiiro.

Regulations such as the GDPR in the EU and UK and the CCPA in the US impose strict penalties for mishandling customer data. Violations lead to massive fines and long-term reputational damage. The problem is that AI doesn't understand compliance. It generates code that accidentally embeds sensitive data into repositories, creating security blind spots that teams might not detect until it's too late.

For executives, this is a cybersecurity issue and a business risk. Organizations need to integrate automated scanning and compliance monitoring into AI-driven workflows. If they don’t, they’re gambling with customer trust and regulatory fines.
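What does that scanning look like in practice? The sketch below is a minimal, illustrative Python example: it walks a repository checkout and flags lines matching a handful of assumed PII and secret patterns (email addresses, card-number-like digit runs, hard-coded keys). The patterns and file handling are assumptions for illustration only; production scanners rely on validated detectors, entropy checks, and far broader rulesets.

    # Minimal sketch of a PII/secret scan over a repository checkout.
    # The patterns below are illustrative assumptions, not an exhaustive ruleset.
    import re
    from pathlib import Path

    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "generic_secret": re.compile(r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}"),
    }

    def scan_repo(root: str) -> list[tuple[str, int, str]]:
        """Return (file, line number, pattern name) for every suspicious match."""
        findings = []
        for path in Path(root).rglob("*"):
            if not path.is_file() or ".git" in path.parts:
                continue
            try:
                text = path.read_text(encoding="utf-8", errors="ignore")
            except OSError:
                continue
            for lineno, line in enumerate(text.splitlines(), start=1):
                for name, pattern in PATTERNS.items():
                    if pattern.search(line):
                        findings.append((str(path), lineno, name))
        return findings

    if __name__ == "__main__":
        for file, lineno, rule in scan_repo("."):
            print(f"{file}:{lineno}: possible {rule}")

Wired into CI as a required check, even a gate this simple turns "AI may have embedded customer data" from a post-incident discovery into a blocked merge.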

A 10X surge in insecure APIs is a problem

APIs are the backbone of modern applications. They connect systems, enable transactions, and power entire business models. But AI-generated code is pushing out APIs faster than security teams can secure them.

Apiiro’s research found a 10X increase in repositories containing APIs with missing security basics like authorization and input validation. This means more weak points, more opportunities for attackers, and more potential data breaches.
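To make that concrete, here is a hedged Python illustration (using Flask purely as an example framework) of the gap Apiiro describes: an endpoint that returns records with no authorization check and no input validation, followed by a version that adds both. The route names, sample data, and token check are hypothetical.

    # Illustrative Flask endpoints; routes, data, and the token check are hypothetical.
    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)
    ORDERS = {1: {"id": 1, "owner": "alice", "total": 42.0}}

    # Typical AI-generated shape: no authorization, no input validation.
    @app.get("/orders/unsafe/<order_id>")
    def get_order_unsafe(order_id):
        return jsonify(ORDERS.get(int(order_id)))  # crashes on bad input, returns any order to anyone

    # The same endpoint with the basics Apiiro flags as missing.
    @app.get("/orders/<int:order_id>")  # input validation: route only accepts integers
    def get_order(order_id: int):
        token = request.headers.get("Authorization", "")
        if token != "Bearer expected-service-token":  # placeholder authorization check
            abort(401)
        order = ORDERS.get(order_id)
        if order is None:
            abort(404)
        return jsonify(order)

The fix itself is not exotic. The problem is applying it consistently across thousands of AI-generated endpoints, which is exactly what manual review struggles to guarantee.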

Executives need to recognize the trade-off. AI-driven development is unlocking speed and efficiency, but it’s also expanding the attack surface. Security strategies that worked five years ago aren’t enough anymore. 

“Businesses need automated API security solutions that can scale at AI speed. Without them, they’re leaving the doors wide open for attackers.”

Traditional security governance is failing

Most security frameworks weren’t built for AI-generated code. A single AI-assisted pull request can generate thousands of lines of new code. Manual review processes aren’t built to handle that scale. The result? Security debt piles up, and vulnerabilities slip through undetected.

Gartner’s research highlights what many companies are experiencing firsthand: outdated security workflows are becoming the biggest bottlenecks to innovation. Businesses that rely on manual code reviews and traditional governance models are falling behind.

Executives should see this as a wake-up call. Security governance needs to be re-engineered for the AI era. Automated code scanning, AI-driven threat detection, and scalable compliance solutions are key. They’re essential for keeping up with the reality of AI-driven development.
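As one illustration of what re-engineered governance can look like, the sketch below is a hypothetical CI gate in Python: it blocks a merge when an earlier scan step has produced findings, or when the diff is too large for meaningful human review, and routes those changes to a security review instead. The findings file, the threshold, and the git invocation are assumptions, not a standard.

    # Hypothetical CI gate: block merges when scan findings exist or the diff is
    # too large for meaningful manual review. Thresholds and file formats are
    # assumptions for illustration.
    import json
    import subprocess
    import sys

    MAX_REVIEWABLE_LINES = 800  # assumed policy threshold for a single PR

    def changed_lines(base_ref: str = "origin/main") -> int:
        """Count added/removed lines against the base branch using git."""
        diff = subprocess.run(
            ["git", "diff", "--numstat", base_ref],
            capture_output=True, text=True, check=True,
        ).stdout
        total = 0
        for row in diff.splitlines():
            added, removed, _path = row.split("\t")
            if added != "-":  # binary files report "-"
                total += int(added) + int(removed)
        return total

    def main() -> int:
        # findings.json is assumed to be produced by an earlier scan step.
        with open("findings.json", encoding="utf-8") as fh:
            findings = json.load(fh)

        failures = []
        if findings:
            failures.append(f"{len(findings)} security/compliance findings")
        lines = changed_lines()
        if lines > MAX_REVIEWABLE_LINES:
            failures.append(f"diff of {lines} lines exceeds review threshold")

        if failures:
            print("Merge blocked, escalate to security review: " + "; ".join(failures))
            return 1
        print("Gate passed")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

The value is not in the specific threshold but in making the policy executable: the pipeline, not an overstretched reviewer, decides when a change needs deeper scrutiny.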

AI can’t be an excuse for poor security

AI coding tools aren’t slowing down. Companies using them effectively will see massive efficiency gains. Those ignoring the security risks will see massive liabilities.

The solution is to build smarter. Security has to scale with AI-driven development. Automated security audits, real-time compliance monitoring, and AI-aware governance models are the way forward. Companies that invest in these now will be the ones that thrive in the AI-driven future.

AI is changing the game. The question is whether businesses are playing to win—or setting themselves up for failure.

Key executive takeaways

  • AI-driven development is accelerating, but security is lagging: AI coding tools like GitHub Copilot have dramatically increased developer output, with pull requests surging by 70%. Leaders must ensure security practices evolve at the same pace to prevent unchecked vulnerabilities.
  • AI-generated code is outpacing traditional security reviews: Sensitive API exposures have nearly doubled, as AI assistants lack awareness of compliance and risk. Organizations must integrate automated security audits to keep up with AI-driven development.
  • Sensitive data exposure is a growing compliance risk: Repositories containing personally identifiable information (PII) and payment data have tripled since 2023. Executives must implement real-time monitoring and automated compliance enforcement to avoid regulatory fines.
  • Insecure APIs are multiplying at an alarming rate: A 10X rise in APIs lacking security basics like authorization and input validation exposes businesses to major threats. Leaders should prioritize API security automation to prevent breaches.
  • Outdated security governance is a bottleneck for innovation: Manual review processes cannot handle AI-generated code volume, leading to growing security debt. Businesses must transition to AI-driven security frameworks to maintain speed without compromising protection.
  • AI productivity gains must be balanced with proactive security: While AI coding tools enhance efficiency, failing to secure them creates serious financial and reputational risks. Leaders must invest in scalable security solutions to sustain long-term growth in an AI-driven landscape.

Tim Boesen

March 13, 2025
