Shortage of skilled tech talent is a major concern
Digital transformation doesn’t pause. When you’re navigating enterprise-scale shifts, like migrating infrastructure, upgrading ERP systems, or ramping up AI deployment, you need the right people to execute. Right now, we don’t have enough of them. That’s a daily reality for leaders across industries.
Kostas Georgakopoulos, CTO and CISO at Mondelēz, leads a $1.2 billion transformation encompassing workload migrations and generative AI implementation. He’s not worried about ambition or vision; those are in place. What keeps him up at night is human capacity. The work isn’t lacking, but the workforce is stretched. You can design all the systems you want, but without capable, focused people to build and run them, you stall.
This is a talent acquisition problem, but it’s also about retention, balance, and longevity. Seasoned tech teams are looking for challenge, and for a work environment that respects their time and mental bandwidth. According to ISACA’s recent survey, about one in three tech professionals has changed employers in the last two years. These moves are driven mostly by a search for better work-life balance, not just pay. If you don’t prioritize that, you lose people fast.
Even so, demand remains high. Nearly 500,000 tech job openings were active in February alone, based on CompTIA’s review of U.S. Bureau of Labor Statistics data. While some employers may believe market conditions give them negotiating leverage, that’s short-term thinking. There’s still a structural shortage of digital talent, especially in areas like AI, cybersecurity, cloud, and data architecture.
The path forward is simple: upskilling must become operational strategy. Not a side project or a training week, but an always-on process embedded in culture. If you’re building future-ready systems, you need future-ready humans to operate them. Otherwise, all your tech plans are just slideware.
Balancing speed and safety in technology adoption is challenging
Every executive wants to move fast; speed is a competitive asset. But going fast without stability leads to system risk, operational noise, and regulatory exposure. That tension is becoming more intense as AI capabilities expand and adoption accelerates across industries.
David Glick, SVP of Enterprise Business Services at Walmart, is clear on the issue: push too slowly and you fall behind; push too fast and you open vulnerabilities. At Walmart, the team is deploying AI-powered coding tools across development teams in North America and India. The project is strategic, but the question remains: how fast is too fast? Leaders need to decide adoption speed based on readiness, not just enthusiasm.
Pressure to deploy AI is real. According to Informatica, over 90% of leaders are concerned about AI pilots moving forward despite earlier warnings or unresolved issues. Almost 60% say they feel pressure to advance those projects anyway. When systems scale before infrastructure or governance are ready, breakdowns are inevitable. Most executives know that, but often lack room to slow down.
What’s needed here is disciplined execution. AI transformation should move quickly, but with oversight. You need internal controls, transparent reporting models, and cross-functional readiness. Legal, security, IT, and operations all need to move in sync. If one lags, the system is at risk. Ultimately, your ability to perform under pressure comes from how prepared your foundation is, not how aggressive the launch timeline looks on a slide.
Too many companies are driving timelines based on boardroom urgency instead of product maturity. That disconnect is where failure usually starts. A better approach is to add velocity only once resiliency is built in. Speed becomes an advantage when the process can absorb it, and that starts at the leadership level.
Governance and compliance challenges in AI implementation stress leaders
AI is advancing fast, but regulation is lagging. That gap creates complexity for any company operating across regions. Executives are navigating fragmented laws, shifting ethical standards, and unclear compliance requirements, all while trying to scale global operations.
Steve McCrystal, Chief Enterprise Technology Officer at Unilever, manages a global portfolio of over 500 AI projects. His focus is on governing that growth responsibly. With around 23,000 employees trained on AI, Unilever isn’t avoiding innovation, but the company is pushing forward with structure. McCrystal emphasizes doing business “the right way,” connecting performance directly to ethical implementation. That means aligning technology with existing and future regulatory frameworks, before problems arise.
The U.S. lacks federal AI regulation. Instead, responsibility is falling on individual states, leading to a regulatory patchwork that complicates deployment at scale. In the EU, the AI Act is progressing toward enforcement, forcing companies to meet both strict development guidelines and evolving ethical benchmarks. International teams have to manage competing compliance regimes, each with different expectations, enforcement mechanisms, and legal risks. Waiting for clarity is not an option.
Shiyi Pickrell, SVP of Data and AI at Expedia Group, reinforces this. According to her, responsible governance is what allows her team to sleep at night—not innovation alone, but how it’s managed. That’s a signal more leaders are acknowledging: rushed AI rollouts without clear accountability can backfire.
Research backs this up. A joint study from Accenture and AWS found that over 80% of organizations believe responsible AI frameworks improve employee trust and help unlock innovation. That makes responsible AI an operational strategy, not just an ethical stance. Companies that scale ethical infrastructure alongside AI adoption are better positioned to pivot when regulation catches up.
There’s a wider takeaway here: governance means establishing control over your technology before external forces impose it on you. Leaders who embed governance early will have more room to innovate later, with fewer reputational risks and more organizational clarity.
Cybersecurity threats and the risk of AI-powered attacks pose growing concerns
Technology improves, and so do the threats. The same innovations that drive efficiency, automation, and intelligence are being adapted by attackers. AI is changing cybersecurity, both in how we defend and in how we’re attacked. Executives who underestimate this shift are going to be caught off guard.
Joe Depa, Global Chief Innovation Officer at EY, is clear: EY sees AI-based threat vectors as a growing concern, fueled by increased automation, faster breach attempts, and more sophisticated adversaries. The challenge now is to build better tools and to anticipate problems at machine scale. Depa stresses the urgency of having the frameworks in place now, not after attack models evolve further.
The market is noticing. Bloomberg Intelligence forecasts that the global cybersecurity market will more than double, reaching $338 billion by 2033, up from $152.5 billion in 2023. That scale reflects how central risk management has become to enterprise technology. If your systems are connected, intelligent, and cloud-based, then your exposure is non-trivial.
Confidence in protection is growing, but unevenly. According to Darktrace’s 2025 State of AI Cybersecurity report, over 60% of CISOs and CIOs currently believe they’re adequately prepared for AI-powered cyberattacks. That’s a 15% increase year over year. But the confidence drops among people deeper in the stack: only a minority of security architects, engineers, and SOC analysts report the same level of confidence. That disconnect matters. Strategic leaders may feel prepared based on visibility into tools and teams, but ground-level readiness is what determines defense success.
The talent gap is part of the problem. Skillsoft data shows AI and cyber skills are among the most in-demand but hardest to find. Without a pipeline of professionals who understand both AI and cybersecurity, companies will struggle to defend systems built on those platforms.
Executives should treat cybersecurity as core infrastructure. That includes zero-trust security design, continuous testing, rapid response systems, and human-centered training programs. But it also requires a cultural shift: the faster AI progresses, the faster attackers move. The only sustainable position is one where defense evolves at the same pace as innovation. Not behind it.
Responsible AI deployment is viewed as both an ethical imperative and a tactical responsibility
AI is scaling faster than expected. New research, products, and deployment tools are entering the market every week. That kind of momentum creates urgency. But more importantly, it creates responsibility. Technology leaders are grappling with questions around inclusion, fairness, and long-term societal impact.
Francesco Marzoni, Chief Data and Analytics Officer at IKEA Retail, doesn’t frame AI implementation as only a technical milestone. For him, it’s personal. As one of the first enterprises to launch a tool on OpenAI’s GPT Store and institute company-wide AI literacy efforts, IKEA has moved quickly. But Marzoni sees the deeper challenge: making sure this technology reflects a broader, more inclusive future.
The pace itself is overwhelming. According to Accenture, agentic AI (systems capable of goal-directed operations) saw a steep spike in research: fewer than 600 studies existed in 2023; by October 2024, that number exceeded 1,500. The shift is rapid enough that even experts feel the pressure to stay current. That pace introduces risk: if deployment outpaces understanding, unintended outcomes escalate.
EY’s research confirms what many senior leaders are feeling: over half report they’re struggling to stay ahead of the changes. That signals a deeper issue. When strategic leadership feels like it’s falling behind, operational risk compounds. Misalignment between leadership, implementation, and public expectation creates reputational exposure.
Responsible AI is a strategic decision. Companies that hardwire inclusiveness, security, and transparency into their deployment pipelines will create more resilient systems with fewer long-term liabilities. That includes ensuring datasets represent real-world diversity, applying clear usage policies, and allowing frontline employees visibility into how AI models make decisions.
Marzoni puts it bluntly: he doesn’t want future generations, his daughters included, to look back and wonder why this moment wasn’t handled better. That’s accountability. The way AI scales today will define how people view this technology ten years from now. Leaders who understand that will deploy faster, more ethically, and with more lasting value.
Key highlights
- Talent shortages impact execution readiness: Leaders should invest in continuous upskilling and retention strategies, as nearly 500,000 tech roles remain unfilled and work-life balance remains a core driver of employee loyalty.
- Speed without structure increases risk: Executive teams must set clear adoption thresholds and cross-functional safeguards before accelerating AI deployment to avoid costly missteps and reputational damage.
- Regulatory clarity is lagging behind AI growth: Leaders operating in global markets should proactively build responsible AI frameworks now, reducing compliance risk while increasing trust and innovation readiness.
- AI-powered threats are reshaping security priorities: Executives must embed cybersecurity into core strategy, including talent acquisition, zero-trust models, and threat detection systems that evolve alongside AI capabilities.
- Responsible AI demands executive-level accountability: Leaders need to champion fair, inclusive, and transparent AI systems to preserve public trust and ensure long-term value for both their organizations and society.