To attain real ROI from generative AI (genAI), organizations must rethink both what ROI means in this context and how generative AI should be utilized.

Traditional ROI metrics often fail to capture the unique benefits and challenges of genAI. Instead of viewing ROI solely through a financial lens, consider broader impacts like enhanced decision-making, operational efficiency, and strategic insights.

GenAI’s potential includes both cost savings and revenue generation; it brings intangible benefits that can transform business processes.

Many organizations experience poor ROI from their generative AI efforts.

Poor ROI is a widespread issue that comes from a fundamental misunderstanding of generative AI’s capabilities and unrealistic expectations for quick financial returns. Generative AI is a complex technology that requires careful planning, strategic implementation, and ongoing management.

Experts suggest that businesses need to rethink how they measure ROI for genAI projects. Traditional ROI metrics often do not account for the learning curve, experimentation phase, and long-term potential of genAI.

Organizations should also focus on deploying genAI in areas where it can truly add value, rather than jumping on the bandwagon without a clear strategy.

Industry reactions and consequences

The surge in genAI projects following the popularity of OpenAI’s ChatGPT in early 2023 created a frenzy of activity.

Boards and CEOs, captivated by the potential and hype, issued top-down mandates for AI deployments across various sectors. AI enthusiasm is comparable to the web euphoria of the mid-1990s.

IT departments found themselves under immense pressure from senior management to implement AI solutions quickly, often without a clear understanding of their practical applications or potential ROI.

Many business units pushed for genAI projects independently, further complicating the landscape for IT departments. This led to hasty implementations and unrealistic expectations, setting the stage for widespread disappointment in the actual ROI delivered by these projects.

AI ROI paradox

Atefeh “Atti” Riazi, CIO of Hearst, which reported $12 billion in revenue last year, highlighted the paradox of AI ROI. Despite extensive experience in measuring IT project returns, AI’s disruptive nature makes it challenging to predict long-term impacts.

An inability to fully understand and measure AI’s implications means that traditional ROI metrics often fall short, leading to misaligned expectations and project outcomes.

Problems with initial deployments

Rajiv Shah of Snowflake pointed out that top-down pressure from boards and CEOs complicated traditional ROI analysis.

Unlike previous IT initiatives where ROI could be more predictably forecasted, generative AI projects were launched without sufficient groundwork. A top-down approach led to misaligned objectives and failed expectations, as the projects were often not tailored to the specific needs and realities of the organization.

Misalignment with core priorities

AI projects frequently focused on non-core processes, such as chatbots and support agents. While these applications can be beneficial, they often do not directly impact the core business functions that drive revenue and growth.

As a result, resources were diverted from more critical areas, diminishing the overall potential ROI of genAI initiatives. According to Kelwin Fernandes, CEO of AI consultant NILG.AI, these projects lacked long-term engagement and organizational support, further reducing their effectiveness.

Scalability issues

Initial small-scale AI projects demonstrated impressive results, creating a false sense of optimism. However, when these projects were scaled up, they encountered significant challenges.

Open-source genAI technologies, suitable for small deployments, often became inefficient and costly at scale. KX’s Conor Twomey highlighted that systems that worked well with a few hundred documents struggled with hundreds of thousands, leading to bloated costs and diminished returns.

Scalability issues are a key factor in the disappointing ROI experienced by many enterprises.

Inflated expectations

Early successes of genAI, such as the impressive performance of ChatGPT in initial applications, set unrealistic expectations for broader deployments.

Organizations expected similar results on a larger scale without fully understanding the complexities involved. Patrick Byrnes, an AI consultant for DataArt, noted that enterprises often skipped the necessary incremental steps and launched high-impact, customer-facing projects prematurely, resulting in disappointing outcomes.

Operational costs of generative AI are substantial

IDC reported that NVIDIA’s GPUs, essential for AI computations, cost approximately $10,000 each. Monthly operational expenses can range from $4 million to $5 million, with model training costs expected to exceed $5 million.

These figures do not include additional expenses such as electricity and datacenter management. Many organizations underestimated these costs, leading to budget overruns and poor ROI.
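To see how quickly these figures compound, here is a back-of-the-envelope cost sketch using the numbers cited above. The cluster size and the use of a midpoint for the monthly range are assumptions for illustration only; real costs vary widely by workload and provider.

```python
# Rough total-cost-of-ownership sketch using the figures cited above.
# Cluster size and the monthly-cost midpoint are illustrative assumptions.

GPU_UNIT_COST = 10_000      # approximate cost per NVIDIA GPU (USD), per IDC
GPU_COUNT = 500             # hypothetical cluster size
MONTHLY_OPEX = 4_500_000    # midpoint of the $4M-$5M monthly range
TRAINING_COST = 5_000_000   # lower bound for model training

def first_year_tco(gpu_count: int = GPU_COUNT) -> int:
    """Estimate first-year TCO: hardware + 12 months of opex + one training run.

    Excludes electricity and datacenter management, as noted above.
    """
    hardware = gpu_count * GPU_UNIT_COST
    return hardware + 12 * MONTHLY_OPEX + TRAINING_COST

print(first_year_tco())  # 5,000,000 + 54,000,000 + 5,000,000 = 64,000,000
```

Even this simplified sketch shows hardware is a minor line item next to ongoing operational spend, which is where many budgets overran.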

Hallucinations bring risks

Generative AI’s tendency to “hallucinate” – generating incorrect or fabricated information – presents a significant risk.

Hallucination is especially dangerous in sectors like healthcare, finance, and aerospace, where accuracy is paramount. Every AI-generated output requires human verification, which erodes the productivity gains AI is supposed to deliver.

Hearst’s Riazi believes that, while hallucinations are a temporary issue, the current need for extensive oversight diminishes the immediate ROI of genAI deployments.

Underestimated vendor costs

Many AI vendors offered low initial costs to attract customers, but these costs are expected to increase significantly.

Enterprises often overlook this aspect, leading to financial strain when prices inevitably rise. This oversight contributed to miscalculations of the true ROI of genAI projects.

Strategic recommendations for better ROI

To achieve meaningful ROI from genAI, it is important to understand and control the total cost of ownership (TCO). Initial genAI deployments should be viewed as experimental, with the primary goal of learning and adaptation rather than immediate financial returns.

Secondary ROI factors, such as market perceptions and customer engagement, should also be considered. A holistic approach to ROI measurement will provide a more accurate picture of genAI’s value.

Intelligent experimentation

Conducting experiments with genAI should involve clear guidelines and specific criteria, so that the experiments stay focused and yield valuable insights.

The AI should be given extensive training and detailed instructions, much as new employees are, reducing the risk of errors, increasing the reliability of AI outputs, and improving overall ROI.
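One way to make "specific criteria" concrete is to define acceptance checks before any model is run and score every output against them. The sketch below is a minimal, hypothetical example; the criteria, sample output, and function names are illustrative, not a prescribed methodology.

```python
# Minimal sketch of criteria-driven genAI experimentation.
# The criteria and sample output below are hypothetical; in practice
# each experiment defines its own acceptance thresholds up front.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    name: str
    check: Callable[[str], bool]  # returns True if the output passes

def evaluate(output: str, criteria: list[Criterion]) -> dict[str, bool]:
    """Score one AI output against every predefined criterion."""
    return {c.name: c.check(output) for c in criteria}

criteria = [
    Criterion("non_empty", lambda o: len(o.strip()) > 0),
    Criterion("under_length_limit", lambda o: len(o) <= 500),
    Criterion("no_placeholder_text", lambda o: "TODO" not in o),
]

result = evaluate("A concise, reviewed summary of the sales call.", criteria)
print(result)
```

Recording pass/fail results per criterion gives an experiment a measurable outcome, rather than a subjective impression of whether the model "worked."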

Focused, smaller-scale projects

Rather than launching large-scale, high-stakes AI projects, organizations should focus on smaller, more manageable applications with clear objectives.

Examples of effective small-scale projects include analyzing vehicle damage reports, auditing sales calls, and recommending e-commerce products based on content descriptions.

Targeted projects are easier to manage and scale, providing a more controlled environment for achieving positive ROI.

Governance and control measures

Establishing comprehensive governance and control measures is key to managing the complexities and risks associated with generative AI.

Control measures provide a structured framework that helps manage AI initiatives efficiently, addressing both strategic and operational challenges. Two key components in this governance structure are AI committees and fallback options.

AI committees

Creating dedicated AI committees within organizations is a strategic move to oversee and guide AI projects.

Committees should consist of specialists from various disciplines, including data science, legal, security, and business strategy. Their primary function is to review, approve, or veto AI project proposals based on a comprehensive assessment of potential risks and benefits.

AI committees can scrutinize proposals for compliance with legal and regulatory requirements, assess potential security vulnerabilities, and ensure that projects align with the organization’s strategic goals.

Oversight helps mitigate risks associated with genAI deployments, such as data breaches, compliance issues, and misalignment with business objectives.

By requiring project proponents to present their AI initiatives to the committee, organizations can foster a culture of accountability and transparency. Such processes ensure that only well-considered and strategically sound projects proceed, improving the chances of achieving meaningful ROI.

Fallback options

Fallback options involve treating AI-generated insights as educated guesses rather than absolute truths. This means incorporating human oversight and validation processes to review and confirm AI outputs before they are used for decision-making.

For critical tasks, having a fallback plan means operations can continue smoothly even if the AI system fails or produces erroneous results.

By integrating fallback options, organizations can mitigate the risks associated with genAI inaccuracies and maintain operational continuity. This safeguards against potential failures and builds a resilient framework for AI adoption.

To maintain trust and reliability, organizations must implement comprehensive fallback mechanisms.
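The fallback pattern described above can be sketched in a few lines: treat the AI result as an educated guess, and route to a deterministic rule (or human review) when the system fails or confidence is low. All function names, labels, and the confidence threshold below are hypothetical placeholders.

```python
# Sketch of a fallback pattern: use the AI answer only when it is confident,
# otherwise fall back to a deterministic rule that keeps operations running.
# Function names, labels, and the threshold are hypothetical.

def ai_classify(ticket: str) -> tuple[str, float]:
    """Stand-in for a genAI call returning (label, confidence)."""
    return ("billing", 0.42)  # illustrative low-confidence result

def rule_based_classify(ticket: str) -> str:
    """Deterministic fallback; unresolvable cases go to human review."""
    return "billing" if "invoice" in ticket.lower() else "needs_human_review"

def classify_with_fallback(ticket: str, threshold: float = 0.8) -> str:
    try:
        label, confidence = ai_classify(ticket)
    except Exception:
        return rule_based_classify(ticket)   # AI outage: use the fallback
    if confidence < threshold:
        return rule_based_classify(ticket)   # low confidence: don't trust the guess
    return label

print(classify_with_fallback("Question about my invoice"))
```

The design point is that the AI path is optional: removing it leaves a system that still functions, which is exactly the operational continuity the fallback is meant to guarantee.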

Key takeaways

IT leaders should focus on deploying genAI in areas that directly impact core business functions.

Prioritizing initiatives that align with strategic goals ensures that resources are invested in projects with the highest potential for positive returns. Strategic alignment is key to realizing the true value of genAI.

Conducting thorough experimentation and providing proper training are important steps for successful genAI deployment.

Experimentation should be guided by clear objectives and criteria, allowing organizations to learn and adapt without risking significant resources. Training programs should equip employees with the skills to manage and utilize AI technologies effectively, so that the organization can fully leverage genAI capabilities.

Building strong governance frameworks is also essential for managing genAI projects effectively. These frameworks should include AI committees to provide oversight and approval, as well as fallback options to ensure reliability and continuity.

With these structures in place, organizations can mitigate risks, improve decision-making, and increase the overall success of their AI initiatives.

Alexander Procter

August 5, 2024
