GenAI systems differ from traditional software by predicting and generating outputs based on the data they’ve been exposed to, which can lead to both powerful and problematic outcomes.

Unlike traditional software, GenAI can behave unpredictably, sometimes making decisions or inferences that were never intended by its creators.

In HR, a GenAI tool might infer from historical hiring data that certain demographic patterns are preferred, leading to biased recommendations that could perpetuate discrimination and create legal challenges.

This unpredictability is compounded by GenAI’s ability to act autonomously. If not carefully controlled, GenAI can make decisions or take actions with unintended consequences, leading to regulatory breaches, legal liabilities, and harm to stakeholders. CIOs must establish clear boundaries, or “guardrails,” to make sure GenAI operates within safe and ethical parameters, and must continuously monitor its behavior to prevent it from causing unintended harm.
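
To make the idea of a guardrail concrete, here is a minimal sketch of what one might look like in code. Everything in it, from the function name to the allowlist entries, is illustrative rather than tied to any particular GenAI platform; the point is simply that the system is never allowed to act on a proposal that policy has not explicitly approved.

```python
# Minimal guardrail sketch: a GenAI agent's proposed action is checked
# against an explicit allowlist before anything executes. All names here
# (ALLOWED_ACTIONS, execute_with_guardrails) are hypothetical.

ALLOWED_ACTIONS = {"draft_email", "summarize_document", "answer_question"}

def execute_with_guardrails(proposed_action: str, payload: dict) -> dict:
    """Run a GenAI-proposed action only if policy explicitly permits it."""
    if proposed_action not in ALLOWED_ACTIONS:
        # Block the action and surface it for human review rather than
        # letting the system act autonomously outside its boundaries.
        return {"status": "blocked", "action": proposed_action,
                "reason": "action not on the approved list"}
    return {"status": "executed", "action": proposed_action, "payload": payload}

# A destructive action the model was never meant to take is stopped:
print(execute_with_guardrails("delete_records", {"table": "candidates"}))
```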

The minefield that is GenAI compliance

One of the most pressing challenges for CIOs using GenAI is the uncertainty surrounding data collection and usage. GenAI’s ability to process vast amounts of data and make inferences can lead to situations where CIOs are unsure about what data the AI has accessed, how it has been used, and what conclusions it has drawn.

This uncertainty complicates regulatory compliance, particularly under frameworks like the General Data Protection Regulation (GDPR), which requires organizations to be transparent about their data practices.

The legal implications of GenAI’s inferences are significant, especially when those inferences are based on sensitive or personal data. If a GenAI system makes decisions based on inferred data, such as targeting marketing campaigns according to a user’s inferred race or gender, the organization can face allegations of discrimination or breaches of privacy laws.

These risks are exacerbated by the opaque nature of many GenAI systems, which makes it difficult for organizations to understand how the AI arrives at specific decisions.

CIOs must be proactive in managing these risks by making sure that GenAI systems are transparent in their operations, accurate in their inferences, and compliant with legal and ethical standards.

Implementing comprehensive monitoring and reporting mechanisms is key to maintaining visibility into the AI’s operations and ensuring that data usage aligns with regulatory requirements.
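
As a rough illustration of what such a monitoring mechanism might involve, the sketch below wraps a model call so that every request leaves an audit record of when it ran, which data sources were in scope, and hashes of the prompt and output. The record fields and file name are assumptions made for the example, not a regulatory standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def audited_generate(model_call, prompt: str, data_sources: list[str]) -> str:
    """Call a GenAI model and append an audit record of what it saw and produced.

    `model_call` stands in for whichever client your stack actually uses.
    """
    output = model_call(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data_sources": data_sources,  # what the model was allowed to access
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    with open("genai_audit.log", "a") as log:  # append-only trail for review
        log.write(json.dumps(record) + "\n")
    return output

# Usage with a stubbed model call:
print(audited_generate(lambda p: "OK: " + p, "Summarize Q3 figures", ["crm_db"]))
```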

The global maze of GenAI laws

The global regulatory environment for GenAI is increasingly complex, with laws such as Europe’s GDPR, the upcoming EU AI Act, and various US state regulations imposing stringent requirements on AI systems.

New regulations demand transparency, fairness, and accountability in AI operations, particularly in high-risk sectors like healthcare, finance, and law enforcement.

Complying strictly with these regulations is particularly challenging when GenAI systems operate as opaque “black boxes,” lacking the transparency organizations need to understand how they arrive at specific decisions or outputs.

This lack of visibility complicates efforts to comply with regulations that require organizations to explain their AI’s decision-making processes.

To address these challenges, CIOs must work closely with AI vendors to gain as much insight as possible into how their systems function. Implementing AI governance frameworks that include mechanisms for monitoring and auditing AI systems is also crucial.

In doing so, CIOs can make sure that their organizations remain compliant in an increasingly regulated environment while mitigating the risks associated with using opaque AI systems.

Accountability and the focus on outcomes

With the evolving regulatory landscape, there is a growing emphasis on holding organizations accountable for the actions of their AI systems, even when those actions are unintended or unpredictable. This is particularly true in jurisdictions that have adopted strict liability principles, where an organization can be held liable for any harm caused by its AI systems, regardless of precautions taken.

Instead of focusing solely on the data used by AI systems, regulatory bodies are increasingly concerned with the outcomes that these systems produce. CIOs must shift their focus from understanding data inputs to scrutinizing the outputs generated by GenAI systems.
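
One hedged sketch of what output-level scrutiny could look like: a screen that inspects generated text for references to protected attributes before it feeds any downstream decision. The keyword list is a deliberately crude placeholder; a production system would rely on much richer classifiers and human review.

```python
import re

# Placeholder screen for protected-attribute references in generated text.
PROTECTED_TERMS = re.compile(r"\b(race|gender|religion|age|disability)\b", re.I)

def review_output(generated_text: str) -> tuple[bool, list[str]]:
    """Return (approved, reasons) for a piece of GenAI output."""
    hits = PROTECTED_TERMS.findall(generated_text)
    if hits:
        return False, [f"references protected attribute: {h.lower()}" for h in hits]
    return True, []

approved, reasons = review_output("Recommend candidates under age 30.")
print(approved, reasons)  # False ['references protected attribute: age']
```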

To manage these risks, CIOs must make sure their organizations have comprehensive AI governance frameworks in place, conduct thorough due diligence, and continuously monitor AI performance.

By prioritizing transparency in outcomes and maintaining comprehensive oversight, organizations can better navigate the regulatory landscape and mitigate the risks associated with GenAI.

The hidden pitfalls of skimping on GenAI due diligence

One of the most overlooked risks in deploying GenAI systems is the lack of communication between different departments within an organization. When departments like IT and Marketing operate in silos, key information about how GenAI systems are being used, or could potentially be used, is often not shared effectively.

This disconnect can result in non-compliance with legal and regulatory standards, especially if AI systems are deployed or used in ways that are not fully understood or vetted.

To avoid these pitfalls, CIOs must build a culture of collaboration across all departments that interact with GenAI systems. This involves creating cross-functional teams with representatives from IT, Marketing, Legal, Compliance, and other relevant departments to make sure everyone is aligned on the deployment and use of AI technologies.

Practical steps to make GenAI compliance less scary

To safeguard sensitive data and prevent GenAI from accessing information that could lead to compliance breaches, CIOs must implement strong boundaries, often referred to as “guardrails.”

These guardrails restrict what GenAI can access and how it can use the data it processes. Regularly reviewing and updating them as GenAI systems evolve and as the regulatory environment changes is crucial for maintaining data protection.
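
In code, such a boundary can be as simple as an approved-fields filter that strips anything the system is not cleared to process before a record ever reaches the model. The field names below are hypothetical; the pattern, not the particulars, is the point.

```python
# Hypothetical data guardrail: only fields on an approved list reach the model.
APPROVED_FIELDS = {"job_title", "years_experience", "skills"}

def redact_for_model(record: dict) -> dict:
    """Drop any field the GenAI system is not cleared to process."""
    return {key: value for key, value in record.items() if key in APPROVED_FIELDS}

candidate = {
    "name": "J. Doe",   # identifying data, never sent
    "age": 29,          # protected attribute, never sent
    "job_title": "Analyst",
    "years_experience": 4,
    "skills": ["SQL"],
}
print(redact_for_model(candidate))
# {'job_title': 'Analyst', 'years_experience': 4, 'skills': ['SQL']}
```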

Equally important is ensuring vendor accountability. When working with third-party GenAI vendors, CIOs must thoroughly review all available documentation, including service terms, privacy policies, and technical specifications.

Direct engagement with vendors is also critical to clarify how their AI systems work, what data they require, and how they ensure compliance with relevant regulations.

By setting clear boundaries for GenAI and maintaining open lines of communication with vendors, CIOs can better manage the risks of deploying AI systems and make sure their organizations remain compliant.

Is GenAI worth the risk? What every CIO should consider

For some organizations, the risks associated with GenAI may outweigh the benefits, particularly in high-risk sectors such as education, employment, finance, and healthcare. In these areas, the potential for harm, whether through biased decision-making, privacy breaches, or legal non-compliance, can be substantial and the consequences severe.

CIOs must carefully evaluate whether deploying GenAI in these contexts is necessary and whether the benefits justify the risks. This involves conducting a thorough risk assessment that considers factors such as the potential for regulatory scrutiny, the likelihood of legal challenges, and the impact on the organization’s reputation.
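
As one possible way to make such an assessment concrete, the sketch below combines ratings for the three factors just mentioned into a single weighted score. The weights, ratings, and escalation threshold are all assumptions a risk team would set for itself; the example only shows the mechanics.

```python
# Illustrative weights for the factors named above; values are assumptions.
WEIGHTS = {"regulatory_scrutiny": 0.40, "legal_challenge": 0.35, "reputation": 0.25}

def risk_score(ratings: dict[str, int]) -> float:
    """Combine 1-5 factor ratings into a weighted score between 1 and 5."""
    return sum(WEIGHTS[factor] * ratings[factor] for factor in WEIGHTS)

# A hypothetical HR use case rated by the risk team:
hr_use_case = {"regulatory_scrutiny": 5, "legal_challenge": 4, "reputation": 4}
score = risk_score(hr_use_case)
print(f"{score:.2f}")  # 4.40 -> above a 3.5 escalation threshold, review first
```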

In some cases, it may be more prudent to limit GenAI deployment to lower-risk areas where the potential for harm is less pronounced.

By taking a comprehensive approach to risk assessment and involving key stakeholders in the decision-making process, CIOs can make informed choices about where and how to deploy GenAI, maximizing the benefits while minimizing the risks.

Alexander Procter

August 26, 2024
