Two recent AI safety summits, one on each side of the Atlantic, raised concerns about the potential risks associated with AI. While visions of sci-fi dystopias with killer robots and detailed guidelines on AI regulation may dominate the discourse, practical challenges persist for organizations seeking to harness the power of generative AI.
Among the most prominent complaints about generative AI are objections from authors and artists who feel their work is used without permission to train large language models (LLMs). These concerns recently led to concessions in Hollywood, where studios limited the use of AI tools to replace human writers and performers. However, the pitfalls of generative AI extend beyond creative industries, potentially resulting in embarrassing outcomes and legal consequences for businesses.
1. Overuse of AI in high-stakes areas
Microsoft’s transparency guidelines for Azure OpenAI wisely advise against deploying generative AI in high-stakes fields such as healthcare, finance, and legal contexts, where errors or misjudgments can have serious repercussions. But the challenges associated with generative AI extend beyond these domains.
Generative AI – still in its early stages – has shown a propensity to generate low-quality and, at times, nonsensical content even in less critical areas.
This raises concerns because such content can harm a company’s reputation, irrespective of the context. While the consequences may not be as severe as in healthcare or finance, the damage to an organization’s image can be significant. Imagine a scenario where AI-generated blog posts, conference biographies, or marketing materials sound impressive at first glance but ultimately make no sense upon closer examination. Such instances can undermine a company’s credibility and professionalism.
Businesses, regardless of their industry, must exercise vigilance when integrating generative AI into their operations. It’s not sufficient to merely adhere to the cautionary guidelines in high-stakes sectors. Rather, organizations should establish a robust system of oversight, review, and quality control to ensure that the content generated by AI aligns with their standards of accuracy, relevance, and coherence. This entails not only technological precautions but also an organizational commitment to maintaining reputational integrity.
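To make that concrete, here is a minimal sketch of a publish-only-after-review gate in Python. The names (`generate_draft`, `human_review`, `ReviewDecision`) are illustrative placeholders rather than part of any particular product; the point is simply that nothing AI-generated reaches publication without an explicit human decision.

```python
from dataclasses import dataclass
from enum import Enum


class ReviewDecision(Enum):
    APPROVE = "approve"
    REVISE = "revise"
    REJECT = "reject"


@dataclass
class Draft:
    source: str  # e.g. "llm" or "human"
    text: str


def generate_draft(prompt: str) -> Draft:
    """Placeholder for a call to whichever LLM the organization uses."""
    return Draft(source="llm", text=f"[model output for: {prompt}]")


def human_review(draft: Draft) -> ReviewDecision:
    """Placeholder for an editorial review step; a real system would route
    the draft to an editor through a CMS or ticketing workflow."""
    return ReviewDecision.REVISE


def publish_pipeline(prompt: str) -> None:
    draft = generate_draft(prompt)
    decision = human_review(draft)
    if decision is ReviewDecision.APPROVE:
        print("Publishing reviewed content.")
    elif decision is ReviewDecision.REVISE:
        print("Returning draft to the model or a writer for another pass.")
    else:
        print("Discarding draft; nothing AI-generated ships unreviewed.")


if __name__ == "__main__":
    publish_pipeline("Draft a conference bio for our CTO")
```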
2. Potential reputation damage
Microsoft’s encounter with an AI-driven news system is a stark example of the repercussions that arise when AI misses the mark. In this instance, Microsoft bore the brunt of a public relations nightmare when its AI-generated news story included an extraordinarily insensitive poll about a woman’s tragic death. The poll was not only deeply offensive but also demonstrated a complete lack of empathy and ethical judgment.
A lack of moral compass
What makes this situation even more concerning is that it wasn’t an isolated incident. The very same AI tool responsible for the ill-fated poll had previously been involved in creating a series of controversial and inappropriate polls. These polls posed questions that were not only offensive but also morally reprehensible. For instance, they asked whether refusing to save a woman who was later shot dead was the right decision, questioned the correct identification of human remains found in a national park, and inquired whether people in an area devastated by fire really needed to follow emergency evacuation advice.
The AI-powered Bing Chat, a platform that should provide reliable information and services, included links to malware in its advertisements. This put users at risk, compromising their trust and potentially harming their devices and data security.
These errors, for the most part, were not subject to human oversight. They were published by automated systems on Microsoft’s platforms, which receive millions of visitors, exacerbating the gravity of the situation.
The absence of human intervention in such critical content generation processes directly contradicted principles of responsible AI usage, including informing people when they interact with an AI system and ensuring guidelines for human-AI interaction.
3. Generative AI isn’t always accurate
Generative AI tools, by their very nature, operate on probabilistic principles. They do not provide absolute, definitive answers but rather generate outputs based on patterns and data they have been trained on. This inherent probabilistic nature means that AI can produce results that are inaccurate, unfair, or even offensive, all while presenting them convincingly. This potential for generating misleading or problematic content underscores the importance of not placing blind trust in AI-generated outputs.
Organizations need to approach generative AI with caution
Instead, organizations should view generative AI as a tool for inspiration and brainstorming rather than an oracle of truth. It should serve as a source of ideas and creativity, sparking discussions and prompting further exploration. This perspective shift allows businesses to harness the strengths of AI while mitigating the risks associated with its fallibility.
Microsoft’s approach to addressing this challenge provides a valuable example. They emphasize human control and oversight through tools like Copilot. By involving humans in the loop, organizations can ensure that AI-generated results are not taken at face value but are subject to critical evaluation and refinement by human experts. Copilot facilitates experimentation with AI suggestions, enabling employees to fine-tune and improve the outputs generated by AI.
4. Transparency in AI usage
Companies must adopt a clear and open approach to the involvement of AI in their content generation processes. Even when AI-generated content reaches high levels of quality and sophistication, organizations should not obscure its origin but rather make it explicit to both internal stakeholders and external users.
This transparency entails informing users when AI is responsible for generating content. By doing so, organizations demonstrate a commitment to ethical and responsible AI deployment. Users benefit from knowing that they are interacting with AI systems, which fosters trust and ensures that expectations are appropriately set. Clarity regarding AI involvement becomes particularly vital in sensitive contexts where human judgment, empathy, and nuanced understanding are paramount.
Providing users with the option to escalate to human support in AI-driven interactions is a key component of AI transparency.
While AI can handle various tasks efficiently, there are scenarios where human intervention is necessary, such as when dealing with complex or emotionally charged issues. Offering users the ability to transition from AI assistance to human support not only enhances transparency but also underscores the organization’s commitment to customer satisfaction and responsible AI usage.
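As a rough illustration, a customer-facing chat wrapper might attach both the AI disclosure and the escalation path to every reply. This is a sketch only; `call_model` stands in for whichever LLM API an organization actually uses, and the wording of the hints is illustrative.

```python
from dataclasses import dataclass


@dataclass
class ChatReply:
    text: str
    ai_generated: bool
    escalation_hint: str


def call_model(question: str) -> str:
    # Placeholder: a real implementation would call the chosen LLM API.
    return f"[model answer to: {question}]"


def answer_with_disclosure(question: str) -> ChatReply:
    """Wrap a model answer with an explicit AI disclosure and an escalation path."""
    answer = call_model(question)
    return ChatReply(
        text=answer,
        ai_generated=True,  # always disclosed to the user
        escalation_hint="Type 'agent' at any time to reach a human representative.",
    )


if __name__ == "__main__":
    reply = answer_with_disclosure("How do I dispute a charge?")
    print(f"(AI-generated) {reply.text}\n{reply.escalation_hint}")
```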
5. Generative AI is easily misapplied
Generative AI’s effectiveness is tied to the availability of domain-specific training data. In essence, the AI model learns from existing data to generate responses or solutions. When faced with well-defined problems within a familiar domain, generative AI can excel, providing predictable, desirable, and verifiable outcomes.
However, when generative AI encounters uncharted territory or novel issues lacking adequate training data, it can falter. This limitation is particularly evident in complex and dynamic fields like IT operations. IT problems often involve intricate, multifaceted challenges that may not have readily available training data. In such scenarios, generative AI’s propensity for producing inaccurate or irrelevant results becomes a concern.
AI is not a one-size-fits-all solution… yet
Instead of relying on generative AI as a standalone problem solver in these situations, organizations should view it as a valuable advisor. By training the AI engine to recognize patterns, known issues, and established solutions within defined disciplines and knowledge repositories, it can serve as a complementary tool. Generative AI can assist in diagnosing known problems, identifying inefficiencies, and suggesting remediations, provided that the problem falls within its trained parameters.
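A hedged sketch of that “advisor, not oracle” pattern: constrain suggestions to a repository of known issues and hand anything unrecognized to a human. The keyword lookup below is a stand-in for whatever retrieval or classification a real AI engine would perform, and the runbook entries are invented examples.

```python
# Known issues and their established remediations, drawn from the team's
# runbooks or knowledge base (entries here are invented examples).
KNOWN_ISSUES = {
    "disk full": "Rotate logs and expand the volume per runbook DISK-01.",
    "certificate expired": "Renew the certificate and redeploy per runbook TLS-04.",
}


def advise(incident_description: str) -> str:
    """Suggest a remediation only when the incident matches a known pattern;
    otherwise hand off to a human engineer rather than guessing."""
    text = incident_description.lower()
    for pattern, remediation in KNOWN_ISSUES.items():
        if pattern in text:
            return f"Suggested remediation (verify before applying): {remediation}"
    return "No trained match for this incident; escalating to a human engineer."


if __name__ == "__main__":
    print(advise("Alert: certificate expired on the payments gateway"))
    print(advise("Intermittent packet loss between regions"))
```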
6. Making more work for humans
While it’s true that AI-generated content can be helpful in certain scenarios, it often introduces an extra layer of work in terms of review and correction. Writers, editors, and content creators frequently encounter AI-generated suggestions that are unhelpful or require substantial refinement.
One of the key challenges organizations face is the need to establish robust processes for handling these errors and refining AI-generated content. This involves creating a workflow that includes human oversight and intervention to ensure the final output aligns with the organization’s quality and brand standards. Such processes should not only focus on correcting individual errors but also on improving the AI’s performance over time by providing feedback and training data.
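One lightweight way to capture that feedback, sketched below under the assumption that reviewers work on AI drafts as text: log each human correction alongside the original draft so recurring failure modes can be analyzed and, where appropriate, fed back into prompts or training data. The file name and fields are illustrative.

```python
import json
from datetime import datetime, timezone


def record_correction(draft: str, corrected: str, reviewer: str,
                      log_path: str = "ai_feedback.jsonl") -> None:
    """Append a reviewer's correction to a feedback log for later analysis."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "ai_draft": draft,
        "human_correction": corrected,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    record_correction(
        draft="Our new widget leverages synergistic paradigms.",
        corrected="Our new widget cuts setup time from an hour to ten minutes.",
        reviewer="editor@example.com",
    )
```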
Additionally, transparent communication about the use of AI within the organization is paramount. Team members need to know when AI is used, what its limitations are, and what role they play in reviewing and enhancing AI-generated content. This transparency ensures that employees understand the AI’s purpose and can collaborate with it effectively, rather than relying on it blindly.