Convenience meets skepticism of generative AI
Generative AI is undeniably one of the most transformative tools of our time. It’s reshaping industries by creating content, automating processes, and unlocking new efficiencies. But here’s the challenge: while consumers are embracing its convenience, trust remains a sticking point. A Deloitte survey of 3,800 consumers found that 70% of respondents struggle to trust online content due to the rise of generative AI. Two-thirds of them are worried about being scammed or deceived by AI-generated content.
This is understandable. When a tool is powerful enough to generate hyper-realistic deepfakes or mimic human interactions seamlessly, it raises serious concerns. People want to know: What’s real? What’s artificial? Companies that address this by labeling AI-generated material and deploying deepfake detection systems will be the ones that stand out.
Trust issues don’t mean people are rejecting the technology; they mean businesses need to be proactive. By showing consumers how generative AI works and ensuring safeguards are in place, companies can turn skepticism into confidence. When used responsibly, generative AI streamlines processes and opens entirely new possibilities.
Transparency is vital
Both Deloitte and Gartner agree that the key to winning over skeptical consumers lies in being open and honest about how AI is used. The numbers back this up: according to Gartner’s October 2024 survey, 40% of respondents would be upset if they found out they were interacting with AI in a customer service context without being informed upfront. No one likes to feel tricked.
Now think about what this means for your brand. Customers want to trust you—not just your products but your processes. Transparency bridges the gap between what customers perceive and what’s actually happening behind the scenes. That’s why it’s critical to label AI interactions clearly and give customers the option to speak with a real person. It’s simple, but it makes a huge difference.
Data privacy is another area where transparency pays dividends. Right now, only 20% of consumers feel that tech companies are clear about how they handle data. That’s not good enough. Companies need to simplify privacy policies so they’re easy to understand. Make it clear what data you’re collecting, how it’s used, and how you’re protecting it.
“People appreciate honesty, and in a world where trust is currency, the ROI on transparency is undeniable.”
The balance between personalization and the human touch
AI is incredible at processing information and delivering tailored experiences. But there’s one thing it’s not great at: being human. That’s why, despite all the advancements in AI, people still prefer human interactions in many situations. As Gartner’s Nicole Greene put it, “People still prefer a human connection with customer service.” There’s something irreplaceable about the empathy and nuance that only a person can provide.
So what does this mean for businesses? It’s about finding the right balance. Use AI where it excels—handling repetitive queries, providing instant responses, or analyzing data to personalize experiences. But always give customers the choice to connect with a person. When customers know they have that option, their trust in your brand goes up. It’s that simple.
This balance extends beyond customer service to how you position your entire AI strategy. Show customers that AI enhances the experience without replacing the personal touch. In doing so, you’re using AI both to solve problems and to deepen relationships.
Building AI governance for the future
AI is evolving fast—faster than most companies and regulators can keep pace. That’s why building a strong governance framework now is essential. Nicole Greene from Gartner advocates for creating multidisciplinary AI councils within organizations. These councils bring together experts from different areas—risk management, customer experience, IT—to make sure your AI strategy is comprehensive and future-proof.
Why does this matter? Because governance is about anticipating tomorrow’s challenges. For example, regulations around AI and data security are becoming stricter, with frameworks like the European Union’s AI Act setting the stage for global standards. Companies that wait to react will find themselves scrambling to catch up, while those with proactive governance will lead the pack.
AI governance isn’t just about compliance, though. It’s also about trust. Employees need to understand how AI fits into their work, customers need to feel confident in the technology, and regulators need to see that you’re taking this seriously. A solid governance framework addresses all of this, ensuring that your AI operations are ethical, secure, and aligned with your brand’s values.
“Establish AI principles now, invest in oversight, and align your strategies with both your goals and consumer expectations.”
Takeaways for key decision-makers
- Consumer skepticism persists: Although generative AI adoption is growing, 70% of consumers report difficulty trusting online content due to AI’s potential to mislead. Leaders should prioritize transparency measures, such as labeling AI-generated content and deploying deepfake detection tools.
- Transparency in data practices: Only 20% of consumers believe technology companies are clear about how they handle data. Simplifying privacy policies and offering clear, user-friendly explanations of data collection and protection can bridge this trust gap.
- Human connection remains essential: 40% of consumers would be upset to learn a customer service interaction involved AI without upfront disclosure. Companies should combine AI’s efficiency with human options to build trust and deliver personalized, empathetic service.
- Strong AI governance is crucial: As regulations evolve, establishing multidisciplinary AI councils can ensure ethical use, risk mitigation, and regulatory compliance. Proactive governance frameworks aligned with AI principles will help organizations stay ahead of legal and consumer demands.