There’s a possibility of winning big – or losing sight of what matters most to your customers when building GenAI-empowered customer experiences, claims Kuudes Senior Service Designer Katariina Tikkala.
In this blog series we delve into the common pitfalls associated with integrating generative AI into customer service. We also offer some strategic insights for overcoming these challenges and leveraging AI to create outstanding and memorable customer experiences.
If you missed it, click here for part I and part II of this blog series.
Part III: The Pitfall of Losing Trust
Another user experience risk in the deployment of GenAI lies in expectation management. Studies suggest that up to 40–50% of customers cannot tell the difference between human- and AI-generated content online. (Image: In a Nexcess study, 53.4% of participants could accurately detect the AI-generated image below, while AI-generated copy text was correctly identified by 57.3%)
The feeling of reciprocity is an important building block of trust, and a customer misidentifying their counterpart in online interactions is likely to shake their trust in the service provider. AI systems are not (yet) very good at identifying social and emotional cues, which easily leads to experiences of insensitivity or inauthenticity, especially when people don't know they are dealing with AI.
A lot of healthy suspicion has also emerged around GenAI's tendency to perpetuate existing social injustices. AI systems reproduce patterns picked up from their training data, which has already surfaced some uncomfortable social biases. A story in WIRED points out how the image-generation app Midjourney regularly portrays queer people as white, able-bodied and, for some reason, purple-haired. (Image: Reece Rogers via Midjourney AI, published in WIRED)
Strategy: Focus on Fairness
To build and maintain trust, companies must focus on ethical AI practices that promote fairness and inclusivity. Ensuring ethics in AI development begins with having diversity in the teams developing and testing the systems, adopting ethics guidelines, promoting transparency, measuring impacts, and encouraging stakeholder involvement throughout the process.
A nice example comes from the French ride-hailing app Heetch and its Greetings From La Banlieue campaign, which aimed to counter the negative bias in depictions of Parisian suburbs on Midjourney. The company invited the residents of banlieues to submit real pictures of their neighbourhoods to help train Midjourney's AI model to generate more realistic and nuanced imagery. (Image: Greetings from la Banlieue campaign for Heetch by BETC)
In turn, a US-based non-profit called Common Sense Media has devised a framework for assessing the suitability of AI solutions for children's use. It maps services against principles such as putting humans first, promoting learning and helping people connect. Now is the time for companies to really challenge themselves: to consider the implications of their planned GenAI projects and prepare for both the desired and the potentially undesired impacts on user experience.
And some companies are already going public with their commitments. The personal care brand Dove has recently launched an AI Playbook that aims to guide the use of GenAI toward fostering diversity in representations of beauty. The playbook educates users about bias and gives detailed instructions for prompting image generation in ways that produce more realistic and inclusive imagery. Dove has also pledged never to use AI-generated images to represent or replace real people. (Image: Dove)
We at Kuudes are designers of impactful services and customer experiences. If you want to dive deeper into the experiences your customers are after and how to win them over in the future, let’s talk more!
Katariina Tikkala, Senior Service Designer
katariina.tikkala@kuudes.com
+358 50 564 7697
Click here for more of this blog series:
Part 1: The Pitfall of Settling for Basic Improvements
Part 2: The Pitfall of Irrelevance