Human vs. humane: How companies pursue AI for enhanced CX

We’ve all been there. Your laptop breaks down, you miss a flight, or you need to call an insurance company. You hope for a quick conversation to resolve the matter, but instead you’re greeted by a disembodied voice delivering scripted questions and answers with no empathy, passing you through one list of choices after another and leaving you even more frustrated than when you started.

After such a fractured experience, you begin asking harder questions: What biases might be embedded in this technology? How is it processing my data? How often is it audited to ensure fairness and transparency? Can it actually be taught to behave ethically? As AI takes on a larger role in customer service, these questions are becoming more urgent — and CIOs and other executives are well aware of it. Two-thirds of the CX practitioners, service leaders, analysts, and consultants from around the world who took part in CX Network’s Global State of CX 2024 agreed or strongly agreed they’re increasingly concerned about the ethical use of AI and its future development.

On top of that, 38% identified transparency around how AI uses their data as one of the top three concerns customers have today, while 55% strongly agree that data privacy and security are major concerns for customers. AI is quickly transforming customer service, and it’s no longer just about mimicking human interactions — it’s about creating fair experiences that feel natural. “One of the biggest mistakes organizations can make when implementing AI for customer experience is prioritizing efficiency over ethical, humane interactions,” says Nisreen Ameen, senior lecturer in digital marketing and director of the Digital Organisation and Society Research Centre at Royal Holloway, University of London.

Varsha Jain, AGK Chair professor of marketing at Mudra Institute of Communications in India, agrees. “Humans have to drive AI, and AI should not drive humans,” she adds. For CIOs and other C-level executives tasked with enhancing customer experience, the real challenge lies in teaching AI to be more humane and making sure it behaves properly and ethically, with a sense of empathy and fairness.

Have a set of principles

In recent years, CIOs and the companies they work for have found several ways to leverage AI in customer support. Banks and other financial institutions, especially, are integrating AI to streamline customer interactions and improve service efficiency. Many, for example, use AI-powered chatbots to handle routine tasks like balance inquiries, transaction histories, and even loan applications, freeing up human agents for more complex issues. AI is also being employed in fraud detection, analyzing transaction patterns and flagging suspicious activity in real time, far faster and more accurately than manual systems.
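None of the banks cited here publish their fraud models, but the general pattern (score each transaction against the customer’s own history and flag outliers for a human fraud team) can be sketched in a few lines. The z-score rule and threshold below are illustrative assumptions, not any bank’s implementation:

```python
from statistics import mean, stdev

def flag_suspicious(history: list[float], amount: float, z_threshold: float = 3.0) -> bool:
    """Illustrative z-score rule: flag a transaction whose amount deviates
    sharply from the customer's own spending history. Real systems combine
    many behavioral features with learned models."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_threshold

# The model only flags; a human investigator decides what happens next.
past = [42.0, 55.5, 38.2, 61.0, 47.3]
if flag_suspicious(past, 2400.0):
    print("Transaction routed to the fraud team for review")
```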

ING Group’s customer-facing chatbot alone can handle up to 5,000 inquiries daily in the Netherlands, and it’s been enriched with gen AI features to improve customer satisfaction and deflection rate, says Bahadir Yilmaz, the multinational’s chief analytics officer. He adds that ING follows a set of step-by-step guiding principles for AI:

Fairness — ensure decision-making processes are free from prejudice.
Explainability — ensure that logic is understood by target audiences.
Transparency — explain and justify the entire process of model development.
Responsibility — attribute accountability for each decision made.
Security — ensure that models do not produce unintended outcomes.

Having robust ethical standards and policies during the AI development process can help.

“This ensures that ethical considerations are woven into the fabric of AI systems from the ground up, rather than being an afterthought,” says Ameen. CIOs, as well as CTOs, should advocate for measuring how humane their AI-powered services are because, typically, we tend to improve what we choose to measure, Jain adds.

Build a humane team

Many organizations think of using AI in customer support in the context of cutting costs.

That wasn’t entirely the case with AirHelp, according to its chief technology officer, Tim Boisvert. AirHelp, which helps airline passengers secure compensation for delayed, canceled, or overbooked flights, is recognized for an AI-powered chat system that appears to handle common service-related inquiries effectively. “The goal of our chat-based AI isn’t to replace humans or deprioritize our investment in human customer service agents,” he says.

“As a result, we haven’t set it up so it’s difficult for customers to talk to a human if they want to. Much of what our customers ask is easily recallable using AI and bots, but we’ve worked to prevent users from feeling like they’re walled off from our customer service agents.” One rule when designing a chat-based AI system is knowing exactly what it can and can’t do, so it doesn’t become frustrating for the customer.

“Our customer service bot is only aimed at providing certain kinds of information to customers, rather than falling into the temptation of thinking there’s some version of a bot that could come close to replacing the type of relationship people can have with human agents,” Boisvert says. Building easy offramps, or being proactive about routing a customer to a human when they’re looking for something the bot can’t easily provide, is another principle the company lives by.
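AirHelp hasn’t published its routing logic, but Boisvert’s two principles, a bot that knows exactly what it can answer and a proactive offramp to a person, map onto a simple dispatch pattern. Everything in this sketch (the intent names, the confidence floor) is hypothetical:

```python
# Hypothetical escalation logic: the bot answers only intents it was
# explicitly scoped for and hands everything else to a human agent.
SUPPORTED_INTENTS = {"claim_status", "required_documents", "compensation_rules"}
CONFIDENCE_FLOOR = 0.85  # below this, don't guess; escalate instead

def route(intent: str, confidence: float, asked_for_human: bool) -> str:
    if asked_for_human:  # never wall customers off from agents
        return "human_agent"
    if intent not in SUPPORTED_INTENTS or confidence < CONFIDENCE_FLOOR:
        return "human_agent"  # proactive offramp for out-of-scope requests
    return "bot"

print(route("claim_status", 0.93, asked_for_human=False))  # -> bot
print(route("tax_advice", 0.91, asked_for_human=False))    # -> human_agent
```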

AirHelp also uses other AI-powered tools to interpret automated messages sent by airlines with status updates concerning customer claims, or to analyze information from boarding passes and eTickets as customers upload them. “These two alleviate much of the processing burden off our agents’ shoulders, and remove menial tasks from their plate, enabling them to focus more on working directly with customers on challenging or non-standard scenarios,” Boisvert says. ING’s AI-powered chatbot also works hand-in-hand with humans. “ING has colleagues review every conversation to make sure the system doesn’t use discriminatory or harmful language, or hallucinate,” Yilmaz says.

“No matter how AI is deployed, one thing remains true: people are at the heart of our process.” For now, that’s the way to go, ethics scholars agree. “AI is very helpful but needs to be used under the strict supervision of humans,” Jain says.
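ING’s tooling for those reviews isn’t public, but the workflow Yilmaz describes, humans reading every conversation with the most suspect transcripts surfaced first, could look something like the toy prioritization below. The flag terms are invented for illustration:

```python
# Toy QA loop: every transcript still gets a human review; a crude
# keyword pre-screen only decides which ones a reviewer sees first.
FLAG_TERMS = {"guarantee", "always", "risk-free"}  # hypothetical red flags

def review_priority(transcript: str) -> int:
    """Higher score means review sooner; zero still means reviewed."""
    text = transcript.lower()
    return sum(term in text for term in FLAG_TERMS)

transcripts = [
    "Your current balance is EUR 240.",
    "This product is risk-free and always pays off, guaranteed.",
]
for t in sorted(transcripts, key=review_priority, reverse=True):
    print(f"[priority {review_priority(t)}] {t}")
```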

Stay mindful of biases

AI can be leveraged in a variety of ways, including for customer acquisition purposes. ING, for instance, uses gen AI tools to create personalized marketing campaigns, tailoring content and offers to expats, young couples, and Gen Z. “Always with consent from the customer,” Yilmaz adds.

Both ING and the Commonwealth Bank of Australia (CBA) also use AI-powered tools to boost cybersecurity and create a safer banking experience for their customers, particularly those in vulnerable circumstances. “We can scan unusual transactional activity and identify patterns and instances deemed to be high-risk so the bank can investigate and take action,” says Luiz Pizzato, AI Labs Centre of Excellence lead at CBA. “We’ve used AI to keep customers safe from fraud and scams, respond in real-time with support to customers in natural disaster zones, and even connect customers with more than $1.2 billion of government benefits and rebates, to which they were entitled, through our Benefits finder tool.”

In cases like these, organizations handling customer data should prioritize user privacy and data protection and ensure compliance with the latest regulations.

“As AI systems often rely on vast amounts of personal data, safeguarding this information is crucial to maintain trust and uphold ethical standards,” Ameen says. “This means implementing stringent data protection measures and being transparent about data usage.” Another important focus should be making sure AI development, decision-making, and application are inclusive.

This means involving diverse teams to create AI systems, and using a wide range of data and perspectives to train and refine them. And of course, organizations need to commit to regularly testing their AI for potential biases and taking steps to fix them — and CIOs should be mindful of that. “Bias can creep in at various stages of AI development and deployment, from data collection to algorithm design,” Ameen says. “Regular audits and bias testing should be an ongoing process, with mechanisms in place to address and correct any biases discovered.”
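Ameen doesn’t prescribe a specific test, but one of the simplest recurring audits is comparing outcome rates across demographic groups. This sketch checks a hypothetical approval log for disparate rates; the 20% tolerance is an illustrative policy choice, not a standard:

```python
from collections import defaultdict

def rates_by_group(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Approval rate per demographic group, one simple disparity
    check among the many used in real bias audits."""
    totals: dict[str, int] = defaultdict(int)
    approvals: dict[str, int] = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

# Toy audit log of (group, model_decision) pairs
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
rates = rates_by_group(log)
gap = max(rates.values()) - min(rates.values())
if gap > 0.20:  # illustrative tolerance, set by policy
    print(f"Disparity of {gap:.0%} exceeds tolerance; escalate: {rates}")
```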

Put the AI in representation

When it comes to using AI to augment customer experience, the responsibility on leaders’ shoulders is huge. “CIOs and C-level executives can enable more humane AI by championing ethical initiatives at the leadership level and fostering a culture of responsible innovation,” Ameen says, adding that they should work closely with AI experts and invest in inclusive ethics training for all staff.

“Crucially, executives must prioritize diversity in AI development teams, for example, by actively employing women and people from ethnic minorities who are underrepresented in AI jobs,” she says. “Ensuring diverse perspectives in AI-related decision making and representative datasets is essential to create fair, unbiased AI systems that effectively serve all users.” Since we’re not yet particularly good at teaching AI to be humane, it becomes even more important to implement mechanisms that gather feedback on AI-driven customer interactions and improve them over time.

“This includes real-time feedback options, customer surveys focused on AI experiences, and analysis of service logs,” she says. Establishing focus groups and dedicated channels for reporting AI-related issues can also yield valuable insights into where ethical concerns need to be addressed. Boisvert adds that CIOs, CTOs, and CEOs shouldn’t start with the assumption that AI can simply replace things, only to find out that, in certain cases, AI isn’t appropriate at all.

He suggests starting at the other end of the spectrum: assume no AI could ever be as humane as a human. “Search for specific scenarios that provide small or marginal value and integrate them in,” Boisvert says. “Never make a workflow completely reliant on AI, such that an AI-based system would have any direct communications with customers that aren’t reviewable and approvable by humans.”
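Boisvert’s rule, that no AI-generated message should reach a customer without being reviewable and approvable by a person, is essentially a human-in-the-loop gate. A minimal sketch of that pattern, with hypothetical names throughout:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    customer_id: str
    text: str

REVIEW_QUEUE: list[Draft] = []

def propose_reply(customer_id: str, text: str) -> None:
    """The AI only drafts; nothing reaches the customer yet."""
    REVIEW_QUEUE.append(Draft(customer_id, text))

def approve_and_send(draft: Draft, send) -> None:
    """Only an explicit human approval releases the message."""
    send(draft.customer_id, draft.text)

propose_reply("c-1042", "Your claim looks eligible; payout typically takes 5-7 days.")
for draft in REVIEW_QUEUE:  # an agent reads each draft before it goes out
    approve_and_send(draft, send=lambda cid, msg: print(f"to {cid}: {msg}"))
```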

Yilmaz agrees, saying that AI isn’t a cure that immediately unlocks business benefits. “If you have a non-functioning business process, injecting an AI element would only complicate things more, limiting the value,” he adds. “Real value is created when business, tech, data, and analytics teams work together to streamline a process and create a journey that our customers want.”