Should a Chatbot Reveal Itself?

Customer service chatbots are already incredibly pervasive online. However, research suggests that whether, and when, a bot reveals its AI nature during an interaction can significantly affect how customers respond.

By Daniel Gutierrez
October 30, 2019

Chatbots are very pervasive these days, with a broad range of application areas, including retail, food services, healthcare, banking, travel, and much more. It’s now routine for customers to see a chat box pop up on a web page asking whether they’d like to chat live with an agent for assistance. Given the growing demand for online transparency, an immediate question arises: should a chatbot reveal itself as being a chatbot and not an actual “human agent”? For example, Google Duplex, a groundbreaking application of AI chatbots, can make restaurant reservations and salon appointments over the phone, where the people answering the call may not realize they’re engaging with a bot.

In this article, we’ll examine this question further and also consider the timing of exactly when in the customer experience a bot should reveal itself. For the purposes of this discussion, a chatbot is a conversational interface for a specific product or brand that uses programmed logic and, in some cases, a machine learning model trained to determine how to interact for a specified topic or function, such as placing an order or initiating a customer support request.
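To make that definition concrete, the sketch below shows the simplest form of such an interface: programmed logic (here, a hypothetical keyword-rule table) routes a customer message to a handler for a specific function such as placing an order or opening a support request. All of the function names and rules are illustrative assumptions, not taken from any particular product; a production system would typically replace or augment the rule table with a trained intent-classification model.

```python
# Minimal, hypothetical sketch of the chatbot described above: programmed
# logic (keyword rules) routes a message to a handler for a specific
# function. Names and rules are illustrative only.
from typing import Callable, Dict


def place_order(message: str) -> str:
    return "Sure, let's get your order started. What would you like?"


def open_support_request(message: str) -> str:
    return "Sorry you're having trouble. I've opened a support request for you."


def fallback(message: str) -> str:
    return "I can help with orders or support requests. Which do you need?"


# Programmed logic: map intent keywords to handler functions.
INTENT_RULES: Dict[str, Callable[[str], str]] = {
    "order": place_order,
    "buy": place_order,
    "help": open_support_request,
    "problem": open_support_request,
}


def respond(message: str) -> str:
    """Route a customer message to the handler for the detected intent."""
    lowered = message.lower()
    for keyword, handler in INTENT_RULES.items():
        if keyword in lowered:
            return handler(message)
    return fallback(message)


if __name__ == "__main__":
    print(respond("I'd like to order a pizza"))       # -> place_order
    print(respond("There's a problem with my bill"))  # -> open_support_request
```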

To reveal or not to reveal?

Chatbots use AI (machine learning algorithms) to simulate human conversation through voice commands or text chats. The technology incurs close to zero marginal cost and can keep pace with some human employees, so why aren’t chatbots deployed more often? According to new research, the chief reason is customer pushback.

Chatbots for customer service: bots are poised to play a significant role in the customer service of the future.

Service industry companies, like American Eagle Outfitters and Domino’s Pizza, as well as online service concerns, like Amazon and eBay, all use chatbots. Unlike their human counterparts, the machines don’t take sick days and never fatigue. Indeed, bots can save money for consumers, but a new research paper appearing in the INFORMS journal Marketing Science indicates that, if a chatbot reveals itself to a customer before a purchase has been made, sales rates decline by more than 79.7 percent. The study, “Frontiers: Machines vs. Humans: The Impact of Artificial Intelligence Chatbot Disclosure on Customer Purchases,” also includes an interesting result: undisclosed bots are as effective as proficient workers and four times more effective than inexperienced workers in yielding customer purchases.

The study authors, Temple University researchers Xueming Luo and Siliang Tong, Zheng Fang of Sichuan University, and Zhe Qu of Fudan University, selected 6,255 customers of a financial services company to receive highly structured outbound sales calls from chatbots or human workers. Customers were randomly assigned to either humans or chatbots, and disclosure of the bot’s identity varied across the following conditions (a simple illustration of this assignment scheme appears after the list):

1. Not telling the customer at all
2. Telling the customer at the beginning of the conversation
3. Telling the customer after the conversation
4. Telling the customer after they'd purchased something
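The paper’s exact assignment procedure isn’t reproduced here, so the snippet below is only an illustrative sketch, assuming simple uniform randomization, of how customers could be split between agent types and disclosure-timing conditions like those listed above. The condition labels, seed, and customer IDs are hypothetical.

```python
# Illustrative sketch (not the study authors' actual procedure): randomly
# assign customers to an agent type and, for the chatbot arm, a
# disclosure-timing condition. The assignment logic itself is an assumption.
import random
from collections import Counter

AGENT_TYPES = ["human", "chatbot"]
DISCLOSURE_CONDITIONS = [
    "no_disclosure",
    "disclosure_at_start_of_conversation",
    "disclosure_after_conversation",
    "disclosure_after_purchase",
]


def assign(customer_ids, seed=42):
    """Assign each customer an agent type and, if a chatbot, a disclosure condition."""
    rng = random.Random(seed)
    assignments = {}
    for cid in customer_ids:
        agent = rng.choice(AGENT_TYPES)
        # Disclosure timing only applies when the caller is a chatbot.
        condition = rng.choice(DISCLOSURE_CONDITIONS) if agent == "chatbot" else None
        assignments[cid] = (agent, condition)
    return assignments


if __name__ == "__main__":
    assignments = assign(range(6255))
    print(Counter(assignments.values()))  # roughly balanced cell sizes
```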

“Our findings show when people don’t know about the use of artificial intelligence (AI) chatbots, they [the chatbots] are four times more effective at selling products than inexperienced workers, but when customers know the conversational partner is not a human, they are curt and purchase less because they think the bot is less knowledgeable and less empathetic,” said Luo, a professor and Charles Gilliland Distinguished Chair at Temple University. “Chatbots offer enhanced technological benefits, reduced customer hassle costs, and increased consumer welfare (offering the product at lower cost because bots save money on labor). This data empowers marketers to target certain customer segments to cultivate customer trust in chatbots.”


Source: INFORMS journal Marketing Science.

Results of the reveal

Empowered by AI, chatbots are surging as a class of new technology with both business potential and customer pushback. Additional analysis finds that the study results hold after accounting for nonresponse bias (which arises when respondents differ in meaningful ways from nonrespondents) and hang-ups, and that chatbot disclosure substantially decreases call length. The negative disclosure effect appears to be driven by a subjective human bias against machines, despite the demonstrated competence of the chatbots. Fortunately, this negative impact can be mitigated by a late-disclosure timing strategy and by customers’ prior experience with AI. These findings offer practical implications for strategic chatbot applications, customer targeting efforts, and advertising campaigns in the setting of conversational commerce.

What consumers expect

Many business ethicists believe that customers have the right to know whether their communications are being handled by a bot or a human. Further, regulators are increasingly concerned about customer privacy protection and have encouraged companies to be transparent about their use of chatbots in customer communications.

Another study, the “2016 Aspect Consumer Experience Index,” surveyed 1,000 American consumers and was designed to investigate their preferences, behaviors, and attitudes regarding customer service touchpoints, self-service, customized/personalized service, and chatbots.

The results of the study found that over 70 percent of consumers surveyed indicated they wanted the ability to resolve a majority of customer service issues on their own. Nearly 50 percent indicated they have positive expectations of customer service interactions via text, chat, or messaging. Further, a majority indicated that they already interact with an intelligent assistant or chatbot at least once per month. What consumers definitely do not favor is when a chatbot pretends to be human.

The implication seems to be that, regardless of whether they’re engaged with a bot or a human, consumers don’t want to feel deceived in the process, and the distinction should be made clear.

Mitigating the effects of chatbot disclosure

The INFORMS study offers strategies for mitigating the negative effects of chatbot disclosure. First, the research shows that disclosing the bot’s identity after the conversation helps soften the negative impact. This makes sense: the customer can form a good impression during the early moments of the interaction with the chatbot, which helps reduce distrust once the bot is revealed.

In addition, as customers gain experience interacting with AI applications in general, such as smartphone digital agents (e.g., Google Allo, ELSA Speak, Cortana, FaceApp, Edison Assistant), the negative effects of chatbot disclosure are softened. Correspondingly, greater prior experience with AI leads to more customer purchases.