AI, NLG, and Machine Learning

Using AI to Ensure Chatbots Are Polite

With the coronavirus pandemic shutting stores worldwide, a strong and personalized online presence is vital to creating consumer trust -- especially when it comes to online chat.

By Adi Gaskell
August 6, 2020

The coronavirus pandemic has shut the doors of stores around the world, placing fresh focus on their online presence as the primary means through which customers engage with their brands. Indeed, data from Adobe showed that online spending in May 2020 grew by 77 percent compared to the previous year. It's a level of growth that the company believes would ordinarily have taken six years to achieve, with the surge in trade dwarfing that seen during the holiday season, which saw 13 percent growth in 2019.

“We are seeing signs that online purchasing trends formed during the pandemic may see permanent adoption,” Taylor Schreiner, director, Adobe Digital Insights, said in a statement. “While buy online, pick up in-store (BOPIS) was a niche delivery option pre-pandemic, it is fast becoming the delivery method of choice as consumers become more familiar with the ease, convenience, and experience.”

The importance of customer service

Much of the attention during this period has been on business-to-consumer (B2C) commerce, with daily stories recounting the challenges retailers are experiencing. A similar transformation has taken place in the business-to-business (B2B) sector, however, and research from the University of Eastern Finland highlights just how important good customer service is for that sector during this time.

The research found that a strong and personalized presence online was vital to creating the kind of trust consumers want from the companies they deal with. This was especially true in the online chat facilities used by companies to communicate with active and potential customers.

“A lack of social presence can be one reason why a company’s online sales aren’t growing. Many companies want to invest in elements that ooze social presence, since anonymous, asynchronous, and text-based technology-mediated communication can reduce the creation of trust between the parties,” the researchers say.

Of course, delivering this level of customer service during COVID-19 has been complicated by lockdown measures that have shut many call centers. While some staff have been able to continue their work remotely, many companies have turned to artificial intelligence–based (AI-based) chatbots to provide scalable support to customers.

Agencies such as the Czech Ministry of Health and the City of Austin, Texas, have deployed virtual agents to field calls from stakeholders. As the Finnish study highlights, however, in an age where customers want personable communication with organizations, the delivery of polite and sociable support to end users can be a challenge.

This is a long-standing issue, with Microsoft's Tay chatbot perhaps the most infamous example of how chatbot technology can go wrong. What began as an experiment in conversational understanding quickly descended into farce after Twitter users figured out that they could manipulate Tay's training so that it would send highly offensive messages.

Tay differed from previous chatbot technologies in that it used a mixture of social networks, natural language processing, and machine learning to craft its responses. This was an improvement on earlier chatbots, such as ELIZA, which relied on preprogrammed and fairly narrow scripts. Tay's descent down the rabbit hole of vulgarity and offense highlighted the risks of such an approach, however.

Polite conversation

The example of Tay highlighted that effective chatbots require a deep social as well as technical understanding, because language is inherently value laden, with context and nuance baked in. Politeness is a good example, and new research from Cornell University highlights the latest developments in making chatbot messages a little more polite.

The researchers were able to take brusque, to-the-point messages and render them more polite and, hopefully, more acceptable to the recipient. To do so, they relied upon an unlikely source: as part of the legal proceedings against the former energy company Enron, the firm's entire email database was made a matter of public record. This gave the researchers around 1.4 million sentences to mine, each of which they were able to tag according to its relative level of politeness.
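To make the tagging step concrete, here is a minimal, hypothetical sketch of how sentences might be scored for relative politeness using simple lexical markers. This is an illustration only, not the Cornell researchers' actual method; the marker lists and scoring rule are assumptions for the example.

```python
# Hypothetical lexicon-based politeness tagger (illustrative, not the
# Cornell system): score each sentence by counting simple markers.
POLITE_MARKERS = {"please", "thanks", "thank", "sorry", "would", "could", "kindly"}
BRUSQUE_MARKERS = {"now", "must", "immediately", "asap"}

def politeness_score(sentence: str) -> int:
    """Crude score: +1 per polite marker, -1 per brusque marker."""
    words = {w.strip(".,!?").lower() for w in sentence.split()}
    return (sum(w in POLITE_MARKERS for w in words)
            - sum(w in BRUSQUE_MARKERS for w in words))

def tag(sentence: str) -> str:
    """Label a sentence by its relative level of politeness."""
    score = politeness_score(sentence)
    return "polite" if score > 0 else "impolite" if score < 0 else "neutral"
```

For example, `tag("Send me the report now.")` yields `"impolite"`, while `tag("Could you please send the report?")` yields `"polite"`. A real system would of course learn such distinctions from labeled data rather than a hand-built word list.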

The team wanted to fully understand the nuances of politeness, so their system could go beyond simply adding in “please” and “thank you” and could know when to soften the tone of a sentence so that it’s less direct or abrupt. They also wanted to understand the cultural issues surrounding politeness in different countries.

As with most AI-based projects, there was a clear progression in capabilities over time. What began as the ability to clean up swear words or add words such as “please” and “sorry” to sentences progressed to a more realistic and subtle appreciation of politeness. For example, the software was able to replace first-person singular pronouns with first-person plural pronouns.
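The rule-based end of this progression can be sketched in a few lines. The rules below (pronoun swaps plus a naive “please” insertion) are hypothetical examples of the kinds of transformations described, not the researchers' actual model.

```python
# Illustrative rule-based softening (hypothetical rules, not the paper's
# model): swap first-person singular pronouns for plural ones, and
# naively prepend "please" if the sentence lacks one.
import re

PRONOUN_SWAPS = {"i": "we", "my": "our", "me": "us", "mine": "ours"}

def soften(sentence: str) -> str:
    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = PRONOUN_SWAPS[word.lower()]
        # Preserve capitalization of the original pronoun.
        return repl.capitalize() if word[0].isupper() else repl

    pattern = r"\b(?:" + "|".join(PRONOUN_SWAPS) + r")\b"
    out = re.sub(pattern, swap, sentence, flags=re.IGNORECASE)
    if out and "please" not in out.lower():
        out = "Please " + out[0].lower() + out[1:]
    return out
```

For instance, `soften("Send me the report.")` returns `"Please send us the report."`. The learned models in the research go well beyond this, softening tone in ways simple substitution rules cannot.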

It remains a work in progress, and the team has released the fully labeled “Politeness Transfer” dataset used in the project to help other teams build upon their work and make politeness an integral part of chatbot technology in the future. If chatbot technology is to retain the prominence it has enjoyed as a result of the pandemic, developments like these will be crucial to ensuring that conversations live up to customers' expectations.