
What’s Next for Conversational AI?

Conversational AI, as it stands today, is already helping brands and consumers around the world interact in a smarter, more efficient way. Unfortunately, one of the major downsides to conversational AI is its inconsistency across platforms. So, what's the solution?

By Edward Pollitt
September 18, 2020

For conversational artificial intelligence (AI) to continue improving, systems must combine unsupervised learning with self-learning.

That’s according to Prem Natarajan, Amazon head of product and VP of Alexa AI, who recently participated in a panel session on the future of conversational AI.

Conversational AI, as it stands today, is already helping brands and consumers around the world interact in a smarter, more efficient way.

Unfortunately, one of the major downsides to conversational AI is its inconsistency across platforms.

“Our aspiration is that all of our systems should perform equally well for everyone. Sometimes that aspiration is talked about in the context of fairness or ethical use, etc. But really, in the end, we want it to work equally well for everyone,” Natarajan said.

So, what’s the solution?

Creating conversational AI solutions that can learn on the go, detect patterns in consumer behavior, and not make the same mistakes twice will take the industry to the next level, said Natarajan.

This is something that is widely known as self-learning—the process in which automated systems recognize patterns, learn from the data, and become more intelligent over time.
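
As a rough sketch of that idea, and not a description of any vendor's production system, the Python snippet below watches for one kind of implicit feedback, a user rephrasing a failed request, and rewrites its own utterance-to-action mapping once it has seen the same correction often enough. The class name, threshold, and example utterances are all hypothetical.

```python
from collections import defaultdict

# Hypothetical sketch of a self-learning loop: the assistant watches for
# implicit negative feedback (the user rephrasing after a failure) and,
# after enough evidence, rewrites its own utterance -> action mapping.
# The threshold and all names below are illustrative, not a real system.

REPHRASE_THRESHOLD = 3  # corrections needed before the mapping is updated


class SelfLearningAssistant:
    def __init__(self, initial_mapping):
        self.mapping = dict(initial_mapping)          # utterance -> action
        self.correction_counts = defaultdict(int)     # (utterance, action) -> count

    def handle(self, utterance):
        """Return the action currently associated with an utterance."""
        return self.mapping.get(utterance, "fallback: ask for clarification")

    def observe_rephrase(self, failed_utterance, successful_utterance):
        """Record that the user rephrased after a failure.

        The action that finally satisfied the user becomes a candidate
        replacement for the failed utterance's mapping.
        """
        better_action = self.handle(successful_utterance)
        key = (failed_utterance, better_action)
        self.correction_counts[key] += 1
        if self.correction_counts[key] >= REPHRASE_THRESHOLD:
            # Don't make the same mistake again: adopt the corrected mapping.
            self.mapping[failed_utterance] = better_action


assistant = SelfLearningAssistant({"play my chill playlist": "play_playlist:chill"})
for _ in range(REPHRASE_THRESHOLD):
    assistant.observe_rephrase("put on something relaxing", "play my chill playlist")
print(assistant.handle("put on something relaxing"))  # -> play_playlist:chill
```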

We have already seen drastic improvements when it comes to these systems implementing self-learning techniques.

In 2018, Amazon introduced self-learning techniques for Alexa, in place of purely supervised training, in a bid to improve the user experience.

In the two years since these changes were introduced, Alexa has continued to learn from her users and improve on a daily basis—offering conversations that mimic human-to-human interactions more than ever before.

Moving towards unsupervised learning

However, experts—including Natarajan—are now suggesting that self-learning should act as a complementary force with unsupervised learning.

While self-learning describes an algorithm's ability to learn representations of the data on its own, unsupervised learning describes learning from data that has not been labeled.

It is important that these systems—particularly those dealing with conversational AI—have the ability to learn without supervision, as conversational datasets are often muddled and “unclean.”
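
As a minimal, self-contained illustration of learning from unlabeled conversational data (not a depiction of how any commercial assistant works), the snippet below clusters a handful of raw utterances with scikit-learn. No labels are supplied, yet related requests end up grouped together; the utterances and cluster count are made up for the example.

```python
# Minimal sketch of unsupervised learning on unlabeled utterances:
# TF-IDF features plus k-means clustering group similar requests
# together without any human-provided labels. Illustrative only.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

utterances = [
    "play some jazz music",
    "put on relaxing jazz",
    "what's the weather tomorrow",
    "will it rain tomorrow",
    "set a timer for ten minutes",
    "start a 10 minute timer",
]

vectors = TfidfVectorizer().fit_transform(utterances)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for utterance, label in zip(utterances, labels):
    print(label, utterance)
```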

Google has recently deployed unsupervised learning to reduce gender bias in Google Translate in a scalable fashion.

In an example of algorithmic inequality, Google Translate had previously defaulted to a single gendered translation when the source sentence could refer to either gender.

Using an unsupervised language model, as opposed to a syntactic parsing model that uses labeled datasets in each language, Google Translate can now produce masculine or feminine translations 99 percent of the time.
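
Google has described the approach as generating a default translation and then rewriting it into the other gendered form, with a verification step to ensure nothing else changed. The toy sketch below illustrates only that rewrite-and-verify pattern on an English sentence with a hand-written word list; real systems learn the rewriter from data, and nothing here reflects Google's actual model.

```python
# Toy illustration of a rewrite-and-verify step for gendered translations.
# A tiny hand-written table stands in for the learned rewriter; production
# systems learn these rewrites from data rather than from a dictionary.
GENDER_SWAPS = {"he": "she", "him": "her", "his": "her", "himself": "herself"}


def rewrite_feminine(sentence):
    """Swap masculine pronouns for feminine ones, word by word."""
    return " ".join(GENDER_SWAPS.get(word, word) for word in sentence.split())


def verify(original, rewritten):
    """Accept the rewrite only if every change is an expected gender swap."""
    pairs = zip(original.split(), rewritten.split())
    return all(a == b or GENDER_SWAPS.get(a) == b for a, b in pairs)


default_translation = "he is a doctor and he loves his job"
feminine = rewrite_feminine(default_translation)
assert verify(default_translation, feminine)
print(feminine)  # she is a doctor and she loves her job
```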

Alexa, too, has been transitioning towards unsupervised learning.

Amazon’s researchers last year detailed a technique that used 250 million unannotated customer interactions to reduce speech recognition errors by 8 percent.
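
Techniques of this kind are commonly built on self-training, also known as pseudo-labeling; whether or not that matches Amazon's exact recipe, the domain-agnostic sketch below shows the underlying idea on a synthetic classification task: a teacher trained on a small labeled set labels a large unlabeled pool, and a student is retrained on both. The dataset, models, and confidence threshold are all placeholders.

```python
# Loose sketch of self-training (pseudo-labeling) on a toy task. A teacher
# model trained on a small labeled set labels a large unlabeled pool, and a
# student is retrained on both. Illustrative only; not Amazon's recipe.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_labeled, X_unlabeled, y_labeled, _ = train_test_split(
    X, y, train_size=0.02, random_state=0  # pretend only 2% is annotated
)

teacher = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)

# Keep only the pseudo-labels the teacher is confident about.
probs = teacher.predict_proba(X_unlabeled).max(axis=1)
confident = probs > 0.9
pseudo_labels = teacher.predict(X_unlabeled[confident])

X_student = np.vstack([X_labeled, X_unlabeled[confident]])
y_student = np.concatenate([y_labeled, pseudo_labels])
student = LogisticRegression(max_iter=1000).fit(X_student, y_student)
print(f"student trained on {len(y_student)} examples "
      f"({len(y_labeled)} human-labeled, {confident.sum()} pseudo-labeled)")
```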

As this technology continues to become more scalable, AI systems will be given the ability to effectively “learn from their mistakes,” which can help address current issues of inconsistency.

“If you fail once, that's okay—but don't make the same failures multiple times. We're trying to build systems that learn from their past failures,” said Natarajan.

Welcome to the screenless revolution

According to MarketWatch, the global screenless display market was valued last year at $932.57 million.

This is expected to jump to $5.76 billion by 2025, marking a compound annual growth rate of 35.43 percent over the period.
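
Those two figures are mutually consistent: compounding the 2019 valuation at 35.43 percent a year over the six years to 2025 lands very close to the projected total, as the quick check below shows.

```python
# Quick sanity check of the reported CAGR: $932.57 million compounded at
# 35.43 percent per year over the six years from 2019 to 2025.
value_2019_millions = 932.57
cagr = 0.3543
years = 2025 - 2019

value_2025_millions = value_2019_millions * (1 + cagr) ** years
print(f"${value_2025_millions / 1000:.2f} billion")  # roughly $5.75 billion
```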

Screenless displays are defined as an interactive projection technology that transmits data without the use of a screen.

Systems like Alexa and Google Assistant have given rise to screenless displays in recent years, but according to Barak Turovsky, Google AI's director of product for the natural language understanding (NLU) team, we are currently at the beginning of a seismic shift.

“I believe we are on a very early beginning of a fundamental change of how people will interact with computers, mobile phones, smart devices, IoT devices, etc.,” he said during the panel session.

“Voice will become the mainstream of interaction as keyboards and mouses.”

The rise of screenless technologies—in particular, voice—will fundamentally change how humans interact with machines, Turovsky said, with natural language set to play a greater role.

And this brings with it challenges for the conversational AI systems that will facilitate this revolution.

“The challenge, and where natural language understanding will become much more important, is that human interaction with voice is very different,” Turovsky said.

“It's very open-ended and, in many cases, ambiguous. Machines need to learn that even if you make mistakes, ideally don't make the same mistake again.”

As natural language processing (NLP) mostly deals with messy, ambiguous data, there is again an opportunity for unsupervised learning to take these technologies to the next level.

The COVID-19 effect

During the COVID-19 pandemic, increased demand has already accelerated the development of conversational AI technologies.

Speaking about the impact of COVID-19 on conversational AI, Turovsky predicted opportunities to emerge as a result of the pandemic.

“I think there will be huge opportunities, huge shifts, in areas of conversational AI, with things like telemedicine, customer service, and customer chatbots,” he said.

Already we have seen the emergence of GPT-3, a new language model from OpenAI whose output is realistic enough to complete a dialogue between two people or even write a book.

With some of the largest companies in the world now investing in this space, we can expect to see this technology continue to flourish in the coming months and years.