Chatbot Development

Understanding the Future of Chatbot Development through Cognitive Computing

Cognitive computing is emerging as a dominant technological paradigm, and chatbots are its main staging ground. The future of chatbot development involves a combination of machine learning, artificial intelligence, and big data analysis—all powered by a conversational, engaging front-end chatbot experience.

April 23, 2019

Remember when you had to decide what to type into the Google search bar? Nowadays, autocomplete technology helps us choose the most advantageous search terms. Google “thinks” us through our searches, so we don’t have to do it ourselves. The search technology functions cognitively, not just reactively.

Autocomplete offers a modest improvement in search convenience. Its utility, however, represents a more significant trend—a shift away from technologies based on programmed responses to user inputs and toward cognitive computing, a model in which technologies identify solutions to users’ problems on their own.

Cognitive computing is rapidly becoming the dominant computing paradigm, and chatbots are a significant staging ground for the advancement of cognitive computing. As chatbots proliferate across industries, the most useful ones will be those that can think through unique problems and can pluck viable solutions from the ether of big data.

Let’s take a look at how cognitive computing impacts chatbots and what it all means for the future of bot development.

What is cognitive computing?

Google autocomplete may predict a user’s search query, but cognitive computing, as a concept, involves more than just making predictions. It’s about building technologies that process large quantities of data to solve complex problems. Cognitive technologies rely on algorithms that unearth hard-to-find yet valuable relationships among varied data points. There’s also a “learning by doing” aspect to cognitive computing—that is, the technology processes its own successes and failures so that it gets better over time.

Oh, and it has to communicate with humans in an engaging manner.

If you think that sounds like a mashup of machine learning, artificial intelligence (AI), and big data, you’re right. Cognitive computing is what happens when all (or some) of those technologies work hand in hand to reason through problems and to communicate solutions to humans.

For example, using cognitive computing, the music streaming service Pandora can help you DJ your next ’80s dance party. By processing some 450 sonic attributes of different songs, using what the company calls the Music Genome Project, Pandora predicts which music you’ll enjoy. In other words, the application learns a whole lot about music—and about your listening preferences, based on the songs you tell it to play—to build playlists for you.
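The mechanics behind this kind of attribute-based prediction can be illustrated with a toy example. Below is a minimal sketch of content-based recommendation, assuming each song is reduced to a small vector of invented sonic attributes (Pandora’s actual genome reportedly spans some 450 of them, and its algorithms are proprietary):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical sonic attributes: [tempo, synth intensity, vocal brightness]
catalog = {
    "Take On Me": [0.9, 0.8, 0.7],
    "Blue Monday": [0.8, 0.9, 0.3],
    "Moonlight Sonata": [0.2, 0.0, 0.1],
}

def recommend(liked_vector, catalog, top_n=2):
    """Rank catalog songs by similarity to a song the user already likes."""
    ranked = sorted(
        catalog.items(),
        key=lambda item: cosine_similarity(liked_vector, item[1]),
        reverse=True,
    )
    return [title for title, _ in ranked[:top_n]]

print(recommend([0.85, 0.8, 0.6], catalog))
```

Given an upbeat, synth-heavy seed vector, the two ’80s tracks rank ahead of the piano sonata—the same learn-the-attributes, match-the-listener logic, just at toy scale.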

And it gets more complex. Coseer is a company that builds AI technologies capable of processing enormous quantities of data for businesses. Among other things, the company’s tools have helped small business owners set up 401(k) plans and have helped healthcare organizations process millions of specifications for different medical products. These tasks aren’t just time-consuming to perform manually—they’re also subject to human limitations.

For example, Coseer’s SKU-processing technology can quickly identify when just two SKUs out of millions are for identical or comparable products. That way, users can order the one with the better price. It would take human analysts a long time to crunch that much data, negating any cost savings. Multiplied across dozens (or even hundreds or thousands) of orders, Coseer’s tool can help health providers preserve significant amounts of capital.
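To get a rough sense of what that kind of SKU matching involves, the sketch below normalizes free-text product descriptions, flags pairs that look comparable, and points to the cheaper one. The SKU records, descriptions, and similarity threshold are all invented for illustration; a production system like Coseer’s would use far more sophisticated matching at vastly larger scale:

```python
from difflib import SequenceMatcher

# Hypothetical SKU records: (sku_id, description, unit_price)
skus = [
    ("SKU-1001", "Nitrile Exam Gloves, Medium, Box of 100", 9.99),
    ("SKU-2047", "Exam Gloves Nitrile Med 100/Box", 8.49),
    ("SKU-3310", "Surgical Face Mask, Level 2, 50-Pack", 12.50),
]

def normalize(text):
    """Lowercase and sort tokens so word order doesn't hide a match."""
    return " ".join(sorted(text.lower().replace(",", "").split()))

def comparable_pairs(skus, threshold=0.5):
    """Return pairs of SKUs with similar normalized descriptions,
    along with the cheaper SKU of each pair."""
    pairs = []
    for i in range(len(skus)):
        for j in range(i + 1, len(skus)):
            ratio = SequenceMatcher(
                None, normalize(skus[i][1]), normalize(skus[j][1])
            ).ratio()
            if ratio >= threshold:
                cheaper = min(skus[i], skus[j], key=lambda s: s[2])
                pairs.append((skus[i][0], skus[j][0], cheaper[0]))
    return pairs

print(comparable_pairs(skus))
```

Here the two glove listings are flagged as comparable despite different wording, and the cheaper SKU is surfaced—the decision a buyer would otherwise have to make by eyeballing millions of rows.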

In the healthcare realm, cognitive computing is helping reduce diagnosis time for rare diseases. Drawing on gargantuan masses of health data, the IBM Watson platform can extract valuable information that doctors or other human analysts often struggle to identify. The data set includes clinical trial data, pharmaceutical and genomic data, and over a decade of medical literature on some 9,000 rare diseases.

Is it possible for humans to process that much information? Maybe. Is it easy or fast? Definitely not.

How are chatbots used in cognitive computing?

Now that we’ve explored the basics of cognitive computing, what about chatbots? How can—and how should—chatbots “think” their way through thorny problems to arrive at a solution? To answer those questions, let’s have a look at three different industries in which chatbots are behaving cognitively to help users achieve their objectives.

Retail and e-commerce

Consider Sephora’s chatbot, which uses a feature called Color Match to help users find beauty products. To use Color Match, you snap a photo of anything that contains a color you like. The bot analyzes the image and then suggests lipsticks, eyeshadows, and other makeup products in a similar color. If, for example, you send the bot a picture of a purple flower, it shows you makeup in a similar shade of purple. You can do the same thing with images of people from ads or from social media posts.
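At its core, a feature like Color Match comes down to finding the products whose shades sit closest to a color sampled from the user’s photo. The sketch below illustrates that idea with a hypothetical product catalog and simple Euclidean distance in RGB space; Sephora’s actual pipeline is not public, so treat every name and value here as an assumption:

```python
import math

# Hypothetical product catalog mapping names to RGB shades
products = {
    "Violet Dream Lipstick": (138, 43, 226),
    "Coral Blush": (255, 127, 80),
    "Midnight Liner": (25, 25, 60),
}

def color_distance(c1, c2):
    """Euclidean distance in RGB space (a rough stand-in for
    perceptual color difference)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def match_products(photo_color, products, top_n=1):
    """Return the products whose shade is closest to a color
    sampled from the user's photo."""
    ranked = sorted(
        products, key=lambda name: color_distance(photo_color, products[name])
    )
    return ranked[:top_n]

# A purple sampled from a photo of a flower
print(match_products((130, 50, 200), products))
```

Feed it a purple, get back the purple lipstick—the same photo-in, product-out loop the bot performs, minus the computer-vision step that extracts the dominant color in the first place.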

The lesson from Sephora? A chatbot can function like a retail salesperson who processes customer inquiries and then helps them select the right products. The bot draws on a mélange of aesthetic data to communicate its suggestions to users. It works like a salesperson who lives in your phone—product savvy and all.


Insurance

Retail isn’t the only industry in which chatbots are solving problems without direct human intervention. Insurance company Singapore Life offers a chatbot that applies predictive modeling to help people select the right life insurance policy. Users interact with the bot (aka SingLife Chatbot), responding to a variety of questions relating to their health and activities. Then SingLife Chatbot shows them what kind of life insurance policy might best meet their needs.

The interesting thing about Singapore Life’s bot is what happens on the backend. The algorithms driving the bot’s conversations perform calculations based on a bank of insurance data. It works in a manner that’s almost actuarial.
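To make that backend idea concrete, here is a deliberately simplified, rule-based sketch of how questionnaire answers might map to a policy suggestion. The scoring rule and policy tiers are invented for illustration and bear no relation to Singapore Life’s actual models, which would be trained on real actuarial data:

```python
# Hypothetical questionnaire answers and a toy scoring rule
def suggest_policy(age, smoker, dependents):
    """Toy actuarial-style rule: score risk and coverage need,
    then map the score to a policy tier."""
    risk = age / 10 + (5 if smoker else 0)
    need = dependents * 3
    score = need - risk
    if score > 0:
        return "term life, high coverage"
    if score > -5:
        return "term life, standard coverage"
    return "whole life, basic coverage"

print(suggest_policy(age=35, smoker=False, dependents=2))
```

A real predictive model would replace the hand-written rule with one learned from historical policy data, but the shape is the same: answers in, calculation in the middle, recommendation out.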

As SingLife Chatbot demonstrates, chatbots can serve as a front-end complement to the backend amalgam of machine learning and AI.


Healthcare

The same holds true in the healthcare industry. Take Sensely, a chatbot that draws on Mayo Clinic health data to direct patients to the right health services. Sensely uses natural language processing (NLP) to triage a broad array of symptoms. Ultimately, patients are provided with self-care options or are matched with care providers within a given insurance network.
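A heavily simplified illustration of the triage idea: the sketch below routes a message to an urgency tier by keyword matching. The symptom lists and tiers are invented, and Sensely’s real system relies on trained NLP models and clinical protocols rather than string lookups:

```python
# Hypothetical triage rules: map symptom keywords to an urgency tier
EMERGENCY = {"chest pain", "difficulty breathing"}
SEE_DOCTOR = {"fever", "persistent cough", "rash"}

def triage(message):
    """Very rough keyword triage; a real system would parse free text
    with NLP and apply clinically validated decision rules."""
    text = message.lower()
    if any(symptom in text for symptom in EMERGENCY):
        return "emergency: call emergency services"
    if any(symptom in text for symptom in SEE_DOCTOR):
        return "match with an in-network care provider"
    return "self-care guidance"

print(triage("I have had a fever and a persistent cough for three days"))
```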

Customer service, product selection, financial security, health remedies—the use cases for chatbots in cognitive computing are numerous. Unsurprisingly, many organizations are realizing the advantages of the “thinking” chatbot. So many, it turns out, that a chatbot’s ability to analyze data, ponder the implications, and deliver recommendations is quickly becoming the standard by which bots are judged.

What does cognitive computing mean for the future of chatbots?

Predicting the future is always a gamble. Come to think of it, maybe that’s why we develop bots—to fully exploit data from the past to predict the future. And to make it better, of course.

To better understand the future of bots, look at what the major players in the bot field are up to.


Amazon

Amazon is laser-focused on building a world in which chatbots are conversational, intelligent, and powered by sophisticated algorithms. Amazon Lex advances that vision by “putting the power of Alexa within reach of all developers.” Using Amazon Lex, developers can build bots that evaluate data and provide users with experiences that border on humanlike.


Apple and IBM

Remember how IBM Watson is helping health providers diagnose rare diseases? Well, Apple is apparently pretty impressed with Watson. So impressed that Apple recently announced a strategic partnership with IBM to bring Watson into iOS for enterprise applications.

Although it’s too early to know how Watson might meld itself into the DNA of iOS, consider the fact that Apple recently made its signature talking chatbot, Siri, a whole lot smarter. Apple isn’t just investing in chatbots—it’s investing in chatbots as the face of its ventures in cognitive computing.


Google

Through its bot-building platform, Dialogflow, Google helps developers incorporate NLP into a user-friendly chatbot experience. At the company’s 2018 user conference, attendees learned that they’d be able to use Dialogflow “to build AI-powered virtual agents” for contact centers. In other words, the focus of Google’s already cognitive chatbot platform is on creating bots that can analyze data and communicate seamlessly with people.


Microsoft

Microsoft is also going all in on chatbots that meld AI with conversational abilities. According to Microsoft CEO Satya Nadella, bots offer a “conversational canvas” through which humans can accomplish all manner of tasks:

Bots are like new applications, and digital assistants are meta apps or like the new browsers. And intelligence is infused into all of your interactions...We introduced Cortana two years ago and ever since then it’s becoming smarter every day because of its ability to know you, to know about your organization, to know about the world and reason about all of this on a continuous basis.

Cortana, of course, is Microsoft’s voice-powered chatbot. It’s native to Windows 10, but you can also communicate with it on Android devices.

Why are these companies investing in cognitive computing via chatbot? Because users demand conversational interfaces, and they expect their technology to quickly solve complex problems.

In the very near future, cognitive computing won’t just be an emerging computing paradigm. It may very well be the computing paradigm. And chatbots are set to become its standard-bearer.