Natural Language Processing: Common NLP Challenges & Examples
With Natural Language Processing (NLP), chatbots can follow most conversations, but humans and language are complex and variable. Three of the most common NLP challenges are natural language understanding, information extraction, and natural language generation. Learn more about NLP, and why it matters for bots.
September 15, 2020
The proliferation of chatbots, or bots, and voice-activated technologies has reawakened excitement and curiosity around natural language processing (NLP). Previously, chatbots often functioned as scripted, linear conversations, in which the chatbot’s dialogue was already decided. With NLP, however, bots can adapt to understand many different conversations. Still, it isn’t frustration-free. Humans and language are complex and variable, and there are endless ways to express meaning.
What is natural language processing?
Natural language processing refers to a machine’s ability to understand, analyze, and respond to human speech. There are thousands of human languages, but computers use artificial languages to communicate with other computers. To do so, they rely on what was programmed by the developers. In NLP, human and computer languages converge, and the goal is for computers to respond in a manner that mimics human language.
NLP attempts to convert spoken language from a human into an artificial language for a computer. To understand a human language, computers must first translate the speech into text. Most NLP systems then break the text down into parts of speech, tagging the nouns and verbs and determining tense. By relying on a database of vocabulary and grammar rules, the computer applies various algorithms to figure out the meaning behind the speech. To succeed with this translation, the computer must have ample source material from which to pull, which can be anything from written text to speech recordings and audio clips. The broader the scope of the source material, the easier it is for the computer to decipher the speech.
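The tagging step described above can be sketched in a few lines. This is only an illustration: real NLP systems use trained statistical or neural taggers over large corpora, whereas this toy version looks each word up in a tiny hand-built lexicon (the lexicon and tag names are assumptions, not a real system's inventory).

```python
# Toy parts-of-speech tagging: look each token up in a small lexicon.
# The lexicon below is a made-up example for illustration only.
LEXICON = {
    "find": "VERB",
    "movie": "NOUN",
    "theaters": "NOUN",
    "near": "PREP",
    "me": "PRON",
}

def tag(sentence):
    """Lowercase the sentence, split it into tokens, and tag each one."""
    return [(word, LEXICON.get(word, "UNK")) for word in sentence.lower().split()]

print(tag("Find movie theaters near me"))
# → [('find', 'VERB'), ('movie', 'NOUN'), ('theaters', 'NOUN'),
#    ('near', 'PREP'), ('me', 'PRON')]
```

The gap between this sketch and a real tagger is exactly the challenge the article describes: any word outside the lexicon, or any word whose part of speech depends on context, comes back as unknown.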
Some of the most well-known NLP examples are Amazon’s Alexa and Apple’s Siri. They listen to commands like, “Find movie theaters near me,” translate the speech into text, scour the internet for the relevant data, and present the user with the information. However, users quickly find shortcomings even in these advanced NLP examples. If you speak too fast, aren’t articulate enough, or have a particularly strong accent, they have a hard time understanding your commands.
Humans have developed forms of shorthand in spoken language (including tonal differences and slang) that computers can’t understand. The small differences in speech patterns make programming NLP difficult. Beyond issues of colloquialisms (or even accents and dialects), there are significant hurdles for developers to clear in natural language processing.
Natural language understanding
Natural language understanding (NLU) is the process of deciphering the meaning and intent behind a phrase or command. NLU is less of an issue with text-only bots, in which variations or exceptions for common issues, like misspellings or misuse of words, are built into the computer’s code. For example, think about how Google knows you’re searching for “baseball gloves” when you accidentally type “baeball gloves.”
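The "baeball gloves" correction above can be approximated with fuzzy string matching: compare each token against a known vocabulary and substitute the closest match. This is a minimal sketch using Python's standard-library `difflib`, not how Google's spelling correction actually works; the vocabulary list is an assumption for illustration.

```python
import difflib

# Assumed vocabulary of known search terms (illustration only).
VOCAB = ["baseball", "basketball", "gloves", "bats", "helmets"]

def correct(query):
    """Replace each word with its closest vocabulary match, if one is close enough."""
    fixed = []
    for word in query.split():
        matches = difflib.get_close_matches(word, VOCAB, n=1, cutoff=0.8)
        fixed.append(matches[0] if matches else word)
    return " ".join(fixed)

print(correct("baeball gloves"))  # → "baseball gloves"
```

The `cutoff` parameter controls how aggressive the correction is: set too low, it rewrites words the user actually meant; set too high, it misses genuine typos.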
The main issues that NLU has are with bots that rely on spoken language. When humans interact with each other, problems like mispronunciations, stuttering, or colloquialisms are easily understood or glossed over. A computer, however, can't interpret these cues as readily as a human can, and has a harder time separating relevant information from inconsequential words.
Context and intent are also potential NLP challenges. Much of day-to-day speech relies heavily on contextual words and phrases, which are much harder for a machine to understand. For example, telling someone that you’re engaged can mean a few different things—you’re going to be married, you’re busy with a project or task, or you’re involved in a lengthy process. Additionally, many words have different meanings, depending on their use—a subtlety that computers are still working to understand.
Speech recognition is another part of the equation. Talking too quickly or too slowly, mumbling, stuttering, starting and stopping, adding ums and ahs—these are all speech patterns that humans can easily filter through, but they leave computers scrambling to catch up. Even though it may be natural for humans to use filler words in sentences, it’s difficult for the computer to decipher their meaning.
Grasping the point of any message can be tricky. The way that something is said, the words used, and the expressions that accompany a spoken message are all factors. For humans, these types of nuances are learned over time, through exposure to different groups. For computers, it's much more difficult: they cannot understand the subtleties that humans use in natural language, like sarcasm or condescension. Any deviation from the computer's prescribed way of doing things can result in error messages, a wrong response, or even inaction.
For a machine, drilling down to the root of user intent requires much practice. Plus, the computer is only able to draw on the references and knowledge coded into it by the developers. Additionally, there is the task of getting a computer to focus on the right information and to ignore extraneous terminology. Humans tend to complicate phrases and requests by speaking in a way that NLP machines can't understand.
Natural language generation
Natural language generation (NLG) allows computers to respond to humans with language, as opposed to sets of data. It’s a way for computers to offer human-friendly conversation. After all, the ultimate goal of many bots and AI programs is to progress to a place where humans and machines can communicate without problems.
Through conditional logic, the computer can effectively determine how to communicate with a human narratively. This practice ultimately informs the NLG algorithms and helps machines become more human-like in their responses.
Many of the more basic bots have responses hardcoded into their programs and use decision trees to generate their responses; many follow if-then logic scripts. Through applied machine learning, bots may one day be able to read through text to learn and understand abstract concepts. The reason that NLG is so tricky to facilitate is that the computer needs some level of human supervision to course correct if it incorrectly interprets an interaction. The machine can only learn from the human if the human spends time affirming or denying the machine's responses.
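A hardcoded, if-then style bot of the kind described above can be sketched in a few lines. The intents and canned responses here are assumptions for illustration; real scripted bots encode much larger decision trees, but the branching structure is the same.

```python
# Canned responses for a scripted bot (made-up examples for illustration).
RESPONSES = {
    "hours": "We're open 9am-5pm, Monday through Friday.",
    "returns": "You can return any item within 30 days.",
}

def reply(message):
    """Route a message to a canned response via simple if-then keyword checks."""
    text = message.lower()
    if "hour" in text or "open" in text:
        return RESPONSES["hours"]
    if "return" in text or "refund" in text:
        return RESPONSES["returns"]
    # Anything outside the script falls through to a generic fallback.
    return "Sorry, I didn't understand. Can you rephrase?"

print(reply("What are your hours?"))
# → "We're open 9am-5pm, Monday through Friday."
```

The fallback branch is where such bots break down: any phrasing the developers didn't anticipate, no matter how reasonable, gets the generic response.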
Why NLP matters
Improvements in natural language processing will change the way you can build bots. They could also change how you process data. There are mountains of raw data waiting to be sifted through, including data collected from retail websites that show how users interact with different layouts and designs. This data can provide valuable insights, but because of barriers (e.g., accurate extraction of information, succinct generation of answers), the ability to interpret the data in an efficient and cost-effective manner is still a few years out.
After researchers improve upon NLP, bots and the way we interpret data will evolve to the next level. Perhaps the next step after mastering natural language processing will be predictive bot technology, to anticipate what a user wants before they even know it themselves.