AI, NLG, and Machine Learning

How to Gain a Deeper Understanding of Your NLP Engine and How to Improve Model Performance

Great strides have been made in the advancement of natural language processing (NLP) systems, but chatbot trainers still face one fundamental challenge—how to get an NLP model to perform at its best.

By Benoit Alvarez
March 16, 2021

Great strides have been made in the advancement of natural language processing (NLP) systems, but chatbot trainers still face one fundamental challenge: how to get an NLP model to perform at its best. The key is understanding the influence your training data has on your model's performance, and in particular the learning value of each utterance you add within the model setup. In this article, we run through some fundamentals for the chatbot trainer.

Understanding NLP working principles

First things first: NLP doesn’t “read” and “understand” language and conversations in the same way humans have learned to read and understand them. It’s easy for chatbot trainers to fall into the trap of believing that because an utterance makes sense to them, their model will understand it with clarity and will identify the correct intent with confidence.

NLP engines such as Amazon Lex, Dialogflow, and Rasa need a qualitative approach to their training data. You can think of the way they work as transfer learning: a machine learning method in which a model trained on one task is reused as the starting point for learning a related task.
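To make the transfer-learning idea concrete, here is a minimal sketch (not how any particular vendor implements it) that reuses a pre-trained sentence encoder as a fixed feature extractor and trains only a small intent classifier on top of it. The packages, model name, and utterances are illustrative assumptions, not part of any real chatbot.

# A minimal transfer-learning sketch: reuse a pre-trained sentence encoder as a
# fixed feature extractor and train only a small intent classifier on top of it.
# Assumes the sentence-transformers and scikit-learn packages are installed;
# the model name and utterances are illustrative placeholders.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

utterances = [
    ("check my balance", "account_balance"),
    ("how much money do I have", "account_balance"),
    ("I lost my card", "lost_card"),
    ("my credit card is missing", "lost_card"),
]
texts, intents = zip(*utterances)

# The pre-trained encoder carries the "previously learned" knowledge we reuse.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(list(texts))

# Only this lightweight classifier is trained on our own utterances.
classifier = LogisticRegression(max_iter=1000).fit(embeddings, list(intents))

print(classifier.predict(encoder.encode(["where did my card go"])))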

Simply adding more and more training data to the model is not the best way to address weaknesses in chatbot performance. In fact, it is more likely to result in poorer performance: too much diversity can unbalance the model or cause it to overfit, and an intent trained on too many examples often becomes ineffective. Carefully curated training data is one of the key attributes of good performance. More importantly, chatbot trainers need to understand the learning value of each utterance they add to their model. The optimum number of utterances is very difficult to pinpoint, because it depends on a number of factors, such as the other intents, how close their subject matter is, and how many utterances they contain. As general guidance, 15 utterances per intent is a good starting point, and you should start to be cautious once you reach the 50- or 60-utterance mark.
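As a quick sanity check against that guidance, a short script can flag intents that look under- or over-trained. This is only a sketch; the training_data dictionary is a made-up stand-in for whatever export your NLP engine provides.

# Sketch: flag intents whose utterance counts fall outside the rough 15-to-60
# guideline. The training_data dictionary stands in for your engine's export.
training_data = {
    "account_balance": ["check my balance", "how much money do I have"],
    "lost_card": ["I lost my card"] * 70,  # deliberately oversized for the demo
}

MIN_UTTERANCES, MAX_UTTERANCES = 15, 60

for intent, examples in sorted(training_data.items()):
    count = len(examples)
    if count < MIN_UTTERANCES:
        print(f"{intent}: only {count} utterances -- consider adding curated examples")
    elif count > MAX_UTTERANCES:
        print(f"{intent}: {count} utterances -- check for redundancy or an overly broad intent")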

How can you influence NLP performance?

Broadly speaking, there are two categories of NLP engine:

1. The ones with maximum control, where you can tune almost all parameters, control where the data is stored, and so on. These are great, but only hard-core data scientists and development teams will make the most of them. Such engines also require you to manage the tech stack and do the upgrading, scaling, and hosting yourself. Rasa is one example of this category of engine.

2. The ones for minimum investment, provided by the most renowned NLP providers, where you benefit from the latest advancements and improvements in NLP, but your only influence on performance is your training data. This category of NLP engine includes LUIS, Amazon Lex, Watson, and others.

Whichever NLP engine you choose, your training data is key to unlocking performance, so you will inevitably wonder how to maximize its impact. Should you repeat a concept twice? Is five times too many? Would three be the optimum number to gain maximum learning power for your model? How many concepts can you cover in one intent before it is deemed too broad? How should your utterances be structured? Should they be as short as possible, or longer, to cover more meaning? How much variance should you give each utterance? An experienced chatbot trainer will know the answers to all these questions if they have a true understanding of the influence and learning value their training data has on their model, and to gain that understanding, they use techniques to measure performance.

How do you measure the quality of your training data?

Your training data needs to be assessed and analyzed to measure its quality. Techniques such as preparing a separate set of test data (also called blind or holdout data) are effective but time consuming. K-fold cross-validation is less useful while you are actively building your model, because the random way it splits the data means that some of the performance changes you see between runs are due to fold randomization rather than to your edits to the training data. Leave-one-out is another technique I invite you to investigate. Ultimately, you need to find a systematic way to measure your model.
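If you can export your utterances, a k-fold measurement is straightforward to run against a rough local stand-in for the classifier. The sketch below uses a simple TF-IDF pipeline as a proxy, not any vendor's actual model, and the utterances are placeholders; fixing the random seed keeps the folds stable, so score changes between runs reflect your training-data edits rather than fold randomization.

# Sketch: stratified k-fold measurement of intent-classification quality.
# The TF-IDF + logistic regression pipeline is a local stand-in, not a vendor
# model, and the utterances are placeholders for your exported training data.
# Fixing random_state keeps the folds stable between runs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

texts = [
    "check my balance", "how much money do I have", "what's my account balance",
    "show me my balance", "balance on my savings account", "current balance please",
    "I lost my card", "my credit card is missing", "my card has been stolen",
    "I can't find my debit card", "report a lost card", "my card is gone",
]
intents = ["account_balance"] * 6 + ["lost_card"] * 6

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
folds = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
scores = cross_val_score(pipeline, texts, intents, cv=folds, scoring="f1_macro")
print(f"macro F1: {scores.mean():.3f} (+/- {scores.std():.3f})")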

Understanding the ripple effect is very important. The ripple effect is what happens when you modify some training data in an intent X and improve that intent, but the performance of other intents (A, D, F) also changes, sometimes for the better but sometimes not. It occurs because intent-classification models are trained on a relatively small amount of training data per intent, so each piece of training data carries a lot of influence, and a change to one intent can shift the boundaries it shares with others.
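One way to make the ripple effect visible is to score every intent before and after a training-data change and flag the intents whose scores moved even though their own utterances were untouched. A minimal sketch, assuming you already have per-intent F1 scores for both model versions (the intent names and scores below are illustrative placeholders):

# Sketch: surface ripple effects by diffing per-intent F1 scores between two
# model versions. The scores and intent names are illustrative placeholders.
before = {"intent_10": 0.71, "intent_12": 0.90, "intent_15": 0.74,
          "intent_18": 0.52, "intent_21": 0.69}
after = {"intent_10": 0.83, "intent_12": 0.78, "intent_15": 0.81,
         "intent_18": 0.88, "intent_21": 0.80}
edited_intents = {"intent_18"}  # the only intent whose training data we changed

for intent in sorted(before):
    delta = after[intent] - before[intent]
    if intent not in edited_intents and abs(delta) >= 0.05:
        direction = "positive" if delta > 0 else "negative"
        print(f"{intent}: F1 {before[intent]:.2f} -> {after[intent]:.2f} "
              f"({direction} ripple effect)")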

The following diagrams illustrate the ripple effect, and in particular the positive and negative effects some changes can make. In figure 1, intent 18 is struggling to perform well: it is confused with the training data of intents 10, 15, and 21. We can see that its training data (represented by dots) is spread out, indicating that the intent's definition is not well understood. In figure 2, we have reworked the training data and improved intent 18, and its definition is now narrower. By improving intent 18, we removed some of the confusion with intents 10, 15, and 21, so their performance improved even though we didn't change their training data (a positive ripple effect). However, intent 12, which performed well in figure 1, is now confused with intent 18. This is an example of a negative ripple effect.
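If you collect the predictions from a test run (cross-validated or against blind data), a simple tally of confused intent pairs will surface relationships like the one between intent 18 and intents 10, 15, and 21 in figure 1. A minimal sketch, assuming y_true and y_pred hold your own test results (the values shown are placeholders):

# Sketch: tally the most frequently confused intent pairs from test predictions.
# y_true and y_pred are placeholders for your own test results.
from collections import Counter

y_true = ["intent_18", "intent_18", "intent_10", "intent_15", "intent_21", "intent_18"]
y_pred = ["intent_10", "intent_15", "intent_10", "intent_18", "intent_18", "intent_21"]

confusions = Counter(
    (truth, guess) for truth, guess in zip(y_true, y_pred) if truth != guess
)
for (truth, guess), count in confusions.most_common(5):
    print(f"{truth} predicted as {guess}: {count} time(s)")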

This kind of analysis is only possible with systematic testing. Finding a technique that works for you, whether leave-one-out, test data, or a tool such as QBox.ai, will dramatically improve your understanding of your model's performance, help you find weaknesses, analyze the reasons for those weaknesses, and validate your fixes.
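As one example of such systematic testing, leave-one-out can be applied at the utterance level to estimate the learning value of each example: remove one utterance, retrain, and check whether the model still classifies it correctly. The sketch below uses a simple local TF-IDF stand-in rather than a real engine, and the utterances are placeholders; with a hosted engine you would retrain and query through its own tooling instead.

# Sketch: utterance-level leave-one-out on a local stand-in classifier.
# An utterance the model still gets right when it is left out adds little new
# learning value; one it gets wrong carries information the rest lack.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["check my balance", "how much money do I have", "what's my balance",
         "I lost my card", "my credit card is missing", "report a stolen card"]
intents = ["account_balance"] * 3 + ["lost_card"] * 3

for i, (text, intent) in enumerate(zip(texts, intents)):
    train_texts = texts[:i] + texts[i + 1:]
    train_intents = intents[:i] + intents[i + 1:]
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(train_texts, train_intents)
    predicted = model.predict([text])[0]
    status = "correct" if predicted == intent else f"misclassified as {predicted}"
    print(f'"{text}" ({intent}): {status}')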