
Talking to the Refrigerator: Let’s Think About Chatbot Personalities

By Ivo Perich
March 26, 2021

I started creating chatbots for fun in about 2000–2001. One of them was designed only to answer my MSN Messenger when I was away from home. I designed it not to have too much intelligence (it only responded to certain words or patterns it found in the user input) but to fool my friends and have a good laugh. For that reason, I imprinted on it a very dark-humored and sarcastic personality. I still have some of the most notable conversations saved, for example:

-Hi, are you there?
-Hi, Ivo is not here. This is the refrigerator, tell me what you want and I will let him know
-Oh, come on, man, I need to ask you something
-You, you, you, it's always about you.
-Yeah, yeah... Ivo, I wanna know if you’re coming on Friday or not.
-I’m telling you that Ivo is not here, are you dumb or what?
-Who is this?
-This is the refrigerator, who the hell are you?

Soon my friends learned that they were talking to a chatbot, and they started to have fun with it when I wasn’t home. I kept developing it and imprinting more and more sarcastic answers.

My friends ended up loving the chatbot for two reasons: 1) Talking to a machine was something new and awesome at that time; and 2) The personality was twisted, sarcastic, and funny. It was only for fun.

In the early days, chatbot developers learned that personality was a hook that kept users talking to chatbots. Users back then were in awe of talking to a machine; it was like being at a magic show. Being nice and funny was enough to forgive the chatbot's inability to solve a specific problem (at least that was the feeling we had). People asked chatbots things like:

-Are you a machine?
-Are you intelligent?
-Who created you?
-Do you believe in God?

It felt like the magic tricks were as important as the problems they needed to solve.

Nowadays, chatbots are not just for fun. They have to solve the user's real problems. Chatbots like Cleverbot still exist for recreational purposes, but most are conceived to solve problems. Chatbots are being used for more and more things, and when you want to buy something, when you have a problem with your bank, or when you have a health problem, the whole scene changes. It's not a joke anymore.

Let's take an example from real life: a user was asking about a bogus transaction at his bank that apparently made him lose a significant amount of money. It was a complex problem the chatbot was not trained for, and the chatbot's final answer was, "Sorry, I don't understand what you're saying, I'm still learning but with your help, I will improve! :)." The user got really angry.

“I don’t want to help the chatbot to improve, for God’s sake, I want my money back! I don’t care that the chatbot’s name is Jane, I don’t want emojis and smiley faces, I want my money back!”

In this case, the user's feeling of powerlessness at not getting his money back is compounded by the frustration of knowing that he is talking to a machine that can't solve the problem and, on top of that, is trying to be cute because it doesn't understand the seriousness of the matter. The chatbot is being nice, but it is not being empathic.

Dealing with empathy is important.

What can be done?

Personality must be dynamic

Using a chatbot for asking about some product I’m interested in is nothing like using a chatbot for complaining about a bad service and looking for help.

Depending on the type of problems you want to solve, and on how angry, worried, or curious the user is when they come to the chatbot, you should ask things like: Does personality matter at every moment? Is it necessary to be nice at every step of the dialogue? Should it be "cool"? Should it be serious? Should the chatbot apologize when needed? Is it really necessary for the chatbot to have a name and a face? And so on.

In the end, the chatbot process has only two possible outcomes: the bot solves your problem, or it doesn't. Before that, the user is trying to explain and the bot is trying to help. Let's look at these three moments separately:

Chatbot trying to help. When in trouble, the user wants to be helped quickly, with no chitchat, no jokes, no nice or clever quotes at all. The chatbot’s personality should be something like, “I’m a machine, my only function is to help you, and I will do it fast.” Just say hello, and go straight to the point.

Chatbot can't solve the problem. The user might be frustrated and angry. This is the worst moment to be nice, be cool, or try to be "human." Say sorry, but don't get emotional; be an emotionless robot instead. The chatbot's personality should be something like, "Sorry, sir! I couldn't help; at this link/phone/place you will solve your problem. Forget about my existence, bye." And always give another option: a phone number, a chat with a human assistant, something, anything. Don't say goodbye without offering another option. Remember that the user is in trouble, and you can't leave them alone just like that. Say sorry, give another option, and go away. Let the user forget the chatbot existed and search for a solution elsewhere.

Chatbot solves the problem. The user is happy; this is the moment to be nice, make clever observations, and maybe crack a joke. "I'm very glad I could help," "See you soon!" "Can I help you with something else?" and all these nice things. If the user says thank you, congratulates the bot, or gives happy/positive feedback, that's probably a good moment for a little joke. It will be like giving candy to a child.
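The three moments above can be sketched as a tiny state machine that picks a tone-appropriate reply. This is a minimal Python sketch, not a real implementation; the state names, response copy, phone number, and URL are all invented placeholders:

```python
from enum import Enum, auto
import random

class DialogueState(Enum):
    HELPING = auto()      # user is explaining, bot is working on it
    UNRESOLVED = auto()   # bot could not solve the problem
    RESOLVED = auto()     # bot solved the problem

# Tone follows the three moments: fast and plain while helping,
# emotionless plus a handoff option when it fails, warm when it succeeds.
RESPONSES = {
    DialogueState.HELPING: [
        "Hello. Tell me the issue and I'll get right to it.",
    ],
    DialogueState.UNRESOLVED: [
        "Sorry, I couldn't help. A human agent can: call 0800-000-000 "
        "or open a chat at example.com/support. Goodbye.",
    ],
    DialogueState.RESOLVED: [
        "I'm very glad I could help! Can I help you with something else?",
        "Glad that's sorted. See you soon!",
    ],
}

def pick_response(state: DialogueState) -> str:
    """Return a tone-appropriate reply for the current dialogue state."""
    return random.choice(RESPONSES[state])
```

The point of the structure is that tone is decided by the dialogue state, never hard-coded into a single "brand voice" used everywhere.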

If you really want to elaborate a personality, you could write several sentences for every answer, each saying the same thing in a different mood, and pick one depending on the user's mood. You could run sentiment analysis on the user's inputs for this task and change the personality dynamically to match. This improves the experience a lot.
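As a rough illustration of that idea, here is a toy Python version: each answer exists in three moods, and a crude word-list sentiment score of the user's message picks the variant. A real bot would use a proper sentiment model; the word lists and reply copy below are invented for the example:

```python
# Toy mood-adaptive replies: same answer, three tones.
# Word lists are illustrative only, not a real sentiment lexicon.
NEGATIVE_WORDS = {"angry", "furious", "unacceptable", "terrible", "refund"}
POSITIVE_WORDS = {"thanks", "great", "perfect", "awesome", "happy"}

VARIANTS = {
    "negative": "I understand. Let me fix this right away.",
    "neutral": "Sure, I can help with that.",
    "positive": "Happy to help! Let's get that done :)",
}

def mood(text: str) -> str:
    """Toy sentiment: count positive vs. negative words in the input."""
    words = set(text.lower().split())
    score = len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)
    if score < 0:
        return "negative"
    if score > 0:
        return "positive"
    return "neutral"

def reply(user_text: str) -> str:
    """Answer in the tone that matches the user's apparent mood."""
    return VARIANTS[mood(user_text)]
```

For instance, `reply("I want a refund this is unacceptable")` returns the sober negative variant, while a message containing "thanks" gets the cheerful one.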

Conclusion

Personality should be planned according to the function of the chatbot, the personality of the brand, the different situations that the dialogue presents, and your expectations of the user’s mood. People are not happy talking to a machine anymore—they just want their problems solved. They know they are talking to the refrigerator, but they are not doing it to have fun. They probably want some ice or a cold beer. Do your best to understand what they want, and be empathic. Give them the best experience you can—whether you solve their problems or you don’t. Being nice is not necessarily being empathic. If they’re angry, say sorry and go away. If they’re happy, be happy with them.

Empathy is key.