Chatbot Privacy: Putting the Customer First
If you’re building a chatbot, it pays to prioritize customer privacy. Whether prompted or unprompted, users can—and will—share sensitive information with your chatbot. Find out how to protect chatbot data and user privacy, while also delivering a grade A+ chatbot experience.
June 14, 2021
You type into a chat window: “What’s my account balance?” In seconds, your bank’s chatbot provides the information you’re looking for. No call to customer service. No logging into a website and clicking around. It’s like texting with a virtual bank teller. It’s simple and easy. But is it secure?
To provide this level of convenience, consumers, banks, and chatbots have to exchange a lot of sensitive information. National retail banks are now implementing chatbot platforms, like KAI, which use a customer’s banking password to access account numbers, transaction information, and even personal data. Needless to say, these banks and their third-party software vendors must understand how to ensure bot data security and chatbot privacy. The same holds true for chatbots in other industries, from healthcare to retail to entertainment.
As chatbots proliferate, people are providing them with an increasing amount of personal information. The security of that information is critical. How can you help ensure that your bot prioritizes consumer privacy? And what actions must you take to build a secure chatbot that’s worthy of your customers’ trust?
What’s the impact of chatbot security?
To protect chatbot data and user privacy, you have to ask why data security matters. Chatbots, after all, are web applications. At their best, they’re web applications with personality—but they’re still software.
Consider Subway, which leverages Facebook Messenger to let people chat their way to a sandwich order. To facilitate a transaction, Subway’s bot integrates with payment systems from Facebook and Mastercard (Masterpass). Along the way, your credit card data passes from you to Facebook to Subway. You may not have even taken out your wallet, but the brands you’re interacting with are still collecting your financial data.
Of course, financial information isn’t the only type of sensitive data being collected by bots. Consider companies like Babylon and GYANT, both of which have built chatbots for healthcare. Instead of googling your way to a self-diagnosis (not recommended!), you can use one of their chatbots for guidance. After inputting your symptoms and answering some questions, you might be forwarded to an actual physician. It feels like you’re just relaying symptoms to a nurse before seeing a doctor, but you’ve really entrusted a chatbot with confidential health data.
And what about other personal information, like your dating preferences? Eharmony’s new chatbot, Lara, draws from data collected by the eharmony website—music the user likes, hobbies, professional interests, and more—to nudge users into conversations and to cut down on ghosting.
These chatbots know a lot about us. They know our credit card numbers, our medical history, our favorite desserts, and lots more.
We need to know that they’ll keep those secrets, and that’s where chatbot privacy comes in.
How secure are chatbots, anyway?
By and large, chatbots are secure—basically as secure as any other web application. If consumers are comfortable sharing sensitive information with PayPal or Amazon via traditional means, they can generally be comfortable sharing the same information with those brands’ bots.
Although the chatbot experience is itself different from, say, entering data into a web form, the end game is identical. Bots (when they work well) give users what they need while ensuring the privacy of sensitive information gathered during their interactions. Your bot can do the same thing.
As Richard Cookes, Australia and New Zealand Country Manager for One Identity, explains, chatbots should incorporate the same bot data security features and protocols you would implement with any other type of software. This is especially true with banking chatbots, which could put consumers’ money at risk in the event of a data breach. He elaborates:
A chatbot is about smart workflow and smart AI. The same principle of security applies. As soon as that application gets to a point in the workflow that it’s going to cause an action where the risk might be high, the bank’s existing authorization infrastructure will kick in.
If you’re developing a bot that exchanges financial data, be sure that it provides the same security features that online banking customers expect, including:
Secure Sockets Layer (SSL) encryption, or more precisely its modern successor, Transport Layer Security (TLS), to ensure the privacy of information exchanged over a given connection. In practice, this is what serving your chatbot over HTTPS provides.
Two-factor or multifactor authentication (MFA), with which a user verifies their identity by responding to a security question and/or retrieving a code sent to another device (see the sketch following this list).
Cookies that reside on the user’s verified device for authentication at subsequent logins. Attempts to log in via a different device can be met with MFA.
All of these security measures—and SSL in particular—are reliable means of launching a chatbot that doesn’t allow any other parties to view, steal, or alter communications with users. And that’s not just for banks. It holds true for online merchants, service providers, and others who have a duty to protect their customers’ financial information.
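To make the MFA item above concrete, here is a minimal sketch of a time-based one-time password (TOTP) check, using the open-source pyotp library. The function names and flow are illustrative only and aren’t tied to any particular banking platform; in a real deployment, the shared secret would be provisioned and stored by your existing identity infrastructure, never by the bot or in the chat transcript.

```python
# pip install pyotp
import pyotp

def start_mfa_enrollment() -> str:
    """Generate a per-user TOTP secret the user adds to an authenticator app.
    (Illustrative: in production this is handled by your identity provider.)"""
    return pyotp.random_base32()

def verify_mfa_code(user_secret: str, submitted_code: str) -> bool:
    """Check the six-digit code the user typed into the chat window."""
    totp = pyotp.TOTP(user_secret)
    # valid_window=1 tolerates small clock drift between devices
    return totp.verify(submitted_code, valid_window=1)

if __name__ == "__main__":
    secret = start_mfa_enrollment()
    current_code = pyotp.TOTP(secret).now()
    print("Code accepted?", verify_mfa_code(secret, current_code))
```

Note that the bot itself only relays a short-lived code; the actual check happens in your existing authorization infrastructure, which is the pattern Cookes describes above.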
How do I prepare a chatbot to collect sensitive data?
First, always know the laws and regulations around the collection and use of sensitive or personal data for your jurisdiction and seek the advice of a lawyer if you have any questions. To address chatbot security concerns, you need to cover the basics and take stock of any unique issues that may apply to your bot, including the following considerations.
Rules and regulations
When it comes to bot data security, company standards and industry norms can be your best friends. And so can government regulations.
If your organization already follows internal data security rules, you need to apply them to your chatbot data collection practices. And, if you operate in a regulated industry, such as healthcare, you need to be sure that your chatbot communicates in a manner consistent with mandated chatbot privacy standards.
Consider a chatbot that, like the GYANT bot mentioned earlier, asks consumers for information about their medical conditions. According to Brady Ranum, VP of Products and Strategy at Dizzion, a software company specializing in healthcare IT, Health Insurance Portability and Accountability Act (HIPAA) compliance is critical for any product through which consumers transmit medical data. He writes:
Part of the HIPAA Security Rule requires covered entities to put technical safeguards in place to protect against unauthorized access to PHI that is transmitted over an electronic network. This is commonly interpreted as meaning that the transmission and storage method must be encrypted.
Ranum goes on to argue that patient-facing healthcare technologies, such as telemedicine applications, should be subject to an extra layer of security—virtual desktops. That way, none of the information transmitted by an application (or chatbot) to another device ever actually resides on that device.
If you’re trying to check every box for HIPAA compliance, these are measures you should consider taking. But even if you’re not building a bot that collects medical data, be mindful of unique industry regulations and data security standards that you’re expected to uphold.
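Whichever regulations apply to you, the storage half of Ranum’s point (encrypt data in transit and at rest) is straightforward to prototype. Below is a minimal sketch that encrypts a chat transcript before it ever touches disk, using the Fernet recipe from Python’s cryptography package. The key handling shown (an environment variable with a generated fallback) is a placeholder, not a compliance recipe; a production system would pull the key from a dedicated key management service.

```python
# pip install cryptography
import os
from cryptography.fernet import Fernet

# Placeholder key handling for this sketch only:
# a real deployment would fetch the key from a KMS or HSM.
key = os.environ.get("TRANSCRIPT_KEY") or Fernet.generate_key()
fernet = Fernet(key)

def store_transcript(path: str, transcript: str) -> None:
    """Encrypt the conversation before writing it to disk."""
    with open(path, "wb") as f:
        f.write(fernet.encrypt(transcript.encode("utf-8")))

def load_transcript(path: str) -> str:
    """Decrypt a stored conversation for an authorized caller."""
    with open(path, "rb") as f:
        return fernet.decrypt(f.read()).decode("utf-8")

if __name__ == "__main__":
    store_transcript("visit_1234.bin", "Patient reports mild fever and cough.")
    print(load_transcript("visit_1234.bin"))
```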
Third-party integrations
Thinking about rebranding someone else’s chatbot? Great! Just be sure that the vendor can address your specific chatbot privacy concerns. In addition, you should find out how the vendor stores user data and confirm whether those standards are in line with your own.
Also, be mindful of your team’s readiness to implement the bot. At Ticketmaster, a chatbot data breach resulted in the unauthorized access of users’ payment information. The cause? Developers at Ticketmaster apparently deployed some custom code over a third-party chatbot, without the vendor’s authorization.
Hackers also breached Delta’s and Sears’ third-party chatbot (it was actually the same chatbot) and stole customers’ financial information. Regardless of whether the breach was the vendor’s fault, Delta and Sears are the companies that interface with users. And they’re the ones who own the brand damage.
The lesson here is to know your vendor and to know that you’re using its chatbot platform properly.
End-to-end chatbot privacy
The security measures you take for the most sensitive information that your chatbot collects should be the same security measures you take for all the information that your chatbot collects. In other words, make the entire chatbot experience secure—not just parts of it.
To understand why, imagine you’re a retailer launching a chatbot that takes customer orders. Most of the time, your chatbot’s conversations start out like this:
Bot: Hey, how can I help you?
User: I’m looking for a flannel shirt
Nothing sensitive there. You might think it’s okay to only encrypt communications after the user has indicated an intention to make a purchase. Except that sometimes you get this:
Bot: Hey, how can I help you?
User: I want to buy item #389501 my cc is 9999999999999999 exp 01/20
When it comes to user responses, your chatbot should expect the unexpected. And so should your chatbot privacy measures.
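One way to prepare for that surprise is to scan every inbound message for card-like numbers and redact them before anything is logged or stored. The sketch below uses only the Python standard library plus a Luhn checksum to reduce false positives; the regex and the redaction policy are assumptions you would tune to your own traffic, and redaction complements encryption rather than replacing it.

```python
import re

# 13 to 19 digits, optionally separated by spaces or hyphens
CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")

def luhn_valid(digits: str) -> bool:
    """Standard Luhn checksum: true for most real payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def redact_card_numbers(message: str) -> str:
    """Replace anything that looks like a valid card number before logging."""
    def _mask(match):
        digits = re.sub(r"\D", "", match.group())
        return "[REDACTED CARD]" if luhn_valid(digits) else match.group()
    return CARD_PATTERN.sub(_mask, message)

print(redact_card_numbers("I want to buy item #389501 my cc is 4111111111111111 exp 01/20"))
# -> I want to buy item #389501 my cc is [REDACTED CARD] exp 01/20
```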
How do I protect customer data?
So far, we’ve covered why bot security matters, chatbot data protection basics, and specific privacy issues to be aware of. Here are a few more tips to keep in mind as you work to build a bot that’s as secure as possible.
Chatbot encryption
Chatbots ask questions. It’s how they do what they do. As such, they’re prime targets for hackers who might exploit the interface for information. If you’re thinking phishing, you’re on the right track.
For this reason, encryption is absolutely critical. Just as you wouldn’t launch a web app over anything other than an encrypted SSL/TLS connection, you shouldn’t deploy a chatbot over an unencrypted one, either.
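As a minimal sketch, assuming a Flask-based webhook (your bot framework may differ), you can refuse to serve chatbot traffic over plain HTTP at all:

```python
# pip install flask
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

@app.before_request
def require_https():
    # Reject any request that did not arrive over TLS.
    # (Behind a load balancer, inspect the X-Forwarded-Proto header instead.)
    if not request.is_secure:
        abort(403, description="Chatbot traffic must use HTTPS.")

@app.route("/webhook", methods=["POST"])
def webhook():
    user_message = request.get_json(silent=True) or {}
    # ... hand the message off to your bot logic here ...
    return jsonify({"reply": "Hi! How can I help you today?"})

if __name__ == "__main__":
    # "adhoc" generates a throwaway self-signed cert for local testing
    # (requires the cryptography package); production terminates TLS
    # with a real certificate.
    app.run(ssl_context="adhoc")
```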
Data storage
Your chatbot’s backend should be as secure as the backend for any other products you develop. In other words, apply the same standards for data storage that you’d apply to any other app.
Regulations
If your bot collects financial data, there’s the Payment Card Industry Data Security Standard (PCI DSS). Asking for medical information? Then you’ve got HIPAA to contend with. Be aware of any industry regulations that may apply to you, and be sure that the chatbot you’re building complies.
Emerging technologies
Nothing is static in tech. As you build out your bot, anticipate future security enhancements that can help ensure chatbot privacy. For example, blockchain technologies could one day become key indicators of trust and might help reassure users that their confidential information is safe with your bot. In the corporate finance world, some forward-looking companies are already using a single, transparent ledger to improve transaction visibility.
One day, your chatbot might do the same.
Simplicity
Avoid collecting confidential data that your company doesn’t need. If your company only needs to know a zip code, don’t ask for an entire address. If your company only needs to know current symptoms to route the user to the appropriate physician, don’t query a complete medical history.
Simplifying the conversation not only limits your liability for possessing sensitive information but also improves the chatbot experience by asking less of your users.
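In code, one simple way to enforce that principle is an explicit allowlist, so only the fields your business process actually needs ever reach storage. The field names below are hypothetical:

```python
# Hypothetical field names; adjust to whatever your bot actually needs.
ALLOWED_FIELDS = {"zip_code", "current_symptoms"}

def minimize(collected: dict) -> dict:
    """Drop everything the business process doesn't strictly require."""
    return {k: v for k, v in collected.items() if k in ALLOWED_FIELDS}

raw = {
    "zip_code": "94107",
    "current_symptoms": "mild fever",
    "full_address": "123 Main St, San Francisco, CA",  # not needed: discarded
    "medical_history": "...",                          # not needed: discarded
}
print(minimize(raw))  # {'zip_code': '94107', 'current_symptoms': 'mild fever'}
```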
Oversharing
Let’s face it. Users overshare with chatbots. As Jim O’Neill, former CIO at HubSpot, notes, sending someone a lighthearted get-well message can quickly evolve into a conversation about a cancer diagnosis. When bots facilitate these conversations, they’ve collected confidential data without users realizing it.
To anticipate the unprompted sharing of sensitive data, encrypt all communications from the very beginning. Even if your chatbot isn’t specifically designed to collect confidential data, you never know when a user might go all TMI on your bot.
What’s the simplest way to optimize chatbot security?
Now that you’ve learned how to protect chatbot data and user privacy, it’s time to optimize your efforts. Think of your chatbot like any other web application. Are you providing a different experience than a more traditional interface or web form? Definitely. But chatbots, though human-like, are software. The security you deploy on the backend should be as robust as what you deploy for anything you build—all the same standards and protocols and all the same privacy rules.
You can also use your chatbot development efforts to revisit your existing security infrastructure. Basically, if you’re working hard to ensure strong chatbot privacy, you should also be sure that your security is strong enough for other customer-facing applications.
That way, your audience will trust your bot, and they’ll use it, too.