As part of the “New Technologies and Grief” workshop, I was asked to give a talk on the technological limits of chatbot development. As you already know, chatbots are becoming a common presence in our lives. They pop up on many of the websites we visit, trying to make our lives easier. Most of these bots are rather specialized (e.g. our own pre-trained eCommerce chatbots), offering predefined answers to a fixed set of questions. But is it possible to build open-domain bots able to talk about any topic we can think of? Even better, could we create a bot able to perfectly mimic what a person we knew would say in a conversation with us?
Let me summarize the answers I gave to these questions. To begin with, I think it’s important to realize that we’re talking about two different things. It’s one thing to ask ourselves whether chatbots can replace humans. It’s a very different thing to ask whether chatbots can impersonate specific humans. So let’s tackle each question separately.
Will chatbots ever replace humans?
My answer is a definite yes. In fact, this is already happening. You have all seen chatbots in different situations (client support, product search, FAQ answering,…) that do a reasonable job of replacing the people who, until now, were behind the chat window answering all the web visitors’ questions in “live chat” mode. Some bots are even good enough to be effective in romance scams (be careful when using any type of dating app!).
Of course, they are not as good as a person could be, but they are good enough to replace humans and free them up for more creative and important tasks in the company. In particular, bots can replace humans:
- In specific situations (general bots like Google’s LaMDA or Facebook’s Blender are impressive, but not that useful for a specific company that wants a bot to answer its customers’ questions, not one able to sustain philosophical conversations)
- If we don’t expect a perfect result but will settle for a good enough one
- If our goal is to accomplish an operational task (e.g. eCommerce)
- If we understand the 80/20 rule. It’s too difficult to create a bot that can answer any question about our company’s products. It’s not that difficult to create a bot that answers most of the common questions well.
In general, if the chatbot is a means to an end (e.g. the “end” being getting some specific information), a tool to accomplish our goal, then chatbots will provide a good user experience and save us time and money while making most of our visitors happy enough with the results of their bot interaction.
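To make the 80/20 idea concrete, here is a minimal sketch of a closed-domain FAQ bot: it matches the visitor’s question against a small set of known questions and falls back to a human when it isn’t confident. The FAQ entries, the similarity threshold, and the function name are hypothetical illustrations, not any particular product’s API.

```python
# Sketch of a closed-domain FAQ bot: answer the common questions well,
# and fall back gracefully for everything else (the 80/20 rule).
from difflib import SequenceMatcher

# Hypothetical FAQ entries for an eCommerce site.
FAQ = {
    "what are your shipping costs": "Shipping is free for orders over 50 EUR.",
    "how can i track my order": "Use the tracking link in your confirmation email.",
    "what is your return policy": "You can return any item within 30 days.",
}

FALLBACK = "Sorry, I don't know that one. A human agent will get back to you."

def answer(question: str, threshold: float = 0.6) -> str:
    """Return the answer for the most similar known question, or a fallback."""
    q = question.lower().strip("?! .")
    best_q, best_score = None, 0.0
    for known in FAQ:
        score = SequenceMatcher(None, q, known).ratio()
        if score > best_score:
            best_q, best_score = known, score
    return FAQ[best_q] if best_score >= threshold else FALLBACK
```

A question like “What are your shipping costs?” gets the predefined answer, while an off-topic question triggers the fallback: that handoff to a human is what keeps the experience acceptable for the 20% of questions the bot cannot cover.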
Will chatbots ever replace a specific human?
My answer is a definite no. When it comes to creating a chatbot that impersonates a specific person, e.g. a loved one who died, we’re still talking about Black Mirror-style science fiction. The key difference is that now you don’t want a bot that answers your question; you want a bot that answers your question as your loved one would have answered it. And this is much more challenging, for the following reasons:
- You need a large enough digital footprint from the person to train the chatbot, so that it can learn not only what the person knew but also how they would typically express themselves
- You cannot rely on a fixed set of questions for the bot (e.g. as in a bot that answers your company’s FAQ), as people expect much more open conversations. This forces us to rely on open-domain bots, which are more difficult to create and control
- Temporal issues: the chatbot will be a mix of all the data you collected, mixing up memories and expressions from different phases of the person’s life. You may get a strange combination of the bot talking like a 5-year-old at some moments and like an 80-year-old at others.
- Fixed snapshot: it’s difficult to make the chatbot evolve and continuously learn as if the person were still alive. So you’ll mostly be stuck with a fixed snapshot of the person
- The bot will also learn the bad things. We tend to idealize the memory of our loved ones, but the chatbot will learn all aspects of the person, including those (e.g. racism) that we would prefer to forget. Pruning the data to filter out such opinions is difficult.
- Risk of extreme disappointment. Now the chatbot is the end, not the means to an end (as in the previous case). This means that any wrong answer risks deeply disappointing the person talking to the bot: the bot will lose its “magic”, and it will become evident that we’re talking with a bot, not with our loved one. Users of the chatbot need to be mentally prepared for this
- Going beyond descriptive questions. Bots are good at talking about facts, not so much at discussing the “why” of things or making predictions, and even less at giving advice. But we tend to expect all these types of conversation when talking to people
So, let’s keep using bots for what they are good at (helping us) instead of trying to create artificial copies of specific people, which, at least in the short term, will just end in a disappointing experience.