Chatbots are dead, long live conversational apps

Words matter. They frame our thinking and influence our decisions.

In the conversational space we have an abundance of terms, and it is time we started challenging how they affect the field and our ability to communicate benefits effectively to others.

In particular, I think the term chatbot has outlived its usefulness. I had already written about this in 2018, but that was with the hope that we could get clarity on the term chatbot itself. Instead, things have since become more complex, and a proliferation of other equally vague terms has joined the party. Terms such as virtual assistant, intelligent assistant, AI chatbot and conversational agent are used interchangeably or as indications of different approaches, making it all difficult to untangle. It is time we explicitly worked to change how we describe what we do in the conversational space.

What we know we know

With a few years of commercial exploration behind us, we should be in a position to accept a few ground truths. Namely:

  1. Artificial intelligence techniques may or may not play a role in the development of a solution. There is no “true” chatbot that uses only natural language processing or only rule-based matching. There are a number of issues that come into play when designing a conversational product, and the only true measure is the final utility for users.
  2. Solutions can and should adapt to different mediums/platforms. There is no definitive proof that we should only design for natural language or only use buttons or other form elements. It should be a choice based on what would best serve a user in a specific context.
  3. There is nothing intrinsically positive about a solution that is able to chat about random subjects or fool the user into thinking they are chatting to a human. Similarly, we do not know enough about which personality or behavioural traits (or simulations of behavioural traits) would work best in a specific context. Interesting work is taking place, we are learning more every day, and the signs point to a nuanced set of parameters.
  4. Saying that a virtual assistant is different from or better than a chatbot is a completely arbitrary choice, resting purely on a context-specific definition of those terms designed to support the desired marketing outcome. These sorts of articles don’t help.

Now, how do we get out of these arbitrary definition wars and purity contests and focus on what is really important?

First off, and most importantly, the problem we are trying to solve is not how to make computers converse like humans. That is an exciting research challenge, but it is not the goal of a single product. When building a conversational product there is a wide range of issues to tackle, and making conversations as human-like as possible is just one option, to be chosen based on a clear understanding of the benefits it brings. We still have much work to do to understand what type of personality or behaviour in a conversation works best in which situation.

Second, the things that we build need to do much more than just talk (or chat). They are supposed to help people solve problems faster and more enjoyably than other options (otherwise we should be using one of those other options!). Conversational solutions need to integrate with other services, adapt to changing interfaces and platforms, and always make sure they are effectively helping users complete their goals.

Conversational applications

In short, what we are trying to build are applications. Just as we always have. Software that helps humans complete a task. What is different from all the other applications out there is that these apps are predominantly conversational.

So let us just call them that. Conversational applications.

It is primarily an interaction-style choice. One new and interesting enough to warrant its own sub-category in the application space. A choice that should be clearly justified by the value it provides to the user.

We choose to build a conversational application because we have concluded that a conversation is the most efficient way to resolve the problem and/or a conversation is the most user-friendly way to interact through a specific platform.

We recognize that conversational applications have a specific and separate set of challenges they need to solve when compared to other types of applications. They range from AI challenges (e.g. how to interpret natural language) to UX (e.g. what conversational patterns make sense) to software engineering (e.g. how to test conversational applications). The only measure of whether an approach is right or wrong is whether users are happier. Not whether we are using deep learning, or are voice-first, or are buttons only.
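
To make the software engineering side of this more concrete, here is a minimal, hypothetical sketch of what an automated test for a conversational application could look like. The keyword-based match_intent function and the example intents are illustrative stand-ins, not any particular framework's API; a real application would more likely run such tests against an NLU model or dialogue engine.

import unittest

# Hypothetical keyword-to-intent table; a real application would typically use an NLU model instead.
INTENT_KEYWORDS = {
    "check_balance": ["balance", "how much money"],
    "transfer_funds": ["transfer", "send money"],
}

def match_intent(utterance: str) -> str:
    """Return the first intent whose keywords appear in the utterance, else 'fallback'."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "fallback"

class ConversationalFlowTests(unittest.TestCase):
    def test_balance_request_maps_to_check_balance(self):
        self.assertEqual(match_intent("How much money do I have?"), "check_balance")

    def test_unrelated_utterance_falls_back(self):
        self.assertEqual(match_intent("Tell me a joke"), "fallback")

if __name__ == "__main__":
    unittest.main()

The point is not the matching logic itself, but that conversational behaviour can and should be tested as deliberately as any other part of an application.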

As a community of practitioners building these things, we benefit from defining clearly what we are doing. I think the most inclusive thing we can say is that we are building conversational applications. We can then focus on describing the rich and varied ways in which conversations can develop, from voice to text to gestures and anything else we can think of. Equally, we need to start identifying what types of conversations work in what contexts, rather than assume that more personality or more engagement is always better.

Let’s give the term chatbot a break and return it to its rightful owners. Long live conversational applications.


Article originally posted on LinkedIn on 16th January 2020.
