People aren’t quite ready to hand things over to chatbots, according to new study

Designed to mimic human interaction via text messages or online chat windows, chatbots are quickly becoming the first, and sometimes only, point of contact for online customer support in healthcare, retail, government, banking, and more.

Advances in artificial intelligence and natural language processing, along with a global pandemic that reduced human contact to a minimum, have pushed the chatbot to the center of online interaction and made it an integral part of its future.

But a new study from the University of Göttingen suggests that people are not yet ready to let a chatbot take charge, especially when they are not told up front that a bot is behind the interaction.

The two-part study, published in the Journal of Service Management, found that users reacted negatively when they discovered they had been interacting with a chatbot during an online exchange.

However, when a chatbot made a mistake or failed to complete a customer’s request but had disclosed that it was a bot, users reacted more positively and were more accepting of the outcome.

The study also found that negative reactions grew stronger the more critical or important users considered their service request to be.

More leniency towards chatbots

Each study involved 200 participants, who contacted their energy supplier via online chat to update the address on their electricity contract after a move.

Half of the respondents were informed that they were interacting with a chatbot, while the other half were not.

“If their problem is not resolved, disclosing that they were talking to a chatbot makes it easier for the consumer to understand the root cause of the error,” says Nika Mozafari, lead author of the study.

“A chatbot is more likely to be forgiven for a mistake than a human.”

The researchers also suggested that customer loyalty may even increase after such encounters, provided users learn early on what they are dealing with.

In a sign of the growing sophistication of, and investment in, chatbots, the Göttingen study comes just days after Facebook announced an update to its open-source Blender Bot, which launched last April.

“Blender Bot 2.0 is the first chatbot that can simultaneously create a long-term memory that it can constantly access, search the Internet for timely information, and have complex conversations on almost any topic,” the social media giant said in a Facebook AI blog post.

Facebook AI research scientist Jason Weston and research engineer Kurt Schuster said that modern chatbots, including the original Blender Bot 1.0, “are able to express themselves clearly in ongoing conversations and can generate realistic-looking text, but have ‘goldfish memories.’”

Work is also ongoing to eliminate repetition and inconsistency in longer conversations, they said.