Dumb Computer! Why Chatbots Get Insulted

Compared with a neutral chatbot, a more human-like chatbot reduces the intensity of aggressive user behavior.

Virtual assistants known as chatbots have become an integral part of many company websites and are growing in importance. A study by TU Dresden investigated whether errors made by chatbots trigger aggressive behavior in their users, and what influence the perceived humanity of the virtual assistants has on these reactions.

Aggression towards digital conversation partners

Chatbots are supposed to make it easier for internet users to find the information they need quickly by responding directly to questions and requests. However, reactions to a chatbot are by no means always positive, says Professor Alfred Benedikt Brendel from TU Dresden. “If a chatbot gives incorrect or confusing answers, this can trigger aggression towards the digital conversation partner,” explains the holder of the Chair of Business Informatics. In the worst case, aggression towards the virtual assistant, including verbal abuse, can have further negative effects, for example on the user’s attitude towards the website or the provider itself.

In their study, the international research team led by Alfred Brendel investigated whether the design of a chatbot influences how users react to unsatisfactory responses. Their hypothesis: if a chatbot is given human attributes, aggressive behavior should occur less frequently than with a neutrally designed chatbot. “In our experiments, some of the participants used a human-like chatbot that was given a name, a gender and a picture. It answered questions in a very friendly manner and reinforced its messages with appropriate emojis.” The neutral chatbot, with which the other group of participants interacted, had none of these design elements.

Human-like chatbot increases satisfaction

The results show, first of all, that a human-like chatbot generally increases user satisfaction, which in turn reduces frustration. However, contrary to the researchers’ original assumption, unsatisfactory answers lead to frustration and aggression even with a chatbot that has human attributes. Overall, around ten percent of users show aggressive behavior towards the virtual assistants.

Compared with a neutral chatbot, however, a more human-like chatbot reduces the intensity of the aggressive behavior. For example, users were less likely to use offensive language when interacting with a human-like chatbot. The results have far-reaching practical consequences, explains Alfred Brendel: “I would recommend that software developers take a cautious approach to human-like design and weigh the positive and negative effects that additional human-like design elements such as gender, age or certain names can have.”