AI supports customer interactions, but it can also put them off, warns guest author David Hefendehl of Macaw.
Whether it’s personalized product recommendations, sentiment analysis on Facebook, or always-on chatbots, AI-based applications are revolutionizing customer service and meeting consumer expectations of a digitized, personalized experience. Tempting as these new possibilities are, companies should realize that the responsibility for using the tools lies with them. The following criteria help ensure that artificial assistance feels right.
Many consumers are skeptical: they associate AI with surveillance, opacity and manipulation. It is all the worse if a company does not clearly communicate that it is using AI-supported tools. The names, voices and images of bots should therefore signal to customers that they are not interacting with a human. In selection processes, too, such as online lending, applicants have a right to know which algorithms led to a decision. Companies should ensure that their use of AI is ethical and that their decisions and algorithms are transparent and traceable, so that trust in the technology can grow.
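In practice, such disclosure can be as simple as a fixed opening line. A minimal sketch (bot name and wording are hypothetical) of a greeting that makes the bot’s identity explicit and offers a route to a human:

```python
# Hypothetical example: every conversation opens with a clear disclosure
# that the customer is talking to software, not a person.
BOT_NAME = "SupportBot"

def greet(customer_name: str) -> str:
    """Return an opening message that discloses the bot's identity."""
    return (
        f"Hi {customer_name}, I'm {BOT_NAME}, an automated assistant. "
        "Type 'agent' at any time to reach a human colleague."
    )

print(greet("Ms. Miller"))
```

The disclosure lives in one place, so it cannot be accidentally dropped from individual dialogue flows.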
AI makes it easy to collect and process vast amounts of data, but the results depend critically on the quality of that data. Those responsible should therefore feed the system only correct and up-to-date information. Note that when trained on historical data, an AI adopts the company’s past decision-making patterns: if the company once discriminated against certain applicant groups, the algorithm will repeat that behavior. Such biased classification of data is clearly unethical.
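How historical bias carries over can be shown with a deliberately tiny model. In this sketch (the data is entirely hypothetical), the “model” is just the most frequent past outcome per applicant group, which is the simplest pattern any learner could pick up from biased records:

```python
from collections import Counter

# Hypothetical historical loan decisions: (applicant_group, approved).
# Group A was mostly approved in the past, group B mostly rejected.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

def train_majority_model(records):
    """Learn the most frequent historical outcome for each group."""
    by_group = {}
    for group, approved in records:
        by_group.setdefault(group, Counter())[approved] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train_majority_model(history)

# Identical applications, different group label, different decision:
print(model["A"])  # True
print(model["B"])  # False
```

A real model is far more complex, but the mechanism is the same: if group membership correlated with outcomes in the training data, the algorithm reproduces that correlation.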
For all the merits of these new applications, AI cannot replace human judgment. Companies can confidently hand automated processes to bots, but employees should have the final say. Especially in situations that call for empathy, such as reporting an accident to an insurer or filing a complaint, genuine human contact must remain possible.
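One common way to keep the final say with employees is an escalation rule in the bot’s routing logic. A minimal sketch (the keyword list is a hypothetical placeholder for real intent detection): routine requests stay automated, while anything suggesting a complaint or accident goes straight to a person:

```python
# Hypothetical escalation rule: messages touching sensitive topics are
# routed to a human agent instead of being handled by the bot.
SENSITIVE_KEYWORDS = {"accident", "complaint", "claim", "injury"}

def route(message: str) -> str:
    """Return 'human_agent' for sensitive requests, else 'bot'."""
    words = set(message.lower().split())
    if words & SENSITIVE_KEYWORDS:
        return "human_agent"
    return "bot"

print(route("I want to report an accident"))  # human_agent
print(route("What are your opening hours?"))  # bot
```

Production systems would use intent classification rather than keywords, but the principle holds: the bot decides only what it is allowed to decide.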
When interacting with customers, AI tools such as bots collect a great deal of personal data. Companies should therefore build in mechanisms that delete this information once it has been processed. If hackers manage to intercept data from chat histories with bots, users will lose confidence in the technology.
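Deletion after processing can be enforced structurally rather than left to discipline. A minimal sketch (the API is hypothetical) using a context manager that wipes the personal data as soon as the interaction ends, even if an error occurs mid-processing:

```python
from contextlib import contextmanager

@contextmanager
def ephemeral_session(personal_data: dict):
    """Yield personal data for processing, then wipe it unconditionally."""
    try:
        yield personal_data
    finally:
        personal_data.clear()  # delete after processing, even on errors

# Hypothetical chat session holding personal data only for one request:
session = {"name": "Jane Doe", "account": "0000-1111"}
with ephemeral_session(session) as data:
    reply = f"Hello {data['name']}, how can I help?"

print(reply)
print(session)  # prints {} because nothing is retained
```

Because the wipe sits in a `finally` block, no code path can exit the interaction while still holding the customer’s data.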
Training the algorithms requires huge amounts of data, which can occupy entire data centers. Here, too, the responsibility for careful use of resources lies with companies. Before deciding to deploy AI applications, they should ask themselves whether the costs and benefits are in proportion. Other analysis methods can achieve the same goals without such immense energy consumption.
When using AI-supported customer interactions, companies must not forget their social responsibility. As long as AI offers real relief to all parties involved, it is also ethically justifiable. But it takes a critical eye to prevent automation that is supposed to simplify processes from being built on bias and discriminating against people. The EU’s published guidelines for the ethical use of artificial intelligence provide very good guidance here. In every decision, the customer should be the focus, and AI should be used in such a way that the company can help them with their concerns in the best possible way.
David Hefendehl is Digital Strategist at Macaw.