As life in the digital fast lane gets ever faster, chatbots are becoming popular intermediaries for customers seeking a speedy response or service. Chatbots appear to be quintessentially customer-centric, delivering the real-time, on-demand approach that consumers want, as the 2018 State of Chatbots Report confirms. But ethical dilemmas around the use of artificial intelligence must also be addressed if organisations are to retain customers’ trust.

Interestingly, the 2018 State of Chatbots authors confirm it’s not only digital natives and millennials who are happy to engage with chatbots, but also the older Boomer generation, who are 24 per cent more likely to expect benefits from them.

Businesses such as online finance and pension advice provider Wealth Wizards are keen to exploit chatbots, now easily and affordably available from cloud providers, to deliver the top three potential benefits consumers reported in the 2018 survey:

  • 24-hour service (64%)
  • Instant responses (55%)
  • Answers to simple questions (55%)

Wealth Wizards already uses artificial intelligence (AI) and plans to use chatbots, says chief technology officer Peet Denny. He is aware of the ethical considerations around machine-learning chatbots potentially mining customer conversations for undisclosed emotional data, and says the company complies with Financial Conduct Authority (FCA) regulations.

The immediate ethical gap that needs to be fixed is the lack of regulation requiring companies to declare their use of bots or AI, says Nils Lenke, supervisory board member of the German Research Institute for Artificial Intelligence: “If a chatbot gives a reasonable response online, there’s a natural assumption that we are communicating with a fellow human being. Without an explicit warning, as recipients we have no opportunity to evaluate them and can become overwhelmed.”

Lenke’s concerns chime with the findings of a 2017 report, Sex, Lies and AI, which found high levels of anxiety about undeclared conversational or video user interfaces. In fact, 85 per cent of respondents wanted AI to be regulated by a “Blade Runner rule”, making it illegal for chatbots and virtual assistants to conceal their identity. A cause for even greater concern, however, might be chatbots fronting an AI application capable of interpreting emotions.

While legislators and regulators crank up their efforts, businesses such as Wealth Wizards are not putting competitive advantage on hold, as Denny confirms.

“Basically, anything that is required of a human, we apply to our AI tools,” he concedes. “It [the FCA regulations] is not designed for AI. But it’s a start.”