Can you imagine? How Perceived Humanness Influences the Negative Effect of Hallucinations by Conversational Agents
Matthias Bernhard, Alfred Brendel, Sascha Lichtenberg
Driven by the maturing of Large Language Models (LLMs), companies have begun to implement Conversational Agents (CAs) (e.g., chatbots) for customer service. CAs are often designed to appear human-like (e.g., with a human name and avatar), which increases service satisfaction. However, LLMs are prone to "hallucinations" (i.e., generating inaccurate or non-existent information). In this research, we investigate this LLM-specific error type. According to algorithm aversion theory, errors made by algorithms are penalized more harshly than errors made by humans. We hypothesize that hallucinations follow the same rule. Based on the Computers-are-Social-Actors (CASA) theory, this expectation should transfer to human-like CAs. The results of our online experiment support that perceived humanness positively affects service satisfaction and mitigates the negative effect of hallucinations. For theory, we provide evidence that hallucinations follow the same pattern as other types of errors. For practitioners, we recommend implementing a human-like CA based on an LLM.