Explanation Provision Strategies in LLM-based Data Assistants: Impact on Extraneous Cognitive Load, Trust, and Task Performance
Ana-Maria Sîrbu, Till Carlo Schelhorn, Ulrich Gnewuch
Despite the remarkable capabilities of large language models (LLMs), their black-box nature often raises concerns about trustworthiness, particularly when users rely on them for data analysis. While providing insights into an LLM’s internal reasoning process through explanations could be a promising approach to addressing this issue, little is known about how LLM explanations impact users. Our study addresses this gap by investigating the impact of explanation provision strategies in LLM-based data assistants. Drawing on cognitive load theory, we conducted a between-subjects online experiment (N=96) to examine how different explanation provision strategies (automatic vs. user-invoked) influence users’ extraneous cognitive load, trust, and task performance. Our results suggest that user-invoked explanations reduce extraneous cognitive load, which in turn positively influences trust and performance in data analysis tasks. We contribute to the nascent literature on LLM explainability by offering novel insights into the impact of explanation provision strategies in interactions with LLM-based assistants.