Perceptions of Annotated Explanations in Explainable AI: Humans versus ChatGPT
Marc de Zoeten, Claus-Peter Ernst, Daniel Staegemann
Explainable AI can offer insights into the black-box decision-making processes of AI systems by providing value-based explanations using LIME and related methods. However, such explanations often do not help laypeople unless they are accompanied by additional text-based annotations. Because these annotations are expensive to produce manually and difficult to provide in real time, we argue that combining a proven decision support system for generating a decision, LIME-like value-based explanations, and corresponding LLM-generated annotations may address both challenges: if LLM-generated annotations produce no differences in user perceptions, they can be generated at lower cost and in real time without downsides. We therefore conducted an experiment comparing the impact of LLM-generated versus human-generated annotations on user perceptions of a system. Based on metrics commonly assessed for explainable AI systems, our findings suggest that LLM-produced annotations are perceived as comparable to human-generated ones, supporting the deployment of such annotations.
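To illustrate the kind of pipeline the abstract describes, the following is a minimal sketch (not the authors' implementation) of turning a LIME feature-weight explanation into a prompt for an LLM-generated plain-language annotation. The LIME calls follow the library's public API; the LLM call itself and the function names are hypothetical placeholders.

```python
# Sketch: LIME value-based explanation -> prompt for an LLM annotation.
# Assumes a trained classifier with a predict_proba function and tabular data.
from lime.lime_tabular import LimeTabularExplainer

def lime_weights(explainer, instance, predict_proba, num_features=5):
    """Return (feature, weight) pairs explaining one prediction."""
    exp = explainer.explain_instance(instance, predict_proba,
                                     num_features=num_features)
    return exp.as_list()

def build_annotation_prompt(weights, prediction_label):
    """Ask an LLM to translate feature weights into a layperson-friendly text."""
    lines = [f"{feature}: {weight:+.3f}" for feature, weight in weights]
    return (
        f"The model predicted '{prediction_label}'. "
        "In two plain-language sentences, explain how these feature "
        "contributions led to that prediction:\n" + "\n".join(lines)
    )

# annotation = call_llm(build_annotation_prompt(weights, label))  # hypothetical LLM call
```

In such a setup, the annotation could be produced at inference time for each individual explanation, which is the cost and real-time advantage over manual annotation that the study examines.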