The Impact of an Explanation-Induced Expectation Violation in Explainable AI
Marc de Zoeten
Explainable Artificial Intelligence provides insight into the inner workings of systems formerly considered black boxes by supplying explanations. While explanations are generally intended to improve perceptions of the system, their effect in practice is unclear: some studies find that explanations improve perceptions of the system, others find that they worsen perceptions, and still others find no effect at all. One account holds that explanations matter only when a prior (result-induced) expectation violation has occurred, but this alone cannot account for the range of effects reported in the literature. I propose that explanation-induced expectation violation is a major cause of the observed discrepancy and investigate its impact in my study. I find that explanation-induced expectation violation hurts system perceptions; specifically, it lowers Trust, Explainability, Transparency, Fairness, and Performance.