A Taxonomy for Uncertainty-Aware Explainable AI
Maximilian Förster, Michael Hagn, Nico Hambauer, Paula Jaki, Andreas Obermeier, Marc Pinski, Andreas Schauer, Alexander Schiller
Artificial Intelligence (AI) is increasingly used to augment human decision-making. However, especially in high-stakes domains, the integration of AI requires human oversight to ensure trustworthy use. To address this challenge, emerging research on Explainable AI (XAI) focuses on developing and investigating methods to generate explanations for AI outcomes. Yet, current approaches often yield limited explanations, neglecting the various sources of uncertainty that strongly influence AI-augmented decision-making. This paper presents a first step toward establishing a foundation for future research in uncertainty-aware XAI. By applying the Extended Taxonomy Design Process, we aim to develop an integrated, hierarchical taxonomy that structures the key characteristics of uncertainty-aware XAI. Through this approach, we identify four primary sources of uncertainty: data uncertainty, AI model uncertainty, XAI method uncertainty, and human uncertainty. Furthermore, we propose a preliminary taxonomy as an initial foundational framework for the future design and evaluation of uncertainty-aware XAI.