Curvature-Informed Local Explanations (CILE): Improving Stability and Trustworthiness in Explainable AI
Nolan Talaei, Asil Oztekin, Hongwei Zhu, Luvai Motiwalla, Anol Bhattacherjee, Salih Tutun
In Information Systems (IS), the trustworthiness and transparency of AI models are critical for adoption and practical use. Explainable AI (XAI) methods, such as LIME, SHAP, and Integrated Gradients, offer local interpretability by attributing model predictions to input features. However, existing approaches suffer from instability, where small changes in the input can lead to significant variations in the explanation. In this paper, we introduce Curvature-Informed Local Explanations (CILE), a novel algorithm that integrates second-order derivative (Hessian) information into gradient-based explanations to improve explanation stability. We present the design rationale and the mathematical formulation of CILE, along with empirical evaluations showing that it provides more consistent explanations than existing methods without sacrificing fidelity, accuracy, or computational feasibility.
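The abstract does not specify how CILE combines the Hessian with gradients, so the following is only an illustrative sketch of the general idea: dampening gradient attributions for features whose local curvature is high, since those attributions change most under small input perturbations. The toy model `f`, the finite-difference estimators, and the damping rule `g / (1 + lam * |h|)` are all assumptions for illustration, not the authors' actual formulation.

```python
import numpy as np

def f(x):
    # Toy scalar "model": a smooth nonlinear function of two input features.
    return np.sin(x[0]) + 0.5 * x[1] ** 2 + x[0] * x[1]

def grad_fd(f, x, eps=1e-5):
    # Central-difference estimate of the gradient of f at x.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def hess_diag_fd(f, x, eps=1e-4):
    # Central-difference estimate of the diagonal of the Hessian of f at x.
    h = np.zeros_like(x)
    fx = f(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        h[i] = (f(x + e) - 2 * fx + f(x - e)) / eps ** 2
    return h

def curvature_adjusted_attribution(f, x, lam=1.0):
    # Hypothetical curvature-informed attribution: downweight gradient
    # components where the local second derivative is large, so the
    # explanation varies less under small input perturbations.
    g = grad_fd(f, x)
    h = hess_diag_fd(f, x)
    return g / (1.0 + lam * np.abs(h))

# Example: at x = (0, 1) the gradient is (2, 1) and the Hessian diagonal
# is (0, 1), so the adjusted attribution becomes (2, 0.5).
attr = curvature_adjusted_attribution(f, np.array([0.0, 1.0]))
```

Any real instantiation would replace the finite differences with exact (autodiff) gradients and Hessian-vector products, since forming full second-order information by finite differences does not scale to high-dimensional inputs.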