Measuring Trust Dynamics in AI-Assisted Decision-Making: Insights from an Experimental Study
Léane Wettstein, Thiemo Wambsganss, Roman Rietsche, Nicolas Scharowski
Trust calibration is a central component in the adoption of new information systems (IS) technologies, especially AI-assisted decision-making systems. While trust is defined as an attitude with dynamic processes that evolve throughout an interaction, current research lacks a comprehensive understanding of how to measure these dynamic changes. This study evaluates the sensitivity of three common trust measurement methods (single-item scales, questionnaires, and trust games) to changes in trust over time. In an online experiment, participants (N = 228) interacted with a simulated AI system for stock-market investments. The results suggest that only questionnaires are sensitive to trust changes and thus enable the measurement of dynamic trust, whereas trust games capture dynamic reliance processes. This study contributes to the development of more sensitive methods for understanding the calibration of trust and reliance in human-AI collaboration, with broader implications for the design and evaluation of IS.