The AI Complacency Model: Integrating Bounded Rationality and Information Processing
Saeed Nosrati, Hamed Motaghi
This study addresses a critical gap in understanding why users exhibit reduced oversight when interacting with generative AI systems despite these systems' known limitations. While existing research documents AI errors across domains, theoretical frameworks explaining the underlying psychological mechanisms remain underdeveloped. We propose a comprehensive model of AI complacency that integrates bounded rationality constraints with dual-process models of information processing. Our framework demonstrates how Perceived AI Reliability triggers a shift from systematic to heuristic information processing, which in turn reduces vigilance and impairs task performance. The relationship between Perceived AI Reliability and information processing is moderated by three key factors derived from bounded rationality theory: knowledge limitations, cognitive processing capabilities, and time constraints. This study contributes to the literature by identifying the psychological mechanisms underlying AI complacency, explaining the processing shifts that occur in human-AI interaction, positioning vigilance as a critical mediating mechanism, and introducing the Vigilance-Reliability Matrix as a tool for identifying distinct interaction patterns.
