Antecedents of Trust in Deepfakes: Insights from the Elaboration Likelihood Model
Steven Gal, Burcu Bulgurcu
Deepfake technologies pose an emerging cybersecurity threat by leveraging AI-generated content to deceive individuals. While prior research has examined susceptibility to text- and voice-based phishing, deepfakes enhance social engineering tactics through heightened realism. This study applies the Elaboration Likelihood Model (ELM) to investigate how key contextual factors—information quality, source credibility, disclaimers, and engagement—shape perceived trust in deepfakes. Using a 2×2 between-subjects experiment with 326 university students, we manipulated disclaimer presence and engagement level within a deepfake video displayed on a simulated social media platform. Results indicate that disclaimers reduce both perceived information quality and source credibility, which in turn influence perceived trust, whereas engagement affects only perceived information quality. These findings extend ELM to deepfake contexts, providing theoretical insights into persuasion in rich media environments, and can inform the design of interventions to mitigate deepfake-driven phishing threats.