Epistemic Injustice in Generative AI
Craig Van Slyke, Jalal Sarabadani, Hossein Mosafer
Generative artificial intelligence (GAI) is reshaping society. Although GAI offers numerous benefits, it also reinforces algorithmic biases in ways that often disadvantage already marginalized communities. While Large Language Model (LLM) bias is a topic of increasing interest, one form of bias, epistemic bias, has been largely overlooked. In this paper, we discuss how GAI-based epistemic bias can manifest as epistemic injustice in ways that reduce individual and collective well-being. We synthesize three theories, Fricker’s epistemic injustice theory, the capabilities approach, and standpoint theory, to conceptualize a multi-level framework for understanding epistemic injustice and its effects on individual and collective well-being. We also illustrate how identifying the key assumptions underlying these theories can help derive a robust research agenda for better understanding epistemic injustice and mitigating its effects.
