No Joke: Refusal Policies for Cross-Cultural Sensitivity
Mengyuan Zhou, Xuenan Cao
This study investigates how AI language models enforce refusal policies in cross-cultural humor involving colonial histories. Through systematic testing of seven colonizer-colonized country pairs (e.g., Germany-Namibia, France-Algeria), this paper analyzes ChatGPT's selective engagement with sensitive narratives. Findings reveal inconsistent refusal rates: prompts concerning Germany's colonial past face the highest restrictions, followed by British and French contexts, while Spanish, Portuguese, and Dutch colonial pairs encounter minimal refusals. Bulk requests and U.S.-related jokes activate additional safeguards, highlighting policy biases. By shifting attention from output bias to refusal patterns, this study demonstrates how ostensibly neutral safety mechanisms can reinforce digital colonialism by privileging dominant historical narratives and silencing marginalized perspectives. It also introduces refusal analysis as a novel metric for cross-cultural sensitivity in AI and underscores the urgency of culturally informed safety frameworks to mitigate systemic inequities in global discourse.