To What Extent Do LLMs Respect the Philosophical Stance of Users?
Shan Li, Jiayin Qi, Kaifan Chen
This paper explores the ability of large language models (LLMs) to understand and respect users' philosophical positions. It reviews the problems of bias and misleading information in LLM applications and poses two research questions: (1) How well do LLMs understand and respect different philosophical positions when giving advice? (2) Do LLMs adjust their responses to a user's philosophical position, or do they default to a specific philosophical perspective? The study collects data on major philosophical traditions and their core values, constructs a corpus of prompts covering different scenarios, and analyzes the LLMs' responses. Through systematic evaluation, it aims to provide guidance for LLM developers and to enrich the theory at the intersection of philosophy and artificial intelligence.