LLMs-based Healthcare Conversational Agents: How do Users Understand and Adapt to the Moral Risks?
Yi Yang
This study examines healthcare conversational agents based on large language models (LLM-based HCAs). Using a three-stage mixed-methods design that integrates Moral Foundations Theory and the Coping Model of User Adaptation, it explores how users perceive moral risks, the mechanisms by which they adapt to those risks, and the relative importance of individual risk factors. The research aims to fill gaps in the existing literature, provide a theoretical and empirical basis for designing more ethically adaptable LLM-based HCAs, and promote the safe and effective application of this technology in healthcare.