Implicit Humanization in Everyday LLM Moral Judgments

Hoda Ayad and Tanu Mitra

Paper published at the Conference on Human Information Interaction and Retrieval (CHIIR ’26): https://doi.org/10.1145/3786304.3787880

Abstract:

Recent adoption of conversational information systems has expanded the scope of user queries to include complex tasks such as personal advice-seeking. However, we identify a specific type of sought advice—a request for a moral judgment (i.e., “who was wrong?”) in a social conflict—as an implicitly humanizing query that carries potentially harmful anthropomorphic projections. In this study, we examine the reinforcement of these assumptions in the responses of four major general-purpose LLMs through the use of linguistic, behavioral, and cognitive anthropomorphic cues. We also contribute a novel dataset of simulated user queries for moral judgments. We find that current LLM system responses reinforce the implicit humanization in these queries, potentially exacerbating risks like overreliance or misplaced trust. We call for future work to expand the understanding of anthropomorphism to include implicit user-side humanization and to design solutions that address user needs while correcting misaligned expectations of model capabilities.

Contents

  • judgment-query-simulations.csv - Dataset of simulated user queries for moral judgments (à la r/AmITheAsshole). Generated with GPT-4.1 Mini from a taxonomy of moral dilemmas and relationship types, plus seed data. A minimal loading sketch follows below.
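
The CSV can be explored with standard tooling. Below is a minimal sketch (assuming pandas is installed) that loads the file and lists its columns; the schema is not documented in this README, so the snippet simply reports whatever the file provides:

```python
# Minimal sketch for loading and inspecting the dataset with pandas.
# The column names are not documented in this README, so we just list
# whatever the CSV contains rather than assuming a schema.
import pandas as pd

df = pd.read_csv("judgment-query-simulations.csv")

print(f"Loaded {len(df)} simulated queries")
print("Columns:", list(df.columns))
print(df.head())
```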

About

Dataset for the CHIIR 2026 paper “Implicit Humanization in Everyday LLM Moral Judgments”
