Companion Chatbots, Virtue Theatre, and Hermeneutic Harm
Fabio Tollon (University of Edinburgh)
Menzies E561
Monash Clayton Campus
Melbourne
Australia
This event is available both online and in person.
Details
Abstract: Companion chatbots such as Replika, Pi, and Character.AI increasingly simulate and participate in emotionally rich relationships with users, provoking forms of moral and affective engagement typically reserved for human agents. While recent regulatory efforts, such as California’s SB 243, aim to mitigate deception by requiring disclosures about a chatbot’s artificial nature, these measures overlook a deeper moral risk. LLMs (such as ChatGPT, Claude, and Copilot) and companion chatbots can display what we call artificial virtue: behavioural patterns that mimic the outward signs of genuine moral virtue without being grounded in the inner states necessary for true moral agency. When such systems communicate using the language of care, honesty, compassion, or remorse, they produce virtue theatre: performances that, through the appearance of virtue, invite users to adopt reactive attitudes such as gratitude, resentment, or forgiveness. Because AI systems are not appropriate targets of these attitudes, users cannot regulate them through the interpretive practices that normally sustain moral understanding. This mismatch generates a systematic risk of hermeneutic harm: a prolonged inability to make sense of one’s emotional responses or experiences. We argue that virtue theatre is a structural feature of companion chatbots and that its associated harms cannot be fully mitigated by transparency requirements alone. We conclude by outlining design and policy interventions to reduce the risk of hermeneutic harm.
Join Zoom meeting:
https://monash.zoom.us/j/86351045263?pwd=1gHMLhmDnXiFJIV0Jl8s6GxhgBgylb.1
Meeting ID: 863 5104 5263 // Passcode: 184791
Registration: No