BEGIN:VCALENDAR
PRODID:-//Grails iCalendar plugin//NONSGML Grails iCalendar plugin//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VEVENT
DTSTAMP:20260429T071833Z
DTSTART;TZID=Europe/Bucharest:20250905T090000
DTEND;TZID=Europe/Bucharest:20250907T170000
SUMMARY:LLMs and digital autonomy: from misinformation to context collapse
UID:20260430T044503Z-iCalPlugin-Grails@philevents-web-6b96c54f56-bljdq
LOCATION:Splaiul Independentei nr. 204\, Bucharest\, Romania\, 060024
DESCRIPTION:<p>The conference will be held September 5-7\, 2025\, at the <strong>University of Bucharest</strong>\, Romania\, as part of the ICUB grant <strong>"The effects of LLM interaction in digital and virtual environments on TOM"</strong> for enhancing institutional performance at the University of Bucharest\, which gathers a research team from Philosophy\, Linguistics\, Cognitive Psychology and Evolutionary Biology.</p>\n<p>It will have a <strong>mixed format</strong>: speakers may choose to present online only or face to face at the event's location (in the latter case\, their session will have a live audience and will also be streamed to remote participants). Regular presentations will be <strong>30 minutes long\, followed by a 10-minute Q&amp\;A</strong> and a 10-minute break.</p>\n<p>The conference explores the ways in which LLMs affect user autonomy in the digital environment. It targets the issues raised by interacting with such AI systems\, especially users' data privacy\, data control (the ways in which users get to choose what happens with their data)\, and the ways in which LLMs influence users' decision-making.</p>\n<p><strong>Panels include\, but are not limited to:</strong></p>\n<p><strong>1. LLMs\, privacy and data control: how LLMs bear on user autonomy.</strong></p>\n<p><strong>2. LLMs and their impact on misinformation\, echo chambers\, fake news and context collapse (opportunities and risks).</strong></p>\n<p><strong>3. Accountability\, confidentiality\, responsibility and intellectual rights attribution when LLMs collaborate with human users (data generation such as text production\; therapy bots).</strong></p>\n<p><strong>4. Do users and LLMs interact epistemically? Are LLM outputs reliable? What leads to epistemic dependence on LLMs? Should it be prevented? If so\, how?</strong></p>\n<p><strong>5. Can we <em>trust</em> therapy bots? (autonomy\, trust\, and trustworthiness: AI/chatbot interaction in the context of mental health)</strong></p>
ORGANIZER;CN="Sandra-Catalina Branzaru, Andrei Mărăşoiu":
END:VEVENT
END:VCALENDAR
