BEGIN:VCALENDAR
PRODID:-//Grails iCalendar plugin//NONSGML Grails iCalendar plugin//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VEVENT
DTSTAMP:20260511T183928Z
DTSTART;TZID=Europe/Copenhagen:20261001T090000
DTEND;TZID=Europe/Copenhagen:20261002T170000
SUMMARY:1st Critical AI Safety Workshop
UID:20260521T235906Z-iCalPlugin-Grails@philevents-web-6b96c54f56-bljdq
TZID:Europe/Copenhagen
LOCATION:Copenhagen\, Denmark
DESCRIPTION:This workshop aims to bring together scholars from different disciplines who are working to characterize\, map\, and critique the field of AI Safety and AI Existential Risk research.\n\nMain questions include\, but are not limited to:\n- What is the landscape of AI safety and existential risk communities and research\, and what are the tensions within those?\n- Can AI safety or AI existential risk be described as an ideology?\n- What assumptions about cognitive science\, economics\, sociology\, society\, or power\, amongst others\, underlie and confound AI Safety?\n- What are the formal methods of the field\, and how can they be improved?\n- What are the funding flows in the field? How easy is it for individuals to receive funding\, and what factors are considered in funding decisions?\n- What policy proposals does the AI safety community lobby for\, and through what channels?\n- How is the community established\, what are their recruiting strategies\, and what makes them so successful?\n- How tightly interlinked is the research philosophy with other non-scientific fields\, like science fiction\, hype\, speculation\, and imagination?\n\nFor more information\, please see the CfP: https://philevents.org/event/show/149737
ORGANIZER;CN="Ninell Oldenburg, Nina Rajcic, Anders Søgaard, Bokar N'Diaye":
END:VEVENT
END:VCALENDAR
