BEGIN:VCALENDAR
PRODID:-//Grails iCalendar plugin//NONSGML Grails iCalendar plugin//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VEVENT
DTSTAMP:20260313T011204Z
DTSTART;TZID=America/New_York:20260123T120000
DTEND;TZID=America/New_York:20260123T133000
SUMMARY:Atoosa Kasirzadeh - How should we think about AI risk?
UID:20260313T065343Z-iCalPlugin-Grails@fe80:0:0:0:d82f:e3ff:fe6c:fc8a%3
LOCATION:University of Pittsburgh\, 4200 Fifth Avenue\, Pittsburgh\, United States\, 15260
DESCRIPTION:<p>The Center for Philosophy of Science at the University of Pittsburgh invites you to join us for our Lunch Time Talk.&nbsp\;Attend in person at 1117 Cathedral of Learning or visit our live stream on YouTube at&nbsp\;<a rel="noopener" href="https://www.youtube.com/channel/UCrRp47ZMXD7NXO3a9Gyh2sg">https://www.youtube.com/channel/UCrRp47ZMXD7NXO3a9Gyh2sg</a>.</p>\n\n<p><strong>LTT:&nbsp\;&nbsp\;&nbsp\;<a href="https://kasirzadeh.org/">Atoosa Kasirzadeh</a></strong></p>\n<p>Friday\, January 23 @ 12:00 pm&nbsp\;-&nbsp\;1:30 pm&nbsp\;EST</p>\n\n<p><strong>Title:</strong> <strong>How should we think about AI risk?</strong></p>\n<p><strong>Abstract:</strong></p>\n<p>As artificial intelligence (AI) systems become increasingly integrated into the fabric of society\, the discourse regarding their risks has fractured into two primary camps: AI Ethics and AI Safety. The former is perceived to focus on immediate societal harms (such as algorithmic bias and transparency)\, while the latter is perceived to concentrate on long-term\, often existential risks from artificial superintelligence. This dichotomy has created a conceptual and practical schism that hinders effective progress. But do these perceptions justify the dichotomy?&nbsp\;In this talk\, I argue that this dichotomy is a false choice that overlooks profound theoretical and dynamic overlaps. Building on recent work (<a rel="noopener" href="https://link.springer.com/article/10.1007/s11098-025-02301-3">Kasirzadeh\, 2025</a>\;&nbsp\;<a rel="noopener" href="https://www.nature.com/articles/s42256-025-01020-y">Gyevnar &amp\; Kasirzadeh\, 2025</a>)\, I present new computational investigations that bridge these perspectives. I then discuss the ramifications of this investigation for conceptualizing and mitigating AI risks going forward.</p>\n<p>This talk will be available online:</p>\n<p>Zoom:&nbsp\;&nbsp\;<a href="https://pitt.zoom.us/j/95838205595">https://pitt.zoom.us/j/95838205595</a></p>\n<p><br>YouTube:&nbsp\;<a href="https://www.youtube.com/channel/UCrRp47ZMXD7NXO3a9Gyh2sg">https://www.youtube.com/channel/UCrRp47ZMXD7NXO3a9Gyh2sg</a></p>
ORGANIZER;CN=Edouard Machery:
END:VEVENT
END:VCALENDAR
