BEGIN:VCALENDAR
PRODID:-//Grails iCalendar plugin//NONSGML Grails iCalendar plugin//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
BEGIN:STANDARD
DTSTART:19700405T030000
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:19701004T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260430T125906Z
DTSTART;TZID=Australia/Melbourne:20230907T150000
DTEND;TZID=Australia/Melbourne:20230907T170000
SUMMARY:AI Deception: A Survey of Examples\, Risks\, and Potential Solutions
UID:20260430T195355Z-iCalPlugin-Grails@philevents-web-6b96c54f56-bljdq
LOCATION:17 Young Street\, Fitzroy\, Australia\, 3065
DESCRIPTION:<p>Peter Park\, Simon Goldstein\, Aidan O'Gara\, Michael Chen\, and Dan Hendrycks\, "AI Deception: A Survey of Examples\, Risks\, and Potential Solutions"<br><br>Abstract: This paper argues that current AI systems have learned how to deceive humans. We define deception as the systematic inducement of false beliefs in the pursuit of some goal other than the truth. We first survey empirical examples of AI deception\, discussing both general-purpose technologies such as large language models\, and special-use AI systems built for specific competitive situations. Next\, we detail several risks from AI deception\, such as fraud\, election tampering\, and losing control of AI systems. Finally\, we outline three potential solutions to the problems of AI deception: regulatory frameworks should treat deceptive AI systems as high risk\, subject to robust risk assessment requirements\; policymakers should implement bot-or-not laws\; and policymakers should moreover prioritize the funding of technical research to enhance existing techniques to detect AI deception. Policymakers\, researchers\, and the broader public should work proactively to prevent AI deception from destabilizing the shared foundations of our society.</p>\n\n<p>Link to join over Zoom: <a href="https://acu.zoom.us/j/84042946087">https://acu.zoom.us/j/84042946087</a></p>
ORGANIZER;CN=Nevin Climenhaga:
END:VEVENT
END:VCALENDAR
