BEGIN:VCALENDAR
PRODID:-//Grails iCalendar plugin//NONSGML Grails iCalendar plugin//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VEVENT
DTSTAMP:20260408T050523Z
DTSTART;TZID=America/New_York:20260313T161500
DTEND;TZID=America/New_York:20260313T174500
SUMMARY:Living with Artificial Agents
UID:20260408T061133Z-iCalPlugin-Grails@philevents-web-f5d4878dd-r5qzs
LOCATION:Gore Hall\, Newark\, United States\, 19716
DESCRIPTION:<p>A lecture\, free and open to the public with a reception to follow.</p>\n<p>Abstract</p>\n<p>"In order to overcome the limitations of Large Language Models trained exclusively on linguistic data\, AI development is now widening and deepening AI systems' engagement with the world through <em>agency</em>--behavior that is internally guided\, in the sense that it is based upon the systems' own perception of the environment and ability to use this perceptual information to select actions in light of internal goals. However\, along with the promise of improved performance\, this brings with it the risk that artificial agents will escape human control. Some form of self-regulation responsive to concerns of safety and ethics needs to be part of AI agents' internal guidance--just as it must be if human agents are to behave morally. Interestingly\, the psychology of the human development of autonomous moral capacities increasingly emphasizes the role of experiential learning over innate principles--might this enable us to see how artificial agents\, whose great strength is learning\, could acquire sensitivity to concerns of safety and ethics?"</p>
ORGANIZER;CN=Joel Pust:
END:VEVENT
END:VCALENDAR
