Collectives and AI systems: Agency and responsibility
Delft
Netherlands
Sponsor(s):
- Delft Digital Ethics Centre
Details
In this hybrid (online and offline) workshop, we explore the parallel between two cases of non-human agents that make an important difference to the world: collectives and AI systems. Are humans always responsible for the actions of these entities, or could these entities be held responsible themselves? Can these entities be intentional and moral agents?
The workshop runs from 5 to 7 pm.
Access the online meeting as follows:
Join Zoom Meeting
https://tudelft.zoom.us/j/9853779835?pwd=aldPa3pGOHFaaS9vaDdVRUpYSGNBZz09
Meeting ID: 985 377 9835
Passcode: 301811
Email Michael Klenk at [email protected] if you'd like to join in person in Delft.
Update on talks:
Technological and collective responsibility gaps
Hein Duijf (Utrecht University)
Do responsibility gaps exist? That is, are there situations where a harmful outcome obtains yet no individual can be held responsible for it? The widespread integration of machine learning algorithms, the additive effect of human actions on climate change, and the difficulty of assigning blame in organizations all suggest that allocating responsibility may be hard or even impossible. In this talk, I will explore the analogy between technological and collective responsibility gaps. I will present a diagnosis of the problem and discuss some viable responses to it.
Group Agency and Artificial Intelligence
Christian List (LMU Munich)
The aim of this exploratory paper is to discuss a sometimes recognized but still under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificially intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.
Registration: Yes
October 27, 2022, 5:00pm CET
Who is attending?
42 people are attending.