AI and the Risk of Undermining Human Responsibility for Good and Bad Outcomes
Sven Nyholm (Ludwig-Maximilians-Universität München)

April 28, 2023, 11:00am - 12:30pm

This event is online

Organisers:

University Vita-Salute San Raffaele

Details

Note that the start time is CEST (e.g., Vienna, Amsterdam, Berlin).


How to attend (Zoom link)

https://uni-graz.zoom.us/j/68025279259?pwd=amlNTjlOYXltem4zdTl4bUJpUC9pQT09

Meeting ID: 680 2527 9259

Passcode: 934153

Abstract

In my presentation, I will discuss a significant risk related to letting artificial intelligence (AI) take over tasks we otherwise perform ourselves, using our natural intelligence. The risk is that this can disrupt or undermine human responsibility in important ways. This idea – sometimes called the problem of responsibility gaps – is usually discussed in relation to blame for bad outcomes (e.g., crashing self-driving cars or out-of-control military robots). However, I will argue that we should also consider the issue of praise for good outcomes. A risk related to handing over tasks normally requiring human intelligence to AI technologies is that this can create gaps with respect to opportunities for human beings to display talent, effort, sensitivity to reasons, and other things that make people worthy of recognition or praise. I will argue that such “positive responsibility gaps” are harder to fill than negative gaps related to blame for bad outcomes. This relates to important differences in widely accepted criteria for deserving praise for good outcomes, on the one hand, and criteria for deserving blame for bad outcomes, on the other. I will illustrate this asymmetry by focusing on generative AI (such as large language models) as a case study. 


Registration

Not required
