"Algorithmic Fairness and Resentment" - 6th Colloquium on Stereotyping & Medical AI
Zoë Johnson King (University of Southern California), Boris Babic (University of Toronto)

August 12, 2021, 5:00pm - 6:30pm
Department of Philosophy, King's College London

London
United Kingdom

Sponsor(s):

  • Peter Sowerby Foundation

Details

We are very pleased to announce the sixth colloquium in our series on Stereotyping & Medical AI, which is co-organised by Minorities and Philosophy (MAP) and the Sowerby Philosophy & Medicine Project:

"Algorithmic Fairness and Resentment"

Speakers: Prof. Zoë Johnson King (USC) & Prof. Boris Babic (Toronto)

Chair: Willa Saadat (KCL), KCL-MAP Lead

Zoë Johnson King is an Assistant Professor in the Philosophy Department at USC. Her research specialties are Ethics, Metaethics, Epistemology, Decision Theory, and Philosophy of Law. She works mainly on non-ideal moral psychology, thinking about motivation and creditworthiness for squishy, messy humans in an unjust world. She is also currently leading an anti-racism discussion group for members of November Project Brooklyn and November Project New York.

Boris Babic has a joint appointment in the Department of Philosophy and the Department of Statistics at the University of Toronto. Previously, he was an assistant professor in the Decision Sciences Department at INSEAD and a postdoctoral fellow at the California Institute of Technology. His research specialities are Artificial Intelligence and Machine Learning, Decision Theory, and Epistemology. He is primarily interested in questions about Bayesian inference and decision-making, and in normative questions about the implementation of artificial intelligence and machine learning.


About the Summer Colloquium Series on Stereotyping & Medical AI

The aim of this series on Stereotyping and Medical AI is to explore philosophical issues, in particular ethical and epistemological issues, around stereotyping in medicine, with a specific focus on the use of artificial intelligence in health contexts. We are particularly interested in whether medical AI that uses statistical data to generate predictions about individual patients can be said to “stereotype” patients, and whether we should draw the same ethical and epistemic conclusions about stereotyping by artificial agents as we do about stereotyping by human agents, i.e., medical professionals.

Other questions we are interested in exploring as part of this series include but are not limited to the following:

  • How should we understand “stereotyping” in medical contexts?
  • What is the relationship between stereotyping and bias, including algorithmic bias? And how should we understand “bias” in different contexts?
  • Why does stereotyping in medicine often seem less morally or epistemically problematic than stereotyping in other domains, such as the legal, criminal, financial, and educational domains? Might beliefs about biological racial realism in the medical context explain this asymmetry?
  • When and why might it be wrong for medical professionals to stereotype their patients? And when and why might it be wrong for medical AI, i.e. artificial agents, to stereotype patients?
  • How do (medical) AI beliefs relate to the beliefs of human agents, particularly with respect to agents’ moral responsibility for their beliefs?
  • Can non-evidential or non-truth-related considerations be relevant with respect to what beliefs medical professionals or medical AI ought to hold? Is there moral or pragmatic encroachment on AI beliefs or on the beliefs of medical professionals?
  • What are the potential consequences, in medicine, of patients or doctors being stereotyped by doctors or by medical AI? Can patients, for example, be doxastically wronged by doctors or by AI in virtue of being stereotyped by them?

We will be tackling these topics through a series of online colloquia hosted by the Sowerby Philosophy and Medicine Project at King's College London. The colloquium series will feature a variety of contributors from across the disciplinary spectrum. We hope to ensure a discursive format with time set aside for discussion and audience Q&A. This event is open to the public and all are welcome.


Our working line-up for the remainder of this summer series is as follows, with a few additional details to be confirmed:

June 17  Professor Erin Beeghly (Utah), “Stereotyping and Prejudice: The Problem of Statistical Stereotyping” 

July 1  Dr. Kathleen Creel (HAI, EIS, Stanford), “Let's Ask the Patient: Stereotypes, Personalization, and Risk in Medical AI”

July 15  Dr. Annette Zimmermann (York, Harvard), "Structural Injustice, Doxastic Negligence, and Medical AI"

July 22  Dr. William McNeill (Southampton), "Neural Networks and Explanatory Opacity"

July 29  Special Legal-Themed Panel Discussion: Dr. Jonathan Gingerich (KCL), Dr. Reuben Binns (Oxford), Prof. Georgi Gardiner (Tennessee), Prof. David Papineau (KCL), Chair: Robin Carpenter (London Medical Imaging & AI Centre for Value Based Healthcare)

August 12  Professor Zoë Johnson King (USC) & Professor Boris Babic (Toronto), "Algorithmic Fairness and Resentment"

September 2  Dr. Geoff Keeling (HAI, LCFI, Google)

September 9  Professor Rima Basu (Claremont McKenna)

* For those unable to attend these colloquia, please feel free to register for our events in order to be notified once recordings of previous colloquia become available! You can find recordings of past colloquia on the Philosophy & Medicine Project website here: https://www.philosophyandmedicine.org/summercolloquia

Register for this event here: 

Registration

Registration is required, via an external site, by August 12, 2021, 5:00pm BST.

RSVPing on PhilEvents is not sufficient to register for this event.