"Enabling Fairness in Healthcare Through Machine Learning" - Dr. Geoff Keeling (HAI, LCFI, Google) - 7th Colloquium on Stereotyping & Medical AI
Geoff Keeling (Stanford University)

September 2, 2021, 5:00pm - 6:30pm
Department of Philosophy, King's College London

London
United Kingdom

Sponsor(s):

  • Peter Sowerby Foundation

Organisers:

University of Southampton
King's College London

Details

We are very pleased to announce the seventh and penultimate colloquium in our series on Stereotyping & Medical AI, organised by the Sowerby Philosophy & Medicine Project:


Dr. Geoff Keeling (HAI, LCFI, Google) - "Enabling Fairness in Healthcare Through Machine Learning"


Abstract

The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups exceeds their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible use of affirmative algorithms; that is, algorithms trained on diverse datasets that perform better for traditionally disadvantaged groups. Whilst such algorithmic decisions may be unfair, the fairness of algorithmic decisions is not the appropriate locus of moral evaluation. What matters is the fairness of final decisions, such as diagnoses, resulting from collaboration between clinicians and algorithms. We argue that affirmative algorithms can permissibly be deployed provided the resultant final decisions are fair. 

About the Speaker

Dr. Geoff Keeling is an Interdisciplinary Ethics Fellow based between the Center for Ethics in Society and the Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University. His work concerns the ethics of robotics and data-driven technologies; he completed a PhD on the ethics of automated vehicle decision-making. He is also an Associate Fellow at the Leverhulme Centre for the Future of Intelligence (LCFI), where he worked with Dr. Rune Nyrup on the Understanding Medical Black Boxes Project, examining how tools from the philosophy of science might inform disputes about explainable machine learning in medicine.


About the Summer Colloquium Series on Stereotyping & Medical AI

The aim of this series on Stereotyping and Medical AI is to explore philosophical and in particular ethical and epistemological issues around stereotyping in medicine, with a specific focus on the use of artificial intelligence in health contexts. We are particularly interested in whether medical AI that uses statistical data to generate predictions about individual patients can be said to “stereotype” patients, and whether we should draw the same ethical and epistemic conclusions about stereotyping by artificial agents as we do about stereotyping by human agents, i.e., medical professionals. 

Other questions we are interested in exploring as part of this series include but are not limited to the following:

  • How should we understand “stereotyping” in medical contexts?
  • What is the relationship between stereotyping and bias, including algorithmic bias (and how should we understand “bias” in different contexts)?
  • Why does stereotyping in medicine often seem less morally or epistemically problematic than stereotyping in other domains, such as in legal, criminal, financial, educational, etc., domains? Might beliefs about biological racial realism in the medical context explain this asymmetry?
  • When and why might it be wrong for medical professionals to stereotype their patients? And when and why might it be wrong for medical AI, i.e. artificial agents, to stereotype patients?
  • How do (medical) AI beliefs relate to the beliefs of human agents, particularly with respect to agents’ moral responsibility for their beliefs?
  • Can non-evidential or non-truth-related considerations be relevant with respect to what beliefs medical professionals or medical AI ought to hold? Is there moral or pragmatic encroachment on AI beliefs or on the beliefs of medical professionals?
  • What are potential consequences of either patients or doctors being stereotyped by doctors or by medical AI in medicine? Can, for example, patients be doxastically wronged by doctors or AI in virtue of being stereotyped by them?

We will be tackling these topics through a series of online colloquia hosted by the Sowerby Philosophy and Medicine Project at King's College London. The colloquium series will feature a variety of contributors from across the disciplinary spectrum. We hope to ensure a discursive format, with time set aside for discussion and audience Q&A. This event is open to the public and all are welcome.


Please see below for the line-up of this series on Stereotyping & Medical AI:

June 17  Professor Erin Beeghly (Utah), “Stereotyping and Prejudice: The Problem of Statistical Stereotyping” 

July 1  Dr. Kathleen Creel (HAI, EIS, Stanford), “Let's Ask the Patient: Stereotypes, Personalization, and Risk in Medical AI”

July 15  Dr. Annette Zimmermann (York, Harvard), "Structural Injustice, Doxastic Negligence, and Medical AI"

July 22  Dr. William McNeill (Southampton), "Neural Networks and Explanatory Opacity"

July 29  Special Legal-Themed Panel Discussion: Dr. Jonathan Gingerich (KCL), Dr. Reuben Binns (Oxford), Prof. Georgi Gardiner (Tennessee), Prof. David Papineau (KCL), Chair: Robin Carpenter (London Medical Imaging & AI Centre for Value Based Healthcare)

August 12  Professor Zoë Johnson King (USC) & Professor Boris Babic (Toronto), "Algorithmic Fairness and Resentment"

September 2  Dr. Geoff Keeling (HAI, LCFI, Google), "Enabling Fairness in Healthcare Through Machine Learning"

September 9  Professor Rima Basu (Claremont McKenna), "Blood Bans: A Case Study of Defenses of Stereotyping in Medical Contexts"

* For those unable to attend these colloquia, please feel free to register for our events in order to be notified once recordings of previous colloquia become available! You can find recordings of past colloquia on the Philosophy & Medicine Project website here: https://www.philosophyandmedicine.org/summercolloquia

Register for this event here: 

Registration

Registration required (via external site) by September 2, 2021, 5:00pm BST.


RSVPing on PhilEvents is not sufficient to register for this event.