CFP: Reminder - Call for Papers on XAI - ETIN

Submission deadline: May 1, 2021

Dear colleagues,

With the usual apologies for cross-posting, we are pleased to share this call for papers for a Special Issue on “The ethics and epistemology of explanatory AI in medicine and healthcare” in Ethics and Information Technology.

Ethics and Information Technology is calling for the submission of papers for a Special Issue focusing on the ethics and epistemology of explainable AI in medicine and healthcare. Modern medicine is now largely driven by diverse AI systems. While medical AI is assumed to be able to “make medicine human again” (Topol, 2019) by diagnosing diseases more accurately and thus freeing doctors to spend more time with their patients, a major issue that emerges with this technology is that of explainability, either of the system itself or of its outcomes.

In recent debates, it has been claimed that “[for] the medical domain, it is necessary to enable a domain expert to understand why an algorithm came up with a certain result” (Holzinger et al. 2020). Holzinger and colleagues suggest that being unable to provide explanations for certain automated decisions could have adverse effects on patients’ trust in those decisions (p. 194). But does trust really require explanation and, if so, which kind of explanation? Alex John London has forcefully contested the requirement of explainability, suggesting that it sets a standard that cannot be upheld in health care: several interventions (e.g., treatments such as Aspirin) are commonly accepted and applied because they are deemed effective, even though we lack an understanding of their underlying causal mechanisms. Accuracy, London argues, is therefore a more important value for medical AI than explainability (London 2019). At this juncture, the central claims remain disputed: Is explainability philosophically and computationally possible (Durán, 2021)? Are there suitable alternatives to explainability (e.g., accuracy)? Does explainability play, or should it play, a role - and if so, which one - in the responsible implementation of AI in medicine and healthcare?

The present Special Issue aims to dive into the heart of this problem, thereby connecting computer science and medical ethics with philosophy of science, philosophy of medicine, and philosophy of technology. All contributions must relate technical and epistemological issues to the normative and social problems raised by the use of AI in medicine and healthcare.

We are particularly interested in contributions that shed new light on the following questions:

• What are the distinctive characteristics of explanations in AI for medicine and healthcare?

• Which epistemic and normative values (e.g., explainability, accuracy, transparency) should guide the design and use of AI in medicine and healthcare?

• Does AI in medicine pose particular requirements for explanations?

• Is explanatory pluralism a viable option for medical AI (i.e., pluralism of discipline and pluralism of agents receiving/offering explanations)?

• Which virtues (e.g., social, moral, cultural, cognitive) are at the basis of explainable medical AI?

• What is the epistemic and normative connection between explanation and understanding?

• How are trust (e.g., normative and epistemic) and explainability related?

• What kind of explanations are required to increase trust in medical decisions?

• What is the role of transparency in explanations in medical AI?

• How are accountability and explainability related in medical AI?

- Durán JM. Dissecting scientific explanation in AI (sXAI): A case for medicine and healthcare. Artificial Intelligence. 2021;297:103498. doi: 10.1016/j.artint.2021.103498.

- Holzinger A, Carrington A, Müller H. Measuring the Quality of Explanations: The System Causability Scale (SCS). KI - Künstliche Intelligenz. 2020;34(2):193-8. doi: 10.1007/s13218-020-00636-z.

- London AJ. Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. Hastings Center Report. 2019;49(1):15-21. doi: 10.1002/hast.973.

- Topol EJ. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. New York: Basic Books; 2019.

Important information:

- The deadline for submission is May 1, 2021.

- Papers should be prepared for blind peer review and not exceed 8000 words.

- Submissions: https://www.editorialmanager.com/etin/default.aspx

If you have any questions, please do not hesitate to contact any of the guest editors: Juan M. Durán (J.M.Duran@tudelft.nl), Martin Sand (M.Sand@tudelft.nl), or Karin Jongsma (K.R.Jongsma@umcutrecht.nl).

Best wishes,

The editors
