BEGIN:VCALENDAR
PRODID:-//Grails iCalendar plugin//NONSGML Grails iCalendar plugin//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VEVENT
DTSTAMP:20260417T185853Z
DTSTART;TZID=America/Toronto:20201202T040000
DTEND;TZID=America/Toronto:20201202T040000
SUMMARY:Call for Papers "The ethics and epistemology of explanatory AI in medicine and healthcare" in Ethics and Information Technology
UID:20260420T084515Z-iCalPlugin-Grails@philevents-web-f5d4878dd-g4ggw
DESCRIPTION:<p>Dear colleagues\,</p>\n<p>With the usual apologies for cross-posting\, we are pleased to share this call for papers for a Special Issue on &ldquo\;The ethics and epistemology of explanatory AI in medicine and healthcare&rdquo\; in Ethics and Information Technology.</p>\n<p>Please find the full call for papers here: https://www.springer.com/journal/10676/updates/18649082</p>\n<p>We are particularly interested in contributions that shed new light on the following questions:</p>\n<p>&bull\; What are the distinctive characteristics of explanations in AI for medicine and healthcare?</p>\n<p>&bull\; Which epistemic and normative values (e.g.\, explainability\, accuracy\, transparency) should guide the design and use of AI in medicine and healthcare?</p>\n<p>&bull\; Does AI in medicine pose particular requirements for explanations?</p>\n<p>&bull\; Is explanatory pluralism a viable option for medical AI (i.e.\, pluralism of disciplines and pluralism of agents receiving/offering explanations)?</p>\n<p>&bull\; Which virtues (e.g.\, social\, moral\, cultural\, cognitive) underpin explainable medical AI?</p>\n<p>&bull\; What is the epistemic and normative connection between explanation and understanding?</p>\n<p>&bull\; How are trust (e.g.\, normative and epistemic) and explainability related?</p>\n<p>&bull\; What kinds of explanations are required to increase trust in medical decisions?</p>\n<p>&bull\; What is the role of transparency in explanations in medical AI?</p>\n<p>&bull\; How are accountability and explainability related in medical AI?</p>\n<p>The deadline for submission is May 1st\, 2021. We expect the Special Issue to be published by the end of next year. Papers should be prepared for blind peer review and not exceed 8000 words.</p>\n<p>If you have any questions\, please do not hesitate to contact one of the guest editors: Martin Sand\, Karin Jongsma or me.</p>\n<p>Best wishes\,</p>\n<p>Juan</p>\n<p>-------</p>\n<p>Dr. Juan M. Dur&aacute\;n</p>\n<p>TU Delft</p>\n<p>Assistant Professor</p>\n<p>Department of Values\, Technology and Innovation</p>\n<p>Faculty of Technology\, Policy and Management</p>\n<p>Jaffalaan 5</p>\n<p>2628 BX Delft - B 4.310</p>\n<p>The Netherlands</p>\n<p>2019 Herbert A. Simon Award - IACAP</p>\n<p>2020 Fellow Netherlands Institute for Advanced Studies (NIAS)</p>\n<p>juanmduran.net - Academia.edu</p>
END:VEVENT
END:VCALENDAR
