CFP: "Issues in XAI #4": Explainable AI between ethics and epistemology
Submission deadline: January 15, 2022
Workshop dates: May 23-25, 2022
The workshop focuses on the normative and epistemic aspects of explainable AI (XAI), two dimensions that are especially relevant to each other in this field. The goal of XAI is first and foremost epistemic: to provide knowledge or understanding of the inner workings of AI models. Nevertheless, relevant normative questions about transparency, responsibility, and accountability also interact with XAI. The workshop therefore aims at a synergy between epistemological concerns and non-epistemological ones (e.g., ethical, political, economic, societal).

On the one hand, the epistemic status of XAI tools can help inform their role as a solution to non-epistemological/normative questions. If current XAI tools fail to provide understanding of the inner workings of AI models, e.g. yielding only limited knowledge of the importance of input features, what role can they play in facilitating meaningful human control? To what extent can they support human agency and clarify accountability questions? Being clearer on the epistemic status of users can yield more fine-grained answers to these philosophical questions.

On the other hand, the normative questions can further inform what the appropriate epistemic goals are for (not yet developed) XAI tools. If the normative questions turn out to require a specific epistemic status with respect to the model that is used, then this can support epistemological discussions on how to reach that status. What is the explanatory logic for XAI that meets the epistemic and non-epistemic standards required of it? How do normative dimensions of epistemic notions impact the epistemological debate on XAI?

This range of topics at the intersection of epistemology and ethics in XAI is important and yet largely underdeveloped. With this workshop we hope to bring these two parts of philosophy closer together. While the workshop is not focused on one specific topic, there is a special interest in medical AI.
We invite submissions from all related academic fields, including philosophy of (computer) science, epistemology, political and moral philosophy, political theory, legal theory, and social theory. Possible questions/topics include:
- The logic of scientific explanation for/in AI
- The epistemic and moral goods expected from explaining AI (e.g., understanding, knowledge, moral justification)
- Trustworthy AI: benefits and limits of Transparency, Accountability, Explainability, and Computational Reliabilism
- Which epistemic and non-epistemic values (social, economic, political, moral, etc.) are relevant for XAI, and to what extent do explanations in AI affect non-epistemic values?
- Are responsibility, accountability and contestability possible without XAI?
- What forms of backward- and forward-looking responsibility are tailored to XAI and notions of trustworthiness?
- How can forms of epistemic injustice (hermeneutical, testimonial, and otherwise) be ameliorated?
This list is non-exhaustive, and submissions on related topics are welcome. If you are interested in participating in this expert workshop, please submit an anonymized abstract of no more than 500 words via EasyChair (https://easychair.org/my/conference?conf=eexai2021), along with an email including your name, title, and affiliation. Participants will be asked to give a presentation of their paper (25 min talk + 20 min Q&A). Authors of accepted abstracts will be invited to submit a full paper to a special issue, possibly in Ethics and Information Technology. The workshop will take place in person at Delft University of Technology; participants who cannot travel may join online.
Organizers:
- Filippo Santoni de Sio (TU Delft)
- Federica Russo (University of Amsterdam)
Best student paper award:
We will be offering a best student paper award of up to 500 € that can be used to attend the workshop (travel + hotel). Interested students must first submit their abstract and, upon acceptance, will be asked to submit their paper (ca. 5,000 words) for blind review by 15 March. The winner will be announced during the workshop.

Organization: The workshop is sponsored by the European Research Infrastructure SoBigData++ (http://sobigdata.eu), the European Network of Human-Centered Artificial Intelligence Humane-AI (https://www.humane-ai.eu), the KNAW Wetenschapsfondsen Evert Willem Beth Foundation (https://www.knaw.nl/en/awards/funds/evert-willem-beth-stichting), and the TU Delft TPM-AI Lab (https://www.tudelft.nl/tbm/tpm-ai-lab). This workshop is part of the workshop series "Issues in Explainable AI" (www.explainable-intelligent.systems).

Key dates:
- Abstract submission deadline: 15 January 2022
- Notification to participants: 1 March 2022
- Best student paper submission: 15 March 2022
- Workshop: 23-25 May 2022
Contact: Jonne [email protected]