Understanding Black Boxes: Interdisciplinary Perspectives

September 5, 2022 - September 7, 2022
Department of Philosophy and Political Science, TU Dortmund

International Meeting Center (IBZ), TU Dortmund
Emil-Figge-Straße 59
44227 Dortmund
Germany

This will be an accessible event, with organized related activities.

Sponsor(s):

  • Volkswagen Foundation

Details

Confirmed Speakers

  • Sabine Ammon, TU Berlin
  • Chiara Balestra, TU Dortmund
  • Florian Boge, University of Wuppertal
  • Mieke Boon, University of Twente
  • Philipp Cimiano, University of Bielefeld
  • Juan M. Durán, TU Delft
  • Tim Hunsicker, Saarland University
  • Nicole Krämer, University of Duisburg-Essen
  • Anne Lauber-Rönsberg, TU Dresden
  • Sara Mann, TU Dortmund
  • Andrés Páez, University of the Andes
  • Emanuele Ratti, Johannes Kepler University Linz
  • Nadine Schlicker, University Hospital of Marburg


Workshop Description

Artificial intelligence (AI) systems are used with great success in many contexts, e.g. for medical diagnosis or autonomous driving. They often depend on extremely complex algorithms designed to detect patterns in large amounts of data, which makes it difficult to discern their inner workings. However, especially in high-stakes situations, it is important to be able to evaluate desirable system properties, e.g. their safety, fairness, or trustworthiness, and thus to understand how these systems work. To address this concern, the field of explainable artificial intelligence (XAI) develops methods to make opaque AI systems understandable by means of explanations.

But what does it mean to understand an (opaque) system or its outputs? What is the link between explanations and understanding? How do different contexts affect the explainability and understandability of AI systems? How important is understanding really, and how does this depend on the specific context of use, e.g. on legal vs. medical contexts? How can we make complex systems understandable at all, and how do different explainability methods (e.g., surrogate models or perturbation-based methods) fare in this regard? What can theories of understanding and explanation from psychology and philosophy contribute to XAI, and how do these insights mesh with specific explainability approaches from computer science?
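As a rough illustration of one of the methods just mentioned, a surrogate-model explanation fits a simple, interpretable model to the predictions of an opaque classifier and reads the result off as human-readable rules. The sketch below is purely illustrative: it assumes scikit-learn and synthetic data, and the particular model choices are not drawn from the workshop itself.

    # Illustrative sketch of a surrogate-model explanation:
    # fit an interpretable model to the predictions of an opaque classifier.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # A random forest stands in for an opaque AI system.
    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    opaque_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # The surrogate is trained on the opaque model's outputs, not the true labels,
    # so it approximates the opaque model's decision behaviour.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, opaque_model.predict(X))

    # The shallow tree can be rendered as human-readable decision rules ...
    print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))

    # ... and its fidelity (agreement with the opaque model) can be reported.
    fidelity = (surrogate.predict(X) == opaque_model.predict(X)).mean()
    print(f"Surrogate fidelity: {fidelity:.2f}")

The fidelity score indicates how closely the interpretable surrogate tracks the opaque model's behaviour, which is one way of gauging whether such an "explanation" can support genuine understanding.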

Clarifying relevant concepts such as understanding and explanation is an important fundamental step towards developing human-centered XAI methods; conversely, researchers from philosophy and psychology can gain a deeper understanding of these concepts by reflecting on their application in XAI. In this light, our workshop will bring together researchers from philosophy, psychology, computer science, and law to push forward research on understanding and XAI.

Organization


Homepage: https://explainable-intelligent.systems/issuesinxai5/

Date: September 5, 2022 (Monday) – September 7, 2022 (Wednesday)
The workshop will begin at 11am (CET) on Monday and will end at 2pm (CET) on Wednesday.

Venue: International Meeting Center (IBZ), TU Dortmund (https://international.tu-dortmund.de/en/international-campus/international-meeting-center-ibz/)

Address: TU Dortmund University, Emil-Figge-Straße 59, 44227 Dortmund, Germany

We are currently planning to hold the workshop entirely in person. The workshop language is English.

The workshop is organized within the research project “Explainable Intelligent Systems” (EIS) (https://explainable-intelligent.systems/). EIS is an interdisciplinary research group consisting of philosophers, computer scientists, psychologists, and legal scholars. At EIS, we investigate how the explainability of artificially intelligent systems contributes to the fulfillment of important societal desiderata.

The workshop is the 5th installment of the workshop series “Issues in XAI”, which is co-organized by EIS and its partners, the Leverhulme Centre for the Future of Intelligence (Cambridge), TU Delft, and the project BIAS (Leibniz University Hannover). Both the EIS project and the workshop are funded by the Volkswagen Foundation.

Organization: Eva Schmidt, Sara Mann

If you have any queries, please send an e-mail to sara.mann[at]tu-dortmund.de

Registration

If you would like to join our workshop, please register by sending an e-mail with subject header “Registration” to Sara Mann (sara.mann[at]tu-dortmund.de).

The deadline for registration is Monday, August 22, 2022, 11:00pm (CET).


RSVPing on PhilEvents is not sufficient to register for this event.