BEGIN:VCALENDAR
PRODID:-//Grails iCalendar plugin//NONSGML Grails iCalendar plugin//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VEVENT
DTSTAMP:20260415T021825Z
DTSTART;TZID=Europe/Berlin:20220905T110000
DTEND;TZID=Europe/Berlin:20220907T170000
SUMMARY:Understanding Black Boxes: Interdisciplinary Perspectives
UID:20260415T083208Z-iCalPlugin-Grails@philevents-web-f5d4878dd-x5n6c
LOCATION:Emil-Figge-Straße 59\, 44227 Dortmund\, Germany
DESCRIPTION:<p><strong>Confirmed Speakers</strong></p>\n<ul>\n<li>Sabine Ammon\, TU Berlin</li>\n<li>Chiara Balestra\, TU Dortmund</li>\n<li>Florian Boge\, University of Wuppertal</li>\n<li>Mieke Boon\, University of Twente</li>\n<li>Philipp Cimiano\, University of Bielefeld</li>\n<li>Juan M. Dur&aacute\;n\, TU Delft</li>\n<li>Tim Hunsicker\, Saarland University</li>\n<li>Nicole Kr&auml\;mer\, University of Duisburg-Essen</li>\n<li>Anne Lauber-R&ouml\;nsberg\, TU Dresden</li>\n<li>Sara Mann\, TU Dortmund</li>\n<li>Andr&eacute\;s P&aacute\;ez\, University of the Andes</li>\n<li>Emanuele Ratti\, Johannes Kepler University Linz</li>\n<li>Nadine Schlicker\, University Hospital of Marburg</li>\n</ul>\n<p><br><strong>Workshop Description</strong></p>\n<p>Artificially intelligent (AI) systems are used with great success in many contexts\, e.g. for medical diagnosis or autonomous driving. They often depend on extremely complex algorithms designed to detect patterns in large amounts of data\, which makes it difficult to discern their inner workings. However\, especially in high-stakes situations\, it is important to be able to evaluate desirable system properties\, e.g. their safety\, fairness\, or trustworthiness\, and thus to understand how these systems work. To address this concern\, the field of explainable artificial intelligence (XAI) develops methods to make opaque AI systems understandable by means of explanations.</p>\n<p>But what does it mean to understand an (opaque) system or its outputs? What is the link between explanations and understanding? How do different contexts affect the explainability and understandability of AI systems? How important is understanding\, really\, and how does this depend on the specific context of use\, e.g. on legal vs. medical contexts? How can we make complex systems understandable at all\, and how do different explainability methods fare in this regard (e.g.\, surrogate models or perturbation-based methods)? 
 What can theories of understanding and explanation from psychology and philosophy contribute to XAI\, and how do these insights mesh with specific explainability approaches from computer science?</p>\n<p>Clarifying relevant concepts such as understanding and explanation is an important fundamental step towards developing human-centered XAI methods\; vice versa\, researchers from philosophy and psychology can gain a deeper understanding of these concepts by reflecting on their application in XAI. In this light\, our workshop will bring together researchers from philosophy\, psychology\, computer science\, and law to push forward research on understanding and XAI.</p>\n<p><strong>Organization</strong></p>\n<p><strong><br></strong><u>Homepage:</u> https://explainable-intelligent.systems/issuesinxai5/</p>\n<p><u>Date:</u> September 5\, 2022 (Monday) &ndash\; September 7\, 2022 (Wednesday)<br>The workshop will begin at 11am (CEST) on Monday and will end at 2pm (CEST) on Wednesday.</p>\n<p><u>Venue:</u> International Meeting Center (IBZ)\, TU Dortmund (https://international.tu-dortmund.de/en/international-campus/international-meeting-center-ibz/)</p>\n<p><u>Address:</u> TU Dortmund University\, Emil-Figge-Stra&szlig\;e 59\, 44227 Dortmund\, Germany</p>\n<p>We are currently planning for the workshop to be held entirely in person. The workshop language is English.<br><br>The workshop is organized within the research project &ldquo\;Explainable Intelligent Systems&rdquo\; (EIS) (https://explainable-intelligent.systems/). EIS is an interdisciplinary research group consisting of philosophers\, computer scientists\, psychologists\, and legal scholars. 
 At EIS\, we investigate how the explainability of artificially intelligent systems contributes to the fulfillment of important societal desiderata.</p>\n<p>The workshop is the 5th installment of the workshop series &ldquo\;Issues in XAI&rdquo\;\, which is co-organized by EIS and its partners\: the Leverhulme Centre for the Future of Intelligence (Cambridge)\, TU Delft\, and the project BIAS (Leibniz University Hannover). Both the EIS project and the workshop are funded by the Volkswagen Foundation.</p>\n<p><u>Organization:</u> Eva Schmidt\, Sara Mann</p>\n<p>If you have any queries\, please send an e-mail to sara.mann[at]tu-dortmund.de</p>\n<p><strong>Registration</strong></p>\n<p>If you would like to join our workshop\, please register by sending an e-mail with the subject header &ldquo\;Registration&rdquo\; to Sara Mann (sara.mann[at]tu-dortmund.de).</p>\n<p>The deadline for registration is Monday\, August 22\, 2022.</p>
ORGANIZER;CN=Sara Mann:mailto:sara.mann@tu-dortmund.de
END:VEVENT
END:VCALENDAR
