Explanation-aware computing (ExaCt 2012)

August 27-28, 2012
Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier

Montpellier
France


When knowledge-based systems are partners in interactive socio-technical processes, with incomplete and changing problem descriptions, effective communication between human and software system is vital. Explanations exchanged between human agents and software agents may play a key role in such mixed-initiative problem solving. For example, explanations may increase the confidence of the user in specific results or in the system as a whole, by providing evidence of how the results were derived. AI research has also focused on how computer systems can themselves use explanations, for example to guide learning.

Explanation-awareness in computing system development aims at making systems able to interact more effectively or naturally with their users, or better able to understand and exploit knowledge about their own processing. Systems intended to exhibit explanation-awareness must be more than simple reactive systems: using the word 'awareness' in conjunction with the word 'explanation' implies the ability to represent and reason about explanations at the knowledge level.

Thinking of the Web not only as a collection of web pages, but as a Web of experiences exchanged by people across many platforms, gives rise to new challenges and opportunities for leveraging experiential knowledge in explanation. For example, records of experiences on the Web, and the interrelationships between them, may provide provenance and meta-data for explanations, and can supply examples that help instil confidence in computing systems. The interplay of provenance information with areas such as trust and reputation, reasoning and meta-reasoning, and explanation is recognised, but not yet well exploited.

Outside of artificial intelligence, disciplines such as cognitive science, linguistics, philosophy of science, psychology, and education have investigated explanation as well. Each considers different aspects, making it clear that there are many views of the nature of explanation and many facets of explanation to explore. Two relevant examples are open learner models in education, and dialogue management and planning in natural language generation.

The ExaCt workshop series aims to draw on these multiple perspectives on explanation, to examine how explanation can be applied to further the development of robust and dependable systems, and to improve transparency, users' sense of control, trust, acceptance, and decision support.

Goals and audience
The main goal of the workshop is to bring together researchers and scientists from industry and academia, along with representatives of the communities and areas mentioned above, to study, understand, and explore explanation in AI applications. In addition to presentations and discussions of invited contributions and invited talks, the workshop will offer organised and open spaces for targeted discussions and for building an interdisciplinary community. Demonstration sessions will provide the opportunity to showcase explanation-enabled and explanation-aware applications.

If you have questions please contact the chairs using the following email address: [email protected].

Workshop schedule
The schedule will be made available on the workshop website. See the workshop website for an agenda overview and links to past workshops.

Chairs
Thomas Roth-Berghofer, School of Computing and Technology,
University of West London, United Kingdom
thomas.roth-berghofer (at) uwl ac uk

David B. Leake, School of Informatics and Computing,
Indiana University, USA
leake (at) cs indiana edu

Jörg Cassens, Institute for Multimedia and Interactive
Systems (IMIS), University of Lübeck, Germany
cassens (at) imis uni-luebeck de

Programme committee
Agnar Aamodt, Norwegian University of Science and Technology (NTNU)
David W. Aha, Navy Center for Applied Research in AI, Washington DC, USA
Martin Atzmüller, University of Kassel, Germany
Ivan Bratko, University of Ljubljana, Slovenia
Patrick Brézillon, LIP6, France
Ashok Goel, Georgia Institute of Technology, Atlanta, GA, USA
Pierre Grenon, KMI, The Open University, UK
Anders Kofod-Petersen, SINTEF, Norway
Hector Muñoz-Avila, Lehigh University, USA
Miltos Petridis, University of Brighton, UK
Enric Plaza, IIIA-CSIC, Spain
Christophe Roche, University of Savoie, France
Olga Santos, Spanish National University for Distance Education (UNED), Spain
Gheorghe Tecuci, George Mason University, Fairfax, VA, USA
Douglas Walton, University of Windsor, Canada
