CFP: Overcoming Opacity in Machine Learning @ AISB 2020
Submission deadline: January 10, 2020
April 6–9, 2020
St Mary's University College, Twickenham
London, United Kingdom
Computing systems are opaque when their behavior cannot be explained or understood: when it is difficult to know how or why inputs are transformed into corresponding outputs, or which environmental features and regularities are being tracked. The widespread use of machine learning has led to a proliferation of opaque computing systems, giving rise to the so-called Black Box Problem in AI. Because this problem has significant practical, theoretical, and ethical consequences, research efforts in Explainable AI aim either to solve it through post hoc analysis or to evade it through the use of interpretable systems. Nevertheless, questions remain about whether the Black Box Problem can actually be solved or evaded, and if so, what it would take to do so.
This symposium brings together researchers from Artificial Intelligence, Cognitive Science, Philosophy, and Law to investigate the nature, causes, and consequences of opacity in different scientific, technical, and social domains, as well as to explore and evaluate recent efforts to overcome opacity in Explainable AI.
#Machine Learning, #Epistemic Opacity, #Black Box Problem, #Artificial Intelligence