CFP: Topical Collection on Trust and Opacity in Artificial Intelligence (Philosophy & Technology, Springer)

Submission deadline: July 31, 2024

Details


Guest Editors: Martin Hähnel (University of Bremen) and Rico Hauswald (TU Dresden)

Topical Collection Description:

As artificial intelligence (AI) becomes part of our everyday lives, we are faced with the question of how to use it responsibly. In public discourse, this issue is often framed in terms of trust – for example, by asking whether, to what extent, and under what conditions trusting AI systems is appropriate. Against this background, the philosophical debates on practical, political, and epistemic trust that have been ongoing since the 1980s have recently gained momentum and been further developed within the philosophy of AI.
However, a number of fundamental questions remain unanswered. For example, some authors have argued that the concept of trust is interpersonal in nature and therefore entirely inapplicable to relationships with AI systems. According to these authors, AI systems cannot be “trusted” in the strict sense of the term, but can at best be “relied upon”. Other authors have disputed this assessment, arguing that at least certain kinds of trust can apply to relationships with AI technologies.

Also controversial is the influence of AI’s notorious black-box character on its potential trustworthiness. While some authors consider AI systems trustworthy only to the extent that their internal processes can be made transparent and explainable, others point out that we routinely trust humans without being able to understand their cognitive processes. In the case of experts and epistemic authorities, we often do not even grasp the reasons and justifications they give.

Another point of contention is the trustworthiness of the developers of innovative AI systems, i.e., the extent to which the trustworthiness of AI systems can be reduced to, and should be based on, trust in the developers themselves. In this context, the debate on “ethics by design” or “embedded ethics” seems crucial, as it helps to evaluate the various attempts currently being made to promote trust in AI by taking ethical principles and usability aspects into account.

We welcome contributions on trust and opacity in AI that focus on a) foundations and conceptual prerequisites, b) perspectives from epistemology, c) perspectives from ethics, or d) perspectives from political philosophy.

When preparing your paper, please read the journal’s “Instructions for authors” at https://www.springer.com/journal/13347/submission-guidelines. More details and the link to the submission platform can be found here: https://link.springer.com/collections/dibibiaidb. When submitting your paper, please select “Research Article” in the Editorial Manager. Then, in the “Additional Information” section, answer “Yes” to the question “Are you submitting this manuscript to a Thematic Series?” and select “Trust and Opacity in Artificial Intelligence”.

If you have questions, please contact Martin Hähnel ([email protected]) or Rico Hauswald ([email protected]).
