CFP: Topical Issue on 'AI and Responsibility'

Submission deadline: June 30, 2021

Call for Papers for Philosophy and Technology’s topical collection on

AI and Responsibility

GUEST EDITORS

Niël Conradie (RWTH Aachen University)
Hendrik Kempt (RWTH Aachen University)
Peter Königs (RWTH Aachen University)

INTRODUCTION

The rapid progress in the research and development of Artificial Intelligence (AI) is radically changing many aspects of our lives. Along with these changes come moral challenges. Crucial among these challenges are those that call attention to the myriad potential relationships between AI technologies and moral responsibility. Many moral concerns regarding AI essentially revolve around two critical questions about responsibility:

1) Can we develop responsible AI, and what would such an AI look like?

The concern with responsible AI can be understood as a forward-looking responsibility borne by those involved in the funding, development, and deployment of these systems: a duty to ensure that the AI technologies they bring forth meet certain ethical criteria. Though most (but not all) participants in the discussion can broadly agree on some elements of these criteria, such as non-maleficence, transparency and privacy, and the provision of fair and just outcomes for stakeholders, there is substantial disagreement on how these elements should be understood, what other elements there may be, and how they are to be implemented technically.

2) Who, if anybody, is responsible for what a highly autonomous AI does?

Thanks to advanced machine-learning techniques, intelligent machines are approaching a degree of sophistication and autonomy that makes it difficult to fully understand, let alone predict, their behavior. For this reason, it has struck many as inappropriate to assign responsibility for an AI’s actions to the human agents who have causally contributed to those actions, such as its operator or its engineers. Whether autonomous AI really gives rise to such ‘responsibility gaps’, what might be problematic about them, and how they ought to be dealt with are important unresolved questions.

TOPICS

Acknowledging the wide variety of approaches to the subject matter, and without intending to exclude any specific approach, we invite prospective authors to consider the following discussion topics as suitable examples, not as restrictions:

  1. Meeting the challenge of responsibility gaps in military and non-military contexts
  2. Understanding the role of collective responsibility in relation to AI systems
  3. Unpacking the impact of algorithmic nudging on our responsibility practices
  4. Responsibility in and of AI in different areas of application (warfare, education, surveillance, medicine, law, journalism, transportation, work robots, care work, sex work)
  5. The role and requirements of explainability and transparency in the design of responsible AI systems
  6. Are AI technologies a net loss or boon in the fight for justice and fairness?
  7. Moral vs legal responsibility for AI
  8. AI agency and responsibility
  9. Alternative framings of responsibility in AI: critical, feminist, posthumanist perspectives

TIMETABLE

June 30, 2021: Deadline for paper submissions

SUBMISSION DETAILS

To submit a paper for this topical collection, authors should go to the journal’s Editorial Manager at http://www.editorialmanager.com/phte/. The author (or a corresponding author, in the case of co-authored papers) must register in Editorial Manager (EM).

The author must then select the special article type ‘AI and Responsibility’ from the options provided during the submission process. This is needed in order to assign submissions to the Guest Editors.

Submissions will then be assessed according to the following procedure: 

New Submission => Journal Editorial Office => Guest Editor(s) => Reviewers => Reviewers’ Recommendations => Guest Editor(s)’ Recommendation => Editor-in-Chief’s Final Decision => Author Notification of the Decision.

The process will be repeated if revisions are requested.

For any further information, please contact the Guest Editors ([email protected]).

https://www.springer.com/journal/13347/updates/18601544

Custom tags:

#Artificial Intelligence, #Responsibility, #Technology, #AI