CFP: ACM Journal on Responsible Computing, Special Section on Barocas, Hardt, and Narayanan's Fairness and Machine Learning: Limitations and Opportunities

Submission deadline: May 1, 2024


Even as anxiety about future artificial general intelligence (AGI) systems goes mainstream, and technologists and policymakers worldwide scramble to anticipate and mitigate hypothetical AGI risks, existing machine learning (ML) systems are quietly revolutionizing many aspects of public and private life. ML has fundamentally reshaped the attention economy, is increasingly relied upon by governments for service delivery, and silently underpins the infrastructure of global investment. In all of these areas, early enthusiasm has given way to a visceral recognition that deploying ML systems in critical contexts involves serious risks: not the risks of future AGI systems, but of present systems that routinely reproduce the socially unjust past.

Important early work on the social costs of ML systems has shaped public debate and social policy, but most of the early book-length treatments have focused on persuading a public audience of the risks of over-reliance on these systems. Solon Barocas, Moritz Hardt, and Arvind Narayanan are central and influential figures in the fair ML research community, and they have both distilled and sharpened a crucial suite of normative problems at the intersection of technology and society. Fairness and Machine Learning: Limitations and Opportunities will be a vital gateway into the debate not only for computer scientists, but also for scholars from other disciplines who want to better understand ML and its problems.

The ACM Journal on Responsible Computing calls for papers for a special section of responses to Fairness and Machine Learning, aiming to curate a focused interdisciplinary conversation between computer and information scientists and philosophers. The responses will be an opportunity to explore how this landmark text can serve as a springboard for examining the role of empirical computer science in the normative philosophy of computing. The authors of the book will, in turn, respond to the contributions. A symposium of responses to the book will in this way spark a multidisciplinary dialogue around questions that are at root philosophical but touch on technical aspects of ML, the socio-technical systems in which it is embedded, and its social impacts.

Please submit your paper using JRC's submission site: https://mc.manuscriptcentral.com/acmjrc. At Step 1 of the submission process, please select "Special Section: Fairness and Machine Learning" as the Manuscript Type. The deadline is May 1, 2024. Papers should be no more than 8,000 words in length, including footnotes and references.

EDITOR BIOS

Seth Lazar is a Professor of Philosophy at the Australian National University and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI. He was General Chair of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT) and Program Chair of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. At ANU he leads the Machine Intelligence and Normative Theory Lab, where he directs research projects on the moral and political philosophy of AI.

Alan Rubel is a Professor and Director of the Information School at the University of Wisconsin-Madison. He is also a faculty member of the UW Center for Law, Society & Justice and faculty affiliate of the UW Department of Medical History & Bioethics and Law School. He has been a visiting scholar at the 4TU Centre for Ethics & Technology and Delft University of Technology, and a senior advisor to the Presidential Commission for the Study of Bioethical Issues.

Diana Acosta-Navas is an Assistant Professor at the Quinlan School of Business at Loyola University Chicago. Her work analyzes how the moral principles that inspire the right to free speech may be best upheld in the current public forum and the role of digital platforms in creating conditions for a healthy public debate.

Henrik Kugelberg is a British Academy Postdoctoral Fellow at London School of Economics and Political Science. He was previously a fellow at the McCoy Family Center for Ethics in Society, in partnership with Apple University. His work focuses on the political philosophy of artificial intelligence and the digital public sphere.
