CFP: Southampton Public Reason and Artificial Intelligence Workshop

Submission deadline: December 7, 2023

Conference date(s):
May 1, 2024

Conference Venue:

University of Southampton
Southampton, United Kingdom

Details

Call for Abstracts: Public Reason and Artificial Intelligence Workshop


1 May 2024, University of Southampton

 

Abstracts (approx. 500-750 words; abstracts defending an argument are preferred) are due 7 December 2023 to [email protected]

 

Please add [email protected] to your safe list to ensure delivery.

 

Description

While one may question whether ‘hype’ about Artificial Intelligence (AI) is warranted, some forms of AI are being adopted, and are bound to continue being adopted, in key sectors, from finance and healthcare to contract interpretation, housing policy, and justice systems. Choices as to whether and when to adopt these tools will fundamentally affect persons’ vital life interests. They accordingly demand some justification. Furthermore, while such concerns often target the capacity of AI to bring about certain desirable outcomes, the process by which an outcome is realized is equally important. This may be most acute when decisions as to whether to permit or adopt the use of an AI tool are made by government officials who claim legitimate authority. But it plausibly extends to private individuals: a doctor should, for instance, provide reasons why it is acceptable to accept an AI diagnosis if doing so will substantially determine a patient’s life prospects.

This raises important questions about the proper standards of justification, explainability, and answerability for AI and the scope of their application. The purpose of this event is to explore whether and how the concept of public reason can address those questions – or whether different concepts better address AI justification, answerability, or reason-giving norms. As Jonathan Quong (2022) helpfully summarizes, public reason minimally “requires that the moral or political rules that regulate our common life be, in some sense, justifiable or acceptable to all those persons over whom the rules purport to have authority.” This concept is most often associated with the work of John Rawls, in which it is offered as a standard for just decision-making in liberal-democratic societies. But it has a broader history and admits of multiple interpretations. Recent work suggests it can help address questions of AI accountability (e.g., Binns 2018) and that it helps establish a norm of explainability that should guide interpretation of relevant legal requirements (e.g., Maclure 2021).

Questions remain as to the precise role public reason should play in AI ethics and governance, which version of public reason is best placed to fulfill that role, and the precise technological or regulatory implications that should follow from adopting the norm. Does public reason, for instance, require that AI be ‘explainable’, or only that the decision-maker be able to provide justifiable reasons for using an opaque tool?

This workshop will examine these issues and their regulatory implications.

Questions addressed might include:

- Can the norms of public reason apply in AI settings and, if so, which ones?

- How does public reason address concerns about AI accountability?

- Does public reason apply differently in decisions by public and private actors?

- What are the features of a regulatory system for AI that can fulfill public reason norms?

- Do existing regulatory mechanisms do so – and what can we learn from them?

- Are sector-specific regulations better or worse at fulfilling these norms?

- What technical requirements (if any) must AI itself meet to fulfill public reason norms?

- What are the implications of public reason for a legal right to explanation?

- Is there a particular interpretation of public reason that best fulfills these tasks?

- What alternative norms or principles – other than public reason – might one use to fulfill these aims (i.e., justification, accountability, and reason-giving in the use of AI systems)?

Instructions

We invite interested scholars in relevant fields (philosophy, law, computer science, etc.) to submit anonymized abstracts of approximately 500-750 words to [email protected] by 7 December 2023.


This is a workshop. Selected speakers will be asked to provide a draft paper by 12 April 2024 and to come to Southampton having read, and prepared to discuss, the other papers.

Additional Information

This workshop will feature both invited speakers and presentations selected through this call. We hope to reserve at least one slot for an early career researcher (ECR). If you self-identify as an ECR, please say so in the body of your e-mail and briefly explain why.

Our budget should cover meals on the day, one night’s hotel accommodation, and rail travel within the UK.

If the collection of presentations coheres, we may look to put together an essay collection. This can be discussed closer to the event, but expressions of interest (or lack thereof) are useful now too.

Please address any additional questions or requests for information to Michael Da Silva at [email protected].

Bibliography

Binns, R. (2018). Algorithmic Accountability and Public Reason. Philosophy and Technology 31: 543-556.

Maclure, J. (2021). AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind. Minds and Machines 31: 421-438.

Quong, J. (2022). Public Reason. Stanford Encyclopedia of Philosophy: https://plato.stanford.edu/entries/public-reason/.
