CFP: MANCEPT Workshops 2024

Submission deadline: May 31, 2024

Conference date(s):
September 4, 2024 - September 6, 2024

Conference Venue:

Manchester Centre for Political Theory (MANCEPT), University of Manchester
Manchester, United Kingdom

Details

MANCEPT Workshops

Wednesday 4th of September to Friday 6th September 2024

https://sites.manchester.ac.uk/mancept/mancept-workshops/

Call for Papers

AI Systems and their regulatory and political challenges

AI systems have made clear the many ways in which our technological artifacts have politics, as Langdon Winner famously put it almost 45 years ago. However, unrealistic or apocalyptic projections about the future of AI hinder the advancement of academic and political debate (Vilaça, Karasinski, and Candiotto, in press), especially given the need for thoughtful anticipatory approaches (Vilaça and Karasinski, 2023).

In this workshop, we seek to explore the political consequences of AI systems, grounded in explorations of specific technologies (for example, Large Language Models (LLMs) such as ChatGPT, supervised machine learning, or computer vision) and within specific contexts (policing, social work, content algorithms, political campaigns). This is to avoid, as Luke Munn (2022) notes, the tendency for the idea of automation to flatten very real differences in technological systems and across different times, social classes, and racialized geographies. On a more optimistic note, it is also to explicitly encourage interdisciplinary thinking across the social sciences, natural sciences, and the humanities.

We invite abstracts of no more than 500 words, for presentations of approximately 30 minutes, on the following three themes:

Theme 1: Natural Language Processing and Large Language Models 

The release of ChatGPT has made especially prominent in corporate, political, and social life the use of so-called Natural Language Processing, that is, computer-assisted technologies for summarising, searching, editing, or generating new text. Some uses of generative AI/LLMs raise pressing normative issues. In Brazil, for example, a judge used ChatGPT to write a decision and a politician used the same tool to draft a bill. In the US, screenwriters and fiction writers have taken a stand against the use of AI, both because of its copyright implications and because of the threat of technological unemployment (Vilaça, Karasinski, and Rueda, in press). In the academic context, we highlight journal editors' concern to establish policies on the use of AI in the writing of papers (Kaebnick et al., 2023). AI tools that manipulate images and voice ("deepfakes") can compromise the reputations of individuals and even important aspects of democratic societies, blurring the line between what is true and what is false and undermining trust (Danaher and Sætra, 2022).

This theme would focus on the issues particular to these technologies, such as:

  • What effect do LLMs have on the ascription of political agency and questions like legislative or judicial intent?
  • Should we treat technological progress on LLMs as inevitable or attempt an "AI pause" as suggested by Tegmark and others?
  • How should we theorise the environmental and aesthetic/artistic consequences of LLMs?
  • What might political theory, especially theories of "the human", have to tell us about the potential and limitations of LLMs?

Theme 2: Democratic Control and AI Regulation

Recent high-profile failures such as the British Post Office Horizon and Australian Robodebt scandals have exposed the dependence of the contemporary state on potentially faulty algorithmic systems, while problems of content moderation have exposed the epistemic power that social media companies continue to hold. This is all especially concerning because of the legal presumption that computer outputs are correct, codifying a version of Günther Anders' analysis of man's obsolescence before technology, albeit one that is racialized, per Ruha Benjamin (2019), and that constructs a new digital poorhouse, per Virginia Eubanks (2018). Political theorists have responded by proposing forms of democratic control; for instance, Ugur Aytac (2023) argues for a Citizens' Board of Governance for Big Tech, and Andreas Jungherr (2023) asks whether AI ought to be controlled given how it affects conditions of self-rule. This builds on the work of a lineage of scholars, such as Carol Gould and Langdon Winner, who argued more generally for the democratic control of technology.

This theme would focus both on what control ought to be exercised over algorithms (including those used by states, which are relatively neglected in the literature) and on who ought to do the controlling (paralleling the more general boundary problem, but with a clearer focus on the transnational nature of algorithmic systems). Finding answers to these questions is vital for three reasons. First, to propose regulations, states need convincing that regulation is possible, rather than viewing innovation as inevitable or uncontrollable, moving, as Winner put it, as an autonomous historical process. Second, successful notions of democratic control can highlight where control is not currently being exercised sufficiently, even where regulations have been proposed. Third, notions of democratic control allow an analysis of how certain actors, such as large technology companies, are exercising control that ought to be exercised by, say, an entire citizenry.

Contributions should focus on how a particular technology should be controlled or regulated and could address normative or empirical aspects. They could analyse existing broad proposals (the EU AI Act, the UK White Paper, Brazil's AI bills, etc.) or the regulation of specific existing systems (facial recognition, Large Language Models, social scoring, etc.), especially given the transnational nature of the AI production process. Contributions drawing on non-Western political thinkers are especially sought, though contributions from any tradition are welcome.

Theme 3: AI and the Practice of Politics

Finally, both generative AI (LLMs, computer vision, and other systems that can generate new content) and the targeting of political marketing through machine learning might undermine the practice of democratic politics. One reason is that deepfakes might be targeted at specific audiences, but there is also a risk of losing broader political conversations to micro-targeted political messaging. That is concerning given the centrality of productive discourse to a range of democratic theories.

In this theme, we invite contributions which consider approaches to theorising the specific risks of AI systems in the practice of politics, especially, though not limited to, the contestation of elections. Contributions could focus on:

  • How might Generative AI undermine the shared understandings that undergird democratic politics, especially through the production of biased or repetitive content?
  • How might AI systems affect the possibility of moral change, especially if micro-targeting means individuals only see content with which they already agree?
  • How should political theorists understand the effects of AI systems on political campaigning, and how do or could these vary geographically and cross-culturally?
  • What are the geopolitical effects of AI systems, especially given the usage of Generative AI to create "fake news" by third-party states or external non-state actors?

We actively welcome in-progress work and seek to foster a friendly and collaborative environment. PhD students, post-docs, and early-career researchers are especially welcome. We are also receptive to interdisciplinary explorations of these ideas, provided they are accessible to the non-specialist. We are not seeking technical or mathematical work on AI systems, nor new empirical social-scientific work, but we would actively encourage reflections on ideas from those areas of research through the lens of political theory.

Please email your abstract to [email protected] or [email protected] by the end of the day on May 31st.

We will notify successful applicants by June 21st. If you are a graduate student, MANCEPT has a small number of fee-waiver bursaries for which you can apply after acceptance (the deadline is June 28th).

NB: you will need to register for MANCEPT to attend this event.
