CFP: New Technology and (Non-)Human Agency: Risks, Responsibilities and Regulation
Submission deadline: April 30, 2017
July 17, 2017 - July 21, 2017
School of Law, University of Lisbon
XXVIIIth World Congress of the International
Association for the Philosophy of Law and Social Philosophy (IVR)
Lisbon, Portugal | 16 – 21 July 2017
Vesco Paskalev and Peter Cserne (University of Hull, UK)
Intelligent machines mimicking, replacing and threatening human agency have moved from science fiction to our everyday life and keep on populating our imagination and culture.
Since Kubrick’s 2001: A Space Odyssey (1968) we have come a long way: HAL 9000 is becoming a real rather than a fictional character. Herzog’s recent Lo and Behold reports on even more fascinating prospects of technology and connectivity changing human society. In one possible reading, Ken Loach’s I, Daniel Blake makes a strong claim of agency, vividly demonstrating how automated bureaucracies can turn welfare systems into the opposite of what their name suggests. However much our private and collective decisions are delegated to algorithms, we still want to be treated with respect, as citizens rather than “national insurance numbers and blips on a screen.”
Mid-20th century legal philosophers such as Fuller and Hart linked the distinctive features of law as a system of governance to our shared assumptions about human agency and personhood. In our world, as we know it, people individuate actions and think of each other as responsible for their doings, at least in central cases. Hart regarded these features of human agency and society as so basic that they could even be called a ‘natural’ necessity. He saw the logical possibility of their change not as a real prospect for any predictable future, but rather as something belonging to ‘science fiction’.
Many scientific and technological achievements that once seemed to belong to science fiction have turned out to be real and part of human practice. Driverless cars are already on the streets of a few cities, and the red light, if all goes as designed and planned, does make them stop. Policymakers and legal scholars have also begun deliberating on the unplanned scenarios, as the European Parliament’s recent discussion of civil liability for the designers and users of robots illustrates.
Yet we also face fundamental questions about, as it were, the cultural impact of artificial intelligence and smart technology on our thinking about human agency. Should we worry that our actual behaviour is increasingly determined by various apps – while we travel, work, socialise or date? Is it sufficient that, in theory, the algorithms are only guiding our behaviour but the capacity to do otherwise remains with us, when in practice we do follow them unreflectively? Is our capacity for reflexivity and moral choice undermined by such props, in the same manner that the capacity of pilots is undermined by their frequent use of autopilot? Is the maintenance or development of our skills and ‘moral muscles’ sufficient reason against delegating decisions to smart devices?
On Raz’s service conception of law, it is a virtue that rules give us pre-emptive reasons for action, so that our capacity to reason is freed for more rewarding exercises. We might make a small gain in moral virtue by remaining able to choose whether to stop at a red light, but if we are spared the effort, we may be able to employ our moral capacity for more valuable choices, such as deliberating on public policy. If so, another set of questions arises. If technology can be used to divert our reflection, and therefore our agency, from one area to another, is our agency hostage to predominantly market-driven technological development? Can or should we, collectively, manage this development? And who are ‘we’ in the first place?
This special workshop seeks to identify and address some normative – moral, ethical, political – issues raised by smart technology and its impact on agency. Possible topics include the following:
- Is a smart system an agent, or can it become one, and if so, under what conditions? Is agency a prerequisite of responsibility?
- To whom can we attribute moral or legal agency and responsibility in a smart machine environment? Is it possible and/or desirable to identify and impose liability on human decision-makers on account of being the designer, owner or user of smart technology?
- What is the likely impact of smart machines on our experience of agency? How much moral weight should these experiences carry in individual and collective decisions about non-human agents?
We invite contributions discussing these and related questions from philosophical, social science and legal perspectives.
An abstract of 250–500 words, indicating the authors’ academic affiliation, should be submitted by 30 April 2017 to Dr. Vesco Paskalev. Acceptance decisions will be communicated by mid-May.