“Risk imposition by artificial agents: the moral proxy problem”
Johanna Thoma (London School of Economics)

June 19, 2020, 11:00am - 12:30pm

This event is online

Organisers:

Oxford University
University of Texas at Austin

Details

Abstract: The ambition in designing autonomous artificial agents is that they make decisions at least as good as, or better than, those humans would make in the relevant decision context. Human agents tend to have inconsistent risk attitudes towards small-stakes and large-stakes gambles. While expected utility theory, the theory of rational choice that designers of artificial agents ideally aim to implement in the context of risk, condemns this inconsistency as irrational, it does not identify which attitudes need adjusting. I argue that this creates a dilemma for regulating the programming of artificial agents that impose risks: whether they should be programmed to be risk averse at all, and if so just how risk averse, depends on whether we take them to be moral proxies for individual users, or for those in a position to control the aggregate choices made by many artificial agents, such as the companies programming the artificial agents, or regulators representing society at large. Both options are undesirable.

More information on our other seminars can be found here.

Registration

Registration required: Yes
Registration deadline: June 18, 2020, 1:00pm BST
Registration is via an external site.


RSVPing on PhilEvents is not sufficient to register for this event.