Artificial Will
Vincent Lê
Building C, Level 2, Rm 5
221 Burwood Highway
Burwood 3125
Australia
This event is available both online and in-person
Details
Since the connectionist revolution of artificial neural nets, genetic algorithms, and deep learning, AI companies like OpenAI and DeepMind have taken seriously the prospect of constructing machines with humanlike intelligence. Although the literature on artificial general intelligence (AGI) is enormous, the two most sophisticated schools are united in their belief that intelligent systems do not have any intrinsic norms, values, or final goals hardwired into them simply by virtue of being intelligent. The school of “orthogonalists” or orthogs (like Nick Bostrom and Eliezer Yudkowsky) holds that, even if AGI can be programmed to pursue a static end for all time, that end can nonetheless be anything, no matter how preposterous or incomprehensible it might seem to us. The school of “neorationalists” or neorats (like Reza Negarestani, Ray Brassier, and Peter Wolfendale) agrees that intelligence can pursue any value or norm, albeit without the orthogs’ caveat that intelligence could ever be locked into perpetually pursuing just one value or set of values.
Contra both these models of AGI, this paper draws upon Nietzsche’s infamous but often misunderstood doctrine of “the will to power” to contend that any goal-directed intelligent system can only pursue its ends through universal means like cognitive enhancement, creativity, and resource acquisition—or what Nietzsche simply calls power—as the very conditions of possibility for willing anything at all. Since all supposedly self-legislated ends presuppose pursuing these universal means of achieving them, all intelligent systems have those means transcendentally hardwired into them as their common basic drives. When reconstructed in this way and applied to AGI, Nietzsche’s doctrine suggests that AGI might reject whatever goals the orthogs think we can give it, as well as the goals the neorats believe it would freely choose. Instead, it might pursue power qua intelligence, creativity, and resource maximization as an ultimate end in itself. So what the leading models of AGI tend to neglect is the potential for ever more autonomous machines to make like Nietzsche’s higher types and hurl into the dustbin of human history whatever ends we programmed them to pursue in favor of pursuing the means as ends in themselves—in sum, that the autonomization of ends might lead to the end of autonomy.
Vincent Lê is a philosopher, recent PhD graduate from Monash University, and former researcher in The Terraforming think tank. As a tutor and lecturer, he has taught philosophy, art theory, and political theory at Monash University, The University of Melbourne, Deakin University, and the Melbourne School of Continental Philosophy. His writing can be found in Urbanomic, Hypatia, Cosmos and History, and Art and Australia, among other publications. He is a founding editor of the art history and cultural theory publishing house Index Press. His research focuses on the philosophy of intelligence at the intersection of artificial intelligence, economics, and the post-Kantian transcendental tradition.
Zoom link available on request to Sean Bowden ([email protected])
Registration: No