Ideational Preparation Is All You Need: Deep Learning meets the Faculty Empiricism of William James
Cameron Buckner (University of Houston)
Zoom link: bit.ly/3v7K6tt
Meeting ID: 929 1903 1328
Time: 10 am - 12 pm Hong Kong Time
Find out the time of this event in your location: https://www.timeanddate.com/worldclock/fixedtime.html?msg=Ideational+Preparation+Is+All+You+Need&iso=20220506T10&p1=102
Abstract: Deep learning is a research area in computer science that has, over the last ten years, produced a series of transformative breakthroughs in artificial intelligence—creating systems that can recognize complex objects in natural photographs as well as or better than humans, defeat human grandmasters in strategy games such as chess, Go, and StarCraft II, create bodies of novel text that are sometimes indistinguishable from those produced by humans, and predict how proteins will fold more accurately than human microbiologists who have devoted their lives to the task. The artificial neural network approach behind deep learning is usually aligned with empiricist theories of the mind, which can be traced back to philosophers such as Locke and Hume. Contemporary rationalists like Gary Marcus and Jerry Fodor have criticized the innovations behind some of these breakthroughs, on the grounds that they appeal to innate structure that is supposed to be off-limits to empiricists. I argue, however, that these innovations are consistent with historical empiricism, as they implement roles attributed to domain-general psychological faculties like perception, memory, imagination, and attention, which were frequently invoked by paradigm empiricists in their explanations of the mind's ability to extract abstractions from sensory experience. Computer scientists may benefit from reviewing these philosophers' accounts of these faculties, for they anticipated many of the coordination and control problems that will confront deep learning theorists as they aim to bootstrap their models to greater levels of cognitive complexity using more ambitious architectures with multiple interacting faculty modules. In this talk, I focus on William James' account of attention in the Principles of Psychology, comparing the roles he assigned to attention in the extraction of abstract knowledge from experience to the innovations behind many recent architectures in deep learning.
Despite numerous alignments, I argue that deep learning still has much to gain by considering other aspects of James' theory which have not yet been fully implemented, especially the “ideational preparation” component of his theory, which aligns more naturally with predictive processing accounts of cognition.
Bio: Cameron Buckner is an Associate Professor in the Department of Philosophy at the University of Houston. He began his academic career in logic-based artificial intelligence. This research inspired an interest in the relationship between classical models of reasoning and the (usually very different) ways that humans and animals actually solve problems, which led him to the discipline of philosophy. He received a PhD in Philosophy from Indiana University in 2011 and held an Alexander von Humboldt Postdoctoral Fellowship at Ruhr-University Bochum from 2011 to 2013. His research interests lie at the intersection of philosophy of mind, philosophy of science, animal cognition, and artificial intelligence, and he teaches classes on all these topics. Recent representative publications include "Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks" (2018, Synthese) and "Rational Inference: The Lowest Bounds" (2017, Philosophy and Phenomenological Research)—the latter of which won the American Philosophical Association's Article Prize for the period of 2016–2018.