Kinds of Cognition

This event is online

Speakers:

Cameron Buckner (University of Florida)
Elisabeth Pacherie (Institut Jean Nicod)

Organisers:

University of Connecticut
University of Connecticut


Details

“Kinds of Cognition” – a graduate conference to be held online on Saturday, February 22, 2025 – will bring together researchers working on the topic of cognition from diverse perspectives. Our aim is to foster cross-disciplinary discussions on various types or kinds of cognition, cognitive processes, and mechanisms that have intrigued philosophers, cognitive scientists, linguists, neuroscientists, and developmental, comparative, and evolutionary psychologists, among others.

Registration Link: https://events.uconn.edu/ecom/event/490615-kinds-of-cognition-graduate-conference

Programme (in EST)

09:00 - 09:10 Welcome and Introduction

09:10 - 10:15 Keynote: Elisabeth Pacherie (Institut Jean Nicod; Institute for the Study of Cognition at Ecole Normale Supérieure, Paris)
"Motoric Representational Format"

Abstract: Much recent work elucidates different types of representational format, and ways that aspects of perception and cognition may be formatted. In this talk, I target an underexplored topic: the format of motor representations, the psychological states that serve as the primary causal link between an agent’s immediate intention to act and their subsequent behavior. I first situate motor representations within the context of processes of motor planning and motor control. I then discuss a key distinction between symbolic and analog representational format-types. I argue that there is evidence indicating that motor representations show both core markers of analog formatting and core markers of symbolic formatting. This leads me to characterize motor representations as an interesting form of hybrid representation. In a slogan, motor representation is a symbolic scheme that utilizes analog representations as cognitive tools. I close by highlighting several open questions this characterization raises. (This talk is based on work done in collaboration with Myrto Mylopoulos and Josh Shepherd.)

10:20 - 10:50 James D. Grayot (University of Porto)
“Representation hunger: Reformulating the ‘problem-domain’ of truly complex cognition” 

10:55 - 11:25 Iwan Williams (Monash University) 
" Proto-asserters?: The case of chatbot speech meets the case of toddler speech "

11:30 - 12:05 Frederik T. Junker (University of Copenhagen)
"From Daydreams to Decisions"

12:10 - 12:40 Georgina Brighouse (University of Liverpool)
"Rethinking aphantasia: A genuine lack of capacity but not a disorder or disability

12:40 - 13:20 Lunch

13:20 - 13:50 Mica Rapstine (University of Michigan)
"Moral Epiphany and Insight in Problem Solving"

13:55 - 14:25 Joachim Nicolodi (University of Cambridge)
"Consciousness in the Creative Process and the Problem for AI" 

14:30 - 15:00 Mona Fazeli (University of California, Los Angeles)
"Does Metareasoning Contribute to Epistemic Rationality?"

15:05 - 15:35 Juan Murillo Vargas (Massachusetts Institute of Technology)
"How Language-Like is the Language of Thought?"

15:40 - 16:35 Keynote: Cameron Buckner (University of Florida)
"Large Language Models as models of human reasoning"

Abstract: Recent advances in large language models that use self-prompting (like GPT’s o1/o3 and DeepSeek) appear to encroach on human-level performance on “higher reasoning” problems in mathematics, planning, and multi-step problem-solving. OpenAI in particular has made ambitious claims that these models are "reasoning", and that scrutinizing their chains of self-prompting can allow us to “read the minds” of these models, with obvious implications for problems of opacity and safety. In this talk, I review four different methodological approaches to evaluate the success of these models as models of human reasoning (“psychometrics”, “signature limits”, “inner speech”, and “textual culture”), focusing especially on comparisons to philosophical and psychological work on “inner speech” in human reasoning. I argue that this work suggests that while the achievements of self-prompting models are impressive and may make their behavior more human-like, we should be skeptical that problems of transparency and safety are solved by scrutinizing chains of self-prompting, and more philosophical and empirical work needs to be done to understand how and why self-prompting improves the performance of these models on reasoning problems.



This is a student event (e.g. a graduate conference).

Registration

Yes

February 1, 2025, 9:00am UTC

RSVPing on PhilEvents is not sufficient to register for this event.

Custom tags:

#Philosophy of Mind, #Philosophy of Action, #Philosophy of Cognitive Science, #Philosophy of AI, #Philosophical Psychology, #Philosophy of Science, #New York Events, #Boston Events, #Connecticut Events, #Graduate Conference