Reasoning in Logic and in Language Models
(Room to be announced soon)
University of Melbourne
Melbourne
Australia
This event is available both online and in-person
Organisers:
Details
Description: This is an interdisciplinary, exploratory workshop on reasoning in logic and in language models, covering topics of common interest to logicians, psychologists, and computational linguists, including negation, compositionality, and meaning.
--
Preliminary Program
December 4, 2025
Morning session: Negation
- 10:00-11:00 Yulia Otmakhova & Thinh Truong (University of Melbourne): Language models are still not naysayers
- 11:15-12:15 Ellie Ripley (Monash): TBA
Afternoon session: Meaning
- 14:00-15:00 Andy Perfors (University of Melbourne): The nature of meaning: What LLMs miss
- 15:15-16:15 Tansu Alpcan (University of Melbourne) and Paul Egré (IRL Crossing): Concepts in Machines, Concepts in Humans
- 16:30-17:30 Hanna Clark-Younger (Iostack.ai, Auckland): TBA
December 5, 2025
Morning session: Logic
- 10:00-11:00 Lloyd Humberstone (Monash): Protection by Embedding, Protection by Non-embedding
- 11:15-12:15 Zach Weber (Otago): Logical Paradoxes, Methodology, and Metalogic
Lunch
- 12:30-14:00 Round table and discussion
--
Abstracts
Yulia Otmakhova & Thinh Truong: Language models are still not naysayers
Abstract: Negation is central to human language, lying at the core of inference and reasoning. However, in natural language processing, most prior work has treated negation superficially, that is, detecting whether a negation marker is present in text while disregarding both more intricate types of negation and the ways it affects meaning. In this talk, we examine how different types of negation affect the behaviour of modern large language models. We find that models can recognize negative statements but fail to reason with them on downstream tasks, and still struggle with scope and compositionality. Thus, despite recent advances in many other areas, negation remains problematic for language models, which highlights its complexity as a cognitive and linguistic phenomenon.
Andy Perfors: The nature of meaning: What LLMs miss
Abstract: This is going to be a general talk, part cognitive science, part computer science, and part philosophy, where I discuss where meaning comes from for people. I'll argue that the deepest forms of meaning inherently require other minds, and that meaning-making is a process of collaboration whose results become deeply embedded in, and inextricable from, our environment and culture. This has implications for our understanding of LLMs, the interaction between people and LLMs, and the consequences for ourselves and our society.
Tansu Alpcan and Paul Egré: Concepts in Machines, Concepts in Humans
Abstract: Various theories of concepts compete in psychology, including definitional theories, prototype theories, exemplar theories, and theory theories. In this paper we are interested in ways in which the recent success of deep learning models of categorization, and more recently language models of the lexicon, can cast light on the nature and definition of concepts in humans. We will take concepts of concrete categories as our starting point, to look at whether models of supervised learning in particular come anywhere close to how concepts are formed in humans.
Lloyd Humberstone: Protection by Embedding, Protection by Non-embedding
Abstract: A proposed account of some phenomenon, such as the possession of a given concept, may meet with any of a cluster of what go collectively under the heading of circularity objections. Arguably, in some cases, what on syntactical grounds counts as circularity in an account of the application conditions of some concept may be unobjectionable (‘non-vicious’), because the account does not aspire to being an analysis of the concept in question. A natural response to the worries here gestured at may involve making sure that crucial parts of the account are suitably embedded within the scope of certain, e.g., intensional – indeed, typically, intentional – operators. Such moves bear also on attempts to isolate one kind of discourse from another, for example: normatively neutral description of the world from normatively committal language. In other cases, the response may consist in ensuring – or in emphasizing – that the material in question is not thus embedded. ‘Protection’ in the title refers to protection against potential objections to projects of the kind just alluded to. Particular interest is taken in some of the logical issues arising in this area.
Zach Weber: Logical Paradoxes, Methodology, and Metalogic
Abstract: Logical paradoxes are arguments where reasoning seems to go wrong. After briefly recalling some famous paradoxes, this talk has two aims. First, we will survey a few proposals for dealing with the paradoxes and compare their relative merits. These include proposals about (1) negation, where there may be truth-value 'gluts'; (2) structural rules, where contraction or transitivity may fail; and (3) truth, where some have gone so far as to claim that, to avert the paradoxes, we should accept that there are no truths at all. Second, we will look at the methodology for weighing up and deciding between such options. I will argue, from an 'anti-exceptionalist' view about logic, that whatever criteria we apply when deciding which logics to adopt apply equally to deciding how we reason *about* logics themselves, that is, to metalogic.
Registration: No