CFP: LLMs and their "intuitions"

Submission deadline: February 18, 2025

Conference date(s):
March 8, 2025 - March 9, 2025


This event is available both online and in-person

Conference Venue:

Department of Philosophy, University of Bucharest
Bucharest, Romania

Details

Recently, with the development of applications based on Large Language Models (LLMs), some researchers have claimed that LLMs exhibit several human-like abilities, such as theory of mind, empathy, intuitive thought, and many others (Welvita and Pu, 2024; Kosinski, 2023; Kosinski, 2024). Since these applications have started to be used in fields such as healthcare and psychology, we find it important to understand the mechanisms through which they reason, make decisions, or produce specific lines of text. The conference focuses on one specific newly claimed capability of LLMs, intuition, whether as artificial intuition (Pedwell, 2023) or human-like intuition (Hagendorff et al., 2023), and aims to explore the reasons for claiming that LLMs may or may not be capable of intuitive reasoning.

We encourage BA, MA and PhD students, as well as early-career PhDs and postdocs, to contribute research abstracts related to the event's topic areas. Abstracts should be written in English and should not exceed 300 words. Abstracts will receive full consideration if sent by February 18th, 2025 to the following address: [email protected]

Word or PDF attachments are preferred, with the email subject line "abstract submission".

We welcome papers that address (but are not limited to) the following issues:

  • Is intuition something algorithmic? If intuition is experience-based (sensing a solution without an explicit representation of how), what would it mean to say that an LLM exhibits intuition? Do LLMs have experiences?
  • If intuition is experience-based, are there specific underlying cognitive mechanisms for intuitive reasoning that can be computationally modelled?
  • When an LLM arrives at a solution or an idea, do we call the process intuitive because the LLM is not able to provide reasons for how it arrived at that solution? Would such a view really account for what intuitions are?
  • What could constitute evidence for intuitive reasoning in large language models? Can we find benchmarks for intuition?
  • There is no standard operational definition of what intuitions are (beliefs, dispositions to believe, sui generis states, propositions, faculties), so should one focus instead on the function of intuitions? (If so, which functions might these be?)
All submissions will go through a process of blind peer review. (Please write your identifying details in the body of the email and leave the attached abstract anonymized.) We intend notifications of acceptance to be sent out on or before February 28th, 2025. The conference programme will be announced as soon as the review process is completed. For any questions, please don't hesitate to email [email protected]
