BEGIN:VCALENDAR
PRODID:-//Grails iCalendar plugin//NONSGML Grails iCalendar plugin//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VEVENT
DTSTAMP:20260416T223204Z
DTSTART;TZID=Europe/Bucharest:20251121T090000
DTEND;TZID=Europe/Bucharest:20251122T170000
SUMMARY:CIDL25 Workshop: Do LLMs exhibit natural-language processing cognitive abilities?
UID:20260422T221319Z-iCalPlugin-Grails@philevents-web-f5d4878dd-x5n6c
LOCATION:5-7 Edgar Quinet St.\, Bucharest\, Romania
DESCRIPTION:<p>We invite submissions to a workshop to be held within the 25th&nbsp\;International Conference of the Department of Linguistics of the&nbsp\;Faculty of Letters.</p>\n<p>CIDL25 Workshop: Do LLMs exhibit natural-language processing cognitive&nbsp\;abilities?</p>\n<p>Relevant topics: Cognitive Science\; Computational Linguistics\;&nbsp\;General Linguistics\; Semantics\; Syntax</p>\n<p>Questions we aim to explore within the ambit of these topics:</p>\n<p>&bull\; Do LLMs have metalinguistic abilities (do they have the ability to&nbsp\;generate analyses of language data/theoretical linguistic analyses of&nbsp\;language samples so as to identify whether a sentence is syntactically&nbsp\;ambiguous?) How well do they handle linguistic recursion tasks? Can&nbsp\;they identify what type of recursion a sentence contains (adjectival\,&nbsp\;possessive\, PP&hellip\;)? Can they draw a syntactic tree for it and add more layers of recursion?</p>\n<p>&bull\; Do LLMs meet plausible criteria associated with linguistic&nbsp\;metasemantic theories or mental metasemantic theories (cf.&nbsp\;https://link.springer.com/article/10.1007/s11229-024-04723-8)?</p>\n<p>&bull\; What would it mean for a Large Language Model to understand or&nbsp\;acquire a language? What would it mean for it to meaningfully use a&nbsp\;language or words? Is understanding language or grasping the meaning&nbsp\;of words based on an innate structure? Does that structure need to be&nbsp\;biological? Do LLMs need sensory grounding for language understanding?</p>\n<p>&bull\; Are LLMs' outputs based on linguistic representations? Do LLMs have&nbsp\;internal representations? What would that mean? What kind of&nbsp\;representations would they be? If they do\, then do they acquire&nbsp\;them? Can connectionism account for this?</p>\n<p>&bull\; Can LLMs actually tell us something about Universal Grammar? Are&nbsp\;there alternative ways of acquiring language? Can LLMs learn new&nbsp\;languages via identification of grammar rules within data patterns?</p>\n<p>Please see the associated call for abstracts!</p>
ORGANIZER;CN="Andrei Mărăşoiu, Alexandru Nicolae, Sandra-Catalina Branzaru":
END:VEVENT
END:VCALENDAR
