AI, Cognition, and Agency: A Workshop with Carlos Montemayor

May 27, 2025
Eindhoven Center for the Philosophy of AI, Eindhoven University of Technology

TU/e Campus, Building Gemini Zuid, Room 4.23
Eindhoven
Netherlands

Organisers:

Eindhoven University of Technology


Details

The Eindhoven Center for the Philosophy of AI (ECPAI) at Eindhoven University of Technology invites all those interested to attend an in-person workshop exploring emerging questions at the intersection of AI, cognition, and agency, from generative art and neural implants to autonomous vehicles and artificial minds. All are welcome to join the conversation.

Programme:

13:30 – 14:00

Arrival and Coffee

14:00 – 14:30

Does AI dream of making art? Generative AI, intentionality and the classification of artist

Charlie Kuppens, Tilburg University

14:30 – 15:00

Mental schemata and internal representations in human and autonomous driving

Michela Ghezzi, Eindhoven University of Technology

15:00 – 15:30

From extension to hybridization: rethinking cognitive integrations with neural implants

Guido Cassinadri, Scuola Superiore Sant’Anna

15:30 – 16:00

Break

16:00 – 17:00

Keynote: Artificial minds and the temporal aspects of agential control

Carlos Montemayor, San Francisco State University


Abstracts:

Does AI dream of making art? Generative AI, intentionality and the classification of artist

Charlie Kuppens, Tilburg University

Among the digital technologies that artists use to create art, generative AI is relatively new. It is fairly widely accepted that works created using generative AI count as works of art. More controversial, however, is the claim that both the human artist and the AI program should be credited as artists, because the artwork is based on input from both.

In this talk, I will argue that AI should not be classified as an artist, because its contribution to works of art is creatively non-intentional. First, I will show that the human inclination to attribute more artistic credit to generative AI, compared to other factors of influence, is likely based on anthropomorphism. Second, I will show that while AI has an undeniable impact on the final artwork, its influence is creatively non-intentional. Compared to other kinds of (non-intentional) influences on artworks, AI does not distinguish itself in an intentionally relevant way. Therefore, from an intentional perspective, there is no justification for classifying generative AI as an artist while withholding that classification from comparable non-intentional influences.


Mental schemata and internal representations in human and autonomous driving

Michela Ghezzi, Eindhoven University of Technology

This work explores the role of internal representations in autonomous vehicles (AVs), comparing them with human mental schemata that underpin situational awareness (SA) during driving. On the one hand, human drivers are able to perceive, interpret, and respond to driving situations thanks to mental schemata, cognitive structures built from past experience that support the ability to know what is happening in a current situation, thereby enhancing situational awareness. On the other hand, the question remains open as to whether autonomous vehicles develop similar internal representations that support situational awareness, or whether they merely replicate learned statistical correlations.

In order to address this issue, the work identifies four fundamental questions: the nature and structure of computational representations in AVs; the extent to which these resemble human mental schemata; the degree to which any such resemblance either fosters or hinders situational awareness; and finally, should it hinder SA, how the internal representations of AVs ought to be rethought.

The proposed methodology for addressing these four questions is based on artificial cognition, an approach to Explainable Artificial Intelligence (XAI) inspired by cognitive psychology that allows inference about the internal structures of intelligent systems by analysing their behaviour in response to controlled stimuli.

Understanding the representational modes of AVs is crucial for improving technical transparency, stimulating social consideration, and guiding effective institutional regulation. This work thus provides the theoretical framework for a four-year research project aimed at investigating in depth the cognitive capacities that can be attributed to autonomous vehicles, and at fostering their responsible development.

From extension to hybridization: rethinking cognitive integrations with neural implants

Guido Cassinadri, Scuola Superiore Sant’Anna

I argue that the “hybrid mind framework” (HM) is more suitable than the “extended mind framework” (EM) for characterizing cognitive integrations with neural implants. The EM claims that a device highly integrated into an agent’s cognitive system becomes a constitutive component of it. A hybrid mind system is realized via the integration between an artificial and a biological cognitive system, which (1) reciprocally co-adapt to each other; (2) co-realize one or more cognitive functions; (3) shape the subject’s phenomenological experience; and (4) give rise to a symbiotic relationship.

First, the EM either relies on a non-universally accepted mark of cognition or struggles to define the exact conditions for mental extension, ultimately adopting a gradualist and integrationist approach. The HM is a full-fledged integrationist and gradualist framework, abandoning the search for a clear tipping point for cognitive extension.

Second, the neurotechnological enhancement or therapy of cognitive functions can induce feelings of alienation. Since the EM often equates technological cognitive enhancement with cognitive extension and vice versa, it fails to establish whether a highly integrated device producing detrimental effects satisfies the EM conditions. In contrast, the HM uses the symbiosis metaphor to account for intricate relationships that go beyond the sharp distinction between purely beneficial and purely detrimental interactions.

Artificial minds and the temporal aspects of agential control

Carlos Montemayor, San Francisco State University

There is much debate surrounding the possibility that artificial minds might become conscious agents and that, as a consequence, they might deserve moral and epistemic protection, partly because their intelligence could surpass our cognitive capacities. An evaluation of how exactly this possibility might come to fruition can teach us lessons about how to best understand our own minds. This talk offers such an evaluation, arguing in favor of a temporal characterization of the complexity of mental skills, based on the contrast between human, animal, and artificial minds. Focusing on five types of agential temporality, I argue that assessments of intelligence should be based on specific capacities that have different temporal profiles. The role of epistemic and other cognitive needs in grounding and generating intelligent behavior through these capacities explains key differences between actual and possible minds. The talk sketches, in the conclusion, a framework for comparing artificial and biological minds.


Registration

Not required.
