RTAIM 26 Online | "Dialogical Models and Counterfactual Questions for Epistemic Robustness in XAI" 
Rocío Martín Istilart

March 27, 2026, 5:00pm - 6:30pm

This event is online

Organisers:

University of Porto

Details

rTAIM (Rebuilding Trust in AI Medicine) Monthly Seminars

 

Seminar #26

Dialogical Models and Counterfactual Questions for Epistemic Robustness in XAI

Rocío Martín Istilart (Universidad Nacional del Sur, Argentina)

We are happy to announce the forthcoming 26th rTAIM Online Seminar, featuring Rocío Martín Istilart on 27 March 2026, 17:00–18:00 (Lisbon time), via Microsoft Teams.

https://teams.live.com/meet/9378537563070?p=6c27AWdBD7eGSN48wB

Teams ID: 9378537563070 | Password: uT7jS3]

Abstract: Explainable Artificial Intelligence (XAI) faces the challenge of providing explanations that are not only technically accurate but also epistemically robust and meaningful for users. This paper proposes a normative-epistemological framework for the formalization of counterfactual explanations within a dialogical setting. Building on the Explanation–Question–Response (EQR) protocol and Walton’s dialogue theory, the work introduces a structured set of critical counterfactual questions designed to test whether automated inferences remain stable across alternative contexts or depend on spurious correlations. The proposal integrates counterfactual reasoning into human–machine dialogues as new locutions, enabling explanations to be dynamically scrutinized in a way analogous to human argumentative practices. This approach aims to strengthen the causal, contextual, and selective dimensions of XAI explanations while improving their transferability and user alignment. An illustrative example in the domain of automated criminal decision-making shows how critical counterfactual questioning can help uncover hidden biases and enhance transparency. Although the framework is currently conceptual, it outlines a promising path toward improving the epistemic quality and defensibility of automated decisions. The paper concludes that embedding counterfactual explanations within dialogical protocols such as EQR can contribute to more trustworthy, context-sensitive, and critically testable XAI systems, while also identifying key technical and empirical challenges for future research.

Short bio: Rocío Martín Istilart is a PhD student in Philosophy at Universidad Nacional del Sur (Argentina) and a research fellow of CONICET (since 2026). Her work focuses on the epistemology of artificial intelligence, particularly on bias detection, explainable AI (XAI), and the philosophical analysis of automated decision-making. She holds a teaching degree in Philosophy from Universidad Nacional del Sur.

rTAIM Seminars: https://ifilosofia.up.pt/activities/rtaim-seminars

https://trustaimedicine.weebly.com/rtaim-seminars.html


Organisation:
Steven S. Gouveia (MLAG/IF)
Mind, Language and Action Group (MLAG)
Instituto de Filosofia da Universidade do Porto – UID/00502/2025
Fundação para a Ciência e a Tecnologia (FCT)

____________________________________________

Instituto de Filosofia (UI&D 502)
Faculdade de Letras da Universidade do Porto
Via Panorâmica s/n
4150-564 Porto
Tel. 22 607 71 80
E-mail: [email protected]
http://ifilosofia.up.pt/

Registration: No
