CFP: 5th International Conference on Ethics of Artificial Intelligence (5ICEAI)

Submission deadline: May 15, 2026

Conference date(s):
June 22, 2026 - June 26, 2026


Conference Venue:

University of Zagreb
Zagreb, Croatia

Details

[Call for Abstracts]

5th International Conference on Ethics of Artificial Intelligence (5ICEAI)

Faculty of Humanities and Social Sciences, University of Zagreb, Zagreb, Croatia

22-26 June 2026 (22-23 June, Online | 24-26 June, in-person)

About: The 5th International Conference on Ethics of Artificial Intelligence (5ICEAI) brings together researchers, academics, and students to examine central ethical and political questions raised by contemporary AI. Hosted by the Faculty of Humanities and Social Sciences, University of Zagreb (Zagreb, Croatia), the conference promotes dialogue across moral and political philosophy, philosophy of technology, law, and allied interdisciplinary fields, with an emphasis on both conceptual foundations and concrete institutional challenges. Key themes include responsibility and accountability in socio-technical systems; transparency, explanation, and contestability; fairness and discrimination in data-driven decision-making; privacy, surveillance, and informational autonomy; the effects of AI on labour and social inequality, as well as sustainability; and the integrity of epistemic environments shaped by automation (misinformation, persuasion, and dependency). The programme also foregrounds questions of governance: how to design oversight and regulatory frameworks that are ethically defensible, practically workable, and aligned with human rights and democratic values. The event runs in a hybrid format: online sessions on 22–23 June 2026, followed by in-person sessions on 24–26 June 2026 at the Faculty of Humanities and Social Sciences, University of Zagreb.

ETHICS OF AI AWARD 2026 (in-person talks only): the best submitted abstract will be selected for a special Award Talk, similar to a keynote talk (note: the conference fee will be waived for the selected author).

The final deadline for submitting proposals, on any of the research topics listed below, is May 15, 2026.

 

KEYNOTE SPEAKERS:

·       Roman V. Yampolskiy is an Associate Professor in the Department of Computer Engineering and Computer Science at the University of Louisville (Speed School of Engineering) and the founding Director of the Cyber Security Lab.

·       Emily E. Sullivan is a Senior Lecturer in the Department of Philosophy, University of Edinburgh, and Co-Director of the Centre for Technomoral Futures (Edinburgh Futures Institute).

·       Vincent Blok is Professor at Wageningen University & Research and Professor at Erasmus University Rotterdam; he is also Scientific Director of the 4TU Centre for Ethics of Technology.

·       Siobhain Lash is a Teaching Assistant Professor at the John Chambers College of Business and Economics at West Virginia University.

·       Srećko Gajović is a Distinguished Professor at the School of Medicine, University of Zagreb, and is affiliated with the Croatian Institute for Brain Research.

·       Devon Schiller is a biological, cognitive, and medical semiotician based at the Department of English and American Studies, University of Vienna, and a DOC Fellow of the Austrian Academy of Sciences.

·       Saša Horvat is an Associate Professor at the University of Rijeka, Faculty of Medicine, affiliated with the Department of Social Sciences and Medical Humanities.

 

Topics might include (but are not limited to):

 

1.       Foundations of AI Ethics and Normative Frameworks

a. Value pluralism in AI: human rights, capabilities, welfare, dignity, autonomy
b. Deontic vs. consequentialist vs. virtue-theoretic approaches to design and deployment
c. Individual vs. collective harms; distributive vs. procedural justice in automated systems

2.       Responsibility, Accountability, and Agency in Socio-Technical Systems

a. Responsibility gaps, many-hands problems, and institutional responsibility
b. Human–AI decision pipelines: delegation, oversight, and meaningful control
c. Liability, professional duties, and accountability mechanisms in high-stakes contexts

3.       Transparency, Explainability, and Contestability

a. Explanation as justification vs. explanation as understanding: stakeholders and standards
b. Epistemic limits of interpretability; post-hoc rationalisations and “explanation theatre”
c. Procedural safeguards: auditability, due process, and avenues for appeal

4.       Fairness, Discrimination, and Structural Injustice

a. Competing fairness metrics; impossibility results and ethical trade-offs
b. Bias across the AI lifecycle: data, modelling, deployment, feedback loops
c. Group harms, intersectionality, and the reproduction of social power

5.       Privacy, Surveillance, and Data Governance

a. Data minimisation, purpose limitation, and secondary use in AI systems
b. Re-identification risk, inference threats, and privacy in multimodal models
c. Consent, agency over data, and collective data rights

6.       Safety, Robustness, and Misuse

a. Risk assessment under uncertainty: hazard modelling, red-teaming, and assurance cases
b. Dual-use, adversarial behaviour, deception, and manipulation risks
c. Security-by-design and the ethics of releasing powerful models

7.       Epistemic Harms and the Integrity of the Information Environment
a. Misinformation, synthetic media, and epistemic injustice
b. Recommender systems, attention capture, and autonomy over belief-formation
c. Trust, credibility, and the ethics of human reliance on AI outputs

8.       Governance, Regulation, and Institutional Design

a. Compliance, enforcement, and the ethics of “checklist” governance
b. Standards, certification, and third-party auditing: what counts as due diligence?
c. Global governance, regulatory fragmentation, and cross-border impacts

 

9.       Labour, Education, and the Political Economy of AI

a. Automation, deskilling, and workplace surveillance
b. Intellectual property, creative labour, and compensation in data-driven systems
c. Public-sector AI, procurement ethics, and democratic accountability

10.  Environmental and Infrastructural Ethics

a. Energy use, carbon accounting, and ecological impacts of training and deployment
b. Supply-chain ethics (minerals, hardware, e-waste) and infrastructural inequality
c. Sustainability trade-offs: “bigger models” vs. “better models”

11.  Human–AI Interaction, Persuasion, and Relational Ethics

a. Manipulation, nudging, and user vulnerability (children, patients, dependents)
b. Anthropomorphism, trust calibration, and the ethics of conversational agents
c. Social roles: AI as advisor, companion, gatekeeper, or authority

12.  Methods in AI Ethics

a. Bridging principles and practice: operationalisation, metrics, and evaluation protocols
b. Participatory design, stakeholder engagement, and community oversight
c. Interdisciplinary methods: empirical ethics, ethnography, and impact assessment

Special Track I: Medical AI Ethics

This track focuses on ethical, legal, and clinical issues in the development and deployment of AI in healthcare. Topics may include:

a. Clinical responsibility and accountability for AI-assisted decisions
b. Bias, inequity, and health disparities in medical datasets and tools
c. Explainability, informed consent, and patient autonomy in AI-mediated care
d. Safety, validation, and post-deployment monitoring in real clinical settings
e. Trustworthy AI and “ethics-by-design” approaches for healthcare systems

Special Track II: EthicAI4Care — Implementing Ethics by Design in AI for Healthcare

This track is aligned with the EU project “EthicAI4Care”, which develops an integrated training approach combining AI, healthcare, and ethics, aiming to strengthen trustworthy AI in the health sector through ethics-by-design and educational capacity-building. Topics may include:

a. Ethics-by-design frameworks and self-assessment tools for healthcare AI
b. Embedding EU ethical values and fundamental rights into curricula and professional training
c. Pedagogical methods for interdisciplinary upskilling (clinicians, educators, developers)
d. From guidelines to practice: institutional implementation, evaluation, and governance

Special Track III: Asymmetric Communities, Sustainability, and AI

Aligned with the EU NRRP project “OBZIR”, this track examines moral, political, and legal challenges of non-reciprocal relations in mixed communities of humans, non-humans, ecosystems, and artificial entities, especially AI and robotic systems, with a focus on sustainability-oriented norms and governance. Topics may include:

a. Criteria for asymmetry and resulting obligations
b. Human–AI and Human–robot relations: status, responsibility, and governance
c. Sustainability and expanding community membership
d. Indigenous/alternative frameworks and action-guiding ethical guidelines

FEES (accepted speakers)

·       Early Stage (until 15 May 2026)

·       Professionals (postdoc, professor, tenure-track): € 130,00

·       Students (Master, PhD): € 90,00

 

·       Later Stage (15 May – 15 June 2026)

·       Professionals (postdoc, professor, tenure-track): € 180,00

·       Students (Master, PhD): € 120,00

 

Attendance: Free.

 

Languages of the conference: English and Croatian.

SUBMISSIONS:

·       IMPORTANT: please state clearly whether you are submitting to the online segment (OS, 22-23 June) or the in-person segment (PS, 24-26 June). If online, please also indicate a preferred day (22 or 23 June) and time slot (Morning: 9h30-12h30; Afternoon: 14h00-18h00), in the Zagreb time zone.

·       In-person submissions have a higher chance of being accepted (more slots are available) and are automatically entered for the Ethics of AI Award 2026.

·       Proposals should include two files (in Word format; PDF files will not be accepted):

o   (1) a cover page with identification and a clear academic affiliation (if several, choose the main one)

o   (2) an anonymized title and abstract (maximum 250 words, up to 10 references)

o   (3) both files should be sent to [email protected]

·       Paper duration: 30 minutes (20-minute presentation + 10 minutes for discussion);

·       Notification: to help accepted speakers request funding and arrange their travel in advance, notification of acceptance or rejection will be sent within 7-10 days of submission;

·       Publications: some of the papers presented at the conference are expected to be published through several follow-up projects (edited volume, special issue, etc.); the publication process will be independent and optional; more details will be provided after the conference;

·       Any questions or concerns can be addressed to: [email protected]

Venue: Faculty of Humanities and Social Sciences of the University of Zagreb, Ulica Ivana Lučića 3, HR-10000 Zagreb, Croatia

Organization: Mind, Language and Action Group, Institute of Philosophy, University of Porto | Laboratory for Conceptual Engineering of the Faculty of Humanities and Social Sciences of the University of Zagreb | Department of Philosophy of the Faculty of Humanities and Social Sciences of the University of Zagreb | TBA

 

Organizing Committee

Steven S. Gouveia (Chair)

Luka Perušić (Local Chair)

Sofia Miguens

Jakov Erdeljac

Marko Kos

Damian Srša

TBA

Support:

CEEC Project by FCT 2022.02527.CEECIND

TL Modern & Contemporary Philosophy

RG Mind, Language and Action Group (MLAG)

Instituto de Filosofia da Universidade do Porto – UID/00502/2025

Fundação para a Ciência e a Tecnologia (FCT)

Laboratory for Conceptual Engineering of the Faculty of Humanities and Social Sciences of the University of Zagreb

Department of Philosophy of the Faculty of Humanities and Social Sciences of the University of Zagreb
