BEGIN:VCALENDAR
PRODID:-//Grails iCalendar plugin//NONSGML Grails iCalendar plugin//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VEVENT
DTSTAMP:20260415T170352Z
DTSTART;TZID=Europe/Belgrade:20260515T230000
DTEND;TZID=Europe/Belgrade:20260515T230000
SUMMARY:5th International Conference on Ethics of Artificial Intelligence (5ICEAI)
UID:20260417T133822Z-iCalPlugin-Grails@philevents-web-f5d4878dd-x5n6c
LOCATION:Faculty of Humanities and Social Sciences of the University of Zagreb\, Ulica Ivana Lučića 3\, HR-10000 Zagreb\, Croatia
DESCRIPTION:<p>[Call for Abstracts]</p>\n<p><strong>5th International Conference on Ethics of Artificial Intelligence&nbsp\;</strong>(5ICEAI)</p>\n<p>Faculty of Humanities and Social Sciences\, University of Zagreb\, Zagreb\, Croatia</p>\n<p><strong>22-26 June 2026 </strong>(22-23 June\, Online | 24-26 June\, in-person)</p>\n<p><strong>About: </strong>The <em>5th International Conference on Ethics of Artificial Intelligence</em> (5ICEAI) brings together researchers\, academics\, and students to examine central ethical and political questions raised by contemporary AI. Hosted by the Faculty of Humanities and Social Sciences\, University of Zagreb (Zagreb\, Croatia)\, the conference promotes dialogue across moral and political philosophy\, philosophy of technology\, law\, and allied interdisciplinary fields\, with an emphasis on both conceptual foundations and concrete institutional challenges. Key themes include responsibility and accountability in socio-technical systems\; transparency\, explanation\, and contestability\; fairness and discrimination in data-driven decision-making\; privacy\, surveillance\, and informational autonomy\; the effects of AI on labour and social inequality\, as well as sustainability\; and the integrity of epistemic environments shaped by automation (misinformation\, persuasion\, and dependency). The programme also foregrounds questions of governance: how to design oversight and regulatory frameworks that are ethically defensible\, practically workable\, and aligned with human rights and democratic values. 
The event runs in a <strong>hybrid format</strong>: online sessions on 22&ndash\;23 June 2026\, followed by in-person sessions on 24&ndash\;26 June 2026 at the Faculty of Humanities and Social Sciences\, University of Zagreb.</p>\n<p><strong>ETHICS OF AI AWARD 2026</strong> (in-person talks only): The best submitted abstract will be invited to deliver a special Award Talk\, similar to a keynote talk (the selected author will have the fee waived).</p>\n<p>The final deadline for submitting proposals on any of the research topics is&nbsp\;<strong>May 20\, 2026.</strong></p>\n<p><strong><u>KEYNOTE SPEAKERS:</u></strong></p>\n<p>&middot\;&nbsp\; <strong>Roman V. Yampolskiy </strong>is an Associate Professor in the Department of Computer Engineering and Computer Science at the University of Louisville (Speed School of Engineering) and the founding Director of the Cyber Security Lab.</p>\n<p>&middot\;&nbsp\; <strong>Emily E. 
Sullivan </strong>is a Senior Lecturer in the Department of Philosophy\, University of Edinburgh\, and Co-Director of the Centre for Technomoral Futures (Edinburgh Futures Institute).</p>\n<p>&middot\;&nbsp\; <strong>Vincent Blok </strong>is Professor at Wageningen University &amp\; Research and Professor at Erasmus University Rotterdam\; he is also Scientific Director of the 4TU Centre for Ethics of Technology.</p>\n<p>&middot\;&nbsp\; <strong>Siobhain Lash </strong>is a Teaching Assistant Professor at the John Chambers College of Business and Economics at West Virginia University.</p>\n<p>&middot\;&nbsp\; <strong>Srećko Gajović </strong>is a Distinguished Professor at the School of Medicine\, University of Zagreb\, and is affiliated with the Croatian Institute for Brain Research.</p>\n<p>&middot\;&nbsp\; <strong>Devon Schiller </strong>is a biological\, cognitive\, and medical semiotician based at the Department of English and American Studies\, University of Vienna\, and a DOC Fellow of the Austrian Academy of Sciences.</p>\n<p>&middot\;&nbsp\; <strong>Sa&scaron\;a Horvat </strong>is an Associate Professor at the Faculty of Medicine\, University of Rijeka\, affiliated with the Department of Social Sciences and Medical Humanities.</p>\n<p><strong>Topics might include (but are not limited to):</strong></p>\n<p>1.&nbsp\; <strong>Foundations of AI Ethics and Normative Frameworks</strong></p>\n<p>a. Value pluralism in AI: human rights\, capabilities\, welfare\, dignity\, autonomy<br> b. Deontic vs. consequentialist vs. 
virtue-theoretic approaches to design and deployment<br> c. Individual vs. collective harms\; distributive vs. procedural justice in automated systems</p>\n<p>2.&nbsp\; <strong>Responsibility\, Accountability\, and Agency in Socio-Technical Systems</strong><br> a. Responsibility gaps\, many-hands problems\, and institutional responsibility<br> b. Human&ndash\;AI decision pipelines: delegation\, oversight\, and meaningful control<br> c. Liability\, professional duties\, and accountability mechanisms in high-stakes contexts</p>\n<p>3.&nbsp\; <strong>Transparency\, Explainability\, and Contestability</strong></p>\n<p>a. Explanation as justification vs. explanation as understanding: stakeholders and standards</p>\n<p>b. Epistemic limits of interpretability\; post-hoc rationalisations and &ldquo\;explanation theatre&rdquo\;</p>\n<p>c. Procedural safeguards: auditability\, due process\, and avenues for appeal</p>\n<p>4.&nbsp\; <strong>Fairness\, Discrimination\, and Structural Injustice</strong></p>\n<p>a. Competing fairness metrics\; impossibility results and ethical trade-offs<br> b. Bias across the AI lifecycle: data\, modelling\, deployment\, feedback loops<br> c. Group harms\, intersectionality\, and the reproduction of social power</p>\n<p>5.&nbsp\; <strong>Privacy\, Surveillance\, and Data Governance</strong></p>\n<p>a. Data minimisation\, purpose limitation\, and secondary use in AI systems<br> b. Re-identification risk\, inference threats\, and privacy in multimodal models<br> c. Consent\, agency over data\, and collective data rights</p>\n<p>6.&nbsp\; <strong>Safety\, Robustness\, and Misuse</strong></p>\n<p>a. Risk assessment under uncertainty: hazard modelling\, red-teaming\, and assurance cases</p>\n<p>b. Dual-use\, adversarial behaviour\, deception\, and manipulation risks<br> c. 
Security-by-design and the ethics of releasing powerful models</p>\n<p>7.&nbsp\; <strong>Epistemic Harms and the Integrity of the Information Environment<br> </strong>a. Misinformation\, synthetic media\, and epistemic injustice<br> b. Recommender systems\, attention capture\, and autonomy over belief-formation<br> c. Trust\, credibility\, and the ethics of human reliance on AI outputs</p>\n<p>8.&nbsp\; <strong>Governance\, Regulation\, and Institutional Design</strong></p>\n<p>a. Compliance\, enforcement\, and the ethics of &ldquo\;checklist&rdquo\; governance<br> b. Standards\, certification\, and third-party auditing: what counts as due diligence?<br> c. Global governance\, regulatory fragmentation\, and cross-border impacts</p>\n<p>9.&nbsp\; <strong>Labour\, Education\, and the Political Economy of AI</strong></p>\n<p>a. Automation\, deskilling\, and workplace surveillance</p>\n<p>b. Intellectual property\, creative labour\, and compensation in data-driven systems</p>\n<p>c. Public-sector AI\, procurement ethics\, and democratic accountability</p>\n<p>10.&nbsp\; <strong>Environmental and Infrastructural Ethics</strong></p>\n<p>a. Energy use\, carbon accounting\, and ecological impacts of training and deployment<br> b. Supply-chain ethics (minerals\, hardware\, e-waste) and infrastructural inequality<br> c. Sustainability trade-offs: &ldquo\;bigger models&rdquo\; vs. &ldquo\;better models&rdquo\;</p>\n<p>11.&nbsp\; <strong>Human&ndash\;AI Interaction\, Persuasion\, and Relational Ethics</strong></p>\n<p>a. Manipulation\, nudging\, and user vulnerability (children\, patients\, dependents)<br> b. Anthropomorphism\, trust calibration\, and the ethics of conversational agents<br> c. Social roles: AI as advisor\, companion\, gatekeeper\, or authority</p>\n<p>12.&nbsp\; <strong>Methods in AI Ethics</strong></p>\n<p>a. 
Bridging principles and practice: operationalisation\, metrics\, and evaluation protocols<br> b. Participatory design\, stakeholder engagement\, and community oversight<br> c. Interdisciplinary methods: empirical ethics\, ethnography\, and impact assessment</p>\n<p><strong>Special Track I: Medical AI Ethics</strong></p>\n<p>This track focuses on ethical\, legal\, and clinical issues in the development and deployment of AI in healthcare. Topics may include:</p>\n<p>a. Clinical responsibility and accountability for AI-assisted decisions<br> b. Bias\, inequity\, and health disparities in medical datasets and tools<br> c. Explainability\, informed consent\, and patient autonomy in AI-mediated care<br> d. Safety\, validation\, and post-deployment monitoring in real clinical settings<br> e. Trustworthy AI and &ldquo\;ethics-by-design&rdquo\; approaches for healthcare systems</p>\n<p><strong>Special Track II: EthicAI4Care &mdash\; Implementing Ethics by Design in AI for Healthcare</strong></p>\n<p>This track is aligned with the EU project &ldquo\;EthicAI4Care&rdquo\;\, which develops an integrated training approach combining AI\, healthcare\, and ethics\, aiming to strengthen trustworthy AI in the health sector through ethics-by-design and educational capacity-building. Topics may include:</p>\n<p>a. Ethics-by-design frameworks and self-assessment tools for healthcare AI<br> b. Embedding EU ethical values and fundamental rights into curricula and professional training<br> c. Pedagogical methods for interdisciplinary upskilling (clinicians\, educators\, developers)<br> d. 
From guidelines to practice: institutional implementation\, evaluation\, and governance</p>\n<p><strong>Special Track III: Asymmetric Communities\, Sustainability\, and AI</strong></p>\n<p>Aligned with the EU NRRP project &ldquo\;OBZIR&rdquo\;\, this track examines moral\, political\, and legal challenges of non-reciprocal relations in mixed communities of humans\, non-humans\, ecosystems\, and artificial entities\, especially AI and robotic systems\, with a focus on sustainability-oriented norms and governance. Topics may include:</p>\n<p>a. Criteria for asymmetry and the resulting obligations<br> b. Human&ndash\;AI and human&ndash\;robot relations: status\, responsibility\, and governance<br> c. Sustainability and expanding community membership<br> d. Indigenous/alternative frameworks and action-guiding ethical guidelines</p>\n<p><strong><u>FEES (accepted speakers)</u></strong></p>\n<p>&middot\;&nbsp\; <strong>Early Stage (until 15 May 2026)</strong></p>\n<p>&middot\;&nbsp\; Professionals (postdoc\, professor\, tenure-track):<strong> &euro\;120\,00</strong></p>\n<p>&middot\;&nbsp\; Students (Master\, PhD):<strong> &euro\;90\,00</strong></p>\n<p>&middot\;&nbsp\; <strong>Later Stage (15 May &ndash\; 15 June 2026)</strong></p>\n<p>&middot\;&nbsp\; Professionals (postdoc\, professor\, tenure-track):<strong> &euro\;160\,00</strong></p>\n<p>&middot\;&nbsp\; Students (Master\, PhD):<strong> &euro\;120\,00</strong></p>\n<p><strong><u>Attendance:</u></strong> Free.</p>\n<p><strong>Languages of the conference: 
</strong>English and Croatian.</p>\n<p><strong>SUBMISSIONS:</strong></p>\n<p>&middot\;&nbsp\; IMPORTANT: please <strong>clearly state</strong> whether you are submitting for the <em>online segment</em> (OS) (22-23 June) or the <em>in-person segment</em> (PS) (24-26 June). If online\, you need to indicate a <strong>preferred day </strong>(22 or 23 June)<strong> and time slot </strong>(<em>Morning</em>: 9h30&ndash\;12h30\; <em>Afternoon</em>: 14h00&ndash\;18h00)\, in the <em>Zagreb time zone</em>.</p>\n<p>&middot\;&nbsp\; In-person submissions have a higher chance of being accepted (more slots are available) and are automatically considered for the <strong>Ethics of AI Award 2026</strong>.</p>\n<p>&middot\;&nbsp\; Proposals should include <strong>two files</strong>\, in <strong>Word</strong> (.doc/.docx) format (PDF files will not be accepted):</p>\n<p>o&nbsp\; (1) a cover page with your name and academic affiliation (if several\, choose the main one)</p>\n<p>o&nbsp\; (2) an anonymized title and abstract (maximum 250 words\, up to 10 references)</p>\n<p>o&nbsp\; (3) both files should be sent to interconfethicsofai@gmail.com</p>\n<p>&middot\;&nbsp\; <strong>Paper duration</strong>: 30 minutes (20-minute presentation + 10 minutes for discussion)\;</p>\n<p>&middot\;&nbsp\; <strong>Notification</strong>: to facilitate funding requests and allow speakers to arrange travel in advance\, notification of acceptance or rejection will be given within <strong>7&ndash\;10 days</strong> of submission\;</p>\n<p>&middot\;&nbsp\; <strong>Publications</strong>: some of the papers presented at the conference are expected to be published in several venues (edited volume\, special issue\, etc.\; the publication 
process will be independent and optional\; more details after the conference)\;</p>\n<p>&middot\;&nbsp\; Any <em>doubts or concerns</em> can be addressed to: interconfethicsofai@gmail.com</p>\n<p><strong>Venue</strong>: Faculty of Humanities and Social Sciences of the University of Zagreb\, Ulica Ivana Lučića 3\, HR-10000 Zagreb\, Croatia</p>\n<p><strong>Organization: </strong>Mind\, Language and Action Group\, Institute of Philosophy\, University of Porto | Laboratory for Conceptual Engineering of the Faculty of Humanities and Social Sciences of the University of Zagreb | Department of Philosophy of the Faculty of Humanities and Social Sciences of the University of Zagreb | TBA</p>\n<p><strong>Organizing Committee</strong></p>\n<p>Steven S. Gouveia (Chair)</p>\n<p>Luka Peru&scaron\;ić (Local Chair)</p>\n<p>Sofia Miguens</p>\n<p>Jakov Erdeljac</p>\n<p>Marko Kos</p>\n<p>Damian Sr&scaron\;a</p>\n<p>TBA</p>\n<p><strong>Support:</strong></p>\n<p>CEEC Project by FCT 2022.02527.CEECIND</p>\n<p>TL Modern &amp\; Contemporary Philosophy</p>\n<p>RG Mind\, Language and Action Group (MLAG)</p>\n<p>Instituto de Filosofia da Universidade do Porto &ndash\; UID/00502/2025</p>\n<p>Funda&ccedil\;&atilde\;o para a Ci&ecirc\;ncia e a Tecnologia (FCT)</p>\n<p>Laboratory for Conceptual Engineering of the Faculty of Humanities and Social Sciences of the University of Zagreb</p>\n<p>Department of Philosophy of the Faculty of Humanities and Social Sciences of the University of Zagreb</p>
ORGANIZER;CN="Steven Gouveia":mailto:interconfethicsofai@gmail.com
END:VEVENT
END:VCALENDAR
