AISoLA 2025 Conference Track: Responsible and Trusted AI: An Interdisciplinary Perspective
Faliraki, Greece
As AI systems increasingly permeate diverse sectors and influence many aspects of our lives, ensuring their responsible, trustworthy, and safe development and deployment is more critical than ever. Addressing these challenges requires moving beyond purely technical considerations to grapple with complex societal, ethical, and governance questions. Building on the success of similar tracks at AISoLA 2023 and 2024, this track seeks to foster interdisciplinary dialogue by bringing together researchers from a broad range of fields.
We invite contributions from philosophy, law, psychology, economics, sociology, political science, informatics, and other relevant disciplines. This track offers a platform to share recent research and collaboratively explore the societal implications of designing, implementing, and regulating AI systems.
Topics of interest include, but are not limited to:
- Clarifying key concepts: Critical analysis of terms such as "trustworthy AI," "trusted AI," "responsible AI," and other related frameworks and terminologies.
- Balancing benefits and risks: Identifying and weighing the individual and societal benefits and risks of AI systems.
- Ethical design: Exploring value alignment and ethical considerations in the design and development of AI systems.
- Addressing public concerns: Tackling fears, misconceptions, and biases surrounding AI, including issues of privacy, security, trustworthiness, and responsibility.
- Legal implications: Analyzing the legal consequences of deploying AI systems, including questions of compliance and accountability.
- Regulatory frameworks: Developing robust legal and policy frameworks to support responsible AI adoption, with a focus on the challenges of complying with the EU AI Act.
- Labor market impact: Assessing the implications of AI on employment, workforce dynamics, and economic structures.
- AI for social good: Investigating opportunities and challenges in leveraging AI for societal benefit, including economic growth and problem-solving in key sectors.
- Responsibility and accountability: Examining the attribution of responsibility and accountability in AI development and application.
- Human oversight: Understanding the role of human oversight in AI decision-making processes and its limitations.
- Legal liability: Addressing civil liability and other legal responsibilities in cases of malfunctioning AI systems.
- Ownership and authorship: Exploring issues of ownership, authorship, and intellectual property in the context of generative AI.
- Copyright and revenue sharing: Evaluating intellectual property rights, copyright, and equitable revenue sharing for AI-generated works.
- AI in education: Examining the impact of AI on education, including grading systems, academic integrity, and new learning paradigms.
- Bias and fairness: Identifying, addressing, and mitigating bias, discrimination, and algorithmic unfairness in AI systems.
- Transparency and explainability: Investigating the roles of transparency, explainability, and traceability in reducing societal risks.
- Holistic model assessment: Evaluating AI models within broader decision-making contexts, focusing on acceptability, trustworthiness, and user trust.
- Normative choices in AI design: Critically examining the normative assumptions and ethical choices embedded in AI system design and deployment.
This track is part of the AISoLA conference, which serves as an open forum for discussing recent advancements in machine learning and their far-reaching implications.
Conference date: Nov. 1-5, 2025
Conference venue: Alila Resort & Spa, Rhodes, Greece.
There will be a conference fee; for details, please check the conference website. Depending on funding, we plan to waive the fee for some speakers. If you are interested in this option, please contact us.
For any questions, please feel free to contact Eva Schmidt at [email protected].
If you would like to contribute to the conference proceedings, we invite you to submit a paper. Submissions can be either full papers (12–15 pages) or short papers (6–11 pages). Deadline: April 30, 2025 (watch for possible extensions!)
For those interested in contributing to the post-proceedings, we welcome the submission of a 500-word abstract. (Deadline TBA; May at the earliest.)
We particularly encourage interdisciplinary submissions and, especially, work by teams of authors from diverse fields!
Submission:
https://equinocs.springernature.com/service/rtai25
Organizers: Kevin Baum (German Research Center for Artificial Intelligence, DE), Thorsten Helfer (CISPA Helmholtz Center for Information Security, DE), Sophie Kerstan (University of Freiburg, DE), Markus Langer (University of Freiburg, DE), Eva Schmidt (TU Dortmund, DE), Timo Speith (University of Bayreuth, DE), Andreas Sesing-Wagenpfeil (Saarland University, DE)
This track is co-organized by the Centre for European Research in Trusted Artificial Intelligence (CERTAIN) and the Lamarr Institute for Machine Learning and Artificial Intelligence.