12th rTAIM Monthly Seminar
Francesco Prinzi

July 24, 2024, 2:00pm - 3:00pm (Lisbon time)

This event is online

Organisers:

University of Porto

Details

#12 Rebuilding Trust in AI Medicine Monthly (rTAIM) Seminar

24 July 2024 | 2:00pm - 3:00pm (Lisbon Time Zone)

[Zoom ID: 932 3566 2066 | Password: 761386 | Link: https://videoconf-colibri.zoom.us/j/93235662066?pwd=4JGunJeQskC2DOVbALXGu2PSjUMdCn.1]


From Theory to Practice: Overcoming Challenges in Implementing AI-Based CDSSs in Healthcare

Abstract: In recent years, the adoption of computer-assisted tools employing Artificial Intelligence (AI) techniques has increased across several fields. These tools, which harness machine-learning and deep-learning architectures, have been pivotal in medicine through the development of Clinical Decision Support Systems (CDSSs). CDSSs are designed to assist clinicians in critical healthcare processes, where understanding the decision-making process and ensuring system reliability are paramount.

Despite the potential of data-driven AI, its application in medicine remains fraught with challenges, including the creation of high-performing yet non-interpretable CDSSs. Consequently, only a few CDSSs have transitioned from theoretical frameworks to clinical practice. Several factors hinder this transition: insufficient clinician involvement in data preparation, limited datasets, and a lack of external validation, among others. Clinician engagement in model interpretation is crucial, as relying solely on accuracy metrics raises significant concerns about a model's validity. Regulatory agencies have responded by proposing frameworks and guidelines, such as the GDPR and the AI Act, to address concerns about applications of machine-learning models.

This presentation aims to elucidate these critical challenges and provide recommendations for implementing reliable CDSSs. In particular, the discussion will focus on achieving explainability in AI models, highlighting the importance of transparent and interpretable results. The concept of Trustworthy AI is central to this integration: explainability, one of its key features, is essential for developers to technically validate results, for clinicians to align models with the clinical literature, and for patients to understand the decisions. The presentation will cover methods to achieve explainability in both shallow and deep learning architectures, considering tabular and image inputs. Additionally, a new paradigm addressing the accuracy-explainability trade-off will be discussed, offering insights into future developments in this field.
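As an illustrative aside (not material from the presentation), the sketch below shows one common post-hoc explainability technique for a shallow model on tabular inputs: permutation importance from scikit-learn, which ranks features by how much the model's held-out score drops when each feature is shuffled. The dataset and model choices here are assumptions made purely for the example; analogous gradient-based methods such as Grad-CAM play a similar role for deep architectures on image inputs.

# Illustrative sketch only (assumed example, not from the talk):
# model-agnostic feature ranking via permutation importance
# for a shallow model on tabular data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy clinical-style tabular dataset bundled with scikit-learn.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle one feature at a time on held-out data and measure the
# drop in score; larger drops mean the model relies on that feature more.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")

Each printed line reports the mean score drop (and its spread over repeats) when that feature is permuted, giving clinicians a concrete, model-agnostic starting point for checking whether a model's behaviour aligns with the clinical literature.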

Short bio: Francesco Prinzi received his Ph.D. in Biomedicine, Neuroscience and Advanced Diagnostics from the University of Palermo in 2023. He is currently an Assistant Professor at the University of Palermo and is affiliated with the Computer Laboratory of the University of Cambridge (UK). His research focuses on the development of diagnostic and predictive models through Machine Learning, Explainable Artificial Intelligence, and medical image analysis methods.

Registration

No
