SAS23 - Reliability or Trustworthiness?

November 30, 2023 - December 1, 2023
HLRS, Universität Stuttgart

Nobelstraße 19
Stuttgart
Germany

Organisers:

Universität Stuttgart

Details

Registration for attendance is possible until November 1st. The conference fee is 100€. Please register at https://regi.hlrs.de/2023/sas/registration/reginit

For the schedule, please see https://philo.hlrs.de/?p=290

Conference on Computer-Intensive Methods

Recent debates on AI have identified trust as a central issue. However, it is often unclear what is meant by this: How should we understand trust in AI models? What is the underlying concept of trust? The philosophy of trust has developed fundamentally different notions of trust, which can describe fundamentally different relationships to a person, an institution, or even an artifact. Although terminology sometimes tends to obscure categorical differences, two major families of theory can be distinguished here: reliabilist approaches, which are based on epistemic reasons, and trust in the narrower sense, which rests on normative grounds. In the philosophy of computational sciences and models, however, this difference has hardly been noted.

Therefore, this year’s SAS conference focuses on this alternative: relying on or trusting AI and simulation models?

At first glance, it seems natural to understand trust in purely epistemic terms. An epistemic interpretation of trust is found in the concept of reliabilism, which seems ideally suited for application to scientific and technical contexts. Trust is then a matter of the reliability of methods and techniques. In AI contexts, reliability is often quantified by the system itself, indicating its confidence in a classification. Historical and theoretical approaches also argue in this direction. Naomi Oreskes, for example, appears to offer reliabilist arguments to answer the question of why science should be trusted, citing evidence and “reliable knowledge” as reasons.

However, the question remains how evidence can be established and who can judge the reliability of knowledge. A great deal of expertise seems to be required to determine, reliably, the reliability of complex scientific methods and technologies. While an individual can easily judge the reliability of a toaster, the same cannot be said for AI and simulation models or other complex technologies, which do not necessarily provide immediate impressions of their performance.

In this context, normative theories of trust have argued that evidence will only be accepted if the source (in this case: experts!) is trusted. Following this line of argumentation, reliability seems to presuppose trustworthiness. While this argument is compelling, it raises questions about how we can evaluate the trustworthiness of experts or other sources of evidence. Thus, it appears that there is an entangled relationship between reliability and trustworthiness that requires further examination.


Registration

Registration required: yes
Deadline: November 1, 2023, 9:00am CET
Registration takes place via the external site linked above.

RSVPing on PhilEvents is not sufficient to register for this event.