CFP: SAFECOMP 2021 Workshop - Using Argument-Based Assurance for Bias Mitigation in Intelligent and Autonomous Systems (EA-BM 2021)

Submission deadline: April 17, 2021

Conference date(s):
September 7, 2021

Conference Venue:

Department of Computer Science, University of York
York, United Kingdom

Introduction

Social, statistical, and cognitive biases can cascade through the project stages of designing, developing, and deploying an intelligent and autonomous system [1]. Even if the final system is “safe” in one respect (e.g., functions as expected within a limited operational context), the impact of unidentified biases can still cause significant social harm or lead to other undesirable (and perhaps unintended) consequences [2].

The scope of this SAFECOMP 2021 workshop will be to explore whether the methods and tools of argument-based assurance can be used to help mitigate the harms associated with the aforementioned biases within the context of intelligent and autonomous systems. For instance, can the process of developing a goal-structured assurance case help operationalise ethical principles associated with normative goals, such as fairness, and subsequently help build confidence regarding the expected behaviour of intelligent and autonomous systems?

Topics of Interest

Despite significant progress in the area of Safe and Ethical AI, there is still uncertainty about how principles and values, such as fairness or social justice, can (and should) be translated into practical guidelines for designers, developers, policy-makers, and other stakeholders. The primary aim of this workshop is to explore whether argument-based assurance can help address this gap.

Argument-based assurance relies on structured methods of argumentation (i.e., assurance cases) to provide assurance to another party (or parties) that a particular claim (or set of related claims) about a property of a system is valid and warranted given the available evidence. It is a well-established method within safety-critical domains (e.g., energy, aviation) [3], originally influenced by work in informal logic and argumentation theory [4], [5]. The approach has also recently been extended to provide assurance for intelligent and autonomous systems (e.g., systems that rely on some form of machine learning or artificial intelligence) [6], [7].
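
To illustrate what such a goal-structured case might look like, the sketch below (in Python, for illustration only; the goals, contexts, and evidence items are hypothetical and do not follow any published assurance pattern) decomposes a top-level fairness claim into sub-goals and supporting evidence, loosely mirroring the goal-strategy-evidence structure used in notations such as GSN [3].

# A minimal, hypothetical sketch of a goal-structured assurance case for a
# fairness claim, represented as nested Python dictionaries. All element
# names and claims are illustrative assumptions, not a published pattern.

fairness_case = {
    "goal": "G1: The system's decisions are acceptably fair across protected "
            "groups within the defined operational context",
    "context": "C1: 'Fairness' operationalised as an agreed statistical "
               "criterion and threshold, chosen with affected stakeholders",
    "strategy": "S1: Argue over each lifecycle stage where bias can enter",
    "sub_goals": [
        {"goal": "G2: Training data are representative of the deployment population",
         "evidence": ["E1: Dataset audit report", "E2: Sampling methodology review"]},
        {"goal": "G3: Model outputs satisfy the agreed fairness criterion on held-out data",
         "evidence": ["E3: Fairness evaluation results with uncertainty estimates"]},
        {"goal": "G4: Post-deployment monitoring can detect fairness drift",
         "evidence": ["E4: Monitoring plan and alerting thresholds"]},
    ],
}

def print_case(node, indent=0):
    """Walk the argument structure, printing each goal, its context and
    strategy (if present), any sub-goals, and the evidence cited in support."""
    pad = " " * indent
    print(pad + node["goal"])
    for key in ("context", "strategy"):
        if key in node:
            print(pad + "  " + node[key])
    for sub in node.get("sub_goals", []):
        print_case(sub, indent + 4)
    for item in node.get("evidence", []):
        print(pad + "  -> " + item)

print_case(fairness_case)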

Building on this research, the workshop will help establish an interdisciplinary agenda for realising the wider potential of argument-based assurance. Possible topics and areas of application include (but are not limited to):

•          How does bias affect different tasks throughout a typical project lifecycle (e.g., project design, preprocessing, model development, user training, system monitoring), and how could argument-based assurance support the task of identifying and mitigating bias?

•          How could argument-based assurance help designers, developers, and auditors evaluate the ethical challenges associated with developing and deploying a particular intelligent and autonomous system within a social context marked by socioeconomic inequalities?

•          Are there any fairness-related goals that argument-based assurance would be inappropriate for? If so, what are the limitations of this approach?

•          Could argument-based assurance help establish trust in system property claims that make explicit reference to ethical principles such as fairness or justice?

•          Could assurance patterns for bias mitigation and fairness be developed that would help support consensus formation within and among industries?

•          How would ethical assurance complement existing efforts to develop universal standards for the safety and risk analysis and evaluation of intelligent and autonomous systems (e.g., IEEE or ISO standards)?

•          Could ethical assurance help policy-makers and public sector organisations fulfil legal duties, such as those identified in the Public Sector Equality Duty?

Workshop Schedule

The workshop organisers will deliver an introductory tutorial at the start of the workshop, covering a range of topics for those unfamiliar with argument-based assurance (e.g., methodology and logic, technical tools and standards, ethical applications).

In addition, a keynote presentation will be delivered at the end of the day (speaker TBC).

At present, due to ongoing uncertainty around COVID-19, it is not yet possible to confirm whether the workshop will be delivered in person.

Submission of Abstracts and Full-Papers

We invite the submission of extended abstracts (500-1000 words) in the first instance, to be reviewed by the programme committee and organisers. On the basis of this initial review, applicants will then be invited to submit a full paper (6-12 pages) for presentation at the workshop. Full papers will be reviewed by three independent reviewers, who will suggest improvements.

Workshop proceedings will be published as a complementary book to the SAFECOMP proceedings in Springer LNCS. Accepted papers must be formatted according to Springer LNCS style guidelines: http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0 (use Microsoft Word if possible).

Submission of both abstracts and full papers will be via EasyChair: https://easychair.org/conferences/?conf=eabm2021

Submission Deadlines:

•          Extended abstract submission: 17 April 2021

•          Provisional notification: 26 April 2021

•          Full paper submission: 24 May 2021

•          Notification of acceptance: 7 June 2021

•          Camera-ready submission: 20 June 2021

•          Workshop: 7 September 2021

Workshop Organisers

Please direct all enquiries to [email protected]

Dr Christopher Burr
Alan Turing Institute,
British Library, 96 Euston Road,
London, NW1 2DB
United Kingdom

[email protected]

Dr David Leslie
Alan Turing Institute,
British Library, 96 Euston Road,
London, NW1 2DB
United Kingdom

[email protected]

References

[1]        T. Kliegr, Š. Bahník, and J. Fürnkranz, “A review of possible effects of cognitive biases on interpretation of rule-based machine learning models,” 25-Jun-2020. [Online]. Available: http://arxiv.org/abs/1804.02969. [Accessed: 16-Jul-2020]

[2]        A. Rajkomar, M. Hardt, M. D. Howell, G. Corrado, and M. H. Chin, “Ensuring Fairness in Machine Learning to Advance Health Equity,” Ann Intern Med, vol. 169, no. 12, p. 866, Dec. 2018, doi: 10.7326/M18-1990. [Online]. Available: http://annals.org/article.aspx?doi=10.7326/M18-1990. [Accessed: 08-Apr-2020]

[3]        The Assurance Case Working Group, “GSN Community Standard (Version 2),” Safety-Critical Systems Club, 2018 [Online]. Available: https://scsc.uk/r141B:1?t=1. [Accessed: 05-Nov-2020]

[4]        F. H. van Eemeren and R. Grootendorst, A Systematic Theory of Argumentation: The Pragma-Dialectical Approach. Cambridge University Press, 2004.

[5]        S. Toulmin, The Uses of Argument, Updated Edition. Cambridge: Cambridge University Press, 2003.

[6]        C. Picardi, C. Paterson, R. Hawkins, R. Calinescu, and I. Habli, “Assurance Argument Patterns and Processes for Machine Learning in Safety-Related Systems,” in Proceedings of the Workshop on Artificial Intelligence Safety (SafeAI 2020), 2020, pp. 23–30 [Online]. Available: http://ceur-ws.org/Vol-2560/paper17.pdf

[7]        R. Ashmore, R. Calinescu, and C. Paterson, “Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges,” 10-May-2019. [Online]. Available: http://arxiv.org/abs/1905.04223. [Accessed: 24-Feb-2020]
