BEGIN:VCALENDAR
PRODID:-//Grails iCalendar plugin//NONSGML Grails iCalendar plugin//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VEVENT
DTSTAMP:20260311T132451Z
DTSTART;TZID=Europe/London:20210416T200000
DTEND;TZID=Europe/London:20210416T200000
SUMMARY:SAFECOMP 2021 Workshop - Using Argument-Based Assurance for Bias Mitigation in Intelligent and Autonomous Systems (EA-BM 2021)
UID:20260314T131739Z-iCalPlugin-Grails@fe80:0:0:0:d4cf:baff:fea2:9582%3
LOCATION:York\, United Kingdom
DESCRIPTION:<p><a name="introduction"></a>Introduction</p>\n<p>Social\, statistical\, and cognitive biases can cascade through the project stages of designing\, developing\, and deploying an intelligent and autonomous system <a href="#ref-kliegr2020">[1]</a>. Even if the final system is &ldquo\;safe&rdquo\; in one respect (e.g.\, it functions as expected within a limited operational context)\, the impact of unidentified biases can still cause significant social harm or lead to other undesirable (and perhaps unintended) consequences <a href="#ref-rajkomar2018">[2]</a>.</p>\n<p>The scope of this SAFECOMP 2021 workshop will be to explore whether the methods and tools of <em>argument-based assurance</em> can be used to help mitigate the harms associated with the aforementioned biases within the context of intelligent and autonomous systems. For instance\, can the process of developing a goal-structured assurance case help operationalise <em>ethical principles</em> associated with normative goals\, such as fairness\, and subsequently help build confidence regarding the expected behaviour of intelligent and autonomous systems?</p>\n<p><a name="topics-of-interest"></a>Topics of Interest</p>\n<p>Despite significant progress in the area of Safe and Ethical AI\, there is still uncertainty about how <em>principles and values</em>\, such as fairness or social justice\, can (and should) be translated into <em>practical guidelines</em> for designers\, developers\, policy-makers\, and other stakeholders. The primary aim of this workshop is to explore whether argument-based assurance can help address this gap.</p>\n<p>Argument-based assurance relies on structured methods of argumentation (i.e.\, assurance cases) to provide assurance to another party (or parties) that a particular claim (or set of related claims) about a property of a system is valid and warranted given the available evidence. It is a well-established method within safety-critical domains (e.g.\, energy\, aviation) <a href="#ref-gsncommunity2018">[3]</a>\, originally influenced by work in informal logic and argumentation theory <a href="#ref-eemeren2004">[4]</a>\, <a href="#ref-toulmin2003">[5]</a>. This approach has also recently been extended to provide assurance for intelligent and autonomous systems (e.g.\, systems that rely on some form of machine learning or artificial intelligence) <a href="#ref-picardi2020">[6]</a>\, <a href="#ref-ashmore2019">[7]</a>.</p>\n<p>Building on this research\, this workshop will help establish an <em>interdisciplinary agenda</em> for realising the wider potential of argument-based assurance. 
As such\, possible topics and areas of application could include (but are not limited to):</p>\n<ul>\n<li>How does bias affect different tasks throughout a typical project lifecycle (e.g.\, project design\, preprocessing\, model development\, user training\, system monitoring)\, and how could argument-based assurance support the task of identifying and mitigating bias?</li>\n<li>How could argument-based assurance help designers\, developers\, and auditors evaluate the ethical challenges associated with developing and deploying a particular intelligent and autonomous system within a social context marked by socioeconomic inequalities?</li>\n<li>Are there any fairness-related goals for which argument-based assurance would be inappropriate? If so\, what are the limitations of this approach?</li>\n<li>Could argument-based assurance help establish trust in system property claims that make explicit reference to ethical principles such as fairness or justice?</li>\n<li>Could assurance patterns for bias mitigation and fairness be developed that would help support consensus formation within and among industries?</li>\n<li>How would ethical assurance complement existing efforts to develop universal standards for the safety and risk analysis and evaluation of intelligent and autonomous systems (e.g.&nbsp\;IEEE or ISO standards)?</li>\n<li>Could ethical assurance help policy-makers and public sector organisations fulfil legal duties\, such as those identified in the Public Sector Equality Duty?</li>\n</ul>\n<p><a name="workshop-schedule"></a>Workshop Schedule</p>\n<p>An introductory tutorial will be delivered at the start of the workshop by the workshop organisers\, covering a range of topics for those who are unfamiliar with argument-based assurance (e.g.\, methodology and logic\, technical tools and standards\, ethical applications).</p>\n<p>In addition\, a keynote presentation will be delivered at the end of the day (<strong>speaker TBC</strong>).</p>\n<p>At present\, due to the uncertainty surrounding COVID-19\, it is not possible to confirm whether the workshop will be delivered in person.</p>\n<p><a name="submission-of-abstracts-and-full-papers"></a>Submission of Abstracts and Full Papers</p>\n<p>We invite the submission of extended abstracts (500&ndash\;1000 words) in the first instance\, to be reviewed by the programme committee and organisers. On the basis of this initial review\, applicants will then be invited to submit a full paper (6&ndash\;12 pages) for presentation at the workshop\, which will be reviewed by three independent reviewers\, who will suggest improvements.</p>\n<p>Workshop proceedings will be published as a complementary book to the SAFECOMP proceedings in Springer LNCS. 
Accepted papers must be formatted according to Springer LNCS style guidelines: <a href="http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0">http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0</a> (use Microsoft Word if possible).</p>\n<p>Submission of both abstracts and full papers will be via EasyChair: <a href="https://easychair.org/conferences/?conf=eabm2021" target="_blank">https://easychair.org/conferences/?conf=eabm2021</a></p>\n<p><a name="submission-deadlines"></a>Submission Deadlines</p>\n<ul>\n<li>Extended abstract submission: 17 April 2021</li>\n<li>Provisional notification: 26 April 2021</li>\n<li>Full paper submission: 24 May 2021</li>\n<li>Notification of acceptance: 7 June 2021</li>\n<li>Camera-ready submission: 20 June 2021</li>\n<li>Workshop: 7 September 2021</li>\n</ul>\n<p><a name="workshop-organisers"></a>Workshop Organisers</p>\n<p>Please direct all enquiries to <a href="mailto:cburr@turing.ac.uk">cburr@turing.ac.uk</a>.</p>\n<p>Dr Christopher Burr<br> Alan Turing Institute\,<br> British Library\, 96 Euston Road\,<br> London\, NW1 2DB<br> United Kingdom<br><br> <a href="mailto:cburr@turing.ac.uk">cburr@turing.ac.uk</a></p>\n<p>Dr David Leslie<br> Alan Turing Institute\,<br> British Library\, 96 Euston Road\,<br> London\, NW1 2DB<br> United Kingdom<br><br> <a href="mailto:dleslie@turing.ac.uk">dleslie@turing.ac.uk</a></p>\n<p><a name="references"></a>References</p>\n<p><a name="refs"></a><a name="ref-kliegr2020"></a>[1] T. Kliegr\, &Scaron\;. Bahn&iacute\;k\, and J. F&uuml\;rnkranz\, &ldquo\;A review of possible effects of cognitive biases on interpretation of rule-based machine learning models\,&rdquo\; 25-Jun-2020. [Online]. Available: <a href="http://arxiv.org/abs/1804.02969">http://arxiv.org/abs/1804.02969</a>. [Accessed: 16-Jul-2020]</p>\n<p><a name="ref-rajkomar2018"></a>[2] A. Rajkomar\, M. Hardt\, M. D. Howell\, G. Corrado\, and M. H. Chin\, &ldquo\;Ensuring Fairness in Machine Learning to Advance Health Equity\,&rdquo\; <em>Ann Intern Med</em>\, vol. 169\, no. 12\, p. 866\, Dec. 2018\, doi: <a href="https://doi.org/10.7326/M18-1990">10.7326/M18-1990</a>. [Online]. Available: <a href="http://annals.org/article.aspx?doi=10.7326/M18-1990">http://annals.org/article.aspx?doi=10.7326/M18-1990</a>. [Accessed: 08-Apr-2020]</p>\n<p><a name="ref-gsncommunity2018"></a>[3] Assurance Case Working Group\, &ldquo\;GSN Community Standard (Version 2)\,&rdquo\; 2018 [Online]. Available: <a href="https://scsc.uk/r141B:1?t=1">https://scsc.uk/r141B:1?t=1</a>. [Accessed: 05-Nov-2020]</p>\n<p><a name="ref-eemeren2004"></a>[4] F. H. van Eemeren and R. Grootendorst\, <em>A Systematic Theory of Argumentation: The pragma-dialectical approach</em>. Cambridge University Press\, 2004.</p>\n<p><a name="ref-toulmin2003"></a>[5] S. Toulmin\, <em>The Uses of Argument\, Updated Edition</em>. Cambridge: Cambridge University Press\, 2003.</p>\n
<p><a name="ref-picardi2020"></a>[6] C. Picardi\, C. Paterson\, R. Hawkins\, R. Calinescu\, and I. Habli\, &ldquo\;Assurance Argument Patterns and Processes for Machine Learning in Safety-Related Systems\,&rdquo\; in <em>Proceedings of the Workshop on Artificial Intelligence Safety (SafeAI 2020)</em>\, 2020\, pp. 23&ndash\;30 [Online]. Available: <a href="http://ceur-ws.org/Vol-2560/paper17.pdf">http://ceur-ws.org/Vol-2560/paper17.pdf</a></p>\n<p><a name="ref-ashmore2019"></a>[7] R. Ashmore\, R. Calinescu\, and C. Paterson\, &ldquo\;Assuring the Machine Learning Lifecycle: Desiderata\, Methods\, and Challenges\,&rdquo\; 10-May-2019. [Online]. Available: <a href="http://arxiv.org/abs/1905.04223">http://arxiv.org/abs/1905.04223</a>. [Accessed: 24-Feb-2020]</p>
ORGANIZER;CN=Christopher Burr:mailto:cburr@turing.ac.uk
END:VEVENT
END:VCALENDAR
