CFP: Justice and Fairness in Data Use and Machine Learning

Submission deadline: February 15, 2019

Conference date(s):
April 5, 2019 - April 7, 2019


Conference Venue:

Department of Philosophy and Religion, Northeastern University
Boston, United States

Details

The Information Ethics Roundtable (IER) is a yearly conference that brings together researchers from disciplines such as philosophy, information science, communications, public administration, anthropology, and law to discuss ethical issues such as information privacy, intellectual property, intellectual freedom, and censorship.

The 17th annual Information Ethics Roundtable will explore the relationship between the normative notions of justice and fairness and current practices of data use and machine learning.

Artificial intelligence is now a part of our everyday lives. It allows us to easily get to places we have never been before while avoiding traffic and road work, to communicate with a Chinese friend when we don’t share a common language, and to carry out complex but mind-numbing, repetitive jobs in factories. But such artificial intelligences can also exhibit what we might call “artificial bias”; that is, machine behavior that, if produced by a person, we would say is biased against particular groups, such as racial minorities. Machine learning using large data sets is one means of achieving AI that is particularly vulnerable to producing biased systems, because it uses data from human behavior that is itself biased. A number of tech companies, such as Google and IBM, as well as computer science researchers, are currently seeking ways to correct for such biases and to produce “fair” algorithms. But a number of fundamental questions about bias, fairness, and even justice still need to be answered if we are to solve this problem. (See below for some examples.)

In the 2019 edition of IER, we seek proposals that approach these questions from a variety of disciplinary perspectives through the lens of information ethics.

Registration is free and the conference is open to the public. We invite you to attend regardless of whether you are formally presenting or discussing a paper.


Proposals

Suggested Topics:

  • What concepts of fairness and justice in philosophy and other disciplines are most useful for understanding fairness, equality, and justice in data use and machine learning?
  • To what extent is it possible to operationalize (or computationalize) different conceptions of fairness and justice within different machine learning techniques?
  • Should machine-learning-based decision-making systems be held to a higher or different standard of fairness and justice before being implemented in industry (e.g., lending) or social services (e.g., child protective services), in comparison to currently accepted practices?
  • What is the role of data scientists and computer programmers in correcting for bias? How can machine learning be used in this role?
  • Not all biases are problematic; indeed, some are very helpful. What sorts of bias are unjust and why?
  • What can modern-day programmers of “classifications” learn about avoiding bias from the experience of other disciplines devoted to classification, such as librarianship?
  • What can normative research in other areas – for example, with respect to police profiling or immigration/refugee screening – teach about when or under what conditions profiling with machine learning is acceptable?
  • What is the relationship between explainability/interpretability in machine learning decision-making and the just use of machine learning in different contexts?

Proposal requirements

We invite three types of proposals:
(1) Papers: Please submit a 500-word abstract of your paper. If accepted, you are expected to submit a detailed outline of your talk to the Roundtable. This will give your commentator a chance to prepare his/her comments in advance.

(2) Panels: Please submit a 1500-word description of your panel. The description should include: i) a description of the topic, ii) biographies of the panel members, and iii) the organization of the panel. Panels must focus tightly on a specific emergent topic, technology, phenomenon, policy, or the like, with clear connections between the presentations.

(3) Posters (for undergraduate and graduate students only): Please submit a 500-word abstract of your poster and an outline of the major sections.

Proposals should be sent to Katie Molongoski at [email protected]. Please include the subject line: “IER 2019 proposal”

Commentators: We also welcome expressions of interest to serve as a commentator/discussant for another person’s paper. Each author with an accepted proposal will be paired with a commentator who will provide formal feedback and comments during the conference. Expressions of interest should be sent to Katie Molongoski at [email protected] by March 10, 2019, although decisions will be made on a rolling basis after March 1, 2019 (the paper acceptance notification date). Please include the subject line: “IER 2019 commenter.”

Deadlines:
–Submission of Proposals: February 15, 2019
–Notification of Acceptance: March 1, 2019
–Presentation Outline Deadline: March 15, 2019
–Registration Deadline: March 25, 2019
–Conference Dates: April 5-7, 2019

Information on previous IERs is available here: https://www.northeastern.edu/csshresearch/ethics/information-ethics-roundtable-2/

Custom tags:

#Information Ethics, #Computer Ethics, #Data Ethics, #Boston Events