CFP: Special issue on "Teaching the Ethics of Artificial Intelligence" for Teaching Ethics (journal)
Submission deadline: February 1, 2024
Fall 2024 Special Issue of Teaching Ethics
Editors: Glen Miller (Department of Philosophy, Texas A&M) and Deborah Mower (The Center for Practical Ethics, University of Mississippi)
The rapid development and adoption of artificial intelligence (AI), and the widespread attention given to its capabilities, create a fecund moment for those teaching ethics. Every day, there are new developments and news reports on how AI already predicts what will draw our interest (Netflix and Instagram), analyzes more data faster than humans can (financial and x-ray analysis), drives our cars (Tesla, and now all “legacy” automakers), answers our questions (Alexa and Siri), and even generates text, images, and code (ChatGPT, DALL-E, GitHub Copilot).
At an immediate level, the development of AI raises a host of normative and epistemic questions. What kinds of cognitive tasks should be offloaded to technology? How should these technologies be regulated? How does AI development necessitate the reconsideration of ethical and legal approaches to privacy and bias? How do we trust, and whom (or what) should we trust? What sort of transparency or explainability is required? To what extent are individuals and corporations responsible for the impacts of their technology?
At a more fundamental level, the development of AI prompts reconsideration of our humanity, our interactions with each other, and our ethical concepts. How should we think about humanity as machines absorb many cognitive and communicative tasks that had been considered distinctly human? As generative AI produces images, text, and deep fakes, how should we evaluate authenticity, knowledge claims, and even the nature of the interlocutors in civil (or uncivil) exchanges? As new AI technologies are trained, how should legacies of discrimination and oppression be addressed, and what will lead to a just society? And in the face of the massive data collection needed for the development of new AI applications, how should one conceptualize individual and collective rights and agency?
The special issue seeks to compile a robust multidisciplinary collection of papers—drawing from philosophy, the humanities, the social sciences, computer science, and other STEM fields—that explore novel pedagogical methods and practices that use this moment of attention to AI to promote the ethical development of students, broadly understood. We welcome both empirical and conceptual papers. Papers may be theoretical, developing intellectual resources and concepts or examining the potential and anticipated problems of AI and ethics, or practical. Papers that describe actual or planned courses should use the following structure:
· Overview (course name, description, and institutional context)
· Course Trajectory (a high-level narrative that explicates the progression of ideas, skills, and techniques; the learning objectives and their importance; and key resources)
· Pedagogy and Assessment (pedagogical methodology and practice, assignments, and assessment of student work)
· Lessons Learned and Future Plans (if the course has been taught)
The deadline for submitting 500-word abstracts is February 1, 2024. Authors will be notified by February 15, 2024, and full paper drafts of 4,500–6,000 words (excluding references) must be submitted by March 31, 2024. The special issue is scheduled to be published in Fall 2024. Contributions should be emailed to Glen Miller ([email protected]) and Deborah Mower ([email protected]). Information regarding the preparation of manuscripts is available in the journal’s Submission Guidelines.