CFP: Using AI tools to support ethical inquiry—Can it be done? If so, how, and what are the risks and potential benefits?

Submission deadline: June 1, 2026


Thus far, there has been little academic discussion of whether, and if so how, different types of AI tools could be incorporated into ethics research, or of the risks and potential benefits of such uses. This is perhaps surprising, given the extensive discussion of how AI can support scientific research and how AI can be incorporated into digital humanities projects. From another angle, there has also been discussion of whether AI could support the moral reasoning and decision-making of individuals. And since 2022, popular media have covered at length how students are using generative AI in the classroom and the challenges such uses pose. Nonetheless, the role AI might play in philosophy research, including ethics research, has yet to receive substantial attention.

For this topical collection in Philosophy & Technology, we invite submissions that address philosophical questions regarding the use of AI tools to support ethical inquiry. (We interpret ethical inquiry broadly, to incorporate normative ethics, metaethics, applied ethics, and professional ethics, such as medical ethics or research ethics. Note that the focus of this special issue is not on the use of AI in education except inasmuch as such uses advance ethics research.)

Relevant questions on this topic may include:

- What kinds of ethics research tasks can or cannot be supported or performed by AI tools? For instance, are there ways in which AI can help with processes of deliberation, reasoning, argumentation, evaluation, theorizing, or other activities? Or could AI be used to support one or more methods that are used (whether explicitly or implicitly) in ethics, like conceptual analysis, conceptual engineering, reflective equilibrium, argument and objection generation, thought experiment generation, casuistry, introspection, moral perception, anticipation and prospection, etc.?

- Which uses of AI tools in ethics research involve special kinds of wrongs or risks? Are particular uses especially likely to make philosophy less interesting or less valuable?

- What sorts of goods are lost when people attempt to use AI to “summarize” or “analyze” philosophical texts or literatures or to “articulate” an idea or argument?

- Are AI tools generally better suited for incorporation into processes of discovery than processes of justification? Or are there other generalizations that can be made about the kinds of AI uses within ethics research that are more likely to be fruitful or are less risky?

- Does AI have special potential to aid research in some areas of applied or professional ethics, as opposed to more fundamental areas of ethics, or metaethics?

- Might efforts to incorporate AI tools into ethics tasks help us better understand or refine the methods we use? Could philosophers use AI to develop new methods for ethics?

- If AI can support ethics research in particular ways, does this have implications for how we should think about ethics as a research field?

- Could AI systems be ethics experts, and might human ethicists have reason to defer to AI systems on some ethics questions?

Editors: 

Elizabeth O'Neill, Assistant Professor, Philosophy & Ethics, Eindhoven University of Technology

Philip Nickel, Associate Professor, Philosophy & Ethics, Eindhoven University of Technology
