CFP: "AI Agents: Choice, Autonomy, and the Concept of the Agency" (Special Issue, Inquiry: An Interdisciplinary Journal of Philosophy)
Submission deadline: May 10, 2026
---
Inquiry: An Interdisciplinary Journal of Philosophy invites submissions for a Special Issue on AI agents, choice, autonomy, and the concept of agency, edited by Herman Cappelen and John Hawthorne.
Overview
Are contemporary AI systems—especially large language models—agents? Can they make choices, form intentions, act for reasons, or exercise something like autonomy? If the answer is yes (even in a deflated or partial sense), what does that reveal about the nature of agency, freedom, and responsibility? If the answer is no, what explains the powerful pull of agentive description in practice—and what conceptual or political work is it doing?
This special issue invites papers that treat “AI agency” not only as a metaphysical or empirical question, but also as a methodological and conceptual-engineering problem: when we apply “agency” to novel systems, are we tracking a mind-independent fact, negotiating useful terminology, or creating a legal/social fiction with downstream consequences? In many domains—ethics, governance, product design, and law—we are not merely discovering the answer; we are actively settling it.
Guiding questions
- What is an agent? Necessary/sufficient conditions; minimal vs robust agency; action vs behavior; reasons-responsiveness.
- Can LLMs (or agentic AI systems) make choices? What would count as choosing, intending, planning, or acting—and what would rule it out?
- Autonomy and free will: Are these coherent in artificial systems? Is “freedom” the wrong frame, or a helpful one?
- Comparative models: Is AI agency more like corporate agency, group agency, tool use, delegation, or a legal fiction?
- Methodology and concept application: Is there a truth of the matter about AI agency, or are we deciding how to extend “agency” to new cases? What criteria should guide that decision (explanatory power, predictive control, moral risk, legal administrability, political legitimacy)?
Suggested topics (illustrative)
- Accounts of agency (causal, functionalist, representational, constitutive, normative) and their implications for AI
- Choice, control, and reasons: decision theory, planning, self-models, “intention-like” states, counterfactual robustness
- Agency without consciousness? Agency without experience? (and vice versa)
- Tool vs agent framings in AI practice; “agentic workflows”; delegation and responsibility gaps
- Corporate and collective agency as analogies (and disanalogies) for AI systems
- Legal personhood, liability, and fiction: when is “the AI did it” a useful attribution vs a category mistake?
- Evaluative and political dimensions: who benefits from agent-ascriptions (or denials)? How do attributions distribute blame, credit, and control?
- Operationalization: tests, benchmarks, interpretability, and auditing approaches that purport to measure agency-relevant capacities
- Cross-cultural perspectives on action, autonomy, and personhood (and how they reshape the agency debate)
Submission details
- Manuscripts should be no longer than approximately 10,000 words. Submissions will be reviewed on a rolling basis until the final deadline of May 10, 2026.
- Please submit through the journal’s website: https://www.tandfonline.com/journals/sinq20
- When uploading your manuscript, select the Special Issue title from the drop-down menu on the submission form.
Queries
For questions regarding the Special Issue, please contact: [email protected]