In Moderation
This event is online
Details
Recent experiences around the world evince digital platforms' ability to propagate disinformation, escalating political and ethnic conflicts. Automated moderation promises to enhance social media companies' ability to detect and remove toxic speech in a timely manner, but it raises concerns about its potential impact on democratic deliberation.
This discussion constitutes a new chapter in the philosophical debate regarding the appropriate treatment of speech that threatens the public interest. A central aspect of this debate has been the relation between free speech and democratic deliberation. It has recently been argued that the removal of disinformation (via automated moderation) is necessary for safeguarding autonomous deliberation, insofar as it improves agents' epistemic environment. This talk argues that, in its current form, automated moderation is insufficient to uphold autonomous and democratic deliberation. First, the removal of toxic content is in tension with engagement-optimization algorithms and platform design features that disincentivize deliberation. Second, automated moderation inherits the shortcomings of algorithmic decision-making, such as opacity, lack of accountability, and a disparate negative impact on historically marginalized populations. Third, the automated removal of disinformation fails to create fertile conditions for epistemic agency, which are necessary for democratic deliberation.
Registration: No
Custom tags:
#westernphilosophy