Submission deadline: February 15, 2020

Guest Editor: Maurizio Balistreri (University of Turin)

We still cannot predict how quickly our society will change, but tomorrow it will be ever more normal to meet and deal with intelligent robots. Robots are already a reality, not only in industry but also in the field of services, for example as self-service tills and customer-assistance systems. In agriculture and animal husbandry they are often used to increase productivity while reducing the footprint on the land, the use of pesticides and energy consumption, but tomorrow they could also harvest fruit and carry out plant-health treatments. Domestically, they may mow the lawn or clean the floor, but in the future they could also cook dinner and serve a fine breakfast. Further, robots are now beginning to be used in medicine for the care, assistance and entertainment of patients and the elderly, while tomorrow they could also fight the enemy or transport goods and passengers without a driver.

The development of increasingly intelligent machines raises numerous questions. For example, will the unstoppable advance of robots make it harder to find work? Some jobs will probably remain activities for human beings alone, while other duties may already be performed entirely by autonomous machines; in the future, robots may take over even more complex tasks. Some fear that life without work would become increasingly boring: we would have the whole day on our hands but no longer know what to do with it. We would spend our time in unproductive or unhealthy activities, without a thought for tomorrow: not only would drug consumption rise, but we would also become less able to enjoy relationships with other people. And should a robot malfunction, whose responsibility would it be? Building and programming robots involves people with different skills, but once we have machines able to learn from experience, whom will we deem responsible in case of accident or breakdown? Moreover, could artificial intelligence threaten our survival? In other words, could we control ever more intelligent machines, or will superintelligent entities no longer be willing to serve and obey us? In the face of these fears, the ideal solution would be to build moral machines, that is, machines programmed to behave virtuously at all times. Yet is it truly possible to program a robot to act morally? And is it enough for it to follow principles, or must it also be equipped with a moral character?

Further, as robots spread across different areas of society, tomorrow it could be normal to interact with a machine and share part of our everyday life with it. The relationship with the robot could become a constant, not only at work but also when we travel, stay at home or fall ill. For now, these robots feel no sentiment, let alone possess a conscience. Does this mean we can treat them as we like? Can we, then, destroy them? A robot cannot feel better or worse: if we kick it, impede its functioning or switch it off, we do not harm it. Yet does the way we deal with an ever more intelligent, interactive machine not reveal something about our moral character, just as the way we treat a person does? For example, could a virtuous person ever wish to 'rape' a doll? Does the fact that a person mistreats his companion robot, for example by spitting at or manhandling it, not betray character traits worth criticizing? And by 'practicing' immoral actions on a robot, does one's character not become corrupted? There is a wide-ranging debate on the consequences of violent video games for people's characters, and several scholars maintain that violence practiced in front of a computer may contribute to developing violent, anti-social dispositions. Is this also true of robots programmed to permit violent games and activities? And would it be right to ban or limit their production and/or trade?

Finally, for the moment a human being may feel affection or real love for a robot only on television, in the cinema or in books. Yet if increasingly human-like humanoid robots are produced in the future, we could not only grow fond of them but also fall in love with them. We want to be loved, and it is of course important to us that those who love us are genuinely interested in us and not merely acting out of convenience: a robot's affection could feel precisely like the disinterested love we seek, in that a robot cannot have ulterior motives or betray our trust. We can only try to imagine the consequences. For example, by loving a robot programmed to agree with us at all times, could we over time lose the ability and habit of getting to know other people? Would it become increasingly difficult to face our mistakes and partiality? Is there not, however, the chance that a robot which merely reflects ourselves would finally bore us, since all its gestures and opinions would be predictable? And what would be the point of the company of a superintelligent machine if we always programmed it to please and agree with us? Indeed, an intelligent robot could process an impressive quantity of information and swiftly identify the most rational choice: which way to go to reach a destination, how to invest our savings, or in which study course to enroll. But an intelligent assistant could also serve us in our moral choices.

Faced with a humanoid robot able to interact with us as if it were a human being and to appear to share our sufferings and passions, will we resist the temptation to grant it some moral significance? At least for now, a robot seems not to deserve moral consideration, in that it is a machine with no self-awareness, lacking the ability to feel emotion. But tomorrow, will we still be able to treat it as a mere object? Also, can a robot think? What does being intelligent mean? Is the Turing test enough to recognize a machine's intelligence? Finally, a robot is a programmed machine, but does this mean it is not free? Is freedom only for human beings, or could robots be free too?

Hoping to raise the broadest and most profound inter-disciplinary debate possible on this theme, the next issue of "Filosofia" will consider for publication contributions addressing at least one of the following thematic areas:

• Ethics and robotics;

• Machine ethics;

• Ethical and legal regulation of artificial intelligence and robotic systems;

• Ethical, social and political questions raised by the development of artificial intelligence and the robotic revolution;

• Benefits and risks of using robots in medicine, services, transport, the military field, etc.;

• Robotics and education;

• Responsibility in using intelligent machines;

• Psychological and philosophical aspects of human-robot interaction.

“Filosofia” accepts submissions in Italian, English and French. The articles should comply with the following norms:

All contributions will undergo blind peer review.

Authors will be notified of the result of the selection process and will receive a detailed referee report on their submission.

Deadline for submission is February 15, 2020.


All submissions should be sent to the following address:

Each contribution should include:

• a .doc file (no PDF or other formats) intended for the blind referee. The text should be anonymous and preceded by an abstract in English of no more than 150 words;

• a .doc file intended for the editorial board, which should include the author’s name, academic affiliation and an e-mail address.

All files should not exceed 50,000 characters (including spaces and footnotes).

For further information please contact:
