Moral mechanisms

Abstract
As highly intelligent autonomous robots are gradually introduced into the home and workplace, ensuring public safety becomes extremely important. Given that such machines will learn from interactions with their environment, standard safety engineering methodologies may not be applicable. Instead, we need to ensure that the machines themselves know right from wrong; we need moral mechanisms. Morality, however, has traditionally been considered a defining characteristic, indeed the sole realm, of human beings: that which separates us from animals. But if only humans can be moral, can we build safe robots? If computationalism (roughly, the thesis that cognition, including human cognition, is fundamentally computational) is correct, then morality cannot be restricted to human beings, since equivalent cognitive systems can be implemented in any medium. On the other hand, perhaps there is something special about our biological makeup that gives rise to morality, in which case computationalism is effectively falsified. This paper examines these issues by looking at the nature of morals and the influence of biology. It concludes that moral behaviour is concerned solely with social well-being, independent of the nature of the individual agents that comprise the group. While our biological makeup is the root of our concept of morals and clearly affects human moral reasoning, there is no basis for believing that it will restrict the development of artificial moral agents. The consequences of such sophisticated artificial mechanisms living alongside natural human ones are also explored.