Moral mechanisms
Field | Value | Language |
dc.citation.epage | 60 | en_US |
dc.citation.issueNumber | 1 | en_US |
dc.citation.spage | 47 | en_US |
dc.citation.volumeNumber | 27 | en_US |
dc.contributor.author | Davenport, D. | en_US |
dc.date.accessioned | 2016-02-08T11:03:18Z | |
dc.date.available | 2016-02-08T11:03:18Z | |
dc.date.issued | 2014 | en_US |
dc.department | Department of Computer Engineering | en_US |
dc.description.abstract | As highly intelligent autonomous robots are gradually introduced into the home and workplace, ensuring public safety becomes extremely important. Given that such machines will learn from interactions with their environment, standard safety engineering methodologies may not be applicable. Instead, we need to ensure that the machines themselves know right from wrong; we need moral mechanisms. Morality, however, has traditionally been considered a defining characteristic, indeed the sole realm of human beings; that which separates us from animals. But if only humans can be moral, can we build safe robots? If computationalism - roughly the thesis that cognition, including human cognition, is fundamentally computational - is correct, then morality cannot be restricted to human beings (since equivalent cognitive systems can be implemented in any medium). On the other hand, perhaps there is something special about our biological makeup that gives rise to morality, and so computationalism is effectively falsified. This paper examines these issues by looking at the nature of morals and the influence of biology. It concludes that moral behaviour is concerned solely with social well-being, independent of the nature of the individual agents that comprise the group. While our biological makeup is the root of our concept of morals and clearly affects human moral reasoning, there is no basis for believing that it will restrict the development of artificial moral agents. The consequences of such sophisticated artificial mechanisms living alongside natural human ones are also explored. | en_US |
dc.identifier.doi | 10.1007/s13347-013-0147-2 | en_US |
dc.identifier.issn | 2210-5433 | |
dc.identifier.uri | http://hdl.handle.net/11693/26680 | |
dc.language.iso | eng | en_US |
dc.publisher | Kluwer Academic Publishers | en_US |
dc.relation.isversionof | https://doi.org/10.1007/s13347-013-0147-2 | en_US |
dc.source.title | Philosophy and Technology | en_US |
dc.subject | AI | en_US |
dc.subject | Computationalism | en_US |
dc.subject | Moral agent | en_US |
dc.subject | Moral patient | en_US |
dc.subject | Safety | en_US |
dc.title | Moral mechanisms | en_US |
dc.type | Article | en_US |
Files
Original bundle
- Name: Moral mechanisms.pdf
- Size: 221.81 KB
- Format: Adobe Portable Document Format
- Description: Full printable version