Machine ethics produces moral and immoral machines. The morality is usually fixed, e.g. by programmed meta-rules and rules, so the machine is capable of certain actions and not of others. Another approach, however, is the morality menu (MOME for short). With this, the owner or user transfers his or her own morality onto the machine, and the machine then behaves, in detail, as he or she would behave. Together with his teams, Prof. Dr. Oliver Bendel developed several artifacts of machine ethics at his university from 2013 to 2018. For one of them he designed a morality menu that has not yet been implemented. Another concept exists for a virtual assistant that can make reservations and place orders for its owner more or less independently. In the article „The Morality Menu“ the author introduces the idea of the morality menu in the context of two concrete machines, then discusses advantages and disadvantages and presents possibilities for improvement. A morality menu can be a valuable extension for certain moral machines. You can download the article here.
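The core idea of a morality menu — the owner toggles moral rules on or off, and the machine consults those settings before acting — can be sketched in a few lines. This is a minimal illustration only; the rule names and the permission logic are assumptions for the sake of the example, not part of Bendel's actual design.

```python
# Hypothetical sketch of a morality menu (MOME): the owner or user
# toggles moral rules, and the machine checks them before acting.
# Rule names and logic are illustrative assumptions.

DEFAULT_RULES = {
    "admit_being_a_machine": True,
    "allow_white_lies": False,
    "protect_animals": True,
}

class MoralityMenu:
    def __init__(self, overrides=None):
        # Start from the defaults; the owner may override any rule.
        self.rules = dict(DEFAULT_RULES)
        if overrides:
            self.rules.update(overrides)

    def set_rule(self, name, enabled):
        if name not in self.rules:
            raise KeyError(f"unknown rule: {name}")
        self.rules[name] = enabled

    def permits(self, action):
        # An action is permitted unless an active rule forbids it.
        if action == "tell_white_lie":
            return self.rules["allow_white_lies"]
        if action == "deny_being_a_machine":
            return not self.rules["admit_being_a_machine"]
        return True

menu = MoralityMenu()
print(menu.permits("tell_white_lie"))   # False with the defaults
menu.set_rule("allow_white_lies", True)
print(menu.permits("tell_white_lie"))   # True after the owner opts in
```

The point of the design is that the machine's morality is no longer fixed at programming time: the same machine behaves differently depending on the settings its owner has chosen.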
At the end of 2018, the article entitled „Learning How to Behave: Moral Competence for Social Robots“ by Bertram F. Malle and Matthias Scheutz was published in the „Handbuch Maschinenethik“ („Handbook Machine Ethics“) (ed.: Oliver Bendel). An excerpt from the abstract: „We describe a theoretical framework and recent research on one key aspect of robot ethics: the development and implementation of a robot’s moral competence.“ The authors propose „that moral competence consists of five elements, two constituents (moral norms and moral vocabulary) and three activities (moral judgment, moral action, and moral communication)“. „A robot’s computational representations of social and moral norms is a prerequisite for all three moral activities. However, merely programming in advance the vast network of human norms is impossible, so new computational learning algorithms are needed that allow robots to acquire and update the context-specific and graded norms relevant to their domain of deployment. Moral vocabulary is needed primarily for moral communication, which expresses moral judgments of others’ violations and explains one’s own moral violations – to justify them, apologize, or declare intentions to do better. Current robots have at best rudimentary moral competence, but with improved learning and reasoning they may begin to show the kinds of capacities that humans will expect of future social robots.“ (Abstract) An overview of the contributions that have been published electronically since 2017 can be found on link.springer.com/referencework/10.1007/978-3-658-17484-2.
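The „context-specific and graded norms“ that Malle and Scheutz call for can be pictured as norm strengths that depend on context and are updated from observed social feedback. The following is a hedged sketch under that reading; the data structure, the update rule, and all names are illustrative assumptions, not the authors' model.

```python
# Illustrative sketch: norms as graded, context-specific strengths
# that a robot updates from feedback (1.0 = approval, 0.0 = blame).
# Structure and update rule are assumptions, not Malle/Scheutz's model.

class NormStore:
    def __init__(self):
        # (context, action) -> graded strength in [0, 1]
        self.norms = {}

    def strength(self, context, action):
        # 0.5 means no prior evidence either way.
        return self.norms.get((context, action), 0.5)

    def update(self, context, action, feedback, lr=0.2):
        # Move the grade toward the observed feedback.
        old = self.strength(context, action)
        self.norms[(context, action)] = old + lr * (feedback - old)

store = NormStore()
# Repeated blame gradually establishes a context-specific prohibition.
for _ in range(5):
    store.update("library", "speak_loudly", feedback=0.0)
print(store.strength("library", "speak_loudly") < 0.2)  # True
```

This toy example only illustrates why the authors argue that norms must be learned and updated rather than programmed in advance: the vast, graded network of human norms cannot be enumerated at design time.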
From a philosophical and ethical point of view, robots have no rights and cannot currently acquire any. An entity only has such rights if it can feel or suffer, if it has a consciousness or a will to live. Accordingly, animals can have certain rights, stones cannot. Only human beings have human rights. Certain animals, such as chimpanzees or gorillas, can be granted basic rights. But granting these animals human rights makes no sense: they are not human beings. If one day robots can feel or suffer, if they have a consciousness or a will to live, they must be granted rights. However, Oliver Bendel does not see any way to get there at the moment. According to him, one could at best develop „reverse cyborgs“, i.e. let brain and nerve cells grow on technical structures (or in a robot). Such reverse or inverted cyborgs might at some point feel something. The newspaper Daily Star dealt with this topic on 28 December 2018. The article can be accessed via www.dailystar.co.uk/news/latest-news/748890/robots-ai-human-rights-legal-status-eu-proposal.
Fig.: A human brain could be part of a reverse cyborg
In 2018, Paladyn Journal of Behavioral Robotics published several articles on robot and machine ethics. In a message to the authors, the editors noted: „Our special attention in recent months has been paid to ethical and moral issues that seem to be of daily debate of researchers from different disciplines.“ The current issue „Roboethics“ includes the articles „Towards animal-friendly machines“ by Oliver Bendel, „Liability for autonomous and artificially intelligent robots“ by Woodrow Barfield, „Corporantia: Is moral consciousness above individual brains/robots?“ by Christopher Charles Santos-Lang, „The soldier’s tolerance for autonomous systems“ by Jai Galliott and „GenEth: a general ethical dilemma analyzer“ by Michael Anderson and Susan Leigh Anderson. The following articles will be published in December 2019: „Autonomy in surgical robots and its meaningful human control“ by Fanny Ficuciello, Guglielmo Tamburrini, Alberto Arezzo, Luigi Villani, and Bruno Siciliano, and „AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing“ by Bettina Berendt. More information via www.degruyter.com/page/1498.
In November 2018 the „Proceedings of Robophilosophy 2018“ with the title „Envisioning robots in society – power, politics, and public space“ were published. Editors are Mark Coeckelbergh, Janina Loh, Michael Funk, Johanna Seibt, and Marco Nørskov. In addition to the contributions of the participants, the book also contains abstracts and extended abstracts of the keynotes by Simon Penny, Raja Chatila, Hiroshi Ishiguro, Guy Standing, Catelijne Muller, Juha Heikkilä, Joanna Bryson, and Oliver Bendel (his article about service robots from the perspective of ethics is available here). The Robophilosophy Conference Series is the most important international event for robophilosophy and roboethics; in February 2018 it took place at the University of Vienna. Machine ethics was also an issue this time. „Robots are predicted to play a role in many aspects of our lives in the future, affecting work, personal relationships, education, business, law, medicine and the arts. As they become increasingly intelligent, autonomous, and communicative, they will be able to function in ever more complex physical and social surroundings, transforming the practices, organizations, and societies in which they are embedded.“ (IOS Press) More information via www.ios.com.
At the end of June 2018, the seventh contribution to the „Handbuch Maschinenethik“ („Handbook Machine Ethics“) was published by Springer. It was written by Oliver Bendel, who is also the editor, and bears the title „Pflegeroboter aus Sicht der Maschinenethik“ („Care Robots from the Perspective of Machine Ethics“). Among other things, it builds on results from the book chapter „Surgical, Therapeutic, Nursing and Sex Robots in Machine and Information Ethics“, published in 2015 in the Springer book „Machine Medical Ethics“, extends the questions concerning care robots and deepens the discussion of them. Since the end of 2016, renowned scientists have been working on their chapters for the „Handbuch Maschinenethik“. The printed work is scheduled to appear at the end of 2018. Contributors include Luís Moniz Pereira from Lisbon, one of the best-known machine ethicists in the world, and robot ethicist Janina Loh, who works with Mark Coeckelbergh in Vienna. A contribution by the Stuttgart philosopher Catrin Misselhorn, who also has an excellent reputation in the discipline, will be published in a few days. An overview of the contributions, which are published electronically on an ongoing basis, can be found at link.springer.com/referencework/10.1007/978-3-658-17484-2 …
Fig.: Which moral rules should we teach the care robot?
„Robophilosophy 2018 – Envisioning Robots In Society: Politics, Power, And Public Space“ is the third event in the Robophilosophy Conference Series which focusses on robophilosophy, a new field of interdisciplinary applied research in philosophy, robotics, artificial intelligence and other disciplines. The main organizers are Prof. Dr. Mark Coeckelbergh, Dr. Janina Loh and Michael Funk. Plenary speakers are Joanna Bryson (Department of Computer Science, University of Bath, UK), Hiroshi Ishiguro (Intelligent Robotics Laboratory, Osaka University, Japan), Guy Standing (Basic Income Earth Network and School of Oriental and African Studies, University of London, UK), Catelijne Muller (Rapporteur on Artificial Intelligence, European Economic and Social Committee), Robert Trappl (Head of the Austrian Research Institute for Artificial Intelligence, Austria), Simon Penny (Department of Art, University of California, Irvine), Raja Chatila (IEEE Global Initiative for Ethical Considerations in AI and Automated Systems, Institute of Intelligent Systems and Robotics, Pierre and Marie Curie University, Paris, France), Josef Weidenholzer (Member of the European Parliament, domains of automation and digitization) and Oliver Bendel (Institute for Information Systems, FHNW University of Applied Sciences and Arts Northwestern Switzerland). The conference will take place from 14 to 17 February 2018 in Vienna. More information via conferences.au.dk/robo-philosophy/.
A special session „Formalising Robot Ethics“ takes place within the ISAIM conference in Fort Lauderdale (3 to 5 January 2018). The program is now available and can be viewed at http://isaim2018.cs.virginia.edu/program.html. „Practical Challenges in Explicit Ethical Machine Reasoning“ is a talk by Louise Dennis and Michael Fischer, „Contextual Deontic Cognitive Event Calculi for Ethically Correct Robots“ a contribution by Selmer Bringsjord, Naveen Sundar G., Bertram Malle and Matthias Scheutz. Oliver Bendel will present „Selected Prototypes of Moral Machines“. A few words from the summary: „The GOODBOT is a chatbot that responds morally adequate to problems of the users. It’s based on the Verbot engine. The LIEBOT can lie systematically, using seven different strategies. It was written in Java, whereby AIML was used. LADYBIRD is an animal-friendly robot vacuum cleaner that spares ladybirds and other insects. In this case, an annotated decision tree was translated into Java. The BESTBOT should be even better than the GOODBOT.“
Prof. Dr. Oliver Bendel was invited to give a lecture at the ISAIM special session „Formalising Robot Ethics“. „The International Symposium on Artificial Intelligence and Mathematics is a biennial meeting that fosters interactions between mathematics, theoretical computer science, and artificial intelligence.“ (Website ISAIM) Oliver Bendel will present selected prototypes of moral and immoral machines and will discuss a project planned for 2018. The GOODBOT is a chatbot that responds in a morally adequate way to users’ problems. It is based on the Verbot engine. The LIEBOT can lie systematically, using seven different strategies. It was written in Java, with AIML also being used. LADYBIRD is an animal-friendly robot vacuum cleaner that spares ladybirds and other insects. In this case, an annotated decision tree was translated into Java. The BESTBOT should be even better than the GOODBOT. Technically, everything is still open. The ISAIM conference will take place from 3 to 5 January 2018 in Fort Lauderdale, Florida. Further information is available at isaim2018.cs.virginia.edu/.
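The kind of annotated decision tree described for LADYBIRD can be sketched as follows. This is a minimal illustration in Python rather than the original Java, and the node labels, sensor keys, and actions are assumptions made for the example, not the prototype's actual tree.

```python
# Illustrative sketch of an annotated decision tree like the one
# described for LADYBIRD: each node carries an annotation stating the
# moral justification for its branch. All labels are assumptions.

class Node:
    def __init__(self, annotation, test=None, yes=None, no=None, action=None):
        self.annotation = annotation  # moral justification for this node
        self.test = test              # predicate on sensor readings
        self.yes, self.no = yes, no   # subtrees for inner nodes
        self.action = action          # leaf: what the robot does

    def decide(self, sensors):
        # Leaves return their action together with the justification.
        if self.action is not None:
            return self.action, self.annotation
        branch = self.yes if self.test(sensors) else self.no
        return branch.decide(sensors)

# Leaves
stop = Node("Insects must not be harmed.", action="stop_and_wait")
clean = Node("No living being detected; cleaning may proceed.",
             action="continue_cleaning")

# Root: does a sensor detect an insect (e.g. a ladybird) ahead?
tree = Node(
    "Check for living beings before moving on.",
    test=lambda s: s.get("insect_detected", False),
    yes=stop,
    no=clean,
)

print(tree.decide({"insect_detected": True}))
# ('stop_and_wait', 'Insects must not be harmed.')
```

The annotations make the moral reasoning explicit at each branch, which is what allows such a tree to be reviewed, discussed, and then translated mechanically into executable code.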
The Digital Europe Working Group Conference Robotics will take place on 8 November 2017 at the European Parliament in Brussels. The keynote address will be given by Mariya Gabriel, European Commissioner for Digital Economy and Society. The speakers of the first panel are Oliver Bendel (Professor of Information Systems, Information Ethics and Machine Ethics at the School of Business FHNW, via video conference), Anna Byhovskaya (policy and communications advisor, Trade Union Advisory Council of the OECD) and Malcolm James (Senior Lecturer in Accounting & Taxation, Cardiff Metropolitan University). The third panel will be moderated by Mady Delvaux (Member of the European Parliament). The speaker is Giovanni Sartor (Professor of Legal Informatics and Legal Theory at the European University Institute). The poster can be downloaded here. Further information is available at www.socialistsanddemocrats.eu/events/sd-group-digital-europe-working-group-robotics.