Machine ethics produces moral and immoral machines. The morality is usually fixed, e.g. by programmed meta-rules and rules. The machine is thus capable of certain actions, not others. However, another approach is the morality menu (MOME for short). With this, the owner or user transfers his or her own morality onto the machine. The machine then behaves, in detail, as he or she would behave. Together with his teams, Prof. Dr. Oliver Bendel developed several artifacts of machine ethics at his university from 2013 to 2018. For one of them, he designed a morality menu that has not yet been implemented. Another concept exists for a virtual assistant that can make reservations and orders for its owner more or less independently. In the article „The Morality Menu“ the author introduces the idea of the morality menu in the context of two concrete machines. He then discusses advantages and disadvantages and presents possibilities for improvement. A morality menu can be a valuable extension for certain moral machines. You can download the article here.
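To make the idea concrete: a morality menu can be thought of as a set of rule toggles that the owner configures and that the machine consults before acting. The following is a minimal sketch of that idea only; the rule names, the context keys, and the decision logic are invented for illustration and are not Bendel's actual design.

```python
# Hypothetical sketch of a morality menu (MOME): the owner toggles
# moral rules on or off, and the machine consults the menu before
# acting. Rule names and contexts are illustrative assumptions.

class MoralityMenu:
    def __init__(self):
        # Each entry: rule name -> enabled by the owner?
        self.rules = {
            "spare_insects": True,
            "admit_being_a_machine": True,
            "allow_white_lies": False,
        }

    def set_rule(self, name, enabled):
        if name not in self.rules:
            raise KeyError(f"unknown rule: {name}")
        self.rules[name] = bool(enabled)

    def permits(self, action, context):
        # The machine asks the menu whether an action is allowed
        # in the given context.
        if context.get("insect_detected") and self.rules["spare_insects"]:
            return action != "proceed"
        return True


menu = MoralityMenu()
menu.set_rule("spare_insects", True)
print(menu.permits("proceed", {"insect_detected": True}))   # False
print(menu.permits("proceed", {"insect_detected": False}))  # True
```

The point of the design is that the moral settings are data chosen by the owner, not rules hard-coded by the manufacturer, so the machine mirrors the owner's morality.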
At the end of 2018, the article entitled „Learning How to Behave: Moral Competence for Social Robots“ by Bertram F. Malle and Matthias Scheutz was published in the „Handbuch Maschinenethik“ („Handbook Machine Ethics“) (ed.: Oliver Bendel). An excerpt from the abstract: „We describe a theoretical framework and recent research on one key aspect of robot ethics: the development and implementation of a robot’s moral competence.“ The authors propose „that moral competence consists of five elements, two constituents (moral norms and moral vocabulary) and three activities (moral judgment, moral action, and moral communication)“. „A robot’s computational representations of social and moral norms is a prerequisite for all three moral activities. However, merely programming in advance the vast network of human norms is impossible, so new computational learning algorithms are needed that allow robots to acquire and update the context-specific and graded norms relevant to their domain of deployment. Moral vocabulary is needed primarily for moral communication, which expresses moral judgments of others’ violations and explains one’s own moral violations – to justify them, apologize, or declare intentions to do better. Current robots have at best rudimentary moral competence, but with improved learning and reasoning they may begin to show the kinds of capacities that humans will expect of future social robots.“ (Abstract) An overview of the contributions that have been published electronically since 2017 can be found on link.springer.com/referencework/10.1007/978-3-658-17484-2.
In 2018, Paladyn Journal of Behavioral Robotics published several articles on robot and machine ethics. In a message to the authors, the editors noted: „Our special attention in recent months has been paid to ethical and moral issues that seem to be of daily debate of researchers from different disciplines.“ The current issue „Roboethics“ includes the articles „Towards animal-friendly machines“ by Oliver Bendel, „Liability for autonomous and artificially intelligent robots“ by Woodrow Barfield, „Corporantia: Is moral consciousness above individual brains/robots?“ by Christopher Charles Santos-Lang, „The soldier’s tolerance for autonomous systems“ by Jai Galliott and „GenEth: a general ethical dilemma analyzer“ by Michael Anderson and Susan Leigh Anderson. The following articles will be published in December 2019: „Autonomy in surgical robots and its meaningful human control“ by Fanny Ficuciello, Guglielmo Tamburrini, Alberto Arezzo, Luigi Villani, and Bruno Siciliano, and „AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing“ by Bettina Berendt. More information via www.degruyter.com/page/1498.
At the Berliner Kolloquium 2019, Michael and Susan Leigh Anderson will present their latest results from the discipline of machine ethics. So-called moral machines are usually given rigid rules, which they follow more or less slavishly. This has several advantages, but also some disadvantages, which can be countered with the help of machine learning. This is exactly the approach the Andersons pursued together with a researcher from the Max Planck Institute for Intelligent Systems, with eldercare as the context: „Contrary to those who regard such a goal as unattainable, Michael and Susan Leigh Anderson … and Vincent Berenz … have succeeded in programming a SoftBank Nao robot with a set of values that determines its behavior while aiding a person in a simulated eldercare environment. This unprecedented accomplishment uses machine learning to enable the robot to discern how each of its actions would satisfy or violate its ethical duties in the current situation and decide the best action to take.“ (Machine Ethics) The results will be published in the Proceedings of the IEEE in 2019. The scientific director of the 23rd Berliner Kolloquium on care robots is Oliver Bendel. Registration is via the website of the Daimler und Benz Stiftung.
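The learning details are in the paper itself, but the underlying idea quoted above, scoring each candidate action by how much it satisfies or violates a set of weighted ethical duties and choosing the best, can be sketched in a few lines. The duty names, weights, and satisfaction scores below are invented for illustration; in the real system these relations are learned rather than hand-coded.

```python
# Sketch of duty-weighted action selection, loosely in the spirit of
# the approach described above. All numbers and names are assumptions.

DUTY_WEIGHTS = {
    "honor_autonomy": 2.0,
    "prevent_harm": 3.0,
    "notify_caregiver": 1.0,
}

def evaluate(action_scores):
    """action_scores: duty -> degree satisfied (-1..1) for one action."""
    return sum(DUTY_WEIGHTS[d] * s for d, s in action_scores.items())

def choose_action(candidates):
    # candidates: action name -> per-duty satisfaction scores;
    # pick the action with the highest weighted duty satisfaction.
    return max(candidates, key=lambda a: evaluate(candidates[a]))

candidates = {
    "remind_to_take_medication": {
        "honor_autonomy": -0.2, "prevent_harm": 0.8, "notify_caregiver": 0.0},
    "do_nothing": {
        "honor_autonomy": 0.5, "prevent_harm": -0.6, "notify_caregiver": 0.0},
}
print(choose_action(candidates))  # remind_to_take_medication
```

Here reminding scores 2.0 (a small autonomy cost outweighed by harm prevention) while doing nothing scores -0.8, so the robot reminds the person.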
Fig.: A Nao robot (photo: SoftBank/Philippe Dureuil, CC-BY-SA-4.0)
In November 2018 the „Proceedings of Robophilosophy 2018“ with the title „Envisioning robots in society – power, politics, and public space“ were published. Editors are Mark Coeckelbergh, Janina Loh, Michael Funk, Johanna Seibt, and Marco Nørskov. In addition to the contributions of the participants, the book also contains abstracts and extended abstracts of the keynotes by Simon Penny, Raja Chatila, Hiroshi Ishiguro, Guy Standing, Catelijne Muller, Juha Heikkilä, Joanna Bryson, and Oliver Bendel (his article about service robots from the perspective of ethics is available here). Robophilosophy is the most important international conference series on the philosophy of robotics and roboethics. In February 2018 it took place at the University of Vienna. Machine ethics was also a topic this time. „Robots are predicted to play a role in many aspects of our lives in the future, affecting work, personal relationships, education, business, law, medicine and the arts. As they become increasingly intelligent, autonomous, and communicative, they will be able to function in ever more complex physical and social surroundings, transforming the practices, organizations, and societies in which they are embedded.“ (IOS Press) More information via www.ios.com.
„Within a few decades, autonomous and semi-autonomous machines will be found throughout Earth’s environments, from homes and gardens to parks and farms and so-called working landscapes – everywhere, really, that humans are found, and perhaps even places we’re not. And while much attention is given to how those machines will interact with people, far less is paid to their impacts on animals.“ (Anthropocene, October 10, 2018) „Machines can disturb, frighten, injure, and kill animals,“ says Oliver Bendel, an information systems professor at the University of Applied Sciences and Arts Northwestern Switzerland, according to the magazine. „Animal-friendly machines are needed.“ (Anthropocene, October 10, 2018) In the article „Will smart machines be kind to animals?“ the magazine Anthropocene deals with animal-friendly machines and introduces the work of the scientist. It is based on his paper „Towards animal-friendly machines“ (Paladyn) and an interview conducted by journalist Brandon Keim with Oliver Bendel. More via www.anthropocenemagazine.org/2018/10/animal-friendly-ai/.
Semi-autonomous machines, autonomous machines and robots inhabit closed, semi-closed and open environments. There they encounter domestic animals, farm animals, working animals and/or wild animals. These animals could be disturbed, displaced, injured or killed. Within the context of machine ethics, the School of Business FHNW developed several design studies and prototypes for animal-friendly machines, which can be understood as moral machines in the spirit of this discipline. They were each linked with an annotated decision tree containing the ethical assumptions or justifications for interactions with animals. Annotated decision trees are seen as an important basis in developing moral machines. They are not without problems and contradictions, but they do guarantee well-founded, secure actions that are repeated at a certain level. The article „Towards animal-friendly machines“ by Oliver Bendel, published in August 2018 in Paladyn, Journal of Behavioral Robotics, documents completed and current projects, compares their relative risks and benefits, and makes proposals for future developments in machine ethics.
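An annotated decision tree of the kind described above pairs each branching point with the ethical assumption that justifies it, so every decision the machine takes can be traced back to an explicit justification. The following is a minimal sketch of that structure for a hypothetical animal-friendly mower; the node questions, actions, and annotations are invented for illustration and are not the actual FHNW design studies.

```python
# Sketch of an annotated decision tree: inner nodes carry an ethical
# annotation justifying the branch; leaves carry the action. All
# questions, actions, and annotations here are illustrative assumptions.

class Node:
    def __init__(self, question=None, annotation="",
                 yes=None, no=None, action=None):
        self.question = question      # key checked in the observation
        self.annotation = annotation  # ethical justification for branching
        self.yes, self.no = yes, no   # subtrees for inner nodes
        self.action = action          # set only on leaf nodes

def decide(node, observation, trace=None):
    """Walk the tree; return the chosen action and the annotations passed."""
    trace = trace if trace is not None else []
    if node.action is not None:
        return node.action, trace
    trace.append(node.annotation)
    branch = node.yes if observation.get(node.question) else node.no
    return decide(branch, observation, trace)

tree = Node(
    question="animal_detected",
    annotation="Animals must not be injured or killed (assumed premise).",
    yes=Node(action="stop_and_wait"),
    no=Node(action="continue_mowing"),
)

action, trace = decide(tree, {"animal_detected": True})
print(action)  # stop_and_wait
```

Because the justification travels with the tree, the machine's behavior is both repeatable and auditable, which is what makes such trees a useful basis for moral machines despite their rigidity.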
The Gatebox was given to some people and institutions in Japan some time ago. At the end of July 2018, the company announced that it is now going into series production. The machine, which resembles a coffee machine, can in fact be ordered on the website. The anime girl Azuma Hikari lives in a glass „coffee pot“. She is a hologram connected to a dialogue system and an AI system. She communicates with her owner even when he is out and about (by sending messages to his smartphone), and she learns. SRF visited a young man who lives with the Gatebox. „I love my wife,“ Kondo Akihiko is quoted as saying. The station writes: „He can’t hug or kiss her. The Japanese man is together with a hologram.“ (SRF) Anyone who thinks that the love of manga and anime girls is a purely Japanese phenomenon is mistaken. In Dortmund’s BorDoll (from „Bordell“ and „Doll“ or „Love Doll“), the corresponding love dolls are in high demand. Here, too, it is young men shy of real women who have developed a desire in the tradition of Pygmalion. Kondo Akihiko dreams that one day he can go out into the world with Azuma Hikari and hold her hand. But there is a long way to go, and the anime girl will need her little prison for a long time to come.
At the end of June 2018, the seventh contribution to the machine ethics handbook was published by Springer. It is by Oliver Bendel, who is also the editor, and is entitled „Pflegeroboter aus Sicht der Maschinenethik“ („Care Robots from the Perspective of Machine Ethics“). Among other things, it builds on results from the book chapter „Surgical, Therapeutic, Nursing and Sex Robots in Machine and Information Ethics“, published in 2015 in the Springer book „Machine Medical Ethics“, extends the questions concerning care robots, and deepens the discussion of them. Since the end of 2016, renowned scientists have been working on their chapters for the „Handbuch Maschinenethik“ („Handbook Machine Ethics“). The printed work is scheduled to appear at the end of 2018. Contributors include Luís Moniz Pereira from Lisbon, one of the best-known machine ethicists in the world, and the robot ethicist Janina Loh, who works with Mark Coeckelbergh in Vienna. A contribution by the Stuttgart philosopher Catrin Misselhorn, who also has an excellent reputation in the discipline, will be published in a few days. An overview of the contributions, which are continuously published electronically, can be found via link.springer.com/referencework/10.1007/978-3-658-17484-2 …
Fig.: Which moral rules should we teach the care robot?
The international workshop „Understanding AI & Us“ will take place in Berlin (Alexander von Humboldt Institute for Internet and Society) on 30 June 2018. It is hosted by Joanna Bryson (MIT), Janina Loh (University of Vienna), Stefan Ullrich (Weizenbaum Institute Berlin) and Christian Djeffal (IoT and Government, Berlin). Birgit Beck, Oliver Bendel and Pak-Hang Wong are invited to the panel on the ethical challenges of artificial intelligence. The aim of the workshop is to bring together experts from the field of research reflecting on AI. The event is funded by the Volkswagen Foundation (VolkswagenStiftung). The project „Understanding AI & Us“ furthers and deepens the understanding of artificial intelligence (AI) in an interdisciplinary way. „This is done in order to improve the ways in which AI-systems are invented, designed, developed, and criticised.“ (Invitation letter) „In order to achieve this, we form a group that merges different abilities, competences and methods. The aim is to provide space for innovative and out-of-the-box-thinking that would be difficult to pursue in ordinary academic discourse in our respective disciplines. We are seeking ways to merge different disciplinary epistemological standpoints in order to increase our understanding of the development of AI and its impact upon society.“ (Invitation letter)