At the end of 2018, the article entitled „Learning How to Behave: Moral Competence for Social Robots“ by Bertram F. Malle and Matthias Scheutz was published in the „Handbuch Maschinenethik“ („Handbook Machine Ethics“) (ed.: Oliver Bendel). An excerpt from the abstract: „We describe a theoretical framework and recent research on one key aspect of robot ethics: the development and implementation of a robot’s moral competence.“ The authors propose „that moral competence consists of five elements, two constituents (moral norms and moral vocabulary) and three activities (moral judgment, moral action, and moral communication)“. „A robot’s computational representations of social and moral norms is a prerequisite for all three moral activities. However, merely programming in advance the vast network of human norms is impossible, so new computational learning algorithms are needed that allow robots to acquire and update the context-specific and graded norms relevant to their domain of deployment. Moral vocabulary is needed primarily for moral communication, which expresses moral judgments of others’ violations and explains one’s own moral violations – to justify them, apologize, or declare intentions to do better. Current robots have at best rudimentary moral competence, but with improved learning and reasoning they may begin to show the kinds of capacities that humans will expect of future social robots.“ (Abstract) An overview of the contributions that have been published electronically since 2017 can be found on link.springer.com/referencework/10.1007/978-3-658-17484-2.
In 2018, Paladyn Journal of Behavioral Robotics published several articles on robot and machine ethics. In a message to the authors, the editors noted: „Our special attention in recent months has been paid to ethical and moral issues that seem to be of daily debate of researchers from different disciplines.“ The current issue „Roboethics“ includes the articles „Towards animal-friendly machines“ by Oliver Bendel, „Liability for autonomous and artificially intelligent robots“ by Woodrow Barfield, „Corporantia: Is moral consciousness above individual brains/robots?“ by Christopher Charles Santos-Lang, „The soldier’s tolerance for autonomous systems“ by Jai Galliott and „GenEth: a general ethical dilemma analyzer“ by Michael Anderson and Susan Leigh Anderson. The following articles will be published in December 2019: „Autonomy in surgical robots and its meaningful human control“ by Fanny Ficuciello, Guglielmo Tamburrini, Alberto Arezzo, Luigi Villani, and Bruno Siciliano, and „AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing“ by Bettina Berendt. More information via www.degruyter.com/page/1498.
Fig.: Machines can be friendly to beetles
Semi-autonomous machines, autonomous machines and robots inhabit closed, semi-closed and open environments. There they encounter domestic animals, farm animals, working animals and/or wild animals. These animals could be disturbed, displaced, injured or killed. Within the context of machine ethics, the School of Business FHNW developed several design studies and prototypes for animal-friendly machines, which can be understood as moral machines in the spirit of this discipline. They were each linked with an annotated decision tree containing the ethical assumptions or justifications for interactions with animals. Annotated decision trees are seen as an important basis in developing moral machines. They are not without problems and contradictions, but they do guarantee well-founded, secure actions that are repeated at a certain level. The article „Towards animal-friendly machines“ by Oliver Bendel, published in August 2018 in Paladyn, Journal of Behavioral Robotics, documents completed and current projects, compares their relative risks and benefits, and makes proposals for future developments in machine ethics.
Fig.: An animal-friendly vehicle?
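The annotated decision trees mentioned above can be illustrated with a minimal sketch. The rules, thresholds, and justifications below are hypothetical examples and are not taken from the FHNW prototypes; the point is only that each branch carries an ethical annotation alongside the action.

```python
# Minimal sketch of an annotated decision tree for an animal-friendly
# vehicle. All conditions, thresholds, and annotations are illustrative
# assumptions, not the actual FHNW design.

def decide(animal_detected, animal_size_cm, traffic_behind):
    """Return an action together with the ethical annotation justifying it."""
    if not animal_detected:
        return "continue", "No animal in the lane; no moral conflict."
    if traffic_behind:
        # Annotation: human safety outweighs the interests of the animal
        # when hard braking would risk a rear-end collision.
        return "continue", "Braking would endanger humans; human safety has priority."
    if animal_size_cm >= 10:
        # Annotation: avoidable harm to larger animals should be prevented
        # when no humans are endangered.
        return "brake", "The animal can be spared without endangering humans."
    return "continue", "Very small animals cannot be reliably protected."

action, reason = decide(True, 25, False)
print(action, "-", reason)
```

The annotation travels with every decision, so the machine's behaviour can later be audited against the ethical assumptions built into the tree.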
PlayGround is a Spanish online magazine, founded in 2008, with a focus on culture, the future and food. Astrid Otal asked the ethicist Oliver Bendel about the conference in London („Love and Sex with Robots“) and about sex robots and love dolls in general. One question was: „In love, a person can suffer. But in this case, can robots make us suffer sentimentally?“ The reply: „Of course, they can make us suffer. By means of their body, body parts and limbs, and by means of their language capabilities. They can hurt us, they can kill us. They can offend us by using certain words and by telling the truth or the untruth. In my contribution for the conference proceedings, I ask this question: Is it possible to be unfaithful to the human love partner with a sex robot, and can a man or a woman be jealous because of the robot’s other love affairs? We can imagine how suffering can emerge in this context … But robots can also make us happy. Some years ago, we developed the GOODBOT, a chatbot which can detect problems of the user and escalate on several levels. On the highest level, it hands over an emergency number. It knows its limits.“ Some statements from the interview have been incorporated into the article „Última parada: después del sexo con autómatas, casarse con un Robot“ (February 11, 2017), which is available via www.playgroundmag.net/futuro/sexo-robots-matrimonio-legal-2050-realdolls_0_1918608121.html.
Fig.: What about the robot’s love affairs?
Springer invites scientists to contribute to the Journal on Vehicle Routing Algorithms. Editors-in-chief are Christian Prins, Troyes University of Technology, France, and Marc Sevaux, University of South-Brittany, France. The publishing house declares that the new journal „is an excellent domain for testing new approaches in modeling, optimization, artificial intelligence, computational intelligence, and simulation“ (Mailing, 2 September 2016). „Articles published in the Journal on Vehicle Routing Algorithms will present solutions, methods, algorithms, case studies, or software, attracting the interest of academic and industrial researchers, practitioners, and policymakers.“ (Mailing, 2 September 2016) According to the website, a vehicle routing problem (VRP) „arises whenever a set of spatially disseminated locations must be visited by mobile objects to perform tasks“ (Website Springer). „The mobile objects may be motorized vehicles, pedestrians, drones, mobile sensors, or manufacturing robots; the space covered may range from silicon chips or PCBs to aircraft wings, warehouses, cities, or countries; and the applications include traditional domains, such as freight and passenger transportation, services, logistics, and manufacturing, and also modern issues such as autonomous cars and the Internet of Things (IoT), and the profound environmental and societal implications of achieving efficiencies in resources, power, labor, and time.“ (Website Springer) The moral decisions of cars, drones and vacuum cleaners can also be investigated. More information via www.springer.com.
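The vehicle routing problem described on the website can be illustrated in its simplest form: a single vehicle must visit a set of locations and return to its depot. The nearest-neighbour heuristic below is a standard textbook approach, not an algorithm from the journal, and the coordinates are made up for illustration.

```python
# Nearest-neighbour heuristic for a single-vehicle routing problem:
# always drive to the closest unvisited location, then return to the depot.
import math

def nearest_neighbour_route(depot, stops):
    """Greedily build a route from the depot through all stops and back."""
    route = [depot]
    remaining = list(stops)
    current = depot
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)
    return route

route = nearest_neighbour_route((0, 0), [(2, 0), (1, 0), (5, 5)])
print(route)
```

Greedy heuristics like this are cheap but can be far from optimal, which is precisely why the journal solicits new modeling and optimization approaches.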
Machine ethics researches the morality of semi-autonomous and autonomous machines. In 2013 and 2014, the School of Business at the University of Applied Sciences and Arts Northwestern Switzerland FHNW implemented a prototype of the GOODBOT, a novel chatbot and a simple moral machine. One of its meta rules was that it should not lie unless not lying would hurt the user. In a follow-up project in 2016, the LIEBOT (aka LÜGENBOT) was developed as an example of a Munchausen machine. The student Kevin Schwegler, supervised by Prof. Dr. Oliver Bendel and Prof. Dr. Bradley Richards, used the Eclipse Scout framework. The whitepaper, published on July 25, 2016 via liebot.org, outlines the background and the development of the LIEBOT. After a short introduction to the history and theory of lying and automatic lying (including the term of Munchausen machines), it describes the principles and pre-defined standards the bad bot is able to consider. It then discusses how Munchausen machines, as immoral machines, can contribute to constructing and optimizing moral machines. All in all, the LIEBOT project is a substantial contribution to machine ethics as well as a critical review of electronic language-based systems and services, in particular of virtual assistants and chatbots.
Fig.: A role model for the LIEBOT
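The GOODBOT meta rule mentioned above – do not lie unless not lying would hurt the user – can be sketched as a simple guard. The function and its predicates are hypothetical placeholders, not the actual GOODBOT implementation.

```python
# Sketch of the meta rule "do not lie unless not lying would hurt the
# user". The harm assessment is assumed to come from elsewhere; here it
# is just a boolean flag.

def respond(true_answer, would_hurt_user):
    """Return the truthful answer unless the truth would harm the user."""
    if would_hurt_user:
        # The meta rule permits an evasive answer instead of a hurtful truth.
        return "I would rather not say."
    return true_answer

print(respond("Your test score was low.", would_hurt_user=False))
```

A Munchausen machine like the LIEBOT deliberately inverts such a guard, which is what makes it useful as a negative case study for building moral machines.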
The second international congress on „Love and Sex with Robots“ will take place in London on 19 and 20 December 2016. Topics include robot emotions, humanoid robots, clone robots, entertainment robots, teledildonics, intelligent electronic sex hardware and roboethics. The introduction states: „Within the fields of Human-Computer Interaction and Human-Robot Interaction, the past few years have witnessed a strong upsurge of interest in the more personal aspects of human relationships with these artificial partners. This upsurge has not only been apparent amongst the general public, as evidenced by an increase in coverage in the print media, TV documentaries and feature films, but also within the academic community.“ (Website LSR 2016) The congress „provides an excellent opportunity for academics and industry professionals to present and discuss their innovative work and ideas in an academic symposium“ (Website LSR 2016). According to the CfP, full papers should „be no more than 10 pages (excluding references) and extended abstracts should be no more than 3 pages (excluding references)“ (Website LSR 2016). More information via loveandsexwithrobots.org.
Fig.: Logo and mascot of the congress
„The GOODBOT project was realized in 2013/14 in the context of machine ethics. First the tutoring person (the author of this contribution) laid out some general considerations. Then a student practice project was tendered within the school. Three future business informatics scientists applied for the practice-related work, developed the prototype over several months in cooperation with the professor, and presented it early in 2014. The successor project LIEBOT started in 2016.“ These are the initial words of a new contribution in Germany’s oldest online magazine, Telepolis. The author, Oliver Bendel, presents the GOODBOT project which is a part of his research on machine ethics. „The GOODBOT responds more or less appropriately to morally charged statements, thereby it differs from the majority of chatbots. It recognizes problems as the designers anticipated certain emotive words users might enter. It rates precarious statements or questions and escalates on multiple levels. Provided the chat runs according to standard, it is just a standard chatbot, but under extreme conditions it turns into a simple moral machine.“ The article „The GOODBOT Project: A Chatbot as a Moral Machine“ was published on May 17, 2016 and can be opened via http://www.heise.de/tp/artikel/48/48260/1.html.
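The escalation idea described in the article can be sketched in a few lines: the bot rates precarious user statements and escalates over several levels, handing over an emergency number at the highest level. The keyword list, the levels, and the number below are illustrative assumptions, not the original GOODBOT design.

```python
# Sketch of multi-level escalation in a GOODBOT-style chatbot. The
# alarm words, levels, and emergency number are hypothetical examples.

ALARM_WORDS = {"sad": 1, "hopeless": 2, "suicide": 3}

def escalation_level(message):
    """Return the highest alarm level triggered by the message (0 = none)."""
    return max((level for word, level in ALARM_WORDS.items()
                if word in message.lower()), default=0)

def goodbot_reply(message):
    level = escalation_level(message)
    if level == 0:
        return "Standard small talk."
    if level == 1:
        return "I am sorry to hear that. Do you want to talk about it?"
    if level == 2:
        return "That sounds serious. Please consider talking to someone you trust."
    # Highest level: hand over an emergency number (number is illustrative).
    return "Please call the emergency number 143 right now."

print(goodbot_reply("I feel hopeless"))
```

Under normal input the bot behaves like any standard chatbot; only under extreme conditions does the escalation logic take over, which matches the description quoted above.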
The Department of Aerospace Engineering at Pennsylvania State University (www.psu.edu) is advertising a „Faculty position in Engineering and Ethics of Unmanned Aircraft Systems“. Unmanned Aircraft Systems include Unmanned Aerial Vehicles, i.e. remote-controlled or (semi-)autonomous drones. The announcement, which was made available to maschinenethik.net, states: „The research area represented by this search could be viewed as a special aspect of a broader one at the intersection of robotics, autonomy, and ethics. Applicants must have an earned doctorate in aerospace engineering or a related field; at least one degree in aerospace engineering or related experience is preferred.“ When asked, the head of the department, Professor George A. Lesieutre, names possible research questions: „For what purposes should we deploy such vehicles, (or not) and what decisions should we permit them to make on our behalf?“ More information via www.aero.psu.edu.
As a follow-up to the conference „Mensch-Roboter-Interaktionen aus interkultureller Perspektive: Japan und Deutschland im Vergleich“ („Human-Robot Interactions from an Intercultural Perspective: Japan and Germany Compared“), the Japanese-German Center Berlin (Japanisch-Deutsches Zentrum Berlin) will hold a symposium on „Roboethik: Technikfolgenabschätzung und verantwortungsbewusste Innovation in Japan und Deutschland“ („Roboethics: Technology Assessment and Responsible Innovation in Japan and Germany“) on December 4, 2014. According to the organizers, experts from science and industry, including manufacturers of service robots, will discuss how „ethical questions, questions of quality of life, risk assessment and user interests can be integrated into the development of robot technology at an early stage, so that a sustainable dialogue between all parties involved emerges“ (information by e-mail). „In this way, the conference is also intended to provide a platform for interdisciplinary, intercultural exchange on the question of how we want to live in the future and what power to act and to shape developments the individual has in this regard.“ (ibid.) Further details can be found in the draft program at http://www.jdzb.de/fileadmin/Redaktion/PDF/veranstaltungen/tagungen-in-d/P1597-Programm.pdf. The conference languages are German and Japanese, with simultaneous interpretation.