The Morality Menu Project

"Once we place so-called 'social robots' into the social practices of our everyday lives and lifeworlds, we create complex, and possibly irreversible, interventions in the physical and semantic spaces of human culture and sociality. The long-term socio-cultural consequences of these interventions are currently impossible to gauge." (Website Robophilosophy Conference) With these words the next Robophilosophy conference was announced. It would have taken place in Aarhus, Denmark, from 18 to 21 August 2020, but due to the COVID-19 pandemic it is being conducted online. One lecture will be given by Oliver Bendel. The abstract of the paper "The Morality Menu Project" states: "Machine ethics produces moral machines. The machine morality is usually fixed. Another approach is the morality menu (MOME). With this, owners or users transfer their own morality onto the machine, for example a social robot. The machine acts in the same way as they would act, in detail. A team at the School of Business FHNW implemented a MOME for the MOBO chatbot. In this article, the author introduces the idea of the MOME, presents the MOBO-MOME project and discusses advantages and disadvantages of such an approach. It turns out that a morality menu can be a valuable extension for certain moral machines." In 2018, Hiroshi Ishiguro, Guy Standing, Catelijne Muller, Joanna Bryson, and Oliver Bendel were keynote speakers. In 2020, Catrin Misselhorn, Selma Sabanovic, and Shannon Vallor will be presenting. More information via conferences.au.dk/robo-philosophy/.

Fig.: The morality menu project

About MOBO-MOME

From June 2019 to January 2020 the Morality Menu (MOME) was developed under the supervision of Prof. Dr. Oliver Bendel. With it you can transfer your own morality to the chatbot called MOBO. First of all, the user must provide various personal details. He or she opens the "User Personality" panel in the "Menu" and can then enter his or her name, age, nationality, gender, sexual orientation, and hair color. These details are important for communication and interaction with the chatbot. In a further step, the user can call up the actual morality menu ("Rules of conduct") via "Menu". It consists of 9 rules, which a user (or an operator) can activate (1) or deactivate (0). Rules 1 – 8, depending on how they are activated, result in the proxy morality of the machine (the proxy machine). It usually represents the morality of the user (or the operator). Rule 9, by contrast, gives the system the freedom to generate its morality randomly. After the morality menu has been completely set, the dialogue can begin. To do this, the user calls up "Chatbot" in the "Menu". The chatbot MOBO is started. The adventure can begin! A video of the MOBO-MOME is available here.
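To make the structure concrete: the menu amounts to eight binary settings plus one meta-option. The following Python sketch is a hypothetical illustration of that idea, not the actual MOBO code; the rule labels, the MoralityMenu class, and its finalize method are invented for the example.

```python
import random
from dataclasses import dataclass, field

# Hypothetical rule labels; the real MOBO rules are not reproduced here.
RULES = [f"rule_{i}" for i in range(1, 9)]

@dataclass
class MoralityMenu:
    # Rules 1-8: 1 = activated, 0 = deactivated, as in the MOBO menu.
    settings: dict = field(default_factory=lambda: {r: 0 for r in RULES})
    # Rule 9: let the system generate its proxy morality randomly.
    randomize: bool = False

    def finalize(self) -> dict:
        """Return the proxy morality the chatbot acts on."""
        if self.randomize:
            return {r: random.randint(0, 1) for r in RULES}
        return dict(self.settings)

menu = MoralityMenu()
menu.settings["rule_3"] = 1  # the user activates one behavior
print(menu.finalize())
```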

Fig.: How the MOBO-MOME works

The Seventh Artifact of Machine Ethics

At the School of Business FHNW, the seventh artifact of machine ethics is being created between June 2019 and January 2020. The idea and the commission come from Prof. Dr. Oliver Bendel. Machine ethics produces moral and immoral machines, currently as concepts, simulations, or prototypes. The machine morality is mostly fixed, via principles or meta-rules as well as rules. The machines are thus capable of certain actions and not of others. One approach that promises a certain flexibility is the morality menu (MOME for short). Through it, the owner or user transfers his or her own morality onto the machine: his or her notions and convictions of good and evil, his or her values, and his or her rules of conduct. The machine acts and reacts as he or she would, in detail. The user may encounter certain presets, but has some freedom to change them or to define new default settings. In the project, a MOME will be implemented as a prototype that accesses either an already existing system or a system developed within the project.
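The preset-and-override mechanism described above can be pictured as a two-layer lookup: the operator supplies defaults, and the user's changes take precedence. Again a hypothetical Python sketch, not project code; OPERATOR_PRESETS and effective_morality are invented names.

```python
# Hypothetical two-layer settings: operator presets plus user overrides.
OPERATOR_PRESETS = {"rule_1": 1, "rule_2": 0, "rule_3": 1}

def effective_morality(presets: dict, overrides: dict) -> dict:
    """User choices win; rules left untouched keep their preset value."""
    merged = dict(presets)    # start from the operator's defaults
    merged.update(overrides)  # apply the user's changes on top
    return merged

# The user flips one preset and keeps the remaining defaults.
print(effective_morality(OPERATOR_PRESETS, {"rule_2": 1}))
```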

Fig.: A morality menu for a voice assistant

Towards a Proxy Morality

Machine ethics produces moral and immoral machines. The morality is usually fixed, e.g. by programmed meta-rules and rules. The machine is thus capable of certain actions, not others. However, another approach is the morality menu (MOME for short). With this, the owner or user transfers his or her own morality onto the machine. The machine behaves in the same way as he or she would behave, in detail. Together with his teams, Prof. Dr. Oliver Bendel developed several artifacts of machine ethics at his university from 2013 to 2018. For one of them, he designed a morality menu that has not yet been implemented. Another concept exists for a virtual assistant that can make reservations and orders for its owner more or less independently. In the article "The Morality Menu" the author introduces the idea of the morality menu in the context of two concrete machines. Then he discusses advantages and disadvantages and presents possibilities for improvement. A morality menu can be a valuable extension for certain moral machines. You can download the article here.

Fig.: A proxy machine