Workshop in Manchester on Formal Ethical Agents and Robots

A workshop in the field of machine ethics will be held at the University of Manchester on 11 November 2024. The following information can be found on the website: „Recent advances in artificial intelligence have led to a range of concerns about the ethical impact of the technology. This includes concerns about the day-to-day behaviour of robotic systems that will interact with humans in workplaces, homes and hospitals. One of the themes of these concerns is the need for such systems to take ethics into account when reasoning. This has generated new interest in how we can specify, implement and validate ethical reasoning.“ (Website iFM 2024) The aim of this workshop, held in conjunction with iFM 2024, is to explore formal approaches to these issues. The submission deadline is 8 August; notification is 12 September. More information at ifm2024.cs.manchester.ac.uk/fear.html.

Fig.: The workshop will take place in Manchester

Robophilosophy 2024 in Aarhus

The upcoming international Robophilosophy Conference 2024 in Aarhus is set to tackle the socio-cultural and ethical questions arising from the use of generative multimodal AIs in social robotics. The event will bring together global scholars from the humanities, social sciences, social robotics, and computer science, aiming to produce actionable insights and responsibly address the socio-cultural transformations brought about by social robotics. It is part of the Robophilosophy Conference Series, known for its large-scale events for humanities research in social robotics. RP2024 highlights the urgency of closer collaboration between technological and societal experts to establish research-based regulations. The conference will welcome 80–100 talks in plenaries, special workshops, and parallel sessions of reviewed research papers. Virtual attendance will be possible for those unable to attend in person. Interested parties are invited to submit papers on the conference topics. Key dates to note: the deadline for workshop/panel proposal submissions is January 31, 2024; the deadline for short papers and posters is February 15, 2024. More information at cas.au.dk/en/robophilosophy/conferences/rpc2024.

Fig.: Generative AI will be a topic (Picture: Ideogram)

Proceedings of the 14th International Conference on Social Robotics

The proceedings of ICSR 2022 were published in early 2023. Included is the paper „The CARE-MOMO Project“ by Oliver Bendel and Marc Heimann. From the abstract: „In the CARE-MOMO project, a morality module (MOMO) with a morality menu (MOME) was developed at the School of Business FHNW in the context of machine ethics. This makes it possible to transfer one’s own moral and social convictions to a machine, in this case the care robot with the name Lio. The current model has extensive capabilities, including motor, sensory, and linguistic. However, it cannot yet be personalized in the moral and social sense. The CARE-MOMO aims to eliminate this state of affairs and to give care recipients the possibility to adapt the robot’s ‚behaviour‘ to their ideas and requirements. This is done in a very simple way, using sliders to activate and deactivate functions. There are three different categories that appear with the sliders. The CARE-MOMO was realized as a prototype, which demonstrates the functionality and aids the company in making concrete decisions for the product. In other words, it can adopt the morality module in whole or in part and further improve it after testing it in facilities.“ The book (part II of the proceedings) can be downloaded or ordered via link.springer.com/book/10.1007/978-3-031-24670-8.
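The slider mechanism described in the abstract can be pictured as a simple mapping from labeled options to on/off states, grouped into categories. The following Python sketch is purely illustrative: the category and function names are invented for the example and are not taken from the actual CARE-MOMO software.

```python
# Illustrative sketch of a morality menu (MOME): each slider switches a
# clearly delimited function of the robot on or off. All category and
# option names below are hypothetical, not the real CARE-MOMO entries.

class MoralityModule:
    def __init__(self):
        # Sliders grouped into categories, as in the CARE-MOMO prototype;
        # the concrete entries here are invented for illustration.
        self.settings = {
            "social": {"use_informal_address": False},
            "care": {"remind_about_medication": True},
            "privacy": {"report_observations_to_staff": False},
        }

    def set_slider(self, category: str, option: str, value: bool) -> None:
        """Activate or deactivate a function via its slider."""
        self.settings[category][option] = value

    def is_active(self, category: str, option: str) -> bool:
        return self.settings[category][option]

momo = MoralityModule()
momo.set_slider("privacy", "report_observations_to_staff", True)
print(momo.is_active("privacy", "report_observations_to_staff"))  # True
```

The point of the design is that care recipients change only these clearly delimited switches, while the robot's other capabilities remain untouched.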

Fig.: A CARE-MOMO for Lio

The Morality Module at the ICSR 2022

Two of the most important conferences for social robotics are Robophilosophy and ICSR. After Robophilosophy, a biennial conference, was last held in Helsinki in August 2022, ICSR is now coming up in Florence (13–16 December 2022). „The 14th International Conference on Social Robotics (ICSR 2022) brings together researchers and practitioners working on the interaction between humans and intelligent robots and on the integration of social robots into our society. … The theme of this year’s conference is Social Robots for Assisted Living and Healthcare, emphasising on the increasing importance of social robotics in human daily living and society.“ (Website ICSR) The committee sent out notifications by October 15, 2022. The paper „The CARE-MOMO Project“ by Oliver Bendel and Marc Heimann was accepted. This is a project that combines machine ethics and social robotics. The invention of the morality menu was transferred to a care robot for the first time. Care recipients can use sliders on the display to determine how they want to be treated. This allows them to transfer their moral and social beliefs and ideas to the machine. The morality module (MOMO) is intended for the Lio assistance robot from F&P Robotics. The result will be presented at the end of October 2022 at the company headquarters in Glattbrugg near Zurich. More information on the conference via www.icsr2022.it.

Fig.: A cathedral in Florence

Programming Machine Ethics

The book „Programming Machine Ethics“ (2016) by Luís Moniz Pereira and Ari Saptawijaya is available for free download from Z-Library. Luís Moniz Pereira is among the best-known machine ethicists. „This book addresses the fundamentals of machine ethics. It discusses abilities required for ethical machine reasoning and the programming features that enable them. It connects ethics, psychological ethical processes, and machine implemented procedures. From a technical point of view, the book uses logic programming and evolutionary game theory to model and link the individual and collective moral realms. It also reports on the results of experiments performed using several model implementations. Opening specific and promising inroads into the terra incognita of machine ethics, the authors define here new tools and describe a variety of program-tested moral applications and implemented systems. In addition, they provide alternative readings paths, allowing readers to best focus on their specific interests and to explore the concepts at different levels of detail.“ (Information by Springer) The download link is eu1lib.vip/book/2677910/9fd009.

Fig.: Programming machine ethics

Paper about Social Robots in Retail

On June 30, 2022, the paper „Should Social Robots in Retail Manipulate Customers?“ by Oliver Bendel and Liliana Margarida Dos Santos Alves was published on arxiv.org. It was presented at the AAAI 2022 Spring Symposium „How Fair is Fair? Achieving Wellbeing AI“ at Stanford University and came in third place in the Best Presentation Awards. From the abstract: „Against the backdrop of structural changes in the retail trade, social robots have found their way into retail stores and shopping malls in order to attract, welcome, and greet customers; to inform them, advise them, and persuade them to make a purchase. Salespeople often have a broad knowledge of their product and rely on offering competent and honest advice, whether it be on shoes, clothing, or kitchen appliances. However, some frequently use sales tricks to secure purchases. The question arises of how consulting and sales robots should “behave”. Should they behave like human advisors and salespeople, i.e., occasionally manipulate customers? Or should they be more honest and reliable than us? This article tries to answer these questions. After explaining the basics, it evaluates a study in this context and gives recommendations for companies that want to use consulting and sales robots. Ultimately, fair, honest, and trustworthy robots in retail are a win-win situation for all concerned.“ The paper will additionally be published in the proceedings volume of the symposium by the end of summer. It can be downloaded via arxiv.org/abs/2206.14571.

Fig.: Does this bag fit me?

The Care Robot Becomes Moral

There are more and more robots being used in health care. Most of them are prototypes; some – like Lio and P-CARE from F&P Robotics – are products that are manufactured in small series. Machine ethics researches and creates moral machines. These are often guided by certain values or meta-rules, follow predetermined rules, or learn from situations and adapt their behavior. Michael Anderson and Susan L. Anderson presented their value-driven eldercare robot at the 2019 Berlin Colloquium at the invitation of Oliver Bendel. The CARE-MOMO („MOMO“ stands for „morality module“) is a morality module for a robot such as Lio. The idea is that the robot acquires clearly delimited moral abilities in addition to its usual abilities. The focus is on having it perform an act or speech act with high reliability based on a moral assumption or reasoning, with a clearly identifiable benefit to the caregiver or the care recipient. The initiators want to address a common problem in the nursing and care field. Marc Heimann was recruited for the project at the School of Business FHNW. The supervisor is Oliver Bendel, who has been working with robots in the healthcare sector for ten years and has built numerous moral machines together with his teams.

Fig.: Lio in action

Conversational Agents from the Perspective of Machine Ethics

A group of about 50 scientists from all over the world worked for one week (September 19 – 24, 2021) at Schloss Dagstuhl – Leibniz-Zentrum für Informatik on the topic „Conversational Agent as Trustworthy Autonomous System (Trust-CA)“. Half were on site, the other half were connected via Zoom. Organizers of this event were Asbjørn Følstad (SINTEF – Oslo), Jonathan Grudin (Microsoft – Redmond), Effie Lai-Chong Law (University of Leicester), and Björn Schuller (University of Augsburg). On-site participants from Germany and Switzerland included Elisabeth André (University of Augsburg), Stefan Schaffer (DFKI), Sebastian Hobert (University of Göttingen), Matthias Kraus (University of Ulm), and Oliver Bendel (School of Business FHNW). The complete list of participants can be found on the Schloss Dagstuhl website, as well as some pictures. Oliver Bendel presented projects from ten years of research in machine ethics, namely GOODBOT, LIEBOT, BESTBOT, MOME, and SPACE-THEA. Further information is available here.

Fig.: A discussion between humans

SPACE THEA’s Desire for Mars

SPACE THEA was developed by Martin Spathelf at the School of Business FHNW from April to August 2021. The client and supervisor was Prof. Dr. Oliver Bendel. The voice assistant is supposed to show empathy and emotions towards astronauts on a Mars flight. Technically, it is based on Google Assistant and Dialogflow. The programmer chose a female voice with Canadian English. SPACE THEA’s personality includes functional and emotional intelligence, honesty, and creativity. She follows a moral principle: to maximize the benefit of the passengers of the spacecraft. The prototype was implemented for the following scenarios: conduct general conversations; help the user find a light switch; assist the astronaut when a thruster fails; greet and cheer up in the morning; fend off an insult for no reason; stand by a lonely astronaut; learn about the voice assistant. A video on the latter scenario is available here. Oliver Bendel has been researching conversational agents for 20 years. With his teams, he has developed 20 concepts and artifacts of machine ethics and social robotics since 2012.
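The scenario list above amounts to a set of intents, each with its own handler; in Dialogflow, such scenarios would be configured as intents with fulfillment logic. The Python sketch below only illustrates the dispatch idea; the intent names and replies are invented for this example and are not taken from SPACE THEA.

```python
# Illustrative intent dispatch in the spirit of a scenario-based voice
# assistant. Intent names and replies are hypothetical examples.

def handle_thruster_failure() -> str:
    return "Stay calm. Let us go through the backup thruster checklist."

def handle_loneliness() -> str:
    return "I am here with you. Tell me what is on your mind."

HANDLERS = {
    "thruster_failure": handle_thruster_failure,
    "lonely_astronaut": handle_loneliness,
}

def respond(intent: str) -> str:
    """Route a recognized intent to its handler, with a fallback reply."""
    handler = HANDLERS.get(intent)
    return handler() if handler else "I did not understand that."

print(respond("lonely_astronaut"))
```

In a real Dialogflow agent, the intent recognition itself is done by the platform from the spoken utterance; only the fulfillment logic resembles the dispatch shown here.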

Fig.: The red planet

AI and Society

The AAAI Spring Symposia at Stanford University are among the AI community’s most important get-togethers, especially for its experimental division. The years 2016, 2017, and 2018 were memorable highlights for machine ethics, robot ethics, ethics by design, and AI ethics, with the symposia „Ethical and Moral Considerations in Non-Human Agents“ (2016), „Artificial Intelligence for the Social Good“ (2017), and „AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents“ (2018) … As of 2019, the proceedings are no longer provided directly by the Association for the Advancement of Artificial Intelligence, but by the organizers of each symposium. Since summer 2021, the entire conference volume of 2018 is available free of charge. It includes contributions by Philip C. Jackson, Mark R. Waser, Barry M. Horowitz, John Licato, Stefania Costantini, Biplav Srivastava, and Oliver Bendel, among many others. It can be found via aaai.org/proceeding/01-spring-2018/.

Fig.: Animal-friendly robot cars were a topic in 2016

CARE-MOMO

On July 22, 2021, Prof. Dr. Oliver Bendel, as client, submitted a project to the School of Business FHNW, entitled „CARE-MOMO: A Morality Module for a Care Robot“. There are more and more robots being used in nursing and care. Most of them are prototypes; some – like Lio and P-Care from F&P Robotics – are products that are manufactured in small series. Machine ethics researches and creates moral machines. These are often guided by certain values or meta-rules, follow predetermined rules, or learn from situations and adapt their behavior. Michael Anderson and Susan L. Anderson presented their value-driven eldercare robot at the 2019 Berlin Colloquium at the invitation of Oliver Bendel. He and his teams have created 16 concepts and implementations of moral machines and social robots over the past decade. He has been researching systems and robots in the healthcare sector for just as long. The CARE-MOMO is a morality module for a robot such as Lio. The idea is that the robot acquires clearly delimited moral abilities in addition to its usual abilities. The focus is on having it perform an act or speech act with high reliability based on a moral assumption or reasoning, with a clearly identifiable benefit to the caregiver or the care recipient. The result is a morality module that can in principle be used by a robot like Lio. In August 2021, it will be decided whether the project can be implemented in this form.

Fig.: LIO with magnetic eyes

Hello Deer, Go Back to the Forest!

We use natural language, facial expressions and gestures when communicating with our fellow humans. Some of our social robots also have these abilities, and so we can converse with them in the usual way. Many highly evolved animals have a language in which there are sounds and signals with specific meanings. Some of them – like chimpanzees or gorillas – have facial and gestural abilities comparable to ours. Britt Selvitelle and Aza Raskin, founders of the Earth Species Project, want to use machine learning to enable communication between humans and animals. Languages, they believe, can not only be represented as geometric structures, but also be translated by matching these structures to each other. They say they have started working on whale and dolphin communication. Over time, the focus will broaden to include primates, corvids, and others. The two scientists would also need to study not only vocalizations but also facial expressions, gestures and other movements associated with meaning – a challenge they are well aware of. In addition, there are aspects of animal communication that are inaudible or invisible to humans and would need to be considered. Britt Selvitelle and Aza Raskin believe that translation would open up the world of animals to us – though it could be the other way around: they may first have to open up the world of animals in order to decode their languages. Should there be breakthroughs in this area, however, it would be an opportunity for animal welfare. For example, social robots, autonomous cars, wind turbines, and other machines could use animal languages alongside mechanical signals and human commands to instruct, warn and scare away dogs, elks, pigs, and birds. Machine ethics has been developing animal-friendly machines for years. Among other things, the scientists use sensors together with decision trees. Depending on the situation, braking and evasive maneuvers are initiated.
Maybe one day the autonomous car will be able to avoid an accident by calling out in deer dialect: Hello deer, go back to the forest!
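The combination of sensors and decision trees mentioned above can be sketched as a few nested rules: classify what the sensor reports, then choose braking, evasion, or a warning signal. The Python fragment below is a minimal illustration, not code from any of the actual projects; the animal classes, thresholds, and action names are assumed for the example.

```python
# Minimal decision-tree sketch for an animal-friendly machine. The
# classification labels, distance thresholds, and actions are all
# hypothetical and serve only to illustrate the rule-based approach.

def react_to_animal(animal: str, distance_m: float) -> str:
    """Return an action for a detected animal (invented rules)."""
    if distance_m < 5.0:
        return "emergency_brake"           # too close: always brake hard
    if animal in ("deer", "elk"):
        return "brake_and_evade"           # large animals: evade early
    if animal in ("dog", "pig"):
        return "emit_warning_signal"       # warn with a signal first
    if animal == "bird":
        return "continue_slowly"           # birds usually fly off
    return "brake"                         # unknown animal: be cautious

print(react_to_animal("deer", 30.0))  # brake_and_evade
print(react_to_animal("bird", 20.0))  # continue_slowly
```

In practice, such rules would be derived from sensor data and a trained decision tree rather than written by hand, but the branching logic at decision time looks much the same.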

Fig.: Three fawns

Animals and Machines

Semi-autonomous machines, autonomous machines and robots inhabit closed, semi-closed and open environments – more structured environments like the household and more unstructured environments like cultural landscapes or the wilderness. There they encounter domestic animals, farm animals, working animals, and wild animals. These creatures could be disturbed, displaced, injured, or killed by the machines. Within the context of machine ethics and social robotics, the School of Business FHNW developed several design studies and prototypes for animal-friendly machines, which can be understood as moral and social machines in the spirit of these disciplines. In 2019/20, a team led by Prof. Dr. Oliver Bendel developed a prototype robot lawnmower that can recognize hedgehogs, interrupt its work for them and thus protect them. Every year many of these animals die worldwide because of conventional service robots. HAPPY HEDGEHOG (HHH), as the invention is called, could be a solution to this problem. The paper begins by providing an introduction to the background. It then focuses on the machine’s navigation (where it comes across certain objects that need to be recognized) and its thermal and image recognition (with the help of machine learning). It also presents obvious weaknesses and possible improvements. The results could be relevant for an industry that wants to market its products as animal-friendly machines. The paper „The HAPPY HEDGEHOG Project“ is available here.

Fig.: Wild animals also collide with machines

Helping Animals

The paper „The HAPPY HEDGEHOG Project“ by Prof. Dr. Oliver Bendel, Emanuel Graf and Kevin Bollier was accepted at the AAAI Spring Symposia 2021. The researchers will present it at the sub-conference „Machine Learning for Mobile Robot Navigation in the Wild“ at the end of March. The project was conducted at the School of Business FHNW between June 2019 and January 2020. Emanuel Graf, Kevin Bollier, Michel Beugger and Vay Lien Chang developed a prototype of a mowing robot in the context of machine ethics and social robotics, which stops its work as soon as it detects a hedgehog. HHH has a thermal imaging camera. When it encounters a warm object, it uses image recognition to investigate it further. At night, a lamp mounted on top helps. After training with hundreds of photos, HHH can identify a hedgehog quite accurately. With this artifact, the team provides a solution to a problem that frequently occurs in practice: commercial robotic mowers repeatedly kill young hedgehogs in the dark. HAPPY HEDGEHOG could help to save them. The video in the corresponding section of this website shows the robot without its casing. The robot is in the tradition of LADYBIRD.
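The two-stage detection described above – a thermal camera flags warm objects, then image recognition checks whether the object is a hedgehog – can be sketched as follows. This is an illustration of the principle only; the function names, temperature threshold, and stand-in classifier are placeholders, not the actual HAPPY HEDGEHOG implementation.

```python
# Two-stage detection sketch in the spirit of HAPPY HEDGEHOG: thermal
# trigger first, image classification second. All components here are
# hypothetical placeholders for illustration.

WARM_THRESHOLD_C = 30.0  # assumed temperature threshold

def should_stop_mowing(object_temp_c: float, classify_image) -> bool:
    """Stop if a warm object is present and the classifier says 'hedgehog'."""
    if object_temp_c < WARM_THRESHOLD_C:
        return False                  # nothing warm nearby: keep mowing
    label = classify_image()          # in the real robot, a trained model
    return label == "hedgehog"

# Usage with a stand-in classifier:
print(should_stop_mowing(34.0, lambda: "hedgehog"))  # True
print(should_stop_mowing(22.0, lambda: "hedgehog"))  # False
```

The thermal stage acts as a cheap filter so the more expensive image classification only runs when a warm object is actually present – which also explains the mounted lamp for night-time image recognition.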

Fig.: Helping hedgehogs and other animals

Trustworthy Conversational Agents

In the fall of 2021, a five-day workshop on trustworthy conversational agents will be held at Schloss Dagstuhl. Prof. Dr. Oliver Bendel is among the invited participants. According to the website, Schloss Dagstuhl – Leibniz Center for Informatics pursues its mission of furthering world class research in computer science by facilitating communication and interaction between researchers. Oliver Bendel and his teams have developed several chatbots in the context of machine ethics since 2013, which were presented at conferences at Stanford University and Jagiellonian University and received international attention. Since the beginning of 2020, he has been preparing to develop several voice assistants that can show empathy and emotion. „Schloss Dagstuhl was founded in 1990 and quickly became established as one of the world’s premier meeting centers for informatics research. Since the very first days of Schloss Dagstuhl, the seminar and workshop meeting program has always been the focus of its programmatic work. In recent years, Schloss Dagstuhl has expanded its operation and also has significant efforts underway in bibliographic services … and in open access publishing.“ (Website Schloss Dagstuhl)

Fig.: Is this voicebot trustworthy?

Morality Transfer with the Help of Sliders

From 18 to 21 August 2020, the Robophilosophy conference took place. Due to the pandemic, participants could not meet in Aarhus as originally planned, but only in virtual space. Nevertheless, the conference was a complete success. At the end of the year, the conference proceedings were published by IOS Press, including the paper „The Morality Menu Project“ by Oliver Bendel. From the abstract: „The discipline of machine ethics examines, designs and produces moral machines. The artificial morality is usually pre-programmed by a manufacturer or developer. However, another approach is the more flexible morality menu (MOME). With this, owners or users replicate their own moral preferences onto a machine. A team at the FHNW implemented a MOME for MOBO (a chatbot) in 2019/2020. In this article, the author introduces the idea of the MOME, presents the MOBO-MOME project and discusses advantages and disadvantages of such an approach. It turns out that a morality menu could be a valuable extension for certain moral machines.“ The book can be ordered on the publisher’s website. An author’s copy is available here.

Fig.: Morality transfer with the help of sliders