From March 27 to 29, 2023, the AAAI 2023 Spring Symposia featured the symposium „Socially Responsible AI for Well-being“, organized by Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan). As an exception, the venue this time was not Stanford University but the Hyatt Regency SFO Airport. On March 28, Prof. Dr. Oliver Bendel and Lea Peier presented their paper „How Can Bar Robots Enhance the Well-being of Guests?“. It has now been published and can be downloaded via ceur-ws.org/Vol-3527/. From the abstract: „This paper addresses the question of how bar robots can contribute to the well-being of guests. It first develops the basics of service robots and social robots. It gives a brief overview of which gastronomy robots are on the market. It then presents examples of bar robots and describes two models used in Switzerland. A research project at the School of Business FHNW collected empirical data on them, which is used for this article. The authors then discuss how the robots could be improved to increase the well-being of customers and guests and better address their individual wishes and requirements. Artificial intelligence can play an important role in this. Finally, ethical and social problems in the use of bar robots are discussed and possible solutions are suggested to counter these.“ More information on the conference via aaai.org/conference/spring-symposia/sss23/.
From March 27 to 29, 2023, the AAAI 2023 Spring Symposia featured the symposium „Socially Responsible AI for Well-being“, organized by Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan). As an exception, the venue this time was not Stanford University but the Hyatt Regency SFO Airport. On March 28, Prof. Dr. Oliver Bendel presented the paper „Increasing Well-being through Robotic Hugs“, written by himself, Andrea Puljic, Robin Heiz, Furkan Tömen, and Ivan De Paola. It has now been published and can be downloaded via ceur-ws.org/Vol-3527/. From the abstract: „This paper addresses the question of how to increase the acceptability of a robot hug and whether such a hug contributes to well-being. It combines the lead author’s own research with pioneering research by Alexis E. Block and Katherine J. Kuchenbecker. First, the basics of this area are laid out with particular attention to the work of the two scientists. The authors then present HUGGIE Project I, which largely consisted of an online survey with nearly 300 participants, followed by HUGGIE Project II, which involved building a hugging robot and testing it on 136 people. At the end, the results are linked to current research by Block and Kuchenbecker, who have equipped their hugging robot with artificial intelligence to better respond to the needs of subjects.“ More information on the conference via aaai.org/conference/spring-symposia/sss23/.
In late August 2023, AAAI announced the continuation of the AAAI Spring Symposium Series, to be held at Stanford University from 25 to 27 March 2024. Due to staff shortages, the prestigious conference had to be held at the Hyatt Regency SFO Airport in San Francisco in March 2023; it will now return to its traditional venue. The call for proposals is available on the AAAI Spring Symposium Series page. Proposals are due by 6 October 2023. They should be submitted to the symposium co-chairs, Christopher Geib (SIFT, USA) and Ron Petrick (Heriot-Watt University, UK), via the online submission page. Over the past ten years, the AAAI Spring Symposia have been relevant not only to classical AI, but also to roboethics and machine ethics. Groundbreaking symposia included „Ethical and Moral Considerations in Non-Human Agents“ in 2016, „AI for Social Good“ in 2017, and „AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents“ in 2018. More information is available at aaai.org/conference/spring-symposia/sss24/.
Fig.: Hoover Tower on the campus of Stanford University
„In March, Oliver Bendel answers an interview request from San Francisco. The information ethicist is on his way to a symposium where he will speak about hugging robots and bar robots. A few years ago, at the same event of the renowned Association for the Advancement of Artificial Intelligence (AAAI), he presented annotated decision trees for moral machines. When Bendel is not on one of his many scientific trips, he conducts his research in tranquil surroundings: since 2009, the 55-year-old has held a professorship in information ethics, machine ethics, and social robotics at the Fachhochschule Nordwestschweiz (FHNW) in Brugg-Windisch.“ (Inside IT, 24 April 2023) These are the opening words of an article by Thomas Schwandener that appeared in Inside IT on 24 April 2023. It goes on to say: „His work deals with the coexistence of social robots and humans, with the potential of artificial intelligence, but also with the ethical questions that follow from it. Bendel is an authority in his field. He has written several standard works, speaks at international conferences, and has appeared several times before the German Bundestag as an expert.“ (Inside IT, 24 April 2023) This is followed by a longer interview conducted on the sidelines of Shift. In it, the information and machine ethicist points to the deep gulf between humans and machines. The full article can be accessed here.
The Association for the Advancement of Artificial Intelligence (AAAI) is pleased to present the AAAI 2023 Spring Symposia, to be held at the Hyatt Regency, San Francisco Airport, California, March 27-29. According to the organizers, Stanford University cannot act as host this time because of insufficient staff. Symposia of particular interest from a philosophical point of view are „AI Climate Tipping-Point Discovery“, „AI Trustworthiness Assessment“, „Computational Approaches to Scientific Discovery“, „Evaluation and Design of Generalist Systems (EDGeS): Challenges and methods for assessing the new generation of AI“, and „Socially Responsible AI for Well-being“. According to AAAI, each symposium generally has 40 to 75 participants. „Participation will be open to active participants as well as other interested individuals on a first-come, first-served basis.“ (Website AAAI) Over the past decade, the conference has become one of the most important venues in the world for discussions on robot ethics, machine ethics, and AI ethics. From 2024, it will again be held at History Corner. Further information via www.aaai.org/Symposia/Spring/sss23.php.
On June 30, 2022, the paper „Should Social Robots in Retail Manipulate Customers?“ by Oliver Bendel and Liliana Margarida Dos Santos Alves was published on arxiv.org. It was presented at the AAAI 2022 Spring Symposium „How Fair is Fair? Achieving Wellbeing AI“ at Stanford University and came in third place in the Best Presentation Awards. From the abstract: „Against the backdrop of structural changes in the retail trade, social robots have found their way into retail stores and shopping malls in order to attract, welcome, and greet customers; to inform them, advise them, and persuade them to make a purchase. Salespeople often have a broad knowledge of their product and rely on offering competent and honest advice, whether it be on shoes, clothing, or kitchen appliances. However, some frequently use sales tricks to secure purchases. The question arises of how consulting and sales robots should “behave”. Should they behave like human advisors and salespeople, i.e., occasionally manipulate customers? Or should they be more honest and reliable than us? This article tries to answer these questions. After explaining the basics, it evaluates a study in this context and gives recommendations for companies that want to use consulting and sales robots. Ultimately, fair, honest, and trustworthy robots in retail are a win-win situation for all concerned.“ The paper will additionally be published in the proceedings volume of the symposium by the end of summer. It can be downloaded via arxiv.org/abs/2206.14571.
AAAI has announced the launch of the Interactive AI Magazine. According to the organization, the new platform provides online access to articles and columns from AI Magazine, as well as news and articles from AI Topics and other materials from AAAI. „Interactive AI Magazine is a work in progress. We plan to add a lot more content on the ecosystem of AI beyond the technical progress represented by the AAAI conference, such as AI applications, AI industry, education in AI, AI ethics, and AI and society, as well as conference calendars and reports, honors and awards, classifieds, obituaries, etc. We also plan to add multimedia such as blogs and podcasts, and make the website more interactive, for example, by enabling commentary on posted articles. We hope that over time Interactive AI Magazine will become both an important source of information on AI and an online forum for conversations among the AI community.“ (AAAI Press Release) More information via interactiveaimag.org.
Fig.: A magazine for interested people, cyborgs and robots
The Association for the Advancement of Artificial Intelligence (AAAI) and Squirrel AI Learning announced the establishment of a new one-million-dollar annual award for societal benefits of AI. According to a press release of the AAAI, the award will be sponsored by Squirrel AI Learning as part of its mission to promote the use of artificial intelligence with lasting positive effects for society. „This new international award will recognize significant contributions in the field of artificial intelligence with profound societal impact that have generated otherwise unattainable value for humanity. The award nomination and selection process will be designed by a committee led by AAAI that will include representatives from international organizations with relevant expertise that will be designated by Squirrel AI Learning.“ (AAAI Press Release, 28 May 2019) The AAAI Spring Symposia have repeatedly addressed social good, including from the perspective of machine ethics. Further information via aaai.org/Pressroom/Releases//release-19-0528.php.
The contribution „Das LADYBIRD-Projekt“ by Oliver Bendel was published in the „Handbuch Maschinenethik“ at the end of July 2018. From the abstract: „The LADYBIRD project was about a vacuum robot that, for moral reasons, is supposed to spare certain insects on the floor. With the help of sensors and analysis software, it was to detect the respective animal and, following certain rules, suspend its work for a while. The practical project was carried out in 2017 at the author’s university (and under his direction). It drew on preliminary work produced from 2014 onwards, such as a design study and an annotated decision tree. Three business information systems students developed the robot using prefabricated modules. They adapted the decision tree and implemented the rules in Java. The result was a small, mobile robot that could recognize ladybirds and similar objects and interrupted its work in their presence. The present contribution describes both the preliminary work and the implementation of the project and discusses the results.“ The contributions to the Springer handbook edited by Oliver Bendel are published on an ongoing basis and are listed at link.springer.com/referencework/10.1007/978-3-658-17484-2.
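To make the described rule concrete, here is a minimal Java sketch of a LADYBIRD-style check. The InsectDetector and DriveUnit interfaces and the 30-second pause are illustrative assumptions; this is not the original student implementation.

```java
// Minimal sketch of a LADYBIRD-style rule. InsectDetector, DriveUnit, and the
// 30-second pause are hypothetical; the original rules were more detailed.
public class LadybirdGuard {

    /** Hypothetical wrapper around the robot's sensors and analysis software. */
    interface InsectDetector {
        boolean ladybirdLikeObjectPresent();
    }

    /** Hypothetical drive unit of the vacuum robot. */
    interface DriveUnit {
        void pauseFor(long millis);
        void continueCleaning();
    }

    private final InsectDetector detector;
    private final DriveUnit drive;

    LadybirdGuard(InsectDetector detector, DriveUnit drive) {
        this.detector = detector;
        this.drive = drive;
    }

    /** One pass through the moral rule: spare the insect, then resume later. */
    void step() {
        if (detector.ladybirdLikeObjectPresent()) {
            drive.pauseFor(30_000); // suspend work for a while instead of vacuuming
        } else {
            drive.continueCleaning();
        }
    }
}
```

In the project, such checks were derived from an annotated decision tree; the sketch collapses that tree into a single condition for brevity.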
The technical reports of the AAAI 2018 Spring Symposium „AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents“ were published at the end of April 2018. They include, among others, „From GOODBOT to BESTBOT“ (Oliver Bendel), „The Uncanny Return of Physiognomy“ (Oliver Bendel), „The Heart of the Matter: Patient Autonomy as a Model for the Wellbeing of Technology Users“ (Emanuelle Burton et al.), „Trustworthiness and Safety for Intelligent Ethical Logical Agents via Interval Temporal Logic and Runtime Self-Checking“ (Stefania Costantini et al.), „Ethical Considerations for AI Researchers“ (Kyle Dent), „Interactive Agent that Understands the User“ (Piotr Gmytrasiewicz et al.), „Toward Beneficial Human-Level AI … and Beyond“ (Philip C. Jackson Jr.), „Towards Provably Moral AI Agents in Bottom-Up Learning Frameworks“ (Nolan P. Shaw et al.), „An Architecture for a Military AI System with Ethical Rules“ (Yetian Wang et al.), and „Architecting a Human-Like Emotion-Driven Conscious Moral Mind for Value Alignment and AGI Safety“ (Mark R. Waser et al.). The symposium took place at Stanford University from March 26 to 28, 2018. For machine ethics as well as for information ethics and robot ethics, the conference series has for years been one of the most important platforms worldwide.
Fig.: What can be done about certain forms of facial recognition?
The tentative schedule of the AAAI 2018 Spring Symposium on AI and Society at Stanford University (26-28 March 2018) has been published. On Tuesday, Emma Brunskill from Stanford University, Philip C. Jackson („Toward Beneficial Human-Level AI … and Beyond“), and Andrew Williams („The Potential Social Impact of the Artificial Intelligence Divide“) will give talks. Oliver Bendel will give two talks, one on „The Uncanny Return of Physiognomy“ and one on „From GOODBOT to BESTBOT“. From the description on the website: „Artificial Intelligence has become a major player in today’s society and that has inevitably generated a proliferation of thoughts and sentiments on several of the related issues. Many, for example, have felt the need to voice, in different ways and through different channels, their concerns on: possible undesirable outcomes caused by artificial agents, the morality of their use in specific sectors, such as the military, and the impact they will have on the labor market. The goal of this symposium is to gather a diverse group of researchers from many disciplines and to ignite a scientific discussion on this topic.“
The AAAI 2017 Spring Symposia will take place from March 27 to 29, 2017. They are organized by the Association for the Advancement of Artificial Intelligence in cooperation with the Department of Computer Science at Stanford University. The symposium „AI for the Social Good“ at Stanford University also addresses topics of robot ethics and machine ethics. The website states: „A rise in real-world applications of AI has stimulated significant interest from the public, media, and policy makers, including the White House Office of Science and Technology Policy (OSTP). Along with this increasing attention has come media-fueled concerns about purported negative consequences of AI, which often overlooks the societal benefits that AI is delivering and can deliver in the near future. This symposium will focus on the promise of AI across multiple sectors of society.“ (Website AISOC) In a talk session, Oliver Bendel will speak about „LADYBIRD: the Animal-Friendly Robot Vacuum Cleaner“. He is also represented in the lightning talks session with the presentation „Towards Kant Machines“. In the same session, Mahendra Prasad will speak on „A Framework for Modelling Altruistic Intelligence Explosions“ (preceded by the title „Back to the Future“), and Thomas Doherty will pursue the question „Can Artificial Intelligence have Ecological Intelligence?“. The full program can be accessed at scf.usc.edu/~amulyaya/AISOC17/papers.html.
The two-year publication project for the „Handbuch Maschinenethik“ began in November 2016. The handbook is edited by Oliver Bendel and published by Springer. Around 25 authors will contribute. The language is German; 25 percent of the contributions may be in English. The book is aimed at a broad audience: at scholars and students of various disciplines, at journalists and politicians. The authors are recruited through personal contact. In addition, a call for contributions will be published at the end of the year. Machine ethics is a young discipline and has nevertheless already specialized in different directions and focused on different problems. Important current conferences were or are „Ethical and Moral Considerations in Non-Human Agents“ (March 2016, Stanford, as part of the AAAI Spring Symposia) and „Machine Ethics and Machine Law“ (November 2016, Krakow). In addition, popular-science formats and formats aimed at journalists have addressed the topic, such as „Roboterethik“ in Berlin (2015) and the workshop „Maschinenethik“ at „Wissenswerte“ in Bremen (2015). Prototypes of moral machines have been developed, and research on immoral machines has begun.
The proceedings of the AAAI conference 2016 were published in March 2016 by AAAI Press (Palo Alto 2016) under the title „The 2016 AAAI Spring Symposium Series: Technical Reports“. The AI conference took place at Stanford University. The symposium „Ethical and Moral Considerations in Non-Human Agents“ was dedicated to the discipline of machine ethics. Ron Arkin (Georgia Institute of Technology), Luís Moniz Pereira (Universidade Nova de Lisboa), Peter Asaro (New School for Public Engagement, New York), and Oliver Bendel (School of Business FHNW) spoke about moral and immoral machines. The contribution „Annotated Decision Trees for Simple Moral Machines“ (Oliver Bendel) can be found on pages 195-201. From the abstract: „Autonomization often follows after the automization on which it is based. More and more machines have to make decisions with moral implications. Machine ethics, which can be seen as an equivalent of human ethics, analyses the chances and limits of moral machines. So far, decision trees have not been commonly used for modelling moral machines. This article proposes an approach for creating annotated decision trees, and specifies their central components. The focus is on simple moral machines. The chances of such models are illustrated with the example of a self-driving car that is friendly to humans and animals. Finally the advantages and disadvantages are discussed and conclusions are drawn.“ The proceedings can be ordered via www.aaai.org.
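As a rough illustration of the approach named in the abstract, the following sketch pairs each test in a small decision tree with a textual annotation carrying its rationale and walks the tree to a recommended action. The Situation fields, the annotations, and the actions are illustrative assumptions and are not taken from the paper.

```java
import java.util.function.Predicate;

// Illustrative annotated decision tree for an animal-friendly car scenario.
// Situation fields, annotations, and actions are hypothetical examples.
public final class MoralDecisionTree {

    /** Hypothetical snapshot of the driving situation. */
    public record Situation(boolean obstacleIsAnimal, boolean brakingIsSafe) {}

    private final String action;                 // non-null at leaves
    private final Predicate<Situation> test;     // non-null at inner nodes
    private final String annotation;             // rationale attached to the test
    private final MoralDecisionTree ifTrue, ifFalse;

    private MoralDecisionTree(String action, Predicate<Situation> test, String annotation,
                              MoralDecisionTree ifTrue, MoralDecisionTree ifFalse) {
        this.action = action;
        this.test = test;
        this.annotation = annotation;
        this.ifTrue = ifTrue;
        this.ifFalse = ifFalse;
    }

    static MoralDecisionTree leaf(String action) {
        return new MoralDecisionTree(action, null, null, null, null);
    }

    static MoralDecisionTree node(Predicate<Situation> test, String annotation,
                                  MoralDecisionTree ifTrue, MoralDecisionTree ifFalse) {
        return new MoralDecisionTree(null, test, annotation, ifTrue, ifFalse);
    }

    /** Walks the tree and returns the recommended action for the situation. */
    String decide(Situation s) {
        if (action != null) return action;
        return (test.test(s) ? ifTrue : ifFalse).decide(s);
    }

    public static void main(String[] args) {
        MoralDecisionTree tree =
            node(Situation::obstacleIsAnimal, "animal welfare matters",
                node(Situation::brakingIsSafe, "do not endanger the passengers",
                    leaf("brake"),
                    leaf("continue")),
                leaf("brake"));
        System.out.println(tree.decide(new Situation(true, true))); // prints "brake"
    }
}
```

The point of the sketch is simply that the justification travels with the branch it belongs to, so the modeled morality remains inspectable alongside the decision logic.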
On March 23, 2016, the workshop „Ethical and Moral Considerations in Non-Human Agents“ continued at Stanford University within the AAAI Spring Symposium Series. The keynote „Programming Machine Ethics“ was given by Kumar Pandey of Aldebaran Robotics (SoftBank Group). Due to illness, he joined via Skype. He presented commercial products and showed videos of Pepper. Not only Pepper but also Nao comes from the French company. Both are designed to live alongside humans. User feedback was ambivalent. One opinion was that the robot should not, or not always, decide for its user. Another was that it should not do what the user can do, so that the user does not become lazy. The participants’ views of the visions and the videos were ambivalent as well. The speaker himself conceded that one plays with the emotions of the users. At the end, he asked about liability and about certification in moral terms and asserted that the robot should know, not learn, what it is not allowed to do. And he asked what will happen if the robot one day refuses an order. In the panel discussion, the findings of the previous days were reviewed, the EPSRC’s Principles of Robotics from 2011 were analyzed, and possibilities for further exchange were discussed.