An Award for AI Devoted to the Social Good

The Association for the Advancement of Artificial Intelligence (AAAI) and Squirrel AI Learning announced the establishment of a new one-million-dollar annual award for societal benefits of AI. According to an AAAI press release, the award will be sponsored by Squirrel AI Learning as part of its mission to promote the use of artificial intelligence with lasting positive effects for society. „This new international award will recognize significant contributions in the field of artificial intelligence with profound societal impact that have generated otherwise unattainable value for humanity. The award nomination and selection process will be designed by a committee led by AAAI that will include representatives from international organizations with relevant expertise that will be designated by Squirrel AI Learning.“ (AAAI Press Release, 28 May 2019) The AAAI Spring Symposia have repeatedly been devoted to social good, including from the perspective of machine ethics. Further information via aaai.org/Pressroom/Releases//release-19-0528.php.

Fig.: An award for AI

Towards a Proxy Morality

Machine ethics produces moral and immoral machines. The morality is usually fixed, e.g. by programmed meta-rules and rules. The machine is thus capable of certain actions and incapable of others. Another approach, however, is the morality menu (MOME for short). With it, the owner or user transfers his or her own morality onto the machine, which then behaves, down to the details, as he or she would behave. Together with his teams, Prof. Dr. Oliver Bendel developed several artifacts of machine ethics at his university from 2013 to 2018. For one of them, he designed a morality menu that has not yet been implemented. Another concept exists for a virtual assistant that can make reservations and place orders for its owner more or less independently. In the article „The Morality Menu“ the author introduces the idea of the morality menu in the context of these two concrete machines, then discusses advantages and disadvantages and presents possibilities for improvement. A morality menu can be a valuable extension for certain moral machines. You can download the article here.
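The idea can be pictured as a set of switchable rules: the owner activates or deactivates individual moral rules, and the machine consults this configuration before it acts. The following minimal sketch only illustrates that mechanism; the rule names and the shopping-assistant framing are invented and not taken from the published concept.

# Hypothetical sketch of a morality menu (MOME): the owner toggles rules,
# and the machine checks them before carrying out an action.

class MoralityMenu:
    def __init__(self):
        # Each entry is a rule the owner can switch on or off.
        self.rules = {
            "avoid_animal_products": False,
            "avoid_brands_using_child_labour": False,
            "respect_quiet_hours": False,
        }

    def set_rule(self, name, active):
        self.rules[name] = active

    def allows(self, action_tags):
        # An action is blocked if it carries a tag whose rule is switched on.
        return not any(self.rules.get(tag, False) for tag in action_tags)


menu = MoralityMenu()
menu.set_rule("avoid_animal_products", True)
print(menu.allows({"avoid_animal_products"}))  # False: the assistant refuses the order
print(menu.allows({"respect_quiet_hours"}))    # True: this rule was left switched off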

Fig.: A proxy machine

Moral Competence for Social Robots

At the end of 2018, the article entitled „Learning How to Behave: Moral Competence for Social Robots“ by Bertram F. Malle and Matthias Scheutz was published in the „Handbuch Maschinenethik“ („Handbook Machine Ethics“) (ed.: Oliver Bendel). An excerpt from the abstract: „We describe a theoretical framework and recent research on one key aspect of robot ethics: the development and implementation of a robot’s moral competence.“ The authors propose „that moral competence consists of five elements, two constituents (moral norms and moral vocabulary) and three activities (moral judgment, moral action, and moral communication)“. „A robot’s computational representations of social and moral norms is a prerequisite for all three moral activities. However, merely programming in advance the vast network of human norms is impossible, so new computational learning algorithms are needed that allow robots to acquire and update the context-specific and graded norms relevant to their domain of deployment. Moral vocabulary is needed primarily for moral communication, which expresses moral judgments of others’ violations and explains one’s own moral violations – to justify them, apologize, or declare intentions to do better. Current robots have at best rudimentary moral competence, but with improved learning and reasoning they may begin to show the kinds of capacities that humans will expect of future social robots.“ (Abstract) An overview of the contributions that have been published electronically since 2017 can be found on link.springer.com/referencework/10.1007/978-3-658-17484-2.
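The five elements can be made concrete with a simple data sketch. The structure below is purely illustrative and assumes nothing about Malle and Scheutz's own implementations; it merely names the two constituents (graded norms, moral vocabulary) and the three activities (judgment, action, communication) as fields and stub methods.

# Illustrative sketch only: the constituents and activities of moral competence
# as named in the chapter, held in one structure with crude placeholder logic.
from dataclasses import dataclass, field

@dataclass
class MoralCompetence:
    # Constituents: graded norms (negative = disapproved) and a moral vocabulary.
    norms: dict = field(default_factory=dict)
    vocabulary: set = field(default_factory=set)

    # Activities: judgment, action, communication (stubs, not real reasoning).
    def judge(self, observed_action):
        return "violation" if self.norms.get(observed_action, 0.0) < 0 else "acceptable"

    def act(self, options):
        return max(options, key=lambda a: self.norms.get(a, 0.0))

    def communicate(self, own_violation):
        return "I apologize for " + own_violation + " and intend to do better."


mc = MoralCompetence(norms={"interrupt_patient": -0.6, "fetch_water": 0.8},
                     vocabulary={"blame", "apologize", "justify"})
print(mc.judge("interrupt_patient"))                 # violation
print(mc.act(["interrupt_patient", "fetch_water"]))  # fetch_water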

Roboethics as Topical Issue in Paladyn Journal

In 2018, Paladyn Journal of Behavioral Robotics published several articles on robot and machine ethics. In a message to the authors, the editors noted: „Our special attention in recent months has been paid to ethical and moral issues that seem to be of daily debate of researchers from different disciplines.“ The current issue „Roboethics“ includes the articles „Towards animal-friendly machines“ by Oliver Bendel, „Liability for autonomous and artificially intelligent robots“ by Woodrow Barfield, „Corporantia: Is moral consciousness above individual brains/robots?“ by Christopher Charles Santos-Lang, „The soldier’s tolerance for autonomous systems“ by Jai Galliott and „GenEth: a general ethical dilemma analyzer“ by Michael Anderson and Susan Leigh Anderson. The following articles will be published in December 2019: „Autonomy in surgical robots and its meaningful human control“ by Fanny Ficuciello, Guglielmo Tamburrini, Alberto Arezzo, Luigi Villani, and Bruno Siciliano, and „AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing“ by Bettina Berendt. More information via www.degruyter.com.

Fig.: Machines can be friendly to beetles

Moral Machines with Machine Learning

At the Berlin Colloquium 2019, Michael and Susan Leigh Anderson will present their latest results from the discipline of machine ethics. So-called moral machines are usually given rigid rules to which they adhere, as it were, slavishly. This has some advantages, but also a few disadvantages, which can be countered with the help of machine learning. This is precisely the approach the Andersons have pursued together with a researcher from the Max Planck Institute for Intelligent Systems, with eldercare serving as the context: „Contrary to those who regard such a goal as unattainable, Michael and Susan Leigh Anderson … and Vincent Berenz … have succeeded in programming a SoftBank Nao robot with a set of values that determines its behavior while aiding a person in a simulated eldercare environment. This unprecedented accomplishment uses machine learning to enable the robot to discern how each of its actions would satisfy or violate its ethical duties in the current situation and decide the best action to take.“ (Machine Ethics) The results will be published in the Proceedings of the IEEE in 2019. Oliver Bendel is the scientific director of the 23rd Berliner Kolloquium, which is devoted to care robots. Registration takes place via the website of the Daimler und Benz Stiftung.
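The quoted passage describes an action-selection scheme in which every candidate action is rated against a set of ethical duties and the robot chooses the action with the best overall balance. The sketch below illustrates this kind of duty-weighted selection in a toy eldercare situation; the duty names, ratings and weights are invented, and in the system described above the preferences are learned rather than fixed by hand.

# Sketch of duty-weighted action selection, loosely in the spirit of the
# eldercare scenario described above. All names and numbers are illustrative.

# Hypothetical ratings: how strongly each action satisfies (+) or violates (-) each duty.
ACTION_PROFILES = {
    "remind_medication": {"honor_autonomy": -0.2, "prevent_harm": 0.8, "notify_overseer": 0.0},
    "do_nothing":        {"honor_autonomy":  0.5, "prevent_harm": -0.7, "notify_overseer": 0.0},
    "alert_overseer":    {"honor_autonomy": -0.6, "prevent_harm": 0.9, "notify_overseer": 0.9},
}

# Weights that, in the described system, a learning component would derive from training cases.
WEIGHTS = {"honor_autonomy": 1.0, "prevent_harm": 1.5, "notify_overseer": 0.5}

def best_action(profiles, weights):
    # Pick the action whose weighted balance of duty satisfaction and violation is highest.
    return max(profiles, key=lambda a: sum(weights[d] * profiles[a][d] for d in weights))

print(best_action(ACTION_PROFILES, WEIGHTS))  # alert_overseer in this toy situation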

Fig.: A Nao robot (photo: SoftBank/Philippe Dureuil, CC-BY-SA-4.0)

Proceedings of Robophilosophy 2018

In November 2018 the „Proceedings of Robophilosophy 2018“ with the title „Envisioning robots in society – power, politics, and public space“ were published. Editors are Mark Coeckelbergh, Janina Loh, Michael Funk, Johanna Seibt, and Marco Nørskov. In addition to the contributions of the participants, the book also contains abstracts and extended abstracts of the keynotes by Simon Penny, Raja Chatila, Hiroshi Ishiguro, Guy Standing, Catelijne Muller, Juha Heikkilä, Joanna Bryson, and Oliver Bendel (his article about service robots from the perspective of ethics is available here). Robophilosophy is the most important international conference for robophilosophy and roboethics. In February 2018 it took place at the University of Vienna. Machine ethics was also an issue this time. „Robots are predicted to play a role in many aspects of our lives in the future, affecting work, personal relationships, education, business, law, medicine and the arts. As they become increasingly intelligent, autonomous, and communicative, they will be able to function in ever more complex physical and social surroundings, transforming the practices, organizations, and societies in which they are embedded.“ (IOS Press) More information via www.ios.com.

Fig.: Robophilosophy in Vienna

Smart Machines and Safe Animals

„Within a few decades, autonomous and semi-autonomous machines will be found throughout Earth’s environments, from homes and gardens to parks and farms and so-called working landscapes – everywhere, really, that humans are found, and perhaps even places we’re not. And while much attention is given to how those machines will interact with people, far less is paid to their impacts on animals.“ (Anthropocene, October 10, 2018) „Machines can disturb, frighten, injure, and kill animals,“ says Oliver Bendel, an information systems professor at the University of Applied Sciences and Arts Northwestern Switzerland, according to the magazine. „Animal-friendly machines are needed.“ (Anthropocene, October 10, 2018) In the article „Will smart machines be kind to animals?“ the magazine Anthropocene deals with animal-friendly machines and introduces the work of the scientist. It is based on his paper „Towards animal-friendly machines“ (Paladyn) and an interview that journalist Brandon Keim conducted with Oliver Bendel. More via www.anthropocenemagazine.org/2018/10/animal-friendly-ai/.

Fig.: A cat, too, can be safe, even on the street

Considerations about Animal-friendly Machines

Semi-autonomous machines, autonomous machines and robots inhabit closed, semi-closed and open environments. There they encounter domestic animals, farm animals, working animals and/or wild animals. These animals could be disturbed, displaced, injured or killed. Within the context of machine ethics, the School of Business FHNW developed several design studies and prototypes for animal-friendly machines, which can be understood as moral machines in the spirit of this discipline. They were each linked with an annotated decision tree containing the ethical assumptions or justifications for interactions with animals. Annotated decision trees are seen as an important basis in developing moral machines. They are not without problems and contradictions, but they do guarantee well-founded, secure actions that are repeated at a certain level. The article „Towards animal-friendly machines“ by Oliver Bendel, published in August 2018 in Paladyn, Journal of Behavioral Robotics, documents completed and current projects, compares their relative risks and benefits, and makes proposals for future developments in machine ethics.
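An annotated decision tree pairs each branch with the ethical assumption or justification behind it. The following highly simplified sketch, for a hypothetical mowing robot, is not one of the FHNW trees; it only illustrates how annotations and decisions can be kept together.

# Simplified, hypothetical annotated decision tree for an animal-friendly mower.
# Each branch carries an annotation stating the ethical assumption behind it.

def decide(animal_detected, animal_moving_away):
    if not animal_detected:
        # Annotation: without evidence of an animal, normal operation causes no foreseeable harm.
        return "continue_mowing"
    if animal_moving_away:
        # Annotation: the animal can remove itself; waiting avoids injury without abandoning the task.
        return "pause_and_wait"
    # Annotation: injuring or killing the animal is to be avoided even at the cost of the task.
    return "stop_and_retreat"

print(decide(animal_detected=True, animal_moving_away=False))  # stop_and_retreat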

Fig.: An animal-friendly vehicle?

In Love with Azuma

The Gatebox was given to some persons and institutions in Japan some time ago. The company announced at the end of July 2018 that it is now going into series production. In fact, the machine, which resembles a coffee machine, can be ordered on the website. The anime girl Azuma Hikari lives in a glass „coffee pot“. She is a hologram connected to a dialogue system and an AI system. She communicates with her owner even when he is out and about (by sending messages to his smartphone), and she learns. SRF visited a young man who lives with the Gatebox. „I love my wife,“ Akihiko Kondo is quoted as saying. The station writes: „He can’t hug or kiss her. The Japanese guy is with a hologram.“ (SRF) Anyone who thinks that the love for manga and anime girls is a purely Japanese phenomenon is mistaken. In Dortmund’s BorDoll (from „Bordell“ and „Doll“ or „Love Doll“) the corresponding love dolls are in high demand. Here, too, it is mostly young men shy of real girls who have developed a desire in the tradition of Pygmalion. Akihiko Kondo dreams that one day he can go out into the world with Azuma Hikari and hold her hand. But it is a long way to go, and the anime girl will still need her little prison for a long time.

Fig.: In love with an anime girl

New Contribution to the Book on Machine Ethics

At the end of June 2018, the seventh contribution to the handbook on machine ethics was published by Springer. It was written by Oliver Bendel, who is also the editor, and is entitled „Pflegeroboter aus Sicht der Maschinenethik“ („Care Robots from the Perspective of Machine Ethics“). Among other things, it builds on results from the book chapter „Surgical, Therapeutic, Nursing and Sex Robots in Machine and Information Ethics“, published in 2015 in the Springer book „Machine Medical Ethics“, broadens the questions surrounding care robots, and deepens the discussion about them. Since the end of 2016, renowned scientists have been working on their chapters for the „Handbuch Maschinenethik“ („Handbook Machine Ethics“). The printed work is scheduled to appear at the end of 2018. Contributors include Luís Moniz Pereira from Lisbon, one of the best-known machine ethicists in the world, and robot ethicist Janina Loh, who works with Mark Coeckelbergh in Vienna. A contribution by the Stuttgart philosopher Catrin Misselhorn, who also has an excellent reputation in the discipline, will be published in a few days. An overview of the contributions, which are being published electronically on an ongoing basis, can be found at link.springer.com/referencework/10.1007/978-3-658-17484-2.

Fig.: Which moral rules should a care robot be taught?

International Workshop on Ethics and AI

The international workshop „Understanding AI & Us“ will take place in Berlin (Alexander von Humboldt Institute for Internet and Society) on 30 June 2018. It is hosted by Joanna Bryson (MIT), Janina Loh (University of Vienna), Stefan Ullrich (Weizenbaum Institute Berlin) and Christian Djeffal (IoT and Government, Berlin). Birgit Beck, Oliver Bendel and Pak-Hang Wong are invited to the panel on the ethical challenges of artificial intelligence. The aim of the workshop is to bring together experts from the field of research reflecting on AI. The event is funded by the Volkswagen Foundation (VolkswagenStiftung). The project „Understanding AI & Us“ furthers and deepens the understanding of artificial intelligence (AI) in an interdisciplinary way. „This is done in order to improve the ways in which AI-systems are invented, designed, developed, and criticised.“ (Invitation letter) „In order to achieve this, we form a group that merges different abilities, competences and methods. The aim is to provide space for innovative and out-of-the-box-thinking that would be difficult to pursue in ordinary academic discourse in our respective disciplines. We are seeking ways to merge different disciplinary epistemological standpoints in order to increase our understanding of the development of AI and its impact upon society.“ (Invitation letter)

Fig.: Combat robots could also be an issue

Machine Ethics and Artificial Intelligence

The young discipline of machine ethics refers to the morality of semi-autonomous and autonomous machines, robots, bots or software systems. They become special moral agents, and depending on their behavior, we can call them moral or immoral machines. They decide and act in situations where they are left to their own devices, either by following pre-defined rules, by comparing their current situations to case models, or as machines capable of learning and deriving rules. Moral machines have been known for some years, at least as simulations and prototypes. Machine ethics works closely with artificial intelligence and robotics. The term machine morality can be used in a similar way to the term artificial intelligence. Oliver Bendel has developed a graphic that illustrates the relationship between machine ethics and artificial intelligence. He presented it in 2018 at conferences at Stanford University (AAAI Spring Symposia), in Fort Lauderdale (ISAIM) and in Vienna (Robophilosophy).
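Two of the decision modes mentioned above, following pre-defined rules and comparing the current situation to case models, can be contrasted in a compact sketch. The rules, cases and feature names below are invented for illustration only.

# 1) Pre-defined rules: an action is permitted unless a rule forbids it.
RULES = {"deceive_user": "forbidden", "warn_user": "required"}

def rule_based(action):
    return RULES.get(action) != "forbidden"

# 2) Case models: copy the verdict of the stored case most similar to the situation
#    (similarity here is simply the number of shared features).
CASES = [
    ({"user_asks", "truth_is_hurtful"}, "tell_truth_gently"),
    ({"user_asks", "danger_imminent"}, "warn_user"),
]

def case_based(situation):
    features, verdict = max(CASES, key=lambda case: len(case[0] & situation))
    return verdict

print(rule_based("deceive_user"))                    # False
print(case_based({"user_asks", "danger_imminent"}))  # warn_user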

Fig.: The terms of machine ethics and artificial intelligence

Symposium in Helsinki on Moral Machines

„Moral Machines? The Ethics and Politics of the Digital World“ is a symposium organized by two research fellows, Susanna Lindberg and Hanna-Riikka Roine, at the Helsinki Collegium for Advanced Studies. „The aim of the symposium is to bring together researchers from all fields addressing the many issues and problems of the digitalization of our social reality, such as thinking in the digital world, the morality and ethics of machines, and the ways of controlling and manipulating the digital world.“ (Website Symposium) The symposium will take place in Helsinki from 6 to 8 March 2019. It welcomes contributions addressing the various aspects of the contemporary digital world. The organizers are especially interested „in the idea that despite everything they can do, the machines do not really think, at least not like us“. „So, what is thinking in the digital world? How does the digital machine ‚think‘?“ (Website Symposium) Proposals can be sent to the e-mail address moralmachines2019@gmail.com by 31 August 2018. Decisions will be made by 31 October 2018. Further information is available at https://blogs.helsinki.fi/moralmachines/.

Humans and Robots Work Hand in Hand

The book chapter „Co-robots from an Ethical Perspective“ by Oliver Bendel was published in March 2018. It is included in the book „Business Information Systems and Technology 4.0“ (Springer). The abstract: „Cooperation and collaboration robots work hand in hand with their human colleagues. This contribution focuses on the use of these robots in production. The co-robots (to use this umbrella term) are defined and classified, and application areas, examples of applications and product examples are mentioned. Against this background, a discussion on moral issues follows, both from the perspective of information and technology ethics and business ethics. Central concepts of these fields of applied ethics are referred to and transferred to the areas of application. In moral terms, the use of cooperation and collaboration robots involves both opportunities and risks. Co-robots can support workers and save them from strains and injuries, but can also displace them in certain activities or make them dependent. Machine ethics is included at the margin; it addresses whether and how to improve the decisions and actions of (partially) autonomous systems with respect to morality. Cooperation and collaboration robots are a new and interesting subject for it.“ The book can be ordered here.

Fig.: Man and robot?

Robophilosophy

„Robophilosophy 2018 – Envisioning Robots In Society: Politics, Power, And Public Space“ is the third event in the Robophilosophy Conference Series which focusses on robophilosophy, a new field of interdisciplinary applied research in philosophy, robotics, artificial intelligence and other disciplines. The main organizers are Prof. Dr. Mark Coeckelbergh, Dr. Janina Loh and Michael Funk. Plenary speakers are Joanna Bryson (Department of Computer Science, University of Bath, UK), Hiroshi Ishiguro (Intelligent Robotics Laboratory, Osaka University, Japan), Guy Standing (Basic Income Earth Network and School of Oriental and African Studies, University of London, UK), Catelijne Muller (Rapporteur on Artificial Intelligence, European Economic and Social Committee), Robert Trappl (Head of the Austrian Research Institute for Artificial Intelligence, Austria), Simon Penny (Department of Art, University of California, Irvine), Raja Chatila (IEEE Global Initiative for Ethical Considerations in AI and Automated Systems, Institute of Intelligent Systems and Robotics, Pierre and Marie Curie University, Paris, France), Josef Weidenholzer (Member of the European Parliament, domains of automation and digitization) and Oliver Bendel (Institute for Information Systems, FHNW University of Applied Sciences and Arts Northwestern Switzerland). The conference will take place from 14 to 17 February 2018 in Vienna. More information via conferences.au.dk/robo-philosophy/.

Fig.: Robophilosophy in Vienna

Green Salon around Robotics and AI

Oliver Bendel was invited by the Green European Foundation to the second edition of the Green Salon around robotics and artificial intelligence in Vienna on the 12th of February 2018. „The Green Salon is an invitation-only event for the Green family and independent experts and thinkers from across Europe, to discuss important topics that will shape the future of the European Union. While research and industry in Europe and beyond have achieved immense progress in recent years, the public and political debate on the moral and legal implications of the use and further development of these new technologies is still in its infancy. A challenging situation, which needs to alarm as well as motivate Greens to meaningfully shape the debate on how we can make sure emerging technologies serve humans appropriately, while remaining under their full control. In particular, the impact of automation on job markets, and of new technologies in general on the very nature and future of work, are at the core of the discussion. Beyond simple adaptation discourses of mainstream media and other political families, the Green Salon aims at taking the debate further for the Greens and their partners.“ (Invitation Letter of the Green European Foundation) The Green European Foundation is a European-level political foundation funded by the European Parliament.

Fig.: To Vienna!