Tags: Machine Ethics

Conversational Agents in Amsterdam

CONVERSATIONS 2019 is a full-day workshop on chatbot research. It will take place on November 19, 2019 at the University of Amsterdam. From the description: „Chatbots are conversational agents which allow the user access to information and services through natural language dialogue, through text or voice. … Research is crucial in helping realize the potential of chatbots as a means of help and support, information and entertainment, social interaction and relationships. The CONVERSATIONS workshop contributes to this endeavour by providing a cross-disciplinary arena for knowledge exchange by researchers with an interest in chatbots.“ The topics of interest that may be explored in the papers and at the workshop include humanlike chatbots, networks of users and chatbots, trustworthy chatbot design, and privacy and ethical issues in chatbot design and implementation. The submission deadline for CONVERSATIONS 2019 was extended to September 10. More information via conversations2019.wordpress.com/.

Fig.: A normal agent

All about Robophilosophy

Robophilosophy or robot philosophy is a field of philosophy that deals with robots (hardware and software robots) as well as with enhancement options such as artificial intelligence. It is concerned not only with the practice and history of their development, but also with the history of ideas, from the works of Homer and Ovid through to science fiction books and movies. Disciplines such as epistemology, ontology, aesthetics and ethics, including information and machine ethics, are involved. The new platform robophilosophy.com was founded in July 2019 by Oliver Bendel. He invited several authors to write with him about robophilosophy, robot law, information ethics, machine ethics, robotics and artificial intelligence. All of them have a relevant background. Oliver Bendel studied philosophy as well as information science and wrote his doctoral thesis on anthropomorphic software agents. He has been researching in the fields of information ethics and machine ethics for years.

The Birth of the Hologram Girl

The article „Hologram Girl“ by Oliver Bendel deals first of all with the current and future technical possibilities of projecting three-dimensional human shapes into space or into vessels. Then examples of holograms from literature and film are mentioned, drawn from the fictionality of past and present. Furthermore, the reality of the present and the future of holograms is included, i.e. what technicians and scientists all over the world are trying to achieve in their eager efforts to close the enormous gap between the imagined and the actual. A very specific aspect is of interest here, namely the idea that holograms could serve us as objects of desire, that they could step alongside love dolls and sex robots and support us in some way. Different aspects of fictional and real holograms are analyzed, namely pictoriality, corporeality, motion, size, beauty and speech capacity. There are indications that three-dimensional human shapes could be considered as partners, albeit in a very specific sense. The genuine advantages and disadvantages need to be investigated further, and a theory of holograms in love could be developed. The article is part of the book „AI Love You“ by Yuefang Zhou and Martin H. Fischer and was published on 18 July 2019. Further information can be found via link.springer.com/book/10.1007/978-3-030-19734-6.

Fig.: Manga girls are often used as role models

Chatbots as Moral and Immoral Machines

The papers of the CHI 2019 workshop „Conversational Agents: Acting on the Wave of Research and Development“ (Glasgow, 5 May 2019) are now listed on convagents.org. The extended abstract by Oliver Bendel (School of Business FHNW) entitled „Chatbots as Moral and Immoral Machines“ can be downloaded here. The workshop brought together experts from all over the world who are working on the basics of chatbots and voicebots and are implementing them in different ways. Companies such as Microsoft, Mozilla and Salesforce were also present. Approximately 40 extended abstracts were submitted. On 6 May, a bagpipe player opened the four-day conference following the 35 workshops. Dr. Aleks Krotoski, Pillowfort Productions, gave the first keynote. One of the paper sessions in the morning was dedicated to the topic „Values and Design“. All in all, both classical specific fields of applied ethics and the young discipline of machine ethics were represented at the conference. More information via chi2019.acm.org.

Save the Hedgehogs!

Between June 2019 and January 2020, the sixth artifact of machine ethics will be created at the FHNW School of Business. Prof. Dr. Oliver Bendel is the initiator, the client and – together with a colleague – the supervisor of the project. Animal-machine interaction is about the design, evaluation and implementation of (usually more sophisticated or complex) machines and computer systems with which animals interact and communicate and which interact and communicate with animals. Machine ethics has so far mainly focused on humans, but can also be useful for animals. It attempts to conceive moral machines and to implement them with the help of further disciplines such as computer science and AI or robotics. The aim of the project is the detailed description and prototypical implementation of an animal-friendly service robot, more precisely a mowing robot called HAPPY HEDGEHOG (HHH). With the help of sensors and moral rules, the robot should be able to recognize hedgehogs (especially young animals) and initiate appropriate measures (interruption of work, expulsion of the hedgehog, notification of the owner), as sketched in the code below. The project has similarities with another project carried out earlier, namely LADYBIRD. This time, however, more emphasis will be placed on existing equipment, platforms and software. The first artifact at the university was the GOODBOT, back in 2013.

Fig.: A happy hedgehog
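
The intended reaction scheme lends itself to a compact illustration in code. The following is a minimal sketch, assuming a hypothetical detection function and a hypothetical mower class; none of the names (Detection, detect_hedgehog, Mower) stem from the actual HAPPY HEDGEHOG project.

```python
# Minimal, hypothetical sketch of an animal-friendly mowing robot's
# control loop in the spirit of HAPPY HEDGEHOG. All names are
# illustrative assumptions, not project code.

from dataclasses import dataclass


@dataclass
class Detection:
    is_hedgehog: bool
    is_juvenile: bool  # young animals are especially at risk


def detect_hedgehog(sensor_frame) -> Detection:
    """Placeholder for the real sensor pipeline (e.g. thermal imaging
    plus image recognition); here it always reports no hedgehog."""
    return Detection(is_hedgehog=False, is_juvenile=False)


class Mower:
    def interrupt_work(self):
        print("Mowing interrupted.")

    def expel_animal(self):
        print("Emitting sound and light to move the hedgehog along.")

    def notify_owner(self, message):
        print("Notification to owner:", message)

    def step(self, sensor_frame):
        # Moral rule: never endanger a detected hedgehog.
        detection = detect_hedgehog(sensor_frame)
        if detection.is_hedgehog:
            self.interrupt_work()
            self.expel_animal()
            if detection.is_juvenile:
                self.notify_owner("Juvenile hedgehog on the lawn; work stopped.")
        # Otherwise the robot simply continues mowing.
```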

A Great Moment in Machine Ethics

The 23rd Berlin Colloquium of the Daimler and Benz Foundation took place on May 22, 2019. It was dedicated to care robots, not only from the familiar positions, but also from new perspectives. The scientific director, Prof. Dr. Oliver Bendel, invited two of the world’s best-known machine ethicists, Prof. Dr. Michael Anderson and Prof. Dr. Susan L. Anderson. Together with Vincent Berenz, they had programmed a Nao robot with a set of values that determine its behavior and at the same time enable it to help a person in a simulated elderly care facility. A contribution on this appeared some time ago in the Proceedings of the IEEE. For the first time, they presented the results of this project to a European audience, and their one-hour presentation, followed by a twenty-minute discussion, can be considered a great moment in machine ethics. Other internationally renowned scientists, such as the Japan expert Florian Coulmas, also took part. He dealt with artefacts from Japan and put into perspective the frequently heard assertion that the Japanese consider all things to be animated. Several media reported on the Berlin Colloquium, for example Neues Deutschland.

Fig.: A Nao robot

Basics and Artifacts of Machine Ethics

More and more autonomous and semi-autonomous machines such as intelligent software agents, specific robots, specific drones and self-driving cars make decisions that have moral implications. Machine ethics as a discipline examines the possibilities and limits of moral and immoral machines. It does not only reflect on ideas but develops artifacts like simulations and prototypes. In his talk at the University of Potsdam on 23 June 2019 („Fundamentals and Artifacts of Machine Ethics“), Prof. Dr. Oliver Bendel outlined the fundamentals of machine ethics and presented selected artifacts of moral and immoral machines. Furthermore, he discussed a project which will be completed by the end of 2019. The GOODBOT (2013) is a chatbot that responds in a morally adequate way to problems of its users. The LIEBOT (2016) can lie systematically, using seven different strategies. LADYBIRD (2017) is an animal-friendly robot vacuum cleaner that spares ladybirds and other insects. The BESTBOT (2018) is a chatbot that recognizes certain problems and conditions of the users with the help of text analysis and facial recognition and reacts morally to them. 2019 is the year of the E-MOMA. The machine should be able to improve its morality on its own.

Fig.: The LIEBOT
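
How a machine might lie systematically can be hinted at with one conceivable strategy, the negation of a truthful answer. The sketch below is purely illustrative: the tiny knowledge base and the lie function are assumptions, not the actual LIEBOT implementation.

```python
# Hypothetical sketch of a single lying strategy: answer with the
# negation of what the machine holds to be true.

knowledge_base = {
    "Is Bern the capital of Switzerland?": True,
    "Is the moon made of cheese?": False,
}


def lie(question):
    # Strategy: systematic negation of the truthful answer.
    truthful = knowledge_base[question]
    return "No." if truthful else "Yes."


for question in knowledge_base:
    print(question, "->", lie(question))
```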

An Award for AI Devoted to the Social Good

The Association for the Advancement of Artificial Intelligence (AAAI) and Squirrel AI Learning have announced the establishment of a new one-million-dollar annual award for societal benefits of AI. According to a press release of the AAAI, the award will be sponsored by Squirrel AI Learning as part of its mission to promote the use of artificial intelligence with lasting positive effects for society. „This new international award will recognize significant contributions in the field of artificial intelligence with profound societal impact that have generated otherwise unattainable value for humanity. The award nomination and selection process will be designed by a committee led by AAAI that will include representatives from international organizations with relevant expertise that will be designated by Squirrel AI Learning.“ (AAAI Press Release, 28 May 2019) The AAAI Spring Symposia have repeatedly devoted themselves to social good, also from the perspective of machine ethics. Further information via aaai.org/Pressroom/Releases//release-19-0528.php.

Fig.: An award for AI

Towards a Proxy Morality

Machine ethics produces moral and immoral machines. The morality is usually fixed, e.g. by programmed meta-rules and rules. The machine is thus capable of certain actions and incapable of others. However, another approach is the morality menu (MOME for short). With this, the owner or user transfers his or her own morality onto the machine. The machine then behaves, down to the details, in the same way as he or she would. Together with his teams, Prof. Dr. Oliver Bendel developed several artifacts of machine ethics at his university from 2013 to 2018. For one of them, he designed a morality menu that has not yet been implemented. Another concept exists for a virtual assistant that can make reservations and orders for its owner more or less independently. In the article „The Morality Menu“ the author introduces the idea of the morality menu in the context of two concrete machines. Then he discusses advantages and disadvantages and presents possibilities for improvement. A morality menu can be a valuable extension for certain moral machines. You can download the article here.

Fig.: A proxy machine
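
The basic mechanism of a morality menu can be pictured as a set of switches that the machine consults before acting. The following sketch is a loose illustration under assumed rule and action names; it does not reproduce the design from the article.

```python
# Illustrative sketch of a morality menu (MOME): the owner switches
# individual moral rules on or off, and the machine checks the menu
# before acting. Rule and action names are hypothetical.

morality_menu = {
    "disclose_machine_status": True,   # admit to being a machine when asked
    "order_meat_products": False,      # e.g. for a shopping assistant
}


def permitted(action):
    """Check a proposed action against the owner's menu settings."""
    if action == "order_steak":
        return morality_menu["order_meat_products"]
    if action == "deny_being_a_machine":
        return not morality_menu["disclose_machine_status"]
    return True  # actions without a matching rule are allowed by default


for action in ("order_steak", "deny_being_a_machine"):
    print(action, "->", "allowed" if permitted(action) else "blocked")
```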

Moral Competence for Social Robots

At the end of 2018, the article entitled „Learning How to Behave: Moral Competence for Social Robots“ by Bertram F. Malle and Matthias Scheutz was published in the „Handbuch Maschinenethik“ („Handbook Machine Ethics“) (ed.: Oliver Bendel). An excerpt from the abstract: „We describe a theoretical framework and recent research on one key aspect of robot ethics: the development and implementation of a robot’s moral competence.“ The authors propose „that moral competence consists of five elements, two constituents (moral norms and moral vocabulary) and three activities (moral judgment, moral action, and moral communication)“. „A robot’s computational representations of social and moral norms is a prerequisite for all three moral activities. However, merely programming in advance the vast network of human norms is impossible, so new computational learning algorithms are needed that allow robots to acquire and update the context-specific and graded norms relevant to their domain of deployment. Moral vocabulary is needed primarily for moral communication, which expresses moral judgments of others’ violations and explains one’s own moral violations – to justify them, apologize, or declare intentions to do better. Current robots have at best rudimentary moral competence, but with improved learning and reasoning they may begin to show the kinds of capacities that humans will expect of future social robots.“ (Abstract) An overview of the contributions that have been published electronically since 2017 can be found on link.springer.com/referencework/10.1007/978-3-658-17484-2.
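
The call for context-specific and graded norms can be made tangible with a small data structure. This is a hypothetical sketch, not the authors' framework; the sample norms, their strengths and the severity lookup are assumptions.

```python
# Hypothetical representation of graded, context-specific moral norms,
# in the spirit of (but not taken from) Malle and Scheutz.

from dataclasses import dataclass


@dataclass
class Norm:
    action: str
    context: str
    strength: float  # graded: 0.0 (mild preference) .. 1.0 (strict prohibition)


norms = [
    Norm("interrupt a speaker", "conversation", 0.4),
    Norm("touch a patient without consent", "care facility", 0.9),
]


def violation_severity(action, context):
    """Moral judgment reduced to a lookup: how serious is this action here?"""
    for norm in norms:
        if norm.action == action and norm.context == context:
            return norm.strength
    return 0.0  # no applicable norm learned for this context yet


print(violation_severity("interrupt a speaker", "conversation"))  # 0.4
```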