Machine ethics investigates the morality of semi-autonomous and autonomous machines. The School of Business at the University of Applied Sciences and Arts Northwestern Switzerland FHNW carried out a project to implement a prototype called GOODBOT, a novel chatbot and a simple moral machine. One of its meta rules was that it should not lie unless not lying would hurt the user. It was a stand-alone solution, not linked with other systems and not internet- or web-based. In the LIEBOT project, this meta rule was reversed. The web-based chatbot, implemented in 2016, could lie systematically; it was an example of a simple immoral machine. A follow-up project in 2018 will develop the BESTBOT, taking into account the limitations of the GOODBOT and the opportunities opened up by the LIEBOT. The aim is to develop a machine that can detect all kinds of user problems and react in an adequate way. It should have textual, auditory and visual capabilities. The paper "From GOODBOT to BESTBOT" describes the preconditions and findings of the GOODBOT project and the results of the LIEBOT project, and outlines the subsequent BESTBOT project. A reflection from the perspective of information ethics is included. Oliver Bendel will present his paper in March 2018 at Stanford University ("AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents", AAAI 2018 Spring Symposium Series).
Fig.: What will the BESTBOT look like?
Prof. Dr. Oliver Bendel was invited to give a lecture at the ISAIM special session "Formalising Robot Ethics". "The International Symposium on Artificial Intelligence and Mathematics is a biennial meeting that fosters interactions between mathematics, theoretical computer science, and artificial intelligence." (Website ISAIM) Oliver Bendel will present selected prototypes of moral and immoral machines and discuss a project planned for 2018. The GOODBOT is a chatbot that responds in a morally adequate way to user problems. It is based on the Verbot engine. The LIEBOT can lie systematically, using seven different strategies. It was written in Java, making use of AIML. LADYBIRD is an animal-friendly robot vacuum cleaner that spares ladybirds and other insects; in this case, an annotated decision tree was translated into Java. The BESTBOT should be even better than the GOODBOT. Technically, everything is still open. The ISAIM conference will take place from 3 to 5 January 2018 in Fort Lauderdale, Florida. Further information is available at isaim2018.cs.virginia.edu/.
Fig.: What should she be able to do?
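The LADYBIRD approach mentioned above, translating an annotated decision tree into Java, can be illustrated with a minimal, purely hypothetical sketch. The node conditions, actions and annotations below are invented for illustration and do not reproduce the actual project's tree:

```java
// Hypothetical sketch: an annotated decision tree for an animal-friendly
// vacuum cleaner, translated into Java. Conditions, actions and the moral
// annotations in the comments are invented for illustration.
public class LadybirdSketch {

    enum Action { CONTINUE, STOP_AND_WAIT, GO_AROUND }

    // Each branch carries a moral annotation, mirroring the idea of an
    // annotated decision tree.
    static Action decide(boolean obstacleDetected, boolean looksLikeInsect) {
        if (!obstacleDetected) {
            return Action.CONTINUE;      // annotation: no moral conflict
        }
        if (looksLikeInsect) {
            return Action.STOP_AND_WAIT; // annotation: spare the animal
        }
        return Action.GO_AROUND;         // annotation: avoid damage to objects
    }

    public static void main(String[] args) {
        System.out.println(decide(true, true));   // STOP_AND_WAIT
        System.out.println(decide(false, false)); // CONTINUE
    }
}
```

The point of the annotation is that every leaf of the tree is justified by an explicit moral consideration, which can then be carried over into the code as documented branches.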
The Digital Europe Working Group Conference Robotics will take place on 8 November 2017 at the European Parliament in Brussels. The keynote address will be given by Mariya Gabriel, European Commissioner for Digital Economy and Society. The speakers of the first panel are Oliver Bendel (Professor of Information Systems, Information Ethics and Machine Ethics at the School of Business FHNW, via video conference), Anna Byhovskaya (Policy and Communications Advisor, Trade Union Advisory Committee to the OECD) and Malcolm James (Senior Lecturer in Accounting & Taxation, Cardiff Metropolitan University). The third panel will be moderated by Mady Delvaux (Member of the European Parliament); the speaker is Giovanni Sartor (Professor of Legal Informatics and Legal Theory at the European University Institute). The poster can be downloaded here. Further information is available at www.socialistsanddemocrats.eu/events/sd-group-digital-europe-working-group-robotics.
Fig.: The Atomium in Brussels
Machine ethics investigates the morality of semi-autonomous and autonomous machines. In 2013, the School of Business at the University of Applied Sciences and Arts Northwestern Switzerland FHNW carried out a project to implement a prototype called GOODBOT, a novel chatbot and a simple moral machine. One of its meta rules was that it should not lie unless not lying would hurt the user. It was a stand-alone solution, not linked with other systems and not internet- or web-based. In the LIEBOT project, this meta rule was reversed. The web-based chatbot, implemented in 2016, could lie systematically; it was an example of a simple immoral machine. A follow-up project in 2018 will develop the BESTBOT, taking into account the limitations of the GOODBOT and the opportunities opened up by the LIEBOT. The aim is to develop a machine that can detect all kinds of user problems and react in an adequate way. It should have textual, auditory and visual capabilities.
Fig.: The GOODBOT
AAAI announced the launch of the AAAI/ACM Conference on AI, Ethics, and Society, to be co-located with AAAI-18, February 2-3, 2018 in New Orleans. The Call for Papers is available at http://www.aies-conference.com. October 31 is the deadline for submissions. "As AI is becoming more pervasive in our life, its impact on society is more significant and concerns and issues are raised regarding aspects such as value alignment, data bias and data policy, regulations, and workforce displacement. Only a multi-disciplinary and multi-stakeholder effort can find the best ways to address these concerns, including experts of various disciplines, such as AI, computer science, ethics, philosophy, economics, sociology, psychology, law, history, and politics." (AAAI information) The new conference complements and expands the classical AAAI Spring Symposia at Stanford University (including symposia like "AI for Social Good" in 2017 or "AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents" in 2018).
Fig.: AI and ethics could help society
The conference "Robophilosophy 2018 – Envisioning Robots in Society: Politics, Power, and Public Space" will take place in Vienna (February 14-17, 2018). According to the website, it has three main aims: to present interdisciplinary humanities research "in and on social robotics that can inform policy making and political agendas, critically and constructively", to investigate "how academia and the private sector can work hand in hand to assess benefits and risks of future production formats and employment conditions", and to explore how research in the humanities, including art and art research, and in the social and human sciences "can contribute to imagining and envisioning the potentials of future social interactions in the public space" (Website Robophilosophy). Plenary speakers are Joanna Bryson (Department of Computer Science, University of Bath, UK), Alan Winfield (FET – Engineering, Design and Mathematics, University of the West of England, UK) and Catelijne Muller (Rapporteur on Artificial Intelligence, European Economic and Social Committee). The deadline for submission of abstracts for papers and posters is October 31. More information via conferences.au.dk/robo-philosophy/.
Fig.: Reflections on robots
The synthetization of voices, or speech synthesis, has been an object of interest for centuries. It is mostly realized with a text-to-speech system (TTS), an automaton that interprets text and reads it aloud. The text may be available, for instance, on a website or in a book, or entered via a popup menu on a website. Today, just a few minutes of voice samples are enough to imitate a speaker convincingly in all kinds of statements. The article "The Synthetization of Human Voices" by Oliver Bendel (published on 26 July 2017) abstracts from actual products and their technological realization. Rather, after a short historical outline of the synthetization of voices, it gathers exemplary applications of this kind of technology in order to promote its development, and discusses potential applications critically in order to be able to limit them if necessary. The ethical and legal challenges should not be underestimated, in particular with regard to informational and personal autonomy and the trustworthiness of media. The article can be viewed via rdcu.be/uvxm.
Fig.: Can you hear my voice?
PlayGround is a Spanish online magazine, founded in 2008, with a focus on culture, the future and food. Astrid Otal asked the ethicist Oliver Bendel about the conference in London ("Love and Sex with Robots") and, more generally, about sex robots and love dolls. One question was: "In love, a person can suffer. But in this case, can robots make us suffer sentimentally?" His reply: "Of course, they can make us suffer. By means of their body, body parts and limbs, and by means of their language capabilities. They can hurt us, they can kill us. They can offend us by using certain words and by telling the truth or the untruth. In my contribution for the conference proceedings, I ask this question: Is it possible to be unfaithful to the human love partner with a sex robot, and can a man or a woman be jealous because of the robot's other love affairs? We can imagine how suffering can emerge in this context … But robots can also make us happy. Some years ago, we developed the GOODBOT, a chatbot which can detect problems of the user and escalate on several levels. On the highest level, it hands over an emergency number. It knows its limits." Some statements of the interview have been incorporated into the article "Última parada: después del sexo con autómatas, casarse con un Robot" (February 11, 2017), which is available via www.playgroundmag.net/futuro/sexo-robots-matrimonio-legal-2050-realdolls_0_1918608121.html.
Fig.: What about the robot’s love affairs?
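The GOODBOT's escalation idea described in the interview, detecting user problems and escalating over several levels until an emergency number is handed over, can be sketched in a few lines. The keywords, the number of levels and the phone number below are invented for illustration and are not the project's actual rules:

```java
// Hypothetical sketch of multi-level escalation: the bot rates how serious
// a user statement appears and responds accordingly; at the highest level
// it hands over an emergency number. Keywords, levels and the number are
// invented for illustration.
public class EscalationSketch {

    static int assessLevel(String userInput) {
        String s = userInput.toLowerCase();
        if (s.contains("hurt myself")) return 3; // highest level
        if (s.contains("hopeless"))    return 2;
        if (s.contains("sad"))         return 1;
        return 0;                                // no problem detected
    }

    static String respond(String userInput) {
        switch (assessLevel(userInput)) {
            case 3:  return "Please call the emergency number 143.";
            case 2:  return "That sounds serious. Would you like to talk about it?";
            case 1:  return "I am sorry to hear that.";
            default: return "I see.";
        }
    }

    public static void main(String[] args) {
        System.out.println(respond("I want to hurt myself"));
    }
}
```

The design point is that the bot "knows its limits": instead of pretending to handle a crisis conversationally, the highest level deliberately hands the user over to human help.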
On maschinenethik.net and informationsethik.net there is a new section called "Abstracts". It collects abstracts of selected scientific contributions on machine ethics and information ethics by Oliver Bendel. The abstracts are exclusively in English, and the associated texts are also mostly in English. Other European authors can be included as well; they can send their abstracts to the e-mail address in the imprint, granting permission for publication. In the USA, too little reference is made to machine ethics and information ethics in Europe, especially when the contributions are written in German, French, Italian, Spanish or Portuguese. All important representatives have, of course, also published texts in English. But many European scientists, especially philosophers, like to write in their native language. The section can draw attention to non-English as well as English contributions. The decisive point is that the abstracts are available in English.
In December 2016, the members of the IEEE Global Initiative presented their first results. "Ethically Aligned Design, Version 1" is available online. As the subtitle puts it, "A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems" is outlined. The executive summary states: "To fully benefit from the potential of Artificial Intelligence and Autonomous Systems (AI/AS), we need to go beyond perception and beyond the search for more computational power or solving capabilities. We need to make sure that these technologies are aligned to humans in terms of our moral values and ethical principles. AI/AS have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems. This will allow for an elevated level of trust between humans and our technology that is needed for a fruitful pervasive use of AI/AS in our daily lives." The members of the committee "Classical Ethics in Information & Communication Technologies" include Rafael Capurro, Wolfgang Hofkirchner and Oliver Bendel, to name only those based in the German-speaking world. The second meeting of the IEEE Global Initiative will take place on 5 June 2017 in Austin, Texas. At the "Symposium on Ethics of Autonomous Systems (SEAS North America)", a second version of the document will be drafted.
Fig.: The symposium will take place at the University of Texas