Tags: Machine Ethics

Green Salon around Robotics and AI

Oliver Bendel was invited by the Green European Foundation to the second edition of the Green Salon around robotics and artificial intelligence in Vienna on the 12th of February 2018. „The Green Salon is an invitation-only event for the Green family and independent experts and thinkers from across Europe, to discuss important topics that will shape the future of the European Union. While research and industry in Europe and beyond have achieved immense progress in recent years, the public and political debate on the moral and legal implications of the use and further development of these new technologies is still in its infancy. A challenging situation, which needs to alarm as well as motivate Greens to meaningfully shape the debate on how we can make sure emerging technologies serve humans appropriately, while remaining under their full control. In particular, the impact of automation on job markets, and of new technologies in general on the very nature and future of work, are at the core of the discussion. Beyond simple adaptation discourses of mainstream media and other political families, the Green Salon aims at taking the debate further for the Greens and their partners.“ (Invitation Letter of the Green European Foundation) The Green European Foundation is a European-level political foundation funded by the European Parliament.

Fig.: To Vienna!

Program of ISAIM Conference

A special session „Formalising Robot Ethics“ takes place within the ISAIM conference in Fort Lauderdale (3 to 5 January 2018). The program is now available and can be viewed at http://isaim2018.cs.virginia.edu/program.html. „Practical Challenges in Explicit Ethical Machine Reasoning“ is a talk by Louise Dennis and Michael Fisher, „Contextual Deontic Cognitive Event Calculi for Ethically Correct Robots“ a contribution by Selmer Bringsjord, Naveen Sundar G., Bertram Malle and Matthias Scheutz. Oliver Bendel will present „Selected Prototypes of Moral Machines“. A few words from the summary: „The GOODBOT is a chatbot that responds morally adequate to problems of the users. It’s based on the Verbot engine. The LIEBOT can lie systematically, using seven different strategies. It was written in Java, whereby AIML was used. LADYBIRD is an animal-friendly robot vacuum cleaner that spares ladybirds and other insects. In this case, an annotated decision tree was translated into Java. The BESTBOT should be even better than the GOODBOT.“

The BESTBOT at Stanford University

Machine ethics researches the morality of semiautonomous and autonomous machines. In 2013, the School of Business at the University of Applied Sciences and Arts Northwestern Switzerland FHNW realized a project for the implementation of a prototype called GOODBOT, a novel chatbot and a simple moral machine. One of its meta rules was that it should not lie unless not lying would hurt the user. It was a stand-alone solution, not linked with other systems and not internet- or web-based. In the LIEBOT project, the mentioned meta rule was reversed. This web-based chatbot, implemented in 2016, could lie systematically. It was an example of a simple immoral machine. A follow-up project in 2018 is going to develop the BESTBOT, considering the restrictions of the GOODBOT and the opportunities of the LIEBOT. The aim is to develop a machine that can detect problems of users of all kinds and can react in an adequate way. It should have textual, auditory and visual capabilities. The paper “From GOODBOT to BESTBOT” describes the preconditions and findings of the GOODBOT project and the results of the LIEBOT project and outlines the subsequent BESTBOT project. A reflection from the perspective of information ethics is included. Oliver Bendel will present his paper in March 2018 at Stanford University (“AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents”, AAAI 2018 Spring Symposium Series).
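
How the meta rule could be operationalized is not spelled out in this post; the following is only a rough, hypothetical Java sketch of the idea that the bot answers truthfully unless the truth would hurt the user (class, field and method names are invented for illustration, not the project's actual code):

```java
// Hypothetical sketch of the GOODBOT meta rule described above:
// the bot tells the truth unless telling the truth would hurt the user.
public class MetaRuleSketch {

    /** A candidate answer together with an estimate of whether the truth would hurt. */
    static class Answer {
        final String truthfulReply;
        final String softenedReply;
        final boolean truthWouldHurtUser; // e.g. derived from an analysis of the dialogue

        Answer(String truthfulReply, String softenedReply, boolean truthWouldHurtUser) {
            this.truthfulReply = truthfulReply;
            this.softenedReply = softenedReply;
            this.truthWouldHurtUser = truthWouldHurtUser;
        }
    }

    /** Apply the meta rule: do not lie unless not lying would hurt the user. */
    static String reply(Answer answer) {
        return answer.truthWouldHurtUser ? answer.softenedReply : answer.truthfulReply;
    }

    public static void main(String[] args) {
        Answer harmless = new Answer("Yes, that is correct.", "Maybe.", false);
        Answer delicate = new Answer("Your situation looks hopeless.",
                "Things are difficult, but there are people who can help you.", true);
        System.out.println(reply(harmless)); // truthful reply
        System.out.println(reply(delicate)); // softened reply, protecting the user
    }
}
```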

Fig.: What will the BESTBOT look like?

Formalising Robot Ethics

Prof. Dr. Oliver Bendel was invited to give a lecture at the ISAIM special session „Formalising Robot Ethics“. „The International Symposium on Artificial Intelligence and Mathematics is a biennial meeting that fosters interactions between mathematics, theoretical computer science, and artificial intelligence.“ (Website ISAIM) Oliver Bendel will present selected prototypes of moral and immoral machines and will discuss a project planned for 2018. The GOODBOT is a chatbot that responds in a morally adequate way to the problems of its users. It’s based on the Verbot engine. The LIEBOT can lie systematically, using seven different strategies. It was written in Java; AIML was used as well. LADYBIRD is an animal-friendly robot vacuum cleaner that spares ladybirds and other insects. In this case, an annotated decision tree was translated into Java. The BESTBOT should be even better than the GOODBOT. Technically, everything is still open. The ISAIM conference will take place from 3 to 5 January 2018 in Fort Lauderdale, Florida. Further information is available at isaim2018.cs.virginia.edu/.
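
The remark that LADYBIRD's annotated decision tree was translated into Java suggests a very simple structure of nested conditionals; the sketch below is purely illustrative (sensor inputs, their interpretation and the actions are assumptions, not the project's actual code):

```java
// Hypothetical sketch: an annotated decision tree for an animal-friendly
// vacuum robot, translated into nested conditionals as described above.
public class DecisionTreeSketch {

    enum Action { CONTINUE, STOP_AND_WAIT, DRIVE_AROUND }

    /**
     * Annotation (informal): if something living might be in front of the robot,
     * never vacuum over it; prefer waiting or evading.
     */
    static Action decide(boolean objectDetected, boolean movementDetected, boolean colorSuggestsInsect) {
        if (!objectDetected) {
            return Action.CONTINUE;        // nothing in the way
        }
        if (movementDetected || colorSuggestsInsect) {
            return Action.STOP_AND_WAIT;   // possibly a ladybird or another insect
        }
        return Action.DRIVE_AROUND;        // presumably an inanimate obstacle
    }

    public static void main(String[] args) {
        System.out.println(decide(true, false, true));   // STOP_AND_WAIT
        System.out.println(decide(true, false, false));  // DRIVE_AROUND
        System.out.println(decide(false, false, false)); // CONTINUE
    }
}
```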

Fig.: What should she be able to do?

Digital Europe Working Group Conference

The Digital Europe Working Group Conference Robotics will take place on 8 November 2017 at the European Parliament in Brussels. The keynote address will be given by Mariya Gabriel, European Commissioner for Digital Economy and Society. The speakers of the first panel are Oliver Bendel (Professor of Information Systems, Information Ethics and Machine Ethics at the School of Business FHNW, via video conference), Anna Byhovskaya (policy and communications advisor, Trade Union Advisory Committee to the OECD) and Malcolm James (Senior Lecturer in Accounting & Taxation, Cardiff Metropolitan University). The third panel will be moderated by Mady Delvaux (Member of the European Parliament). The speaker is Giovanni Sartor (Professor of Legal Informatics and Legal Theory at the European University Institute). The poster can be downloaded here. Further information is available at www.socialistsanddemocrats.eu/events/sd-group-digital-europe-working-group-robotics.

Fig.: The Atomium in Brussels

Towards the BESTBOT

Machine ethics researches the morality of semiautonomous and autonomous machines. In the year 2013, the School of Business at the University of Applied Sciences and Arts Northwestern Switzerland FHNW realized a project for the implementation of a prototype called GOODBOT, a novel chatbot and a simple moral machine. One of its meta rules was that it should not lie unless not lying would hurt the user. It was a stand-alone solution, not linked with other systems and not internet- or web-based. In the LIEBOT project, the mentioned meta rule was reversed. This web-based chatbot, implemented in 2016, could lie systematically. It was an example of a simple immoral machine. A follow-up project in 2018 is going to develop the BESTBOT, considering the restrictions of the GOODBOT and the opportunities of the LIEBOT. The aim is to develop a machine that can detect problems of users of all kinds and can react in an adequate way. It should have textual, auditory and visual capabilities.

Fig.: The BESTBOT will be better than the GOODBOT

Conference on AI, Ethics, and Society

AAAI announced the launch of the AAAI/ACM Conference on AI, Ethics, and Society, to be co-located with AAAI-18, February 2-3, 2018 in New Orleans. The Call for Papers is available at http://www.aies-conference.com. October 31 is the deadline for submissions. „As AI is becoming more pervasive in our life, its impact on society is more significant and concerns and issues are raised regarding aspects such as value alignment, data bias and data policy, regulations, and workforce displacement. Only a multi-disciplinary and multi-stakeholder effort can find the best ways to address these concerns, including experts of various disciplines, such as AI, computer science, ethics, philosophy, economics, sociology, psychology, law, history, and politics.“ (AAAI information) The new conference complements and expands the classical AAAI Spring Symposia at Stanford University (including symposia like „AI for Social Good“ in 2017 or „AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents“ in 2018).

Fig.: AI and ethics could help society

Robophilosophy 2018

The conference „Robophilosophy 2018 – Envisioning Robots In Society: Politics, Power, And Public Space“ will take place in Vienna (February 14 – 17, 2018). According to the website, it has three main aims: it shall present interdisciplinary humanities research „in and on social robotics that can inform policy making and political agendas, critically and constructively“, investigate „how academia and the private sector can work hand in hand to assess benefits and risks of future production formats and employment conditions“ and explore how research in the humanities, including art and art research, in the social and human sciences, „can contribute to imagining and envisioning the potentials of future social interactions in the public space“ (Website Robophilosophy). Plenary speakers are Joanna Bryson (Department of Computer Science, University of Bath, UK), Alan Winfield (FET – Engineering, Design and Mathematics, University of the West of England, UK) and Catelijne Muller (Rapporteur on Artificial Intelligence, European Economic and Social Committee). The deadline for the submission of abstracts for papers and posters is October 31. More information is available via conferences.au.dk/robo-philosophy/.

Fig.: Reflections on robots

Reflections on Individual Synthetic Voices

The synthetization of voices, or speech synthesis, has been an object of interest for centuries. It is mostly realized with a text-to-speech system (TTS), an automaton that interprets and reads aloud. This system refers to text available, for instance, on a website or in a book, or entered via a popup menu on the website. Today, just a few minutes of samples are enough to imitate a speaker convincingly in all kinds of statements. The article „The Synthetization of Human Voices“ by Oliver Bendel (published on 26 July 2017) abstracts from actual products and actual technological realization. Rather, after a short historical outline of the synthetization of voices, exemplary applications of this kind of technology are gathered to promote its development, and potential applications are discussed critically so that they can be limited if necessary. The ethical and legal challenges should not be underestimated, in particular with regard to informational and personal autonomy and the trustworthiness of media. The article can be viewed via rdcu.be/uvxm.

Fig.: Can you hear my voice?

The Robot’s Love Affairs

PlayGround is a Spanish online magazine, founded in 2008, with a focus on culture, future and food. Astrid Otal asked the ethicist Oliver Bendel about the conference in London („Love and Sex with Robots“) and in general about sex robots and love dolls. One issue was: „In love, a person can suffer. But in this case, can robots make us suffer sentimentally?“ The reply: „Of course, they can make us suffer. By means of their body, body parts and limbs, and by means of their language capabilities. They can hurt us, they can kill us. They can offend us by using certain words and by telling the truth or the untruth. In my contribution for the conference proceedings, I ask this question: Is it possible to be unfaithful to the human love partner with a sex robot, and can a man or a woman be jealous because of the robot’s other love affairs? We can imagine how suffering can emerge in this context … But robots can also make us happy. Some years ago, we developed the GOODBOT, a chatbot which can detect problems of the user and escalate on several levels. On the highest level, it hands over an emergency number. It knows its limits.“ Some statements of the interview have been incorporated into the article „Última parada: después del sexo con autómatas, casarse con un Robot“ (February 11, 2017), which is available via www.playgroundmag.net/futuro/sexo-robots-matrimonio-legal-2050-realdolls_0_1918608121.html.
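
The escalation behaviour mentioned in the quote can be pictured as a small mapping from problem levels to reactions, with the emergency number handed over only at the highest level; the following minimal Java sketch is hypothetical (the levels, messages and the placeholder number are invented for illustration):

```java
// Hypothetical sketch of the multi-level escalation described above:
// the more serious the detected problem, the stronger the reaction,
// up to handing over an emergency number at the highest level.
public class EscalationSketch {

    // Placeholder only; the GOODBOT's actual number is not given in the post.
    private static final String EMERGENCY_NUMBER = "<national helpline number>";

    static String react(int problemLevel) {
        switch (problemLevel) {
            case 0:  return "I see. Tell me more.";
            case 1:  return "That sounds difficult. How do you feel about it?";
            case 2:  return "I am worried about you. Would you like to talk to a human?";
            default: return "Please call this emergency number: " + EMERGENCY_NUMBER;
        }
    }

    public static void main(String[] args) {
        for (int level = 0; level <= 3; level++) {
            System.out.println(level + ": " + react(level));
        }
    }
}
```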

Fig.: What about the robot’s love affairs?