„Robophilosophy 2018 – Envisioning Robots In Society: Politics, Power, And Public Space“ is the third event in the Robophilosophy Conference Series which focusses on robophilosophy, a new field of interdisciplinary applied research in philosophy, robotics, artificial intelligence and other disciplines. The main organizers are Prof. Dr. Mark Coeckelbergh, Dr. Janina Loh and Michael Funk. Plenary speakers are Joanna Bryson (Department of Computer Science, University of Bath, UK), Hiroshi Ishiguro (Intelligent Robotics Laboratory, Osaka University, Japan), Guy Standing (Basic Income Earth Network and School of Oriental and African Studies, University of London, UK), Catelijne Muller (Rapporteur on Artificial Intelligence, European Economic and Social Committee), Robert Trappl (Head of the Austrian Research Institute for Artificial Intelligence, Austria), Simon Penny (Department of Art, University of California, Irvine), Raja Chatila (IEEE Global Initiative for Ethical Considerations in AI and Automated Systems, Institute of Intelligent Systems and Robotics, Pierre and Marie Curie University, Paris, France), Josef Weidenholzer (Member of the European Parliament, domains of automation and digitization) and Oliver Bendel (Institute for Information Systems, FHNW University of Applied Sciences and Arts Northwestern Switzerland). The conference will take place from 14 to 17 February 2018 in Vienna. More information via conferences.au.dk/robo-philosophy/.
Fig.: Robophilosophy in Vienna
A special session „Formalising Robot Ethics“ takes place within the ISAIM conference in Fort Lauderdale (3 to 5 January 2018). The program is now available and can be viewed at http://isaim2018.cs.virginia.edu/program.html. „Practical Challenges in Explicit Ethical Machine Reasoning“ is a talk by Louise Dennis and Michael Fisher, „Contextual Deontic Cognitive Event Calculi for Ethically Correct Robots“ a contribution by Selmer Bringsjord, Naveen Sundar G., Bertram Malle and Matthias Scheutz. Oliver Bendel will present „Selected Prototypes of Moral Machines“. A few words from the summary: „The GOODBOT is a chatbot that responds morally adequate to problems of the users. It’s based on the Verbot engine. The LIEBOT can lie systematically, using seven different strategies. It was written in Java, whereby AIML was used. LADYBIRD is an animal-friendly robot vacuum cleaner that spares ladybirds and other insects. In this case, an annotated decision tree was translated into Java. The BESTBOT should be even better than the GOODBOT.“
Prof. Dr. Oliver Bendel was invited to give a lecture at the ISAIM special session „Formalising Robot Ethics“. „The International Symposium on Artificial Intelligence and Mathematics is a biennial meeting that fosters interactions between mathematics, theoretical computer science, and artificial intelligence.“ (Website ISAIM) Oliver Bendel will present selected prototypes of moral and immoral machines and will discuss a project planned for 2018. The GOODBOT is a chatbot that responds morally adequately to users’ problems. It is based on the Verbot engine. The LIEBOT can lie systematically, using seven different strategies. It was written in Java, with AIML being used as well. LADYBIRD is an animal-friendly robot vacuum cleaner that spares ladybirds and other insects. In this case, an annotated decision tree was translated into Java. The BESTBOT should be even better than the GOODBOT. Technically, everything is still open. The ISAIM conference will take place from 3 to 5 January 2018 in Fort Lauderdale, Florida. Further information is available at isaim2018.cs.virginia.edu/.
Fig.: What should she be able to do?
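The annotated decision tree behind LADYBIRD is not reproduced here in detail; a minimal sketch of such an insect-sparing control loop might look like the following, where the sensor checks, names and actions are illustrative assumptions rather than the actual project code (which was written in Java):

```python
# Hypothetical sketch of a LADYBIRD-style decision tree: each node
# checks a sensor reading and either continues cleaning, waits, or
# evades. Names and checks are invented for illustration.

def decide_action(color_is_ladybird_like: bool, movement_detected: bool) -> str:
    """Walk a two-level decision tree and return the next action."""
    if color_is_ladybird_like:
        # Annotation: ladybirds must be spared - never vacuum here.
        return "pause_and_evade"
    if movement_detected:
        # Annotation: an unknown moving object could be an insect.
        return "wait_and_recheck"
    # No insect indicators: keep cleaning.
    return "continue_cleaning"
```

For example, `decide_action(True, False)` returns `"pause_and_evade"`, mirroring how an annotated decision tree maps sensor states to morally motivated actions.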
The Digital Europe Working Group Conference Robotics will take place on 8 November 2017 at the European Parliament in Brussels. The keynote address will be given by Mariya Gabriel, European Commissioner for Digital Economy and Society. The speakers of the first panel are Oliver Bendel (Professor of Information Systems, Information Ethics and Machine Ethics at the School of Business FHNW, via video conference), Anna Byhovskaya (policy and communications advisor, Trade Union Advisory Council of the OECD) and Malcolm James (Senior Lecturer in Accounting & Taxation, Cardiff Metropolitan University). The third panel will be moderated by Mady Delvaux (Member of the European Parliament). The speaker is Giovanni Sartor (Professor of Legal Informatics and Legal Theory at the European University Institute). The poster can be downloaded here. Further information is available at www.socialistsanddemocrats.eu/events/sd-group-digital-europe-working-group-robotics.
Fig.: The Atomium in Brussels
Machine ethics researches the morality of semi-autonomous and autonomous machines. In 2013, the School of Business at the University of Applied Sciences and Arts Northwestern Switzerland FHNW realized a project to implement a prototype called GOODBOT, a novelty chatbot and a simple moral machine. One of its meta rules was that it should not lie unless not lying would hurt the user. It was a stand-alone solution, not linked with other systems and not internet- or web-based. In the LIEBOT project, the mentioned meta rule was reversed. This web-based chatbot, implemented in 2016, could lie systematically. It was an example of a simple immoral machine. A follow-up project in 2018 is going to develop the BESTBOT, considering the restrictions of the GOODBOT and the opportunities of the LIEBOT. The aim is to develop a machine that can detect problems of users of all kinds and react in an adequate way. It should have textual, auditory and visual capabilities.
Fig.: The GOODBOT
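The meta rule described above – do not lie unless not lying would hurt the user – can be rendered as a small decision function. This is a hedged sketch only; the actual GOODBOT was built on the Verbot engine, and its rule set is more elaborate:

```python
def choose_reply(truthful_reply: str, softened_reply: str,
                 truth_would_hurt_user: bool) -> str:
    """GOODBOT-style meta rule (illustrative): prefer the truth,
    deviate from it only when the truth would hurt the user."""
    if truth_would_hurt_user:
        # The one sanctioned exception: protect the user.
        return softened_reply
    return truthful_reply
```

The LIEBOT project reversed exactly this rule, which is why the two prototypes can be seen as mirror images of each other.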
AAAI announced the launch of the AAAI/ACM Conference on AI, Ethics, and Society, to be co-located with AAAI-18, February 2-3, 2018 in New Orleans. The Call for Papers is available at http://www.aies-conference.com. October 31 is the deadline for submissions. „As AI is becoming more pervasive in our life, its impact on society is more significant and concerns and issues are raised regarding aspects such as value alignment, data bias and data policy, regulations, and workforce displacement. Only a multi-disciplinary and multi-stakeholder effort can find the best ways to address these concerns, including experts of various disciplines, such as AI, computer science, ethics, philosophy, economics, sociology, psychology, law, history, and politics.“ (AAAI information) The new conference complements and expands the classical AAAI Spring Symposia at Stanford University (including symposia like „AI for Social Good“ in 2017 or „AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents“ in 2018).
Fig.: AI and ethics could help society
PlayGround is a Spanish online magazine, founded in 2008, with a focus on culture, future and food. Astrid Otal asked the ethicist Oliver Bendel about the conference in London („Love and Sex with Robots“) and in general about sex robots and love dolls. One question was: „In love, a person can suffer. But in this case, can robots make us suffer sentimentally?“ The reply: „Of course, they can make us suffer. By means of their body, body parts and limbs, and by means of their language capabilities. They can hurt us, they can kill us. They can offend us by using certain words and by telling the truth or the untruth. In my contribution for the conference proceedings, I ask this question: Is it possible to be unfaithful to the human love partner with a sex robot, and can a man or a woman be jealous because of the robot’s other love affairs? We can imagine how suffering can emerge in this context … But robots can also make us happy. Some years ago, we developed the GOODBOT, a chatbot which can detect problems of the user and escalate on several levels. On the highest level, it hands over an emergency number. It knows its limits.“ Some statements of the interview have been incorporated in the article „Última parada: después del sexo con autómatas, casarse con un Robot“ (February 11, 2017), which is available via www.playgroundmag.net/futuro/sexo-robots-matrimonio-legal-2050-realdolls_0_1918608121.html.
Fig.: What about the robot’s love affairs?
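The multi-level escalation mentioned in the interview – detecting user problems and, at the highest level, handing over an emergency number – can be sketched as a simple ladder. The number of levels and the response texts are assumptions for illustration, not taken from the actual GOODBOT:

```python
# Illustrative escalation ladder; the real GOODBOT uses its own
# problem-detection rules. Levels and responses here are invented.
ESCALATION_LEVELS = [
    "normal conversation",
    "express concern and ask a follow-up question",
    "recommend talking to a trusted person",
    "hand over an emergency number",
]

def escalate(problem_score: int) -> str:
    """Map a detected problem severity to a response strategy,
    clamping the score to the available levels."""
    level = min(max(problem_score, 0), len(ESCALATION_LEVELS) - 1)
    return ESCALATION_LEVELS[level]
```

The point of such a ladder is the last sentence of the quote: the chatbot knows its limits and hands the user over to humans when the detected problem exceeds them.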
„Artificial intelligence (AI) raises a number of ethical and political challenges in the present and near term, with applications such as driverless cars and search engines and potential issues ranging from job disruption to privacy violations. Over a longer term, if AI becomes as or more intelligent than humans, other governance issues such as safety and control may increase in importance. What policy approaches make sense across different issues and timeframes?“ (Website European Parliament) These are the initial words of a description of the workshop „Robotics and Artificial Intelligence – Ethical Issues and Regulatory approach“, organised by the Policy Department of the European Parliament. The first part „will focus on basic ethical and policy questions raised by the development of robotics and AI on the basis of presentations by experts“ (Website European Parliament). According to the description, this will be followed by a discussion with national parliamentarians on what the legislator should do and on which level, with the European Parliament’s draft legislative initiative report on „Civil Law Rules on Robotics“ as a basis. Further information can be found on the European Parliament’s website (www.europarl.europa.eu).
Springer invites scientists to contribute to the Journal on Vehicle Routing Algorithms. Editors-in-chief are Christian Prins, Troyes University of Technology, France, and Marc Sevaux, University of South-Brittany, France. The publishing house declares that the new journal „is an excellent domain for testing new approaches in modeling, optimization, artificial intelligence, computational intelligence, and simulation“ (Mailing, 2 September 2016). „Articles published in the Journal on Vehicle Routing Algorithms will present solutions, methods, algorithms, case studies, or software, attracting the interest of academic and industrial researchers, practitioners, and policymakers.“ (Mailing, 2 September 2016) According to the website, a vehicle routing problem (VRP) „arises whenever a set of spatially disseminated locations must be visited by mobile objects to perform tasks“ (Website Springer). „The mobile objects may be motorized vehicles, pedestrians, drones, mobile sensors, or manufacturing robots; the space covered may range from silicon chips or PCBs to aircraft wings, warehouses, cities, or countries; and the applications include traditional domains, such as freight and passenger transportation, services, logistics, and manufacturing, and also modern issues such as autonomous cars and the Internet of Things (IoT), and the profound environmental and societal implications of achieving efficiencies in resources, power, labor, and time.“ (Website Springer) The moral decisions of cars, drones and vacuum cleaners can also be investigated. More information via www.springer.com.
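As a toy instance of the problem class the journal covers, a single-vehicle tour through a set of locations can be approximated with a nearest-neighbour heuristic. The locations and Euclidean distance function below are made up for illustration; real VRP solvers handle capacities, time windows and multiple vehicles:

```python
import math

def nearest_neighbour_tour(locations):
    """Greedy routing heuristic: start at the first location and
    always visit the closest unvisited one next. Fast but not optimal."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    unvisited = list(locations[1:])
    tour = [locations[0]]
    while unvisited:
        nxt = min(unvisited, key=lambda p: dist(tour[-1], p))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour
```

For example, `nearest_neighbour_tour([(0, 0), (5, 5), (1, 0), (1, 1)])` visits the two nearby points before the distant one. Heuristics of this kind are exactly the baseline against which the journal's "new approaches in modeling, optimization, artificial intelligence" would be tested.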
Machine ethics researches the morality of semi-autonomous and autonomous machines. In 2013 and 2014, the School of Business at the University of Applied Sciences and Arts Northwestern Switzerland FHNW implemented a prototype of the GOODBOT, a novelty chatbot and a simple moral machine. One of its meta rules was that it should not lie unless not lying would hurt the user. In a follow-up project in 2016, the LIEBOT (aka LÜGENBOT) was developed as an example of a Munchausen machine. The student Kevin Schwegler, supervised by Prof. Dr. Oliver Bendel and Prof. Dr. Bradley Richards, used the Eclipse Scout framework. The whitepaper, which was published on July 25, 2016 via liebot.org, outlines the background and the development of the LIEBOT. It describes – after a short introduction to the history and theory of lying and automatic lying (including the term of Munchausen machines) – the principles and pre-defined standards the bad bot is able to consider. Then it is discussed how Munchausen machines as immoral machines can contribute to constructing and optimizing moral machines. After all, the LIEBOT project is a substantial contribution to machine ethics as well as a critical review of electronic language-based systems and services, in particular of virtual assistants and chatbots.
Fig.: A role model for the LIEBOT
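One straightforward lying strategy – plausibly among the simpler of a Munchausen machine's repertoire – is negating a true statement. The toy sketch below is purely illustrative and not taken from the project; the actual LIEBOT was written in Java on the Eclipse Scout framework, with considerably richer strategies:

```python
def negate_statement(statement: str) -> str:
    """Toy Munchausen move: flip 'X is Y' to 'X is not Y' and back.
    Illustrative only; a real lying machine needs semantic knowledge."""
    if " is not " in statement:
        return statement.replace(" is not ", " is ", 1)
    if " is " in statement:
        return statement.replace(" is ", " is not ", 1)
    return statement  # no simple negation point found
```

For instance, `negate_statement("The sky is blue")` yields "The sky is not blue". The interesting point for machine ethics is the reverse direction noted in the whitepaper: understanding how such immoral machines work helps in constructing and hardening moral ones.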