Machine ethics is a young discipline concerned with the morality of semi-autonomous and autonomous machines, robots, bots and software systems. These systems become special moral agents, and depending on their behavior we can call them moral or immoral machines. They decide and act in situations where they are left to their own devices, either by following predefined rules, by comparing their current situation to stored case models, or, as machines capable of learning, by deriving rules themselves. Moral machines have existed for some years, at least as simulations and prototypes. Machine ethics works closely with artificial intelligence and robotics. The term machine morality can be used in a similar way to the term artificial intelligence. Oliver Bendel has developed a graphic that illustrates the relationship between machine ethics and artificial intelligence. He presented it in 2018 at conferences at Stanford University (AAAI Spring Symposia), in Fort Lauderdale (ISAIM) and in Vienna (Robophilosophy).
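The three decision mechanisms mentioned above (predefined rules, case comparison, learned rules) can be illustrated with a minimal sketch. This is a toy example, not taken from any of the systems discussed here; the situations, rules and cases are invented for illustration only.

```python
# Toy sketch of two of the decision mechanisms named above:
# (1) following predefined rules, (2) comparing the current
# situation to stored case models. All values are invented.

# 1. Predefined rule: a fixed if-then policy.
def rule_based(obstacle: str) -> str:
    # Hypothetical rule: always brake for living beings.
    return "brake" if obstacle in {"human", "animal"} else "continue"

# 2. Case comparison: look up the current situation in stored cases.
CASES = {
    ("human", "wet_road"): "brake",
    ("debris", "dry_road"): "continue",
}

def case_based(obstacle: str, road: str) -> str:
    # Fall back to the cautious option when no stored case matches.
    return CASES.get((obstacle, road), "brake")

print(rule_based("animal"))              # brake
print(case_based("debris", "dry_road"))  # continue
```

A learning machine would replace the hand-written rule or case table with a model derived from data; the interface (situation in, action out) stays the same.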
Fig.: The terms of machine ethics and artificial intelligence
Oliver Bendel was invited by the Green European Foundation to the second edition of the Green Salon on robotics and artificial intelligence in Vienna on 12 February 2018. „The Green Salon is an invitation-only event for the Green family and independent experts and thinkers from across Europe, to discuss important topics that will shape the future of the European Union. While research and industry in Europe and beyond have achieved immense progress in recent years, the public and political debate on the moral and legal implications of the use and further development of these new technologies is still in its infancy. A challenging situation, which needs to alarm as well as motivate Greens to meaningfully shape the debate on how we can make sure emerging technologies serve humans appropriately, while remaining under their full control. In particular, the impact of automation on job markets, and of new technologies in general on the very nature and future of work, are at the core of the discussion. Beyond simple adaptation discourses of mainstream media and other political families, the Green Salon aims at taking the debate further for the Greens and their partners.“ (Invitation Letter of the Green European Foundation) The Green European Foundation is a European-level political foundation funded by the European Parliament.
Fig.: To Vienna!
AAAI announced the launch of the AAAI/ACM Conference on AI, Ethics, and Society, to be co-located with AAAI-18, February 2-3, 2018 in New Orleans. The Call for Papers is available at http://www.aies-conference.com. October 31 is the deadline for submissions. „As AI is becoming more pervasive in our life, its impact on society is more significant and concerns and issues are raised regarding aspects such as value alignment, data bias and data policy, regulations, and workforce displacement. Only a multi-disciplinary and multi-stakeholder effort can find the best ways to address these concerns, including experts of various disciplines, such as AI, computer science, ethics, philosophy, economics, sociology, psychology, law, history, and politics.“ (AAAI information) The new conference complements and expands the classical AAAI Spring Symposia at Stanford University (including symposia like „AI for Social Good“ in 2017 or „AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents“ in 2018).
Fig.: AI and ethics could help society
The synthetization of voices, or speech synthesis, has been an object of interest for centuries. It is mostly realized with a text-to-speech system (TTS), an automaton that interprets text and reads it aloud. The system processes text that is available, for instance, on a website or in a book, or that is entered via a popup menu on a website. Today, just a few minutes of sample recordings are enough to imitate a speaker convincingly in all kinds of statements. The article „The Synthetization of Human Voices“ by Oliver Bendel (published on 26 July 2017) abstracts from actual products and their technological realization. Rather, after a short historical outline of the synthetization of voices, it gathers exemplary applications of this technology to promote its development, and discusses potential applications critically in order to be able to limit them if necessary. The ethical and legal challenges should not be underestimated, in particular with regard to informational and personal autonomy and the trustworthiness of media. The article can be viewed via rdcu.be/uvxm.
Fig.: Can you hear my voice?
On January 12, 2017, the Legal Affairs Committee urged the EU Commission to put forward EU rules for the fields of robotics and artificial intelligence, to settle issues such as compliance with ethical standards and liability for accidents involving self-driving cars. The media reported on this on television, on the radio and in newspapers. According to the Parliament’s website, rapporteur Mady Delvaux said: „A growing number of areas of our daily lives are increasingly affected by robotics. In order to address this reality and to ensure that robots are and will remain in the service of humans, we urgently need to create a robust European legal framework.“ (Website European Parliament) The members of the European Parliament push „the Commission to consider creating a European agency for robotics and artificial intelligence to supply public authorities with technical, ethical and regulatory expertise“ (Website European Parliament). „They also propose a voluntary ethical conduct code to regulate who would be accountable for the social, environmental and human health impacts of robotics and ensure that they operate in accordance with legal, safety and ethical standards.“ (Website European Parliament) To be more concrete, roboticists could include „kill“ switches so that robots can be turned off in emergencies. This raises questions about, for example, which robots should be equipped with such switches, and which persons should be allowed to „kill“ them. More information via www.europarl.europa.eu/news/en/news-room/20170110IPR57613/robots-legal-affairs-committee-calls-for-eu-wide-rules.
Fig.: A robot reads the ethical conduct code
„Artificial intelligence (AI) raises a number of ethical and political challenges in the present and near term, with applications such as driverless cars and search engines and potential issues ranging from job disruption to privacy violations. Over a longer term, if AI becomes as or more intelligent than humans, other governance issues such as safety and control may increase in importance. What policy approaches make sense across different issues and timeframes?“ (Website European Parliament) These are the initial words of a description of the workshop „Robotics and Artificial Intelligence – Ethical Issues and Regulatory approach“, organised by the Policy Department of the European Parliament. The first part „will focus on basic ethical and policy questions raised by the development of robotics and AI on the basis of presentations by experts“ (Website European Parliament). According to the description, this will be followed by a discussion with national parliamentarians on what the legislator should do and on which level, with the European Parliament’s draft legislative initiative report on „Civil Law Rules on Robotics“ as a basis. Further information can be found on the European Parliament’s website (www.europarl.europa.eu).
Automated machines and (semi-)autonomous systems are spreading more and more. They make decisions independently, including decisions of a moral nature. At the same time, humans and machines are merging: people measure themselves during their activities (quantified self), record their lives (lifelogging), augment and enhance body and mind, and become cyborgs. The European Parliament in Brussels addresses these developments on 8 September 2016 from 9.30 a.m. to 1.00 p.m. The title of the event, hosted by Jan Philipp Albrecht, is „Merging of man and machines: questions of ethics in dealing with emerging technology“. The newsletter DIGITAL AGENDA provides the following information: „The Working Group Green Robotics would like to invite you to a public hearing on ‚Merging of man and machines: questions of ethics in dealing with emerging technology‘. With this and further discussions we would like to develop a position on how society should respond to questions like How will our lives and our society change with the increasing fusion with modern technology? What role have politics and law in this context? Is there a need for regulation and if so, how? How can human rights be addressed?“ In the track „Ethics & Society: Examples of how our lives, values and society will change“, three experts speak on these topics: Yvonne Hofstetter (Director of Teramark Technologies GmbH), Prof. Dr. Oliver Bendel (Professor at the School of Business FHNW) and Constanze Kurz (spokeswoman of the CCC). The track „Politics & Law: Examples of how we do/can debate and regulate this field“ features Juho Heikkilä (Head of Unit at DG Connect) and Dr. Hielke Hijmans (Special Advisor at the Offices of the European Data Protection Supervisor). Two further speakers are Enno Park (Chairman of Cyborgs e.V.) and Dana Lewis (company founder).
Further information via www.janalbrecht.eu/termine/merging-of-man-and-machines-questions-of-ethics-in-dealing-with-emerging-technology.html.
Fig.: The LIEBOT is an immoral machine
Prior to the hearing in the Parliament of the Federal Republic of Germany on 22 June 2016 from 4 to 6 pm, the invited experts had submitted their written comments on ethical and legal issues concerning the use of robots and artificial intelligence. The video of the hearing can be accessed via www.bundestag.de/dokumente/textarchiv/2016/kw25-pa-digitale-agenda/427996. The documents of Oliver Bendel (School of Business FHNW), Eric Hilgendorf (University of Würzburg), Norbert Elkman (Fraunhofer IPK) and Ryan Calo (University of Washington) were published in July on the website of the German Bundestag. Answering the question „Apart from legal questions, for example concerning responsibility and liability, where will ethical questions, in particular, also arise with regard to the use of artificial intelligence or as a result of the aggregation of information and algorithms?“ the US scientist explained: „Robots and artificial intelligence raise just as many ethical questions as legal ones. We might ask, for instance, what sorts of activities we can ethically outsource to machines. Does Germany want to be a society that relegates the use of force, the education of children, or eldercare to robots? There are also serious challenges around the use of artificial intelligence to make material decisions about citizens in terms of minimizing bias and providing for transparency and accountability – issues already recognized to an extent by the EU Data Directive.“ (Website German Bundestag) All documents (most of them in German) are available via www.bundestag.de/bundestag/ausschuesse18/a23/anhoerungen/fachgespraech/428268.
The call for papers for „Machine Ethics and Machine Law“ has been released. This international conference will take place in Cracow (Poland) from 18 to 19 November 2016. According to the announcement, the deadline for abstract submissions is 9 September 2016. The following information is provided on the website: „Artificial Intelligence systems have become an important part of our everyday lives. What used to be a subject of science fiction novels and movies has trespassed into the realm of facts. Many decision making processes are delegated to machines and these decisions have direct impact on humans and societies at large. This leads directly to the question: What are the ethical and legal limitations of those artificial agents? Issues such as liability, moral and legal responsibility (in different contexts: from autonomous cars to military drones) are coming into the forefront. It is clear that some constraints should be imposed; both the unintended and often unforeseen negative consequences of the technological progress, as well as speculative and frightening views of the future portrayed in the works of fiction, leave no doubt that there ought to be some guidelines. The problem is to work out these constraints in a reasonable manner so that machine can be a moral and legal agent, or else argue that it is impossible and why.“ (conference website) The conference is a follow-up of the AAAI Spring Symposium on „Ethical and Moral Considerations in Non-Human Agents“ which was held in March 2016 at Stanford University. Further information via machinelaw.philosophyinscience.com.
Fig.: Moral machines are also relevant in farming
From 21 to 23 March 2016, the workshop „Ethical and Moral Considerations in Non-Human Agents“ takes place at Stanford University, within the framework of the AAAI Spring Symposia. Keynote speakers are Ron Arkin (Georgia Institute of Technology), Maja Matarić (University of Southern California) and Luís Moniz Pereira (Universidade Nova de Lisboa). The workshop is organized by Bipin Indurkhya (Jagiellonian University, Kraków) and Georgi Stojanov (The American University of Paris). The program committee includes Peter Asaro (The New School, New York) and Patrick Lin (California Polytechnic State University). One of the scientific talks will be given by Oliver Bendel (School of Business FHNW, Basel, Olten and Brugg-Windisch). From the abstract of „Annotated Decision Trees for Simple Moral Machines“: „This article proposes an approach for creating annotated decision trees, and specifies their central components. The focus is on simple moral machines. The chances of such models are illustrated with the example of a self-driving car that is friendly to humans and animals. Finally the advantages and disadvantages are discussed and conclusions are drawn.“ Further information via sites.google.com/site/ethicalnonhumanagents/.
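The idea of an annotated decision tree can be sketched in a few lines. This is a hypothetical illustration under our own assumptions, not the tree from Bendel's paper: each leaf carries an action plus a moral annotation justifying it, in the spirit of the human- and animal-friendly self-driving car mentioned in the abstract.

```python
# Toy sketch (assumption, not the paper's actual model): a decision
# tree for a self-driving car in which every leaf is annotated with
# a moral justification for the chosen action.

TREE = {
    "question": "obstacle ahead?",
    "yes": {
        "question": "is it a living being?",
        "yes": {"action": "brake",
                "annotation": "avoid harming humans and animals"},
        "no": {"action": "swerve if safe, else brake",
               "annotation": "protect passengers and property"},
    },
    "no": {"action": "continue", "annotation": "no moral conflict"},
}

def decide(node, answers):
    # Walk the tree using yes/no answers; return (action, annotation).
    while "action" not in node:
        node = node["yes"] if answers[node["question"]] else node["no"]
    return node["action"], node["annotation"]

action, why = decide(TREE, {"obstacle ahead?": True,
                            "is it a living being?": True})
print(action, "-", why)  # brake - avoid harming humans and animals
```

The annotations make the machine's choices explainable: every decision can be traced back to an explicit moral reason attached to the branch taken.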
Fig.: A robot car disguised as a police car