Tags: Ethics

Moral Competence for Social Robots

At the end of 2018, the article entitled „Learning How to Behave: Moral Competence for Social Robots“ by Bertram F. Malle and Matthias Scheutz was published in the „Handbuch Maschinenethik“ („Handbook Machine Ethics“) (ed.: Oliver Bendel). An excerpt from the abstract: „We describe a theoretical framework and recent research on one key aspect of robot ethics: the development and implementation of a robot’s moral competence.“ The authors propose „that moral competence consists of five elements, two constituents (moral norms and moral vocabulary) and three activities (moral judgment, moral action, and moral communication)“. „A robot’s computational representations of social and moral norms is a prerequisite for all three moral activities. However, merely programming in advance the vast network of human norms is impossible, so new computational learning algorithms are needed that allow robots to acquire and update the context-specific and graded norms relevant to their domain of deployment. Moral vocabulary is needed primarily for moral communication, which expresses moral judgments of others’ violations and explains one’s own moral violations – to justify them, apologize, or declare intentions to do better. Current robots have at best rudimentary moral competence, but with improved learning and reasoning they may begin to show the kinds of capacities that humans will expect of future social robots.“ (Abstract) An overview of the contributions that have been published electronically since 2017 can be found on link.springer.com/referencework/10.1007/978-3-658-17484-2.
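
The idea of context-specific and graded norms lends itself to a simple computational reading. The sketch below is not taken from Malle and Scheutz; it is merely a minimal illustration of how such norms might be represented and updated from observed disapproval, and all names, contexts, and weights are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Norm:
    """A context-specific, graded norm (hypothetical representation)."""
    context: str      # e.g. "hospital ward at night"
    action: str       # e.g. "speak loudly"
    strength: float   # graded weight in [0, 1]; higher = stronger prohibition

class NormStore:
    def __init__(self):
        self.norms = {}

    def judge(self, context, action):
        """Return the current strength of the norm against an action in a context."""
        norm = self.norms.get((context, action))
        return norm.strength if norm else 0.0

    def update(self, context, action, observed_disapproval, rate=0.1):
        """Nudge the norm's strength toward the observed level of disapproval."""
        norm = self.norms.setdefault((context, action), Norm(context, action, 0.0))
        norm.strength += rate * (observed_disapproval - norm.strength)

# Usage: the robot repeatedly observes strong disapproval of loud speech on a ward at night.
store = NormStore()
for _ in range(10):
    store.update("hospital ward at night", "speak loudly", observed_disapproval=0.9)
print(store.judge("hospital ward at night", "speak loudly"))
```

Such a store would only cover the "moral norms" constituent; judgment, action, and communication would have to build on it, as the chapter argues.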

The Spy who Loved and Nursed Me

Robots in the health sector are important and valuable innovations and supplements. As therapy and nursing robots, they take care of us and come close to us. In addition, other service robots are widespread in nursing and retirement homes and in hospitals. With the help of their sensors, all of them are able to recognize us, to examine and classify us, and to evaluate our behavior and appearance. Some of these robots will pass on our personal data to humans and machines. They invade our privacy and challenge our informational autonomy. This is a problem that the institutions and the people involved need to solve. The article „The Spy who Loved and Nursed Me: Robots and AI Systems in Healthcare from the Perspective of Information Ethics“ by Oliver Bendel presents robot types in the health sector, along with their technical possibilities, including their sensors and their artificial intelligence capabilities. Against this background, moral problems are discussed, especially from the perspective of information ethics and with respect to privacy and informational autonomy. One of the results shows that such robots can improve personal autonomy, but that informational autonomy is endangered in an area where privacy is of special importance. At the end, solutions are proposed from various disciplines and perspectives. The article was published in Telepolis on December 17, 2018 and can be accessed via www.heise.de/tp/features/The-Spy-who-Loved-and-Nursed-Me-4251919.html.

Fig.: What have I got to hide?

Smart Machines and Safe Animals

„Within a few decades, autonomous and semi-autonomous machines will be found throughout Earth’s environments, from homes and gardens to parks and farms and so-called working landscapes – everywhere, really, that humans are found, and perhaps even places we’re not. And while much attention is given to how those machines will interact with people, far less is paid to their impacts on animals.“ (Anthropocene, October 10, 2018) „Machines can disturb, frighten, injure, and kill animals,“ says Oliver Bendel, an information systems professor at the University of Applied Sciences and Arts Northwestern Switzerland, according to the magazine. „Animal-friendly machines are needed.“ (Anthropocene, October 10, 2018) In the article „Will smart machines be kind to animals?“ the magazine Anthropocene deals with animal-friendly machines and introduces the work of the scientist. The article is based on his paper „Towards animal-friendly machines“ (Paladyn) and an interview conducted by journalist Brandon Keim with Oliver Bendel. More via www.anthropocenemagazine.org/2018/10/animal-friendly-ai/.

Fig.: A cat, too, can be safe, even on the street

In Love with Azuma

The Gatebox was given to some persons and institutions in Japan some time ago. The company announced at the end of July 2018 that it is now going into series production. In fact, the machine, which resembles a coffee machine, can now be ordered on the website. The anime girl Azuma Hikari lives in a glass „coffee pot“. She is a hologram connected to a dialogue system and an AI system. She communicates with her owner even when he is out and about (by sending messages to his smartphone), and she learns. SRF visited a young man who lives with the Gatebox. „I love my wife,“ Kondo Akihiko is quoted as saying. The station writes: „He can’t hug or kiss her. The Japanese guy is with a hologram.“ (SRF) Anyone who thinks that love for manga and anime girls is a purely Japanese phenomenon is mistaken. In Dortmund’s BorDoll (a name formed from „Bordell“, the German word for brothel, and „Doll“ or „Love Doll“), the corresponding love dolls are in high demand. Here, too, it is young men shy of real girls who have developed a desire in the tradition of Pygmalion. Kondo Akihiko dreams that one day he will be able to go out into the world with Azuma Hikari and hold her hand. But it is a long way to go, and the anime girl will still need her little prison for a long time.

Fig.: In love with an anime girl

Machine Ethics and Artificial Intelligence

The young discipline of machine ethics refers to the morality of semi-autonomous and autonomous machines, robots, bots, or software systems. These become special moral agents, and depending on their behavior, we can call them moral or immoral machines. They decide and act in situations where they are left to their own devices, either by following pre-defined rules, by comparing their current situation with case models, or as machines capable of learning and deriving rules. Moral machines have been known for some years, at least as simulations and prototypes. Machine ethics works closely with artificial intelligence and robotics. The term machine morality can be used in a similar way to the term artificial intelligence. Oliver Bendel has developed a graphic that illustrates the relationship between machine ethics and artificial intelligence. He presented it in 2018 at conferences at Stanford University (AAAI Spring Symposia), in Fort Lauderdale (ISAIM), and in Vienna (Robophilosophy).
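
The first of the decision mechanisms mentioned above, following pre-defined rules, can be illustrated in a few lines. The sketch below does not reproduce any particular moral machine or the graphic mentioned; the rules and situation attributes are invented for illustration.

```python
# Minimal sketch of a rule-following moral machine (illustrative rules only).
RULES = [
    # (condition, verdict) pairs, checked in order of priority
    (lambda s: s["harm_to_humans"], "refuse action"),
    (lambda s: s["deceives_user"], "refuse action"),
    (lambda s: True, "permit action"),   # default rule
]

def decide(situation):
    """Return the verdict of the first rule whose condition matches the situation."""
    for condition, verdict in RULES:
        if condition(situation):
            return verdict

# Usage: a service robot deciding whether to carry out a requested action.
print(decide({"harm_to_humans": True, "deceives_user": False}))   # refuse action
print(decide({"harm_to_humans": False, "deceives_user": False}))  # permit action
```

Case-based and learning machines would replace the fixed rule list with a comparison against stored cases or with rules derived from training data.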

Fig.: The terms of machine ethics and artificial intelligence

Green Salon on Robotics and AI

Oliver Bendel was invited by the Green European Foundation to the second edition of the Green Salon on robotics and artificial intelligence in Vienna on 12 February 2018. „The Green Salon is an invitation-only event for the Green family and independent experts and thinkers from across Europe, to discuss important topics that will shape the future of the European Union. While research and industry in Europe and beyond have achieved immense progress in recent years, the public and political debate on the moral and legal implications of the use and further development of these new technologies is still in its infancy. A challenging situation, which needs to alarm as well as motivate Greens to meaningfully shape the debate on how we can make sure emerging technologies serve humans appropriately, while remaining under their full control. In particular, the impact of automation on job markets, and of new technologies in general on the very nature and future of work, are at the core of the discussion. Beyond simple adaptation discourses of mainstream media and other political families, the Green Salon aims at taking the debate further for the Greens and their partners.“ (Invitation Letter of the Green European Foundation) The Green European Foundation is a European-level political foundation funded by the European Parliament.

Fig.: To Vienna!

Conference on AI, Ethics, and Society

AAAI announced the launch of the AAAI/ACM Conference on AI, Ethics, and Society, to be co-located with AAAI-18, February 2-3, 2018 in New Orleans. The Call for Papers is available at http://www.aies-conference.com. October 31 is the deadline for submissions. „As AI is becoming more pervasive in our life, its impact on society is more significant and concerns and issues are raised regarding aspects such as value alignment, data bias and data policy, regulations, and workforce displacement. Only a multi-disciplinary and multi-stakeholder effort can find the best ways to address these concerns, including experts of various disciplines, such as AI, computer science, ethics, philosophy, economics, sociology, psychology, law, history, and politics.“ (AAAI information) The new conference complements and expands the classical AAAI Spring Symposia at Stanford University (including symposia like „AI for Social Good“ in 2017 or „AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents“ in 2018).

Fig.: AI and ethics could help society

Reflections on Individual Synthetic Voices

The synthetization of voices, or speech synthesis, has been an object of interest for centuries. It is mostly realized with a text-to-speech system (TTS), an automaton that interprets text and reads it aloud. Such a system draws on text that is available, for instance, on a website or in a book, or that is entered via a popup menu on a website. Today, just a few minutes of voice samples are enough to imitate a speaker convincingly in all kinds of statements. The article „The Synthetization of Human Voices“ by Oliver Bendel (published on 26 July 2017) abstracts from actual products and their technological realization. Rather, after a short historical outline of the synthetization of voices, it gathers exemplary applications of this kind of technology in order to promote its development, and discusses potential applications critically in order to be able to limit them if necessary. The ethical and legal challenges should not be underestimated, in particular with regard to informational and personal autonomy and the trustworthiness of media. The article can be viewed via rdcu.be/uvxm.
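
How a text-to-speech system reads a given text aloud can be shown in a few lines of code. The sketch below uses the open source library pyttsx3 as one possible engine; it is not the technology discussed in the article, merely an illustration of the basic TTS principle, and the chosen rate is arbitrary.

```python
# Minimal text-to-speech sketch using pyttsx3 (pip install pyttsx3).
import pyttsx3

engine = pyttsx3.init()            # pick the platform's default speech engine
engine.setProperty("rate", 150)    # speaking rate in words per minute
text = "This sentence is read aloud by a text-to-speech system."
engine.say(text)                   # queue the utterance
engine.runAndWait()                # block until speaking is finished
```

Imitating a specific speaker, as discussed in the article, additionally requires a voice model trained on samples of that speaker, which such off-the-shelf engines do not provide.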

Fig.: Can you hear my voice?

Kill Switches for Robots

EU rules for the fields of robotics and artificial intelligence, to settle issues such as compliance with ethical standards and liability for accidents involving self-driving cars, should be put forward by the EU Commission, urged the Legal Affairs Committee on January 12, 2017. Television, radio, and newspapers have reported on this. According to the Parliament’s website, rapporteur Mady Delvaux said: „A growing number of areas of our daily lives are increasingly affected by robotics. In order to address this reality and to ensure that robots are and will remain in the service of humans, we urgently need to create a robust European legal framework.“ (Website European Parliament) The members of the European Parliament push „the Commission to consider creating a European agency for robotics and artificial intelligence to supply public authorities with technical, ethical and regulatory expertise“ (Website European Parliament). „They also propose a voluntary ethical conduct code to regulate who would be accountable for the social, environmental and human health impacts of robotics and ensure that they operate in accordance with legal, safety and ethical standards.“ (Website European Parliament) To be more concrete, roboticists could include „kill“ switches so that robots can be turned off in emergencies. This poses questions about, for example, which robots should be equipped with such switches and which persons should be able to „kill“ them. More information via www.europarl.europa.eu/news/en/news-room/20170110IPR57613/robots-legal-affairs-committee-calls-for-eu-wide-rules.
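
What such a „kill“ switch could look like at the software level can be sketched in a few lines. The example below is a generic emergency-stop pattern, not part of the Parliament’s proposal; the toy robot, its control loop, and the timing are invented for illustration.

```python
import threading
import time

class Robot:
    """Toy robot whose control loop checks an emergency-stop flag on every cycle."""

    def __init__(self):
        self.emergency_stop = threading.Event()

    def kill(self):
        """The 'kill' switch: request an immediate, safe shutdown."""
        self.emergency_stop.set()

    def run(self):
        while not self.emergency_stop.is_set():
            # ... one cycle of sensing and acting would happen here ...
            time.sleep(0.1)
        print("Emergency stop received, actuators disabled.")

# Usage: trigger the kill switch from another thread after one second.
robot = Robot()
threading.Timer(1.0, robot.kill).start()
robot.run()
```

A real emergency stop would of course also have to act at the hardware level, independently of the software that it is meant to override.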

Fig.: A robot reads the ethical conduct code

Workshop on Robotics and Artificial Intelligence

„Artificial intelligence (AI) raises a number of ethical and political challenges in the present and near term, with applications such as driverless cars and search engines and potential issues ranging from job disruption to privacy violations. Over a longer term, if AI becomes as or more intelligent than humans, other governance issues such as safety and control may increase in importance. What policy approaches make sense across different issues and timeframes?“ (Website European Parliament) These are the initial words of a description of the workshop „Robotics and Artificial Intelligence – Ethical Issues and Regulatory approach“, organised by the Policy Department of the European Parliament. The first part „will focus on basic ethical and policy questions raised by the development of robotics and AI on the basis of presentations by experts“ (Website European Parliament). According to the description, this will be followed by a discussion with national parliamentarians on what the legislator should do and on which level, with the European Parliament’s draft legislative initiative report on „Civil Law Rules on Robotics“ as a basis. Further information can be found on the European Parliament’s website (www.europarl.europa.eu).