Tags: Ethics

Stephen A. Schwarzman Centre in the Making

"The British elite university Oxford has received a donation of 150 million pounds (around 168 million euros) from US billionaire Stephen A. Schwarzman. With the largest single donation in the history of the university, the 'Stephen A. Schwarzman Centre' for the humanities is to be built." (SPON, June 19, 2019) This was reported by Spiegel Online on June 19, 2019. It goes on to say: "Among other things, the faculties for … history and linguistics, philosophy, music, and theology are to be brought together in the building. Around a quarter of all Oxford students are enrolled in these subjects. In addition, a new institute for ethics in dealing with artificial intelligence is to be established there, as the university announced." (SPON, June 19, 2019) The focus seems to be on information and robot ethics. According to Spiegel Online, Schwarzman himself said that universities must help to develop ethical principles for the rapid technological change. The origin of the funds is a matter of debate. Further information via

Workshop for a Free and Beautiful World

Dr. Mathilde Noual (Freie Universität Berlin) and Prof. Dr. Oliver Bendel (School of Business FHNW) are organizing a workshop on the social implications of artificial intelligence (AI) and robotics. They are looking for constructive proposals of technological and conceptual utopias, of counter-cultures and counter-systems offering strategies for preserving privacy, individuality, and freedom in a technological world, for going beyond AI's present limitations and frustrations, and for emphasising the beauty of the world and of humans' way of accessing it (with a high degree of nuance, contextuality, subjectivity, adaptability, and actuality). The tracks are: the territory today: core limitations and prospects of AI; technologies and approaches against surveillance technologies (examples: the virtual burka; the hacked social credit system); technologies and approaches for an intact environment (examples: AI and robots for clean waters and seas; animals with weapons for self-defence); technologies and approaches for a new policy (example: AI as a president); technologies and approaches for shared knowledge and education (example: open research solutions). The workshop will take place on the 29th and 30th of June 2019 in Berlin, at the Weizenbaum Institute. The CfP is addressed exclusively to the invited persons.

Fig.: Free and beautiful

Moral Competence for Social Robots

At the end of 2018, the article entitled "Learning How to Behave: Moral Competence for Social Robots" by Bertram F. Malle and Matthias Scheutz was published in the "Handbuch Maschinenethik" ("Handbook Machine Ethics") (ed.: Oliver Bendel). An excerpt from the abstract: "We describe a theoretical framework and recent research on one key aspect of robot ethics: the development and implementation of a robot's moral competence." The authors propose "that moral competence consists of five elements, two constituents (moral norms and moral vocabulary) and three activities (moral judgment, moral action, and moral communication)". "A robot's computational representations of social and moral norms is a prerequisite for all three moral activities. However, merely programming in advance the vast network of human norms is impossible, so new computational learning algorithms are needed that allow robots to acquire and update the context-specific and graded norms relevant to their domain of deployment. Moral vocabulary is needed primarily for moral communication, which expresses moral judgments of others' violations and explains one's own moral violations – to justify them, apologize, or declare intentions to do better. Current robots have at best rudimentary moral competence, but with improved learning and reasoning they may begin to show the kinds of capacities that humans will expect of future social robots." (Abstract) An overview of the contributions that have been published electronically since 2017 can be found on
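The graded, context-specific norms mentioned in the abstract can be illustrated with a minimal sketch. The contexts, norms, weights, and update rule below are hypothetical illustrations, not the authors' implementation:

```python
# Hypothetical sketch: graded norms that vary by deployment context.
# A strength of 1.0 means the norm fully applies, 0.0 not at all.
norms = {
    "hospital": {"be quiet": 0.9, "respect privacy": 1.0},
    "warehouse": {"be quiet": 0.2, "respect privacy": 0.5},
}

def norm_strength(context, norm):
    """Look up how strongly a norm applies in a given context."""
    return norms.get(context, {}).get(norm, 0.0)

def update_norm(context, norm, observed, rate=0.1):
    """Nudge a norm's strength toward an observed value (crude 'learning')."""
    current = norm_strength(context, norm)
    norms.setdefault(context, {})[norm] = current + rate * (observed - current)

# The robot observes that quietness matters more in the hospital than assumed.
update_norm("hospital", "be quiet", observed=1.0)
print(round(norm_strength("hospital", "be quiet"), 2))  # -> 0.91
```

The point of the sketch is only that norms are neither binary nor global: they are looked up per context and adjusted gradually from experience, as the abstract demands of future learning algorithms.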

The Spy who Loved and Nursed Me

Robots in the health sector are important, valuable innovations and supplements. As therapy and nursing robots, they take care of us and come close to us. In addition, other service robots are widespread in nursing homes, retirement homes, and hospitals. With the help of their sensors, all of them are able to recognize us, to examine and classify us, and to evaluate our behavior and appearance. Some of these robots will pass on our personal data to humans and machines. They invade our privacy and challenge our informational autonomy. This is a problem for institutions and people that needs to be solved. The article "The Spy who Loved and Nursed Me: Robots and AI Systems in Healthcare from the Perspective of Information Ethics" by Oliver Bendel presents robot types in the health sector, along with their technical possibilities, including their sensors and their artificial intelligence capabilities. Against this background, moral problems are discussed, especially from the perspective of information ethics and with respect to privacy and informational autonomy. One of the results shows that such robots can improve personal autonomy, but informational autonomy is endangered in an area where privacy has a special importance. At the end, solutions are proposed from various disciplines and perspectives. The article was published in Telepolis on December 17, 2018 and can be accessed via

Fig.: What have I got to hide?

Smart Machines and Safe Animals

"Within a few decades, autonomous and semi-autonomous machines will be found throughout Earth's environments, from homes and gardens to parks and farms and so-called working landscapes – everywhere, really, that humans are found, and perhaps even places we're not. And while much attention is given to how those machines will interact with people, far less is paid to their impacts on animals." (Anthropocene, October 10, 2018) "Machines can disturb, frighten, injure, and kill animals," says Oliver Bendel, an information systems professor at the University of Applied Sciences and Arts Northwestern Switzerland, according to the magazine. "Animal-friendly machines are needed." (Anthropocene, October 10, 2018) In the article "Will smart machines be kind to animals?" the magazine Anthropocene deals with animal-friendly machines and introduces the work of the scientist. It is based on his paper "Towards animal-friendly machines" (Paladyn) and an interview conducted by journalist Brandon Keim with Oliver Bendel. More via

Fig.: Even a cat can be safe on the street

In Love with Azuma

The Gatebox was given to some persons and institutions in Japan some time ago. At the end of July 2018, the company announced that it is now going into series production. In fact, the machine, which resembles a coffee machine, can now be ordered on the website. The anime girl Azuma Hikari lives in a glass "coffee pot". She is a hologram connected to a dialogue system and an AI system. She communicates with her owner even when he is out and about (by sending messages to his smartphone), and she learns. SRF visited a young man who lives with the Gatebox. "I love my wife," Akihiko Kondo is quoted as saying. The station writes: "He can't hug or kiss her. The young Japanese man is together with a hologram." (SRF) Anyone who thinks that the love for manga and anime girls is a purely Japanese phenomenon is mistaken. In Dortmund's BorDoll (from "Bordell" and "Doll" or "Love Doll"), the corresponding love dolls are in high demand. Here, too, it is young men who are shy of real women who have developed a desire in the tradition of Pygmalion. Akihiko Kondo dreams that one day he can go out into the world with Azuma Hikari and hold her hand. But there is a long way to go, and the anime girl will still need her little prison for a long time.

Fig.: In love with an anime girl

Machine Ethics and Artificial Intelligence

The young discipline of machine ethics refers to the morality of semi-autonomous and autonomous machines, robots, bots, or software systems. They become special moral agents, and depending on their behavior, we can call them moral or immoral machines. They decide and act in situations where they are left to their own devices, either by following pre-defined rules, by comparing their current situations to case models, or as machines capable of learning and deriving rules. Moral machines have been known for some years, at least as simulations and prototypes. Machine ethics works closely with artificial intelligence and robotics. The term "machine morality" can be used analogously to the term "artificial intelligence". Oliver Bendel has developed a graphic that illustrates the relationship between machine ethics and artificial intelligence. He presented it at conferences at Stanford University (AAAI Spring Symposia), in Fort Lauderdale (ISAIM), and in Vienna (Robophilosophy) in 2018.
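The rule-following variant mentioned above can be sketched in a few lines. The situations, rules, and default action are hypothetical illustrations, not a description of any actual prototype:

```python
# Hypothetical sketch of a rule-following moral machine: it checks
# pre-defined condition/action rules against the current situation.
def decide(situation, rules, default="proceed"):
    """Return the action of the first rule whose condition matches."""
    for condition, action in rules:
        if condition(situation):
            return action
    return default  # no moral rule applies

# Pre-defined rules, each pairing a condition with a prescribed action.
rules = [
    (lambda s: s.get("obstacle") == "human", "stop"),
    (lambda s: s.get("obstacle") == "animal", "stop"),
    (lambda s: s.get("obstacle") == "object", "evade"),
]

print(decide({"obstacle": "animal"}, rules))  # -> stop
print(decide({}, rules))                      # -> proceed
```

A case-based or learning machine, by contrast, would compare situations to stored cases or derive such rules itself rather than have them fixed in advance.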

Fig.: The terms of machine ethics and artificial intelligence

Green Salon around Robotics and AI

Oliver Bendel was invited by the Green European Foundation to the second edition of the Green Salon around robotics and artificial intelligence in Vienna on the 12th of February 2018. „The Green Salon is an invitation-only event for the Green family and independent experts and thinkers from across Europe, to discuss important topics that will shape the future of the European Union. While research and industry in Europe and beyond have achieved immense progress in recent years, the public and political debate on the moral and legal implications of the use and further development of these new technologies is still in its infancy. A challenging situation, which needs to alarm as well as motivate Greens to meaningfully shape the debate on how we can make sure emerging technologies serve humans appropriately, while remaining under their full control. In particular, the impact of automation on job markets, and of new technologies in general on the very nature and future of work, are at the core of the discussion. Beyond simple adaptation discourses of mainstream media and other political families, the Green Salon aims at taking the debate further for the Greens and their partners.“ (Invitation Letter of the Green European Foundation) The Green European Foundation is a European-level political foundation funded by the European Parliament.

Fig.: To Vienna!

Conference on AI, Ethics, and Society

AAAI announced the launch of the AAAI/ACM Conference on AI, Ethics, and Society, to be co-located with AAAI-18, February 2-3, 2018 in New Orleans. The Call for Papers is available at … October 31 is the deadline for submissions. "As AI is becoming more pervasive in our life, its impact on society is more significant and concerns and issues are raised regarding aspects such as value alignment, data bias and data policy, regulations, and workforce displacement. Only a multi-disciplinary and multi-stakeholder effort can find the best ways to address these concerns, including experts of various disciplines, such as AI, computer science, ethics, philosophy, economics, sociology, psychology, law, history, and politics." (AAAI information) The new conference complements and expands the classical AAAI Spring Symposia at Stanford University (including symposia like "AI for Social Good" in 2017 or "AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents" in 2018).

Fig.: AI and ethics could help society

Reflections on Individual Synthetic Voices

The synthetization of voices, or speech synthesis, has been an object of interest for centuries. It is mostly realized with a text-to-speech system (TTS), an automaton that interprets and reads aloud. This system refers to text available, for instance, on a website or in a book, or entered via a popup menu on the website. Today, just a few minutes of samples are enough to imitate a speaker convincingly in all kinds of statements. The article "The Synthetization of Human Voices" by Oliver Bendel (published on 26 July 2017) abstracts from actual products and actual technological realization. Rather, after a short historical outline of the synthetization of voices, exemplary applications of this kind of technology are gathered to promote its development, and potential applications are discussed critically in order to be able to limit them if necessary. The ethical and legal challenges should not be underestimated, in particular with regard to informational and personal autonomy and the trustworthiness of media. The article can be viewed via

Fig.: Can you hear my voice?