New Round of the Bundeswettbewerb KI

The Bundeswettbewerb KI, held under the patronage of the Minister President of Baden-Württemberg, Winfried Kretschmann, is entering a new round. The website advertises it with the words: „Your ideas are wanted! Change the world with artificial intelligence and develop your own AI project. Implement your idea and use the methods of machine learning to do so. Let the projects from last year inspire you.“ (Website BWKI) According to the initiators and organizers, the competition is aimed at pupils at secondary schools; participation in the first year after leaving school is also possible. Materials and interviews on interesting topics and disciplines are available on Instagram. These include animal-machine interaction (along with approaches from machine ethics), which is explained by Prof. Dr. Oliver Bendel, for example in the posts „Wie sollen sich Maschinen gegenüber Tieren verhalten“ (14 March 2024), „Können Tiere und Maschinen Freunde werden?“ (17 March 2024), and „Schützt KI Igel vor dem Rasenmähertod?“ (21 March 2024). The teaser on the website reads: „Register yourselves, your team, and your idea by 2 June 2024. You can then complete your project by 15 September 2024. Who is in?“ (Website BWKI) More information via www.bw-ki.de.

Fig.: Animal-machine interaction can be one perspective

When Robots Speak Dialect

On January 29, 2024, the article „‚Ick bin een Berlina‘: dialect proficiency impacts a robot’s trustworthiness and competence evaluation“ was published in Frontiers in Robotics and AI. The authors are Katharina Kühne, Erika Herbold, Oliver Bendel, Yuefang Zhou, and Martin H. Fischer. With the exception of Oliver Bendel – who is a professor at the FHNW School of Business and an associated researcher in the PECoG group – all of them are members of the University of Potsdam. On the background, the paper says: „Robots are increasingly used as interaction partners with humans. Social robots are designed to follow expected behavioral norms when engaging with humans and are available with different voices and even accents. Some studies suggest that people prefer robots to speak in the user’s dialect, while others indicate a preference for different dialects.“ The following results are mentioned: „We found a positive relationship between participants’ self-reported Berlin dialect proficiency and trustworthiness in the dialect-speaking robot. Only when controlled for demographic factors, there was a positive association between participants’ dialect proficiency, dialect performance and their assessment of robot’s competence for the standard German-speaking robot. Participants’ age, gender, length of residency in Berlin, and device used to respond also influenced assessments. Finally, the robot’s competence positively predicted its trustworthiness.“ The article can be accessed at www.frontiersin.org/articles/10.3389/frobt.2023.1241519/full.
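
The reported associations are results of a statistical analysis. As a minimal sketch (not the authors’ code or data), the following Python snippet shows how such a relationship could be modeled: an ordinary least squares regression of a trustworthiness rating on self-reported dialect proficiency, controlling for demographic covariates. All variable names, scales, and the synthetic data are assumptions for illustration only.

```python
# Minimal sketch (assumed variables and synthetic data, not the study's material):
# OLS regression of robot trustworthiness on dialect proficiency with demographic controls.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 120
df = pd.DataFrame({
    "dialect_proficiency": rng.uniform(1, 7, n),   # self-reported, 1-7 scale (assumption)
    "age": rng.integers(18, 70, n),
    "gender": rng.choice(["f", "m"], n),
    "residency_years": rng.uniform(0, 40, n),
})
# Hypothetical outcome: trustworthiness rating of the dialect-speaking robot.
df["trustworthiness"] = 3 + 0.3 * df["dialect_proficiency"] + rng.normal(0, 1, n)

model = smf.ols(
    "trustworthiness ~ dialect_proficiency + age + C(gender) + residency_years",
    data=df,
).fit()
print(model.summary())
```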

Fig.: A Berlin robot (Image: Ideogram)

Artificial Intelligence & Animals

The online event „Artificial Intelligence & Animals“ took place on 16 September 2023. Speakers were Prof. Dr. Oliver Bendel (FHNW University of Applied Sciences and Arts Northwestern Switzerland), Yip Fai Tse (University Center for Human Values, Center for Information Technology Policy, Princeton University), and Sam Tucker (CEO VegCatalyst, AI-Powered Marketing, Melbourne). Panelists were Ian McDougall (Executive Vice President and General Counsel, LexisNexis London), Jamie McLaughlin (Animal Law Commission Vice President, UIA), and Joan Schaffner (Associate Professor of Law, George Washington University). Oliver Bendel „has been thinking on animal ethics since the 1980s and on information and machine ethics since the 1990s“. „Since 2012, he has been systematically researching machine ethics, combining it with animal ethics and animal welfare. With his changing teams, he develops animal-friendly robots and AI systems.“ (Website Eventbrite) Yip Fai Tse co-wrote the article „AI ethics: the case for including animals“ with Peter Singer. Sam Tucker is an animal rights activist.

Fig.: One topic was facial recognition for bears

AAAI Symposium on AI for Well-being

As part of the AAAI 2023 Spring Symposia in San Francisco, the symposium „Socially Responsible AI for Well-being“ is organized by Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan). The AAAI website states: „For our happiness, AI is not enough to be productive in exponential growth or economic/financial supremacies but should be socially responsible from the viewpoint of fairness, transparency, accountability, reliability, safety, privacy, and security. For example, AI diagnosis system should provide responsible results (e.g., a high-accuracy of diagnostics result with an understandable explanation) but the results should be socially accepted (e.g., data for AI (machine learning) should not be biased (i.e., the amount of data for learning should be equal among races and/or locations). Like this example, a decision of AI affects our well-being, which suggests the importance of discussing ‚What is socially responsible?‘ in several potential situations of well-being in the coming AI age.“ (Website AAAI) According to the organizers, the first perspective is „(Individually) Responsible AI“, which aims to clarify what kinds of mechanisms or issues should be taken into consideration to design Responsible AI for well-being. The second perspective is „Socially Responsible AI“, which aims to clarify what kinds of mechanisms or issues should be taken into consideration to implement social aspects in Responsible AI for well-being. More information via www.aaai.org/Symposia/Spring/sss23.php#ss09.
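
The quoted example about equally sized training data per group hints at a simple pre-training check. The following sketch is only an illustration of that idea; the group names and the tolerance are assumptions and are not taken from the symposium. It compares how many samples each group contributes and flags a clearly unbalanced dataset.

```python
# Minimal sketch of a pre-training balance check; group names and tolerance are illustrative.
from collections import Counter

def group_balance(labels, tolerance=0.2):
    """Return each group's share of the data and whether the ratio between the
    largest and smallest group stays within (1 + tolerance)."""
    counts = Counter(labels)
    total = sum(counts.values())
    shares = {group: count / total for group, count in counts.items()}
    ratio = max(counts.values()) / min(counts.values())
    return shares, ratio <= 1 + tolerance

# Example: one location dominates, so the check reports an imbalance.
shares, balanced = group_balance(["berlin", "berlin", "berlin", "munich", "hamburg"])
print(shares, "balanced:", balanced)
```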

Fig.: Golden Gate Bridge

Responsible AI

„HASLER RESPONSIBLE AI“ is a research program of the Hasler Foundation open to research institutions within the higher education sector or non-commercial research institutions outside the higher education sector. The foundation explains the goals of the program in a call for project proposals: „The HASLER RESPONSIBLE AI program will support research projects that investigate machine-learning algorithms and artificial intelligence systems whose results meet requirements on responsibility and trustworthiness. Projects are expected to seriously engage in the application of the new models and methods in scenarios that are relevant to society. In addition, projects should respect the interdisciplinary character of research in the area of RESPONSIBLE AI by involving the necessary expertise.“ (CfPP by Hasler Foundation) Deadline for submission of short proposals is 24 January 2021. More information at haslerstiftung.ch.

Fig.: Responsible AI

Care Robots with Sexual Assistance Functions?

The paper „Care Robots with Sexual Assistance Functions“ by Oliver Bendel, accepted at the AAAI 2020 Spring Symposium „Applied AI in Healthcare: Safety, Community, and the Environment“, can be accessed via arxiv.org/abs/2004.04428. From the abstract: „Residents in retirement and nursing homes have sexual needs just like other people. However, the semi-public situation makes it difficult for them to satisfy these existential concerns. In addition, they may not be able to meet a suitable partner or find it difficult to have a relationship for mental or physical reasons. People who live or are cared for at home can also be affected by this problem. Perhaps they can host someone more easily and discreetly than the residents of a health facility, but some elderly and disabled people may be restricted in some ways. This article examines the opportunities and risks that arise with regard to care robots with sexual assistance functions. First of all, it deals with sexual well-being. Then it presents robotic systems ranging from sex robots to care robots. Finally, the focus is on care robots, with the author exploring technical and design issues. A brief ethical discussion completes the article. The result is that care robots with sexual assistance functions could be an enrichment of the everyday life of people in need of care, but that we also have to consider some technical, design and moral aspects.“ Due to the outbreak of the COVID-19 pandemic, the physical meeting to be held at Stanford University was postponed. It will take place in November 2020 in Washington (AAAI 2020 Fall Symposium Series).

Fig.: Can robots complement or replace sexual relationships?

AI, ME, and MC

Time and again one hears, often from theologians, sometimes from philosophers, that machines are not autonomous, not intelligent, not moral, and so on. They transfer the concept they know from their own field to technical sciences such as computer science, artificial intelligence (AI), and machine ethics (which is technically oriented and works closely with AI and robotics). They do not acknowledge that every discipline can have (and usually does have) its own terms. At a conference in 2015, the President of the Bundestag, Prof. Dr. Norbert Lammert, a deeply religious man, berated the speakers, saying that machines were not autonomous because they had not given themselves any law. Computer science and robotics, however, do speak of autonomous systems and machines, and of course they may do so, as long as they explain what they mean by it. Such a clarification and appropriation of terms even stands at the beginning of every scientific activity, and the fact that the terms sound the same as those of other fields by no means implies that they mean, or must mean, the same thing. A new graphic by Prof. Dr. Oliver Bendel, which builds on earlier drafts, shows what the subject matter of the disciplines or fields of work of AI, machine ethics, and machine consciousness is, and makes terminological proposals for them. In essence, these fields aim to replicate, model, or simulate something in certain respects. Artificial intelligence thus manages to produce artificial intelligence, for instance dialogue systems or machines that solve certain problems. Whether these are „really“ intelligent is not a meaningful question, and it is no coincidence that the technical term uses the adjective „artificial“; here it would be even easier to understand than in the case of „autonomous“ that a „new“ meaning is involved (one that has, after all, been explained for more than 50 years).

Fig.: AI, ME, and MC

Stephen A. Schwarzman Centre Being Established

„The British elite university Oxford has received a donation of 150 million pounds (around 168 million euros) from US billionaire Stephen A. Schwarzman. The largest single donation in the history of the university is to fund the ‚Stephen A. Schwarzman Centre‘ for the humanities.“ (SPON, June 19, 2019) This was reported by Spiegel Online on June 19, 2019. It continues: „Among other things, the faculties of … history, languages, philosophy, music, and theology are to be brought together in the building. Around a quarter of all Oxford students are enrolled in these subjects. In addition, a new institute for ethics in the use of artificial intelligence is to be established there, as the university announced.“ (SPON, June 19, 2019) The focus appears to be on information and robot ethics. According to Spiegel, Schwarzman himself said that universities must help develop ethical principles for the rapid technological change. The origin of the funds is a matter of debate. More information via www.spiegel.de/lebenundlernen/uni/oxford-elite-uni-erhaelt-150-millionen-pfund-spende-a-1273161.html.

AI Love You

Using an interdisciplinary approach, the book „AI love you“ explores the emerging topics and rapid technological developments of robotics and artificial intelligence through the lens of the evolving role of sex robots, and how they should best be designed to serve human needs. „An international panel of authors provides the most up-to-date, evidence-based empirical research on the potential sexual applications of artificial intelligence. Early chapters discuss the objections to sexual activity with robots while also providing a counterargument to each objection. Subsequent chapters present the implications of robot sex as well as the security and data privacy issues associated with sexual interactions with artificial intelligence.“ (Information by Springer) Topics featured in this book include the Sexual Interaction Illusion Model; the personal companion system Harmony, designed by Realbotix; an exposition of the challenges of personal data control and protection when dealing with artificial intelligence; and the current and future technological possibilities of projecting three-dimensional holograms. Oliver Bendel is the author of the contribution on the latter topic, entitled „Hologram Girl“. The book is edited by Yuefang Zhou and Martin H. Fischer and will be published in summer 2019. More information via www.springer.com/gp/book/9783030197339.

Fig.: AI love you

The Effects of Voices

The University of Potsdam is dedicating current research to voices. The scientists – among them Dr. Yuefang Zhou and Katharina Kühne – are studying the first impression during communication. The survey website says: „The current study will last approximately 20 minutes. You will be asked some questions about the voice you hear. Please answer them honestly and spontaneously. There are no right or wrong answers; we are interested in your subjective perception. Just choose one out of the suggested alternatives.“ Prof. Dr. Oliver Bendel, FHNW School of Business, produced three samples and donated them to the project. „Your responses will be treated confidentially and your anonymity will be ensured. Your responses cannot be identified and related to you as an individual, if you choose to leave your e-mail address at the end of the study this cannot be linked back to your responses. All responses will be compiled together and analysed as a group.“ The questionnaire can be accessed via www.soscisurvey.de/impress/ (link no longer valid).

Fig.: The effects of voices

Smart Machines and Safe Animals

„Within a few decades, autonomous and semi-autonomous machines will be found throughout Earth’s environments, from homes and gardens to parks and farms and so-called working landscapes – everywhere, really, that humans are found, and perhaps even places we’re not. And while much attention is given to how those machines will interact with people, far less is paid to their impacts on animals.“ (Anthropocene, October 10, 2018) „Machines can disturb, frighten, injure, and kill animals,“ says Oliver Bendel, an information systems professor at the University of Applied Sciences and Arts Northwestern Switzerland, according to the magazine. „Animal-friendly machines are needed.“ (Anthropocene, October 10, 2018) In the article „Will smart machines be kind to animals?“ the magazine Anthropocene deals with animal-friendly machines and introduces the work of the scientist. It is based on his paper „Towards animal-friendly machines“ (Paladyn) and an interview conducted by journalist Brandon Keim with Oliver Bendel. More via www.anthropocenemagazine.org/2018/10/animal-friendly-ai/.
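
The underlying design idea of animal-friendly machines can be summarized as: detect the animal, then stop or evade. The snippet below is a minimal sketch under that assumption and is not taken from Oliver Bendel’s actual prototypes; the labels, threshold, and actions are illustrative.

```python
# Minimal sketch of the "detect animal, then stop" idea; not taken from actual prototypes.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # class name from an (assumed) image recognition model
    confidence: float  # value between 0 and 1

ANIMAL_LABELS = ("hedgehog", "cat", "bird")  # illustrative
THRESHOLD = 0.6                              # illustrative

def animal_friendly_step(detections):
    """Return 'stop' if any animal is detected with sufficient confidence,
    otherwise 'continue'."""
    for d in detections:
        if d.label in ANIMAL_LABELS and d.confidence >= THRESHOLD:
            return "stop"
    return "continue"

print(animal_friendly_step([Detection("hedgehog", 0.8)]))  # -> stop
print(animal_friendly_step([Detection("grass", 0.9)]))     # -> continue
```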

Fig.: A cat, too, can be safe, even on the street

Machine Ethics and Philosophy

In the „Handbuch Maschinenethik“, edited by Oliver Bendel, a contribution by Catrin Misselhorn entitled „Maschinenethik und Philosophie“ („Machine Ethics and Philosophy“) was published at the beginning of July 2018. The abstract: „Machine ethics is a field of research at the interface of philosophy and computer science. This contribution deals, on the one hand, with the basic philosophical concepts and presuppositions of machine ethics. These are of particular importance because they raise questions that in part fundamentally cast doubt on the possibility of machine ethics. On the other hand, the various roles of philosophy at different levels within machine ethics are addressed, and the methodological implementation of this interdisciplinary research program is laid out.“ (Website Springer) An overview of the contributions, which are being published electronically on an ongoing basis, can be found via link.springer.com/referencework/10.1007/978-3-658-17484-2 … The printed book will appear in a few months.

Fig.: Machine ethics helps shape machines

International Workshop on Ethics and AI

The international workshop „Understanding AI & Us“ will take place in Berlin (Alexander von Humboldt Institute for Internet and Society) on 30 June 2018. It is hosted by Joanna Bryson (MIT), Janina Loh (University of Vienna), Stefan Ullrich (Weizenbaum Institute Berlin) and Christian Djeffal (IoT and Government, Berlin). Birgit Beck, Oliver Bendel and Pak-Hang Wong are invited to the panel on the ethical challenges of artificial intelligence. The aim of the workshop is to bring together experts from the field of research reflecting on AI. The event is funded by the Volkswagen Foundation (VolkswagenStiftung). The project „Understanding AI & Us“ furthers and deepens the understanding of artificial intelligence (AI) in an interdisciplinary way. „This is done in order to improve the ways in which AI-systems are invented, designed, developed, and criticised.“ (Invitation letter) „In order to achieve this, we form a group that merges different abilities, competences and methods. The aim is to provide space for innovative and out-of-the-box-thinking that would be difficult to pursue in ordinary academic discourse in our respective disciplines. We are seeking ways to merge different disciplinary epistemological standpoints in order to increase our understanding of the development of AI and its impact upon society.“ (Invitation letter)

Fig.: Combat robots could also be an issue

Machine Ethics and Artificial Intelligence

The young discipline of machine ethics refers to the morality of semi-autonomous and autonomous machines, robots, bots or software systems. They become special moral agents, and depending on their behavior, we can call them moral or immoral machines. They decide and act in situations where they are left to their own devices, either by following pre-defined rules, by comparing their current situations to case models, or as machines capable of learning and deriving rules. Moral machines have been known for some years, at least as simulations and prototypes. Machine ethics works closely with artificial intelligence and robotics. The term machine morality can be used similarly to the term artificial intelligence. Oliver Bendel has developed a graphic that illustrates the relationship between machine ethics and artificial intelligence. He presented it in 2018 at conferences at Stanford University (AAAI Spring Symposia), in Fort Lauderdale (ISAIM), and in Vienna (Robophilosophy).
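
Of the three approaches named above, the first (following pre-defined rules) is the easiest to illustrate. The sketch below is a hypothetical example, not a specific moral machine from the literature: an ordered list of rules is evaluated, and the first matching rule determines the action.

```python
# Minimal sketch of a rule-based moral machine; the rules are invented for illustration.
RULES = [
    # (condition on the situation, action), evaluated in order of priority
    (lambda s: s.get("human_in_danger", False), "alert_and_stop"),
    (lambda s: s.get("animal_detected", False), "pause"),
]
DEFAULT_ACTION = "proceed"

def decide(situation: dict) -> str:
    """Return the action prescribed by the first matching pre-defined rule."""
    for condition, action in RULES:
        if condition(situation):
            return action
    return DEFAULT_ACTION

print(decide({"animal_detected": True}))  # -> pause
print(decide({}))                         # -> proceed
```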

Fig.: The terms of machine ethics and artificial intelligence

AAAI Spring Symposium on AI and Society

The tentative schedule of the AAAI 2018 Spring Symposium on AI and Society at Stanford University (26 – 28 March 2018) has been published. On Tuesday, Emma Brunskill from Stanford University, Philip C. Jackson („Toward Beneficial Human-Level AI … and Beyond“) and Andrew Williams („The Potential Social Impact of the Artificial Intelligence Divide“) will each give a talk. Oliver Bendel will give two talks, one on „The Uncanny Return of Physiognomy“ and one on „From GOODBOT to BESTBOT“. From the description on the website: „Artificial Intelligence has become a major player in today’s society and that has inevitably generated a proliferation of thoughts and sentiments on several of the related issues. Many, for example, have felt the need to voice, in different ways and through different channels, their concerns on: possible undesirable outcomes caused by artificial agents, the morality of their use in specific sectors, such as the military, and the impact they will have on the labor market. The goal of this symposium is to gather a diverse group of researchers from many disciplines and to ignite a scientific discussion on this topic.“

Fig.: The symposium is about AI and society

Robophilosophy

„Robophilosophy 2018 – Envisioning Robots In Society: Politics, Power, And Public Space“ is the third event in the Robophilosophy Conference Series which focusses on robophilosophy, a new field of interdisciplinary applied research in philosophy, robotics, artificial intelligence and other disciplines. The main organizers are Prof. Dr. Mark Coeckelbergh, Dr. Janina Loh and Michael Funk. Plenary speakers are Joanna Bryson (Department of Computer Science, University of Bath, UK), Hiroshi Ishiguro (Intelligent Robotics Laboratory, Osaka University, Japan), Guy Standing (Basic Income Earth Network and School of Oriental and African Studies, University of London, UK), Catelijne Muller (Rapporteur on Artificial Intelligence, European Economic and Social Committee), Robert Trappl (Head of the Austrian Research Institute for Artificial Intelligence, Austria), Simon Penny (Department of Art, University of California, Irvine), Raja Chatila (IEEE Global Initiative for Ethical Considerations in AI and Automated Systems, Institute of Intelligent Systems and Robotics, Pierre and Marie Curie University, Paris, France), Josef Weidenholzer (Member of the European Parliament, domains of automation and digitization) and Oliver Bendel (Institute for Information Systems, FHNW University of Applied Sciences and Arts Northwestern Switzerland). The conference will take place from 14 to 17 February 2018 in Vienna. More information via conferences.au.dk/robo-philosophy/.

Fig.: Robophilosophy in Vienna