Before self-driving cars can be released onto the streets, several challenges have to be resolved, and not just of a technical nature. The development of autonomous vehicles also raises ethical questions. In this interview, Prof. Christoph Lütge, holder of the Chair of Business Ethics at the Technical University of Munich, talks about the opportunities and risks inherent in AI and autonomous driving and explains how he teaches his students about ethical behavior.
Ethicists face a vast array of questions on this subject, and their answers to everyday questions are often surprisingly pragmatic. During the pandemic, as a member of the Bavarian Board of Ethics (Bayerischer Ethikrat), you criticized hard lockdowns, arguing that the collateral damage was too high. That sounds a bit harsh at first. How easy is it to weigh opportunities against risks as a basis for a guide to moral behavior?
Maybe we should start by clarifying what ethics actually is: risk management on multiple fronts. And what ethics is not: namely, complying with principles at all costs. This applies in particular to the field of applied ethics, which covers topics such as artificial intelligence, autonomous driving, and business ethics. As ethicists, our job is to weigh the risks against one another and to look at the big picture. It is not enough to single out just one aspect, as has often been the case during the coronavirus pandemic, where the medical aspects have been prioritized. However, many other disciplines besides medicine deal with the measures, their proportionality, and the collateral damage: the social sciences, economics, and ethics, for example. In theory, this might sound somewhat detached from reality, but it becomes much more straightforward as we get more specific.
You head the Institute for Ethics in Artificial Intelligence (IEAI). There, researchers from the fields of medicine, science, and engineering work together with sociologists and ethicists in interdisciplinary teams. What are the areas of research at IEAI?
The projects are separated into different research clusters. One of them is devoted to the subject of Artificial Intelligence, Mobility, and Safety, which includes autonomous driving. Another area investigates AI-based decision-making tools for ethical questions in clinical settings. In one major field of investigation, we explore a range of topics related to AI and sustainability: for example, in agriculture, water management, and biodiversity. Currently, we are setting up a focus area to research how we can design AI at the workplace responsibly so that it gains acceptance.
What are the risks associated with AI?
There is a whole host of risks that are stressed time and again in public debates. We consider risks from various perspectives: One perspective looks at the purely technical risks related to safety, data protection, and the robustness of algorithms. Another perspective regards the fairness of algorithms as well as their explainability and transparency, since many people see AI as a black box and are worried because they do not comprehend how decisions are made. And some fear they will lose their autonomy. But despite any risks, we must not lose sight of the ethical and economic opportunities of AI.
Let us take a closer look at autonomous driving: From an ethical standpoint, does it make sense to permit driverless cars?
Roughly five years ago, in the first international Ethics Commission on Automated and Connected Driving, we underscored one specific point: the ethical opportunity consists of preventing many accidents and saving human lives. This is a clear ethical advantage that arises even if vehicles are not fully autonomous but merely highly automated.
What is being investigated in the project on the ethics of autonomous driving at IEAI?
In this project, we are collaborating directly with the Chair of Automotive Technology and working on specific tasks such as trajectory planning systems. In this case, we are frequently faced with detailed decisions, for example, the question of how far away a truck should stay from other vehicles or cyclists. In order to calculate exact distances, we carry out studies that can then be used for the purpose of responsible programming.
In 2017, the German Ethics Commission on Automated and Connected Driving set forth several rules: Are there any significant differences in other countries around the world?
Over the last few years, various ethical principles have been defined for AI everywhere in the world. There is not much variance across the abstract principles; I would estimate that 80 to 85 percent of them are the same. Nevertheless, they might differ in the fine print. Let us take a look at the issue of vehicle-to-vehicle distances: I visited Delhi some time ago. The distances between vehicles there are much smaller than in Germany. This can affect the programming. It starts to get interesting when a vehicle crosses a national border: a situation that would never arise in Japan, occurs rather seldom in China, but happens quite frequently in Europe. This is why I am convinced that we need uniform directives, which we have been waiting for in vain for years now.
In addition to research, you also teach: How can you help students understand ethical behavior? Or to put it another way: How can you teach students to develop ethical algorithms?
Allow me to elaborate on this a little. As the holder of the Chair of Business Ethics, eight years ago I was able to introduce business ethics as a core subject for students of business administration. In my experience, it is not a matter of preaching morality and explaining, "here is a list of ethical principles and this is how you should apply them." I always point out that you also have to be aware of the difficulties that stand in the way of practicing ethics. In business administration, for example, these could be constraints or cost pressure. We do not have any compulsory classes yet on how to deal with digital issues and AI. However, the president of the Technical University of Munich has presented human-centered engineering as his vision, thus laying the foundation for the introduction of ethics in technical disciplines. Human-centered engineering involves incorporating parts of the humanities and social sciences into technical disciplines. One objective is to impart more competency to students by raising awareness of ethical issues. Engineers and computer scientists should be aware of the fact that their actions have an ethical dimension.
When will we see fully autonomous vehicles on the road, and to what extent will they be capable of making good and moral decisions?
If we are talking about self-driving cars according to the current level 4 (high driving automation) or level 5 (full driving automation) specifications, then that may take some time. However, today we already have quite extensive applications for certain purposes that work very well. In response to the question of how such systems could be rolled out broadly onto the streets, I believe that this is more of an ethical and legal issue and less of a technical one, since a great deal is already feasible from a technical perspective. Several advances were made in US legislation in the last year or two. As a result, the first driverless delivery vehicles, for example, are now out and about on the streets. We must not underestimate these advances; unfortunately, they often take place outside of Germany. I think it was a mistake for German car manufacturers to postpone their research on autonomous driving. In ten years, I think we will see considerably more automation on our roads. Especially in Germany, we have a tendency to set the bar very high, saying that we cannot possibly permit autonomous driving until we are prepared for every conceivable eventuality. We need to abandon this everything-or-nothing mindset.
The Governance Lab (The GovLab), the NYU Tandon School of Engineering, the Global AI Ethics Consortium (GAIEC), the Center for Responsible AI @ NYU (R/AI), and the Institute for Ethics in Artificial Intelligence (IEAI) at the Technical University of Munich (TUM) have jointly launched a free online course: AI Ethics: Global Perspectives. Designed for a global audience, it conveys the breadth and depth of the ongoing interdisciplinary debate on AI ethics and seeks to bring together diverse perspectives from the field of ethical AI to raise awareness and help institutions work toward more responsible use. AI Ethics: Global Perspectives (https://aiethicscourse.org/)
About the interviewee:
Prof. Christoph Lütge
Prof. Christoph Lütge conducts research in the field of economic and business ethics. He advocates the order ethics approach, which investigates ethical action in the context of the underlying economic and social conditions of globalization. His research focuses on the role of competition and that of incentives stemming from orders and assesses the reasonableness of ethical categories. He has held the Peter Löscher Chair of Business Ethics at the Technical University of Munich since 2010.