Symposium

AI and Ethics: Will this work out?

With AI on the rise, the relationship between machines and our society’s ethics is a growing concern. Experts from across Europe met in Luxembourg to debate their respective views on the matter and to search for solutions.

The University of Luxembourg is currently hosting a debate series addressing the questions raised by our society’s current and future relationship with artificial intelligence. As part of this series, a debate took place on 21 November that tackled the ethical issues we might encounter with current AI technologies.

In a discussion moderated by Dr. Nora Schleich from the University of Luxembourg, Prof. Dr. Marija Slavkovik from the University of Bergen, Norway, and Dr. Antonio Bikić from the Ludwig Maximilian University of Munich, Germany, were joined by Dr. Enzo Maria Le Fevre Cervini from the European Commission and Prof. Dr. Christophe Schommer from the University of Luxembourg to discuss their respective points of view on AI and ethics.

Before the start of the event, the audience, which was encouraged to participate in the discussions, was asked to vote on whether they believed AI could be ethical: 56% thought that it was possible, while 41% believed that it was not going to work out. The vote was repeated after the debate to see whether any opinions had changed.

Two opposing views

Prof. Dr. Slavkovik’s view on the matter is optimistic. She believes that AI and ethics will work out, since there is no other choice. She sees two sides to the problem: the behavior of the machines themselves and the way we humans use them. According to her, the behavior of the machines might reflect values different from our own, and thus every decision we offload to machines also means offloading the responsibility of determining our values.

"We should always see AI as a tool that may replace any utensils we used before and not as a higher intelligence that can replace ourselves" - Dr. Le Fevre

We use AI to automate cognitive tasks, but Prof. Dr. Slavkovik warns that we must not forget that AI is not intelligent in the conventional human way. The notion of something being artificial can have two meanings that should not be mixed up. While artificial lighting is made by humans, it still consists of photons and is essentially the same as natural light. Artificial intelligence, however, does not work like natural intelligence at all. To use AI ethically, we need to remain aware of the difference between AI and humans so that we do not forget our values.

Dr. Antonio Bikić’s stance, however, is less optimistic. He believes that machines follow utilitarian values. Artificial intelligences are optimizing algorithms that are trained with a goal in mind, and the AI will employ any means necessary to reach that goal. Rules will be circumvented along the way, and there are no limits to this. That does not align with Western European deontological ethics, in which dignity and human rights can, under no circumstances, be violated.

During the debate, Prof. Dr. Slavkovik replied to this argument by suggesting that any unwanted actions from AI can themselves be formulated as goals and will therefore be avoided by the AI. Dr. Bikić maintained his stance, however, by pointing out that a user only needs to be clever enough to bypass such limitations to still receive an answer from the AI. He explained how ChatGPT would not give out the plans for a homemade bomb when asked directly, but would do so without any issues if the question was masked well enough. ChatGPT reaches the goal of providing the user with the desired answer, even if that means giving out dangerous information, which further underlines the utilitarian nature of AI.


AI as a decision maker

When a member of the audience asked to what extent we can allow AI to make decisions on politics, both Dr. Bikić and Prof. Dr. Slavkovik explained that we should see AI solely as recommendation systems that people should be trained to use. Prof. Dr. Slavkovik compared AI with current human experts. She underlined this point by noting that experts may find, through their scientific research, that in order to prevent climate change we should stop using cars and limit families to one child. This is similar to what an AI could answer. Such findings would, however, remain theoretical and never be implemented in real life, since we would likely not accept such fundamental meddling in our lives. Just like the findings of experts, answers given by AI should thus be seen as recommendations, but the final decision should always be a political one, pondered and made by trained humans.

Dr. Le Fevre supported that statement by explaining that we want to replace the desk with the tools, but not the actual human. We should always see AI as a tool that may replace any utensils we used before, and not as a higher intelligence that can replace ourselves.

Regulating AI

When asked how laws will deal with ethics in relation to AI, Dr. Le Fevre said that no specific law could be future-proofed, as AI evolves very quickly, but that generic barriers that can never be crossed in our democracy must be put in place through different regulations.

Moreover, implementing precise laws for specific cases concerning AI is not an easy task, as it is difficult to add rules to AI. Programming a rule for a specific lose-lose scenario, such as an autonomous car choosing which person to run over, is impossible, as our society’s current state of ethics does not give an answer to such problems. An audience member suggested implementing rules based on region, since ethical beliefs differ from one place to another, but Prof. Dr. Slavkovik argued that this could be misused if people do not actually get to choose which kind of ethics the AI they use acts upon, as could, for example, be the case in a dictatorship.

Takeaways from the discussion

At the end of the debate, Prof. Dr. Slavkovik concluded by explaining that ethics do not apply in the same way to humans and machines, since machines, for example, are expected not to act impulsively.

The debate did not conclude with a definitive answer as to whether AI and ethics could work out. In the end, 53% still voted that they believed it was possible, while 45% voted no; 6% were indecisive and voted both, while 6% handed in a blank ballot. As the relatively balanced votes from the audience and the heated debate between experts showed, the relation between AI and ethics remains a difficult subject. The common consensus, however, seems to be that humans must stay in control of AI and use it as a tool, without giving it the power to make any decisions on its own.

The experts of the debate:
Prof. Dr. Marija Slavkovik, Head of Department of Information Science and Media Studies, University of Bergen, Norway.

Dr. Antonio Bikić, Ludwig Maximilian University of Munich; AI, Philosophy, and Neuromorphic Computing.

Dr. Enzo Maria Le Fevre Cervini, European Commission, Head of Sector and Adjunct Professor in the Master's program on Ethical Governance of Artificial Intelligence at Universidad Pontificia de Salamanca, Spain.

Prof. Dr. Lukas Sosoe, University of Luxembourg, was replaced by Prof. Dr. Christophe Schommer, Department of Computer Science, University of Luxembourg.

Sources:

Let’s Talk100: AI for our Future, AI and Ethics: Does this work out?, University of Luxembourg, 21 November 2023
