Intelligent digital decision making based on values and ethical norms 

Artificial intelligence is often discussed in terms of ethics: how the bias of a data set can be reduced to zero, but also how the technology can be controlled, monitored, and manipulated. While writing this, I realize: Why don’t we ask ourselves the same question when a human makes a decision? When a person’s perception is distorted by past experiences? Why, for example, is it acceptable for a human being to discriminate unconsciously, but problematic when a technology does so to only a fraction of that extent?

Since the terms “artificial intelligence” and “ethics” are often used in the media but are not yet well understood by many, I would like to begin this article with brief definitions.

Artificial Intelligence:

Can we teach computers to be intelligent? Many researchers are looking into this question. What about teaching them to think, learn, and plan? An artificial intelligence is intelligent in a specific area, such as recognizing faces, playing chess, or driving a car. AI learns from data. Such data could, for example, be pictures of many faces and pictures of other things. When an artificial intelligence examines the data, it looks for similarities and differences, and thereby learns what a face looks like. By processing new images, it can then recognize faces on its own. Whatever data an artificial intelligence learns from, it has to be relevant to the task it is trying to solve. Additionally, the data has to be sufficiently diverse: for face recognition, there must be images of men, women, and children, and there should be pictures of people of different skin colours. Otherwise, the artificial intelligence might recognize only the faces of white people. The most important thing to understand is that an artificial intelligence can only learn from the data that humans provide. (Find more information here: Künstliche Intelligenz | bpb)
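The point about data diversity can be made concrete with a minimal, purely illustrative sketch (toy numbers, not a real face detector): a naive model trained on an unbalanced dataset recognizes the underrepresented group far less reliably.

```python
import random

random.seed(0)

# Hypothetical toy data: "faces" from two groups differ slightly
# in a single feature value.
def sample(group, n):
    base = 0.3 if group == "A" else 0.7
    return [random.gauss(base, 0.05) for _ in range(n)]

# Unbalanced training set: 95 examples from group A, only 5 from group B.
train = sample("A", 95) + sample("B", 5)
centre = sum(train) / len(train)  # the model "learns" mostly group A

# A naive classifier: accept anything close to the learned centre.
def recognised(x, centre, tol=0.15):
    return abs(x - centre) <= tol

test_a = sample("A", 1000)
test_b = sample("B", 1000)
rate_a = sum(recognised(x, centre) for x in test_a) / 1000
rate_b = sum(recognised(x, centre) for x in test_b) / 1000
print(f"recognition rate, group A: {rate_a:.2f}")  # close to 1.0
print(f"recognition rate, group B: {rate_b:.2f}")  # close to 0.0
```

The model is not "malicious"; it simply never saw enough of group B, which is exactly the kind of gap a more diverse training set closes.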


Ethics:

The study of ethics is part of philosophy. In essence, it centers on the famous question raised by the philosopher Immanuel Kant: “What should I do?” In other words, it is about rules we can orient ourselves by when we decide how to act. By following these rules, we should act as well as possible and avoid inappropriate (“evil”) actions.

The human mind develops behavioral norms (describing what we should ideally do) and behavioral rules (which are usually more concrete) in order to answer the question “What should I do?” as quickly and confidently as possible.

We differentiate:

  1. Individual norms/rules: a set of rules or norms that a certain individual sets for him/herself and wants to adhere to (e.g. waking up at 6.30 a.m. every day).
  2. Social norms/rules: apply to a specific group (school, company, family, clique).
  3. Cultural norms and rules: apply to members of a particular culture (e.g. rules set by members of a religious community).
  4. Political principles and rules: A political rule is a regulation or law that applies to all people living in a given country. Politically responsible bodies (e.g. parliament) have decided on them. They must be followed by everyone. Those who violate these rules can expect sanctions by the state (e.g. administrative fines or criminal charges).  (Was ist Ethik? – Kerninformation – brgdomath)

In terms of political rules, it is easy to see how they can apply to artificial intelligence: certain forms of AI could be banned outright, and its use in particular fields could be regulated or taxed. Okay, so that was easy. Now what about the other kinds of norms? Is it possible, for example, to transfer the core values of a system to an AI? Would that even be feasible? Our current sets of principles, such as fairness, robustness, or transparency, lend themselves mainly to political regulation. Writing “diverse datasets” into a list of principles does not by itself guarantee them; it is just one small element we need to consider when developing an algorithm based on the norms of a society or system.

Basically, this raises two questions for me: 1) How much ethics should we expect of a technology? Do we really expect less bias from it than from humans? In that case, wouldn’t we be implying that AI can make better decisions than humans? 2) Is it really the AI system that has to act ethically, or is it the AI developer who has to do so? And to ensure this, would ethics and diversity have to be an integral part of IT jobs and university programs?

  1. My personal opinion is that we sometimes lack rationality when it comes to technical solutions. The approach should be: if an autonomous car reduces the number of accidents even minimally, that is better than accepting more accidents without autonomous driving. It is logical to argue that way. But if you read the newspapers, the argument is exactly the opposite: if an autonomous vehicle causes even one accident, that is one too many. There is no proportionality in the current debate. To me, that is just wrong. Sure: if I were affected by a fatal accident involving an autonomous car, I would also demand liability. But that can be regulated; there is nothing here that could not have been regulated in the past. Similarly, if a robot malfunctions and injures a person in a factory, liability is governed by existing laws. The death rate on the roads could be reduced by more pragmatism! Several years ago, I was at a conference where it was discussed whether autonomous race cars in Formula 1 would be safer. Obviously, they would be! This would mean no accidents, but also less adventure for the spectators, which is why the change is viewed critically: a sport without spectators is not profitable. We should, however, ask ourselves: in ten years, will we be celebrating the developers of the technology in the race car that won instead of the driver who risked his life? Wouldn’t that be something to celebrate? Rather than risky behavior by individuals, why not celebrate scientific excellence? By the way, from a scientific perspective, Formula 1 could be used to advance the field of autonomous driving: there is enough money for research and enough real-life (and extreme) test situations, so why not?
  2. Especially with complex neural networks, it will not be possible to check every decision the system makes. In this sense, the question of whether AI should be regulated, or rather understood in terms of the norms and values within which it was developed, is of great importance. It would be easier for us to trust a system’s decisions if we knew it was developed according to our ethical standards and suited our cultural requirements (for instance, that it is not radically right-wing). Tests are of course still necessary once a system has been running for many years and has taken on a life of its own; but knowing the standards it was developed under makes even that easier. One possible approach could be to say: “Okay, you can keep your algorithm secret, but I want to know who wrote it and what the values and norms of that person are.” To achieve ethical coding, conscious bias should be removed and individual norms should be aligned with the values and standards of society. And only through teamwork with developers from different cultural backgrounds can unconscious bias be minimized. With this approach alone, the bias would already be lower than that of a human being.
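One way to make “checking a system against society’s standards” tangible is a simple statistical audit. The sketch below, using entirely hypothetical decision data, computes a demographic-parity gap: the difference in positive-decision rates between two groups. A large gap does not prove discrimination, but it flags a system that deserves closer scrutiny.

```python
# Hypothetical audit data: (group, decision) pairs, where 1 is a
# positive decision (e.g. a loan approval). Invented for illustration.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def positive_rate(group):
    # Share of positive decisions within one group.
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic-parity gap: 0 means both groups are treated alike.
gap = abs(positive_rate("group_a") - positive_rate("group_b"))
print(f"parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

An auditor could run such a check without ever seeing the algorithm itself, which fits the idea above of keeping the code secret while still verifying its behaviour against agreed norms.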

Imagine a world in which values and norms are in harmony at every level. What if I had the option of choosing the developer based on my values, or on the values I want the outcome to reflect? Then we could be certain that we share the same understanding of morality. In the real world, we wouldn’t hire a criminal as a compliance officer, would we? So why would we not check who designed the system that makes decisions for us? What a ridiculous idea!

A code of ethics for developers won’t solve the problem completely, but I believe it can mitigate it and, in tandem with political regulation, be a partial solution. Ethics has been removed from the curriculum of many universities, and diversity is more of a footnote than a fundamental requirement. That has to change! Particularly if we see developers and IT nerds as consultants rather than as mere executors of prefabricated mock-ups with no room for creative design! Otherwise it is just like expecting a waiter to put the money in the till instead of in his own wallet. Let’s shape the future ethically together – these are first thoughts about what that future can look like.

My learning could be your learning:

  1. Think about whether we really expect an algorithm to behave far more ethically than a human being, or whether equal adherence to norms, together with the possibility of reducing and optimizing bias, is not incentive enough to apply these technologies.
  2. We should ask who has to act ethically when artificial intelligence is involved: the developers or the machines? Developers should be required to create algorithms that ultimately correspond to expected behaviours, and these behaviours must be consistent with the norms of the systems in which the algorithms are embedded. This can and should be supported by politically set rules whose violation is sanctioned.
  3. We will never progress if we don’t allow setbacks and mistakes! We live in a world where sometimes we make progress and then we regress. But this is a positive thing, rather than a problem. We learn from our mistakes, establish new values and norms, and build the future ethically and humanely together! 

Would you like to know more about this? Here’s the content to read:

Brent Mittelstadt (2019) Principles alone cannot guarantee ethical AI
