Garbage in, garbage out.

For the first time in my life, I ran a marathon this year. My pure joy at finishing prompted me to buy a very detailed photo package from the organiser, which used not only my participant number but also sophisticated facial recognition to find out which of the hundreds of thousands of photos taken at the marathon showed me. I was somewhat flattered when, in some of the photos, I didn't recognise myself at all but instead kept seeing the same young woman, who had run the marathon much as I did and who had dark hair, the same skin tone and a similar build.

Beyond the external comparison, which, I want to emphasise again, was really flattering for me, I also asked myself: how could the wrong match have occurred in the first place? Either something is wrong with the algorithm of the facial recognition software, or the software simply misidentified me as a moving object. As long as this data is not used for other purposes, there are no serious consequences in my example.

Facial recognition software is usually a machine-learning application, and that is something we must always keep in mind. It first needs to be fed with data, and depending on the purpose of the facial recognition, many types of data can be collected. Moving away from my marathon example, imagine that we are searching for criminals using facial recognition. I would not find the incorrect classification of my marathon pictures so flattering if my "marathon twin" had been convicted of a crime. If we fed the software only data on criminals in Germany, it would quickly become clear that certain population groups, certain gender characteristics and thus ultimately certain visual characteristics would shape the estimated risk of being a criminal. So why does using facial recognition software for this purpose not make us feel safer? In a nutshell: garbage in. The data is not free of bias. Life paths are determined not by visual factors but by biographies, individual decisions and personality traits. When our self-learning system learns the wrong connections, garbage comes out.
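
To make the "garbage in" point concrete, here is a minimal, hypothetical sketch in Python. The group names and all the numbers are invented for illustration only; the sketch merely shows that a model which learns frequencies from a skewed dataset reproduces the skew of the data collection, not anything about actual behaviour.

```python
# A minimal, hypothetical sketch of "garbage in, garbage out":
# if the training data over-represents one group among the positive labels,
# a model that simply learns the data's frequencies will reproduce that skew.
# All group names and counts below are invented for illustration.

from collections import Counter

# Toy "training data": (visible_group, label). The labels reflect who ended up
# in the dataset, not who actually offends - that is the bias.
training_data = (
    [("group_a", 1)] * 80 + [("group_a", 0)] * 120 +   # group A: over-represented among positives
    [("group_b", 1)] * 20 + [("group_b", 0)] * 180     # group B: under-represented among positives
)

# "Learning" here is just estimating P(label = 1 | group) from the data.
counts = Counter(training_data)
for group in ("group_a", "group_b"):
    positives = counts[(group, 1)]
    total = positives + counts[(group, 0)]
    print(f"{group}: learned risk = {positives / total:.0%}")

# Output: group_a appears four times as "risky" as group_b (40% vs 10%),
# purely because of how the data was collected. The model has learned the
# sampling bias, not behaviour.
```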

Facial recognition is simply not a suitable technology for estimating criminal risk. Anyone familiar with how these systems work understands this and concludes that they should not be used for that purpose.

In this case, the problem arises not from the technology itself, but rather from the people who set up and program the system.

It is therefore astonishing how heavily companies rely on new technologies without taking the time to understand the logic behind them. Far too rarely is the question asked which algorithm a self-learning model is based on and which biases might be hiding in the data it observes.

We must manage to represent as many facets of real life in algorithms as possible in order to assess them in a differentiated way. It is essential that the developers of these algorithms have the competence and education, either through their own foresight or through suitable experts, to work actively on reducing prejudice. Unfortunately, I have rarely heard of diversity experts being involved in these technical steps of the process.

Companies need to realise that by building self-learning algorithms they are already expressing their worldview and values. What is the company's understanding of fairness? Does the system need to recognise or treat certain target groups differently, or does the focus remain on equal treatment? Fairness cannot be defined in a general, purpose-independent way; the question of fairness in an algorithm must therefore be asked anew for each specific purpose. Only then can a self-learning algorithm reduce prejudices instead of unintentionally reinforcing them.
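
That fairness cannot be defined in a purpose-independent way can be illustrated with another small, hypothetical sketch. The predictions and labels below are invented; the point is only that two widely used fairness criteria, equal selection rates (demographic parity) and equal true positive rates (equal opportunity), can disagree on the very same predictions, so which one matters depends on what the system is for.

```python
# A hypothetical sketch of why "fairness" must be defined per purpose:
# two common fairness criteria, computed on the same invented toy predictions,
# point in different directions. All values are made up for illustration only.

def selection_rate(pairs):
    """Fraction of (prediction, label) pairs with a positive prediction."""
    return sum(pred for pred, _ in pairs) / len(pairs)

def true_positive_rate(pairs):
    """Among actual positives, the fraction predicted positive."""
    positives = [(pred, label) for pred, label in pairs if label == 1]
    return sum(pred for pred, _ in positives) / len(positives)

# (prediction, actual_label) per person, split by group - all values invented.
group_a = [(1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (0, 1)]
group_b = [(1, 1), (0, 1), (0, 0), (0, 0), (0, 0), (0, 1)]

for name, group in (("group_a", group_a), ("group_b", group_b)):
    print(f"{name}: selection rate = {selection_rate(group):.0%}, "
          f"true positive rate = {true_positive_rate(group):.0%}")

# Demographic parity would ask for equal selection rates (50% vs 17% here);
# equal opportunity would ask for equal true positive rates (67% vs 33% here).
# Closing one gap generally shifts the other, so the "right" notion of
# fairness depends on what the system is used for.
```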

Would you like some more food for thought on this? Here's a recommendation to read:

Caroline Criado Perez (2020): Invisible Women: Exposing Data Bias in a World Designed for Men
