Ethics in Machine Learning

March 5, 2021 | 5 min. read


Google recently made headlines for firing its second senior AI ethics researcher within just a few months. Timnit Gebru and her colleague Margaret Mitchell had been working on the paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, which examines the risks of large transformer models such as BERT, GPT-3 and Switch-C that are trained on enormous collections of text extracted from the web. Besides the massive energy resources required for training, the paper draws attention to the fact that such large amounts of data often carry biases (prejudices and statistical distortions) that the models learn along with everything else. With data sets of this size, it is also difficult to keep track of data quality and thus counteract such unwanted side effects.

In today’s world, machine learning (ML) plays an increasingly important role in many areas of life. From streaming services that suggest which music and movies might suit our current tastes, to our credit score, to whether we are invited to a job interview: in more and more aspects of our lives, trained ML systems are making these decisions. As the influence of machine learning algorithms on everyday life continues to grow, we as developers must become more aware of our ethical responsibility. After all, the way we handle our data and our algorithms has a major impact on society and on the lives of individuals. This article provides a brief look at some of the ethical questions we need to ask ourselves.

Unfair Decisions and Bias in Machine Learning Algorithms

Imagine a facial recognition algorithm deciding whether you are wanted for a crime. In many countries, such as the USA, this is already a reality. However, researchers and human rights activists have repeatedly shown that the systems in use fail miserably for people with dark skin:
Scientists have demonstrated, for example, that common facial recognition systems from large IT companies are extremely unreliable at recognizing the gender of women with dark skin. In some systems, the probability of correctly classifying the gender of a dark-skinned woman is barely better than a coin toss. I will address the causes and pitfalls of this so-called “bias” in ML systems in more detail in a separate article.

Serious Consequences of Wrong Decisions for the Individual

Even more shocking, however, are the results of a test conducted by the American Civil Liberties Union in 2018 using mugshots and Amazon’s facial recognition service “Rekognition”: when photos of members of Congress were compared against the mugshots, 28 members of Congress were falsely matched with people who had been arrested for a crime. Although only about 20 percent of members of Congress have dark skin, they accounted for roughly 40 percent of these misclassifications. Here, too, a massive bias to the detriment of the Black population was identified. This example also makes clear that it is not only bias itself that poses an ethical problem in machine learning: depending on the area of application, false-positive results in general can have serious consequences for the people affected. Sensitive applications therefore demand far greater precision than “harmless” ones, where wrong decisions are unlikely to cause serious harm.
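To make this kind of disparity measurable, fairness audits typically compare false-positive rates across demographic groups. The following is a minimal sketch in Python, using invented toy arrays rather than the ACLU’s actual data; the variable names and the group assignment are purely illustrative assumptions:

import numpy as np

def false_positive_rate(y_true, y_pred, mask):
    """False-positive rate restricted to the rows selected by mask."""
    negatives = (y_true == 0) & mask        # people who are NOT actually in the mugshot set
    false_pos = (y_pred == 1) & negatives   # ... but whom the system matched anyway
    return false_pos.sum() / max(negatives.sum(), 1)

# Hypothetical toy data: y_true = 1 would mean "really in the mugshot database",
# y_pred = 1 means "the system reported a match".
y_true  = np.array([0, 0, 0, 0, 0, 0, 0, 0])  # none of the probes is actually a suspect
y_pred  = np.array([1, 1, 0, 1, 0, 0, 0, 0])  # the system's (erroneous) matches
group_a = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)  # illustrative group membership

fpr_a = false_positive_rate(y_true, y_pred, group_a)
fpr_b = false_positive_rate(y_true, y_pred, ~group_a)
print(f"false-match rate, group A: {fpr_a:.2f}")  # 0.67
print(f"false-match rate, group B: {fpr_b:.2f}")  # 0.20

A large gap between the two rates, as in this toy output, is exactly the pattern the ACLU test revealed.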

Are We Endangering Democracy by Creating Opinion Bubbles?

While algorithms that pre-sort content according to our previous preferences may seem harmless at first, they can nevertheless cause social problems. On social networks in particular, we are often shown only content from the same people, or content similar to what has previously caught our interest. On video platforms, too, we are mainly shown videos that resemble what we have watched before. In this way, many people are primarily reinforced in their existing opinions and rarely come into contact with views that contradict their own.
Controversial discussion and exposure to different views and opinions are essential for the ability to engage in discourse, and therefore for our democratic society. The less accustomed people are to encountering opposing arguments, the more likely they are to react to other opinions with outright rejection. Algorithms that seem relatively harmless could therefore deepen social division and poison the climate of discourse in our democracy.
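To illustrate the mechanism, and not any particular platform’s actual system, here is a minimal sketch of a content-based recommender that always proposes the items most similar to what a user has already consumed. The item catalogue and topic vectors are invented for illustration:

import numpy as np

# Hypothetical catalogue: items described by topic weights (politics, sports, music).
items = {
    "video_a": np.array([0.9, 0.1, 0.0]),
    "video_b": np.array([0.8, 0.2, 0.0]),
    "video_c": np.array([0.1, 0.9, 0.0]),
    "video_d": np.array([0.0, 0.2, 0.8]),
}

def recommend(history, k=2):
    """Rank unseen items by cosine similarity to the mean of the user's history."""
    profile = np.mean([items[name] for name in history], axis=0)
    def cosine(vec):
        return vec @ profile / (np.linalg.norm(vec) * np.linalg.norm(profile))
    scores = {name: cosine(vec) for name, vec in items.items() if name not in history}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# A user who has watched only politics-heavy content is shown more of the same:
print(recommend(["video_a"]))  # ['video_b', 'video_c']

Because each recommended item is in turn added to the history, the user’s profile drifts ever deeper into a single region of the content space; that feedback loop is the opinion bubble described above.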

Why We Cannot Shirk Our Responsibility by Claiming That Our Algorithms Are Neutral

Of course, there are many other ethical questions in the field of machine learning, such as who assumes responsibility, and how, for the wrong decisions of autonomous systems, or questions of data protection and privacy. All of this shows that we developers cannot free ourselves from ethics, even when we build supposedly “neutral systems”. Even with the best of intentions, such “neutral systems” from “neutral developers” are nothing more than representations of our reality, or rather of our data, and data is always an interpretation of the world, one that has consequences in the real world.

Project Manager

Andreas studied Technology & Media Communication and is primarily responsible for internal and external communication and documentation at the company. This gives him an excellent overview of MORESOPHY’s various technologies, applications and customers.
