Inspired by Cathy O’Neil’s Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, I became interested in the ways algorithms can be biased. O’Neil opens with a frightening observation: in digital markets, mathematicians and computer scientists (the new gods) encode “human prejudice, misunderstanding and bias into the software systems that increasingly manage our lives” (O’Neil 2016: 3).
Her statement made me aware that algorithms offer excellent opportunities for biases to penetrate society. Algorithms affect several realms of our digital everyday life, such as social media, online news, legal tech, job search, online shopping and financial trading. In the following, I want to give you a guided tour through the different ways in which biases can be encoded in algorithms and show their implications. Most of the examples are recent, which is alarming and comforting at the same time. Alarming because we live in the 21st century and you will not believe how backward these examples are; comforting because this is only the beginning. We still have every opportunity to make a change and develop these machines for the better.
So, how do biases get into algorithmic systems? One possibility is that they are deliberately built into machine-learning systems. In 2016, ProPublica published an article uncovering that Facebook let advertisers exclude users by race. Such conscious actions reflect society’s racist and discriminatory mindset and perception. Further, biases can enter through data collection, meaning that the data itself contains biases. The famous project of Joy Buolamwini, founder of the Algorithmic Justice League, emphasises the importance of conscious and inclusive data collection for overcoming biased algorithms. Algorithms learn from the past, and since real-world biases are generally “deeply rooted in human psychology” (Baer 2019: xi), they shape everyday life in a mostly unconscious way. When a model’s variables are defined by the information that currently exists, it is no wonder that Amazon’s recruiting AI started favouring men, considering the low proportion of women among Fortune 500 CEOs. Furthermore, redundant encodings can create biases, when seemingly neutral pieces of data correlate with membership of a particular class. The COMPAS system was identified as biased against blacks, whom it rated as more likely to re-offend than whites (Angwin et al. 2016).
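The redundant-encoding problem can be made concrete with a tiny sketch. The data, group names and zip codes below are entirely invented for illustration: a protected attribute is dropped before "training", yet a correlated feature (here, zip code) still carries it, so a naive majority-vote model reproduces the historical disparity anyway.

```python
from collections import Counter, defaultdict

# Invented toy hiring records. "group" is the protected attribute;
# "zip_code" is perfectly correlated with it (a redundant encoding).
records = [
    {"group": "A", "zip_code": "100", "hired": 1},
    {"group": "A", "zip_code": "100", "hired": 1},
    {"group": "A", "zip_code": "100", "hired": 1},
    {"group": "A", "zip_code": "100", "hired": 0},
    {"group": "B", "zip_code": "200", "hired": 0},
    {"group": "B", "zip_code": "200", "hired": 0},
    {"group": "B", "zip_code": "200", "hired": 1},
    {"group": "B", "zip_code": "200", "hired": 0},
]

# "Fairness through blindness": drop the protected attribute entirely.
training = [{"zip_code": r["zip_code"], "hired": r["hired"]} for r in records]

# A minimal model: predict the majority historical outcome per zip code.
votes = defaultdict(Counter)
for r in training:
    votes[r["zip_code"]][r["hired"]] += 1

def predict(zip_code):
    return votes[zip_code].most_common(1)[0][0]

# Although "group" was never seen by the model, predictions still
# differ by group, because zip code redundantly encodes it.
print(predict("100"))  # -> 1 (group A's zip code)
print(predict("200"))  # -> 0 (group B's zip code)
```

Real models are far more complex, but the mechanism is the same: removing the protected attribute does not remove the bias as long as correlated features remain.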
What makes biases in machine learning such a huge problem is that algorithms are treated like opaque “black boxes” (Pasquale 2015). O’Neil closes her statement by saying that the verdicts of the mathematicians and computer scientists, “even when wrong or harmful, were beyond dispute or appeal. Moreover, they tended to punish the poor and the oppressed in our society, while making the rich richer” (2016: 3).
Without overcoming the problem of biases in machine learning and algorithms, there is no way out of the vicious circle. Because of the existing discriminatory practices, society calls for more transparency, through algorithmic auditing or greater legal regulation. Further, people demand more diverse engineering design teams and the design of inclusive, ethical technologies (O’Neil 2016).
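One concrete form such an audit can take is a disparate-impact check. The sketch below is a simplified illustration, not a full audit: the group names and outcomes are invented, and it applies one rule of thumb, the "four-fifths rule" from US employment-testing guidance, which flags concern when one group's selection rate falls below 80% of another's.

```python
# Invented decision outcomes per group: 1 = positive decision (e.g. hired).
outcomes = {
    "group_a": [1, 1, 1, 0, 1],
    "group_b": [1, 0, 0, 0, 0],
}

# Selection rate = share of positive decisions in each group.
rates = {g: sum(v) / len(v) for g, v in outcomes.items()}

# Four-fifths rule: compare the worst-off rate to the best-off rate.
ratio = min(rates.values()) / max(rates.values())
flagged = ratio < 0.8

print(rates)    # {'group_a': 0.8, 'group_b': 0.2}
print(flagged)  # True -> this toy system would warrant scrutiny
```

A real audit would go much further (statistical significance, proxies, error-rate comparisons as in the COMPAS analysis), but even this simple check shows that auditing is a measurable, repeatable procedure rather than a vague demand.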
Angwin, J., Larson, J., Mattu, S. and Kirchner, L., 2016. Machine Bias. [online] Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [Accessed December 3, 2019].
Baer, T., 2019. Understand, Manage, and Prevent Algorithmic Bias: A Guide for Business Users and Data Scientists. 1st ed. Berkeley, CA: Apress.
O’Neil, C., 2017. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. UK: Penguin Books.