
Dealing with biases in algorithms

Inspired by Cathy O’Neil’s Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, I became increasingly interested in the fact that algorithms are biased. In her work, O’Neil makes a frightening statement: in digital markets, mathematicians and computer scientists (the new gods) encode “human prejudice, misunderstanding and bias into the software systems that increasingly manage our lives” (O’Neil 2016: 3).

Based on her statement, I became aware of how many opportunities algorithms provide for biases to penetrate society. Algorithms affect several realms of our digital everyday life, such as social media, online news, legal tech, job search, online shopping and financial trading. In the following, I want to give you a guided tour through the different ways in which biases can be encoded in algorithms and show their implications. Most of the examples are recent, which is alarming and comforting at the same time: alarming because we live in the 21st century and the backwardness of these examples is hard to believe; comforting because this is only the beginning, and we still have every opportunity to make a change and develop the machines for the better.

So, how do biases get into algorithmic systems? One possibility is to build biases directly into machine-learning models. In 2016, ProPublica published an article uncovering that Facebook lets advertisers exclude users by race. Such conscious choices reflect society’s racist and discriminatory mindset and perceptions. Further, biases can exist in data collection, meaning that the data itself contains biases. The famous project of Joy Buolamwini, founder of the Algorithmic Justice League, emphasises the importance of conscious and inclusive data collection to overcome biased algorithms. Algorithms learn from the past and, since real-world biases are generally “deeply rooted in human psychology” (Baer 2019: xi), they shape everyday life in a mostly unconscious way. When a model’s variables are defined by existing historical data, it is no wonder that Amazon’s recruiting AI learned to prefer male candidates, considering the low proportion of women CEOs in the Fortune 500. Furthermore, redundant encodings can create biases, when seemingly neutral pieces of data are correlated with membership of a particular class. The COMPAS system, for example, was identified as biased against black defendants, who were rated as more likely to reoffend than whites (Angwin et al. 2016).
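To make redundant encodings more tangible, here is a minimal, entirely synthetic sketch in Python (using numpy and scikit-learn, my own choice of tooling): even when the protected attribute is removed from the training data, a correlated proxy feature lets the model reproduce the historical bias. Every variable name and number below is invented for illustration.

```python
# Hypothetical sketch: a "redundant encoding" lets bias survive even when the
# protected attribute itself is excluded from training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

gender = rng.integers(0, 2, n)             # 1 = male, 0 = female (synthetic)
proxy = gender + rng.normal(0, 0.3, n)     # feature strongly correlated with gender
skill = rng.normal(0, 1, n)                # genuinely job-relevant feature

# Historical hiring labels that encode past discrimination: gender itself mattered.
hired = (skill + 1.5 * gender + rng.normal(0, 0.5, n)) > 1.0

# Train WITHOUT the gender column - only the skill and the proxy remain.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

predicted = model.predict(X)
print("predicted hire rate, men:  ", predicted[gender == 1].mean())
print("predicted hire rate, women:", predicted[gender == 0].mean())
# The gap persists because the proxy quietly re-encodes the excluded attribute.
```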

What makes biases in machine learning such a huge problem is the fact that algorithms are treated as opaque “black boxes” (Pasquale 2015). O’Neil closes her statement by saying that the verdicts of the mathematicians and computer scientists, “even when wrong or harmful, were beyond dispute or appeal. Moreover, they tended to punish the poor and the oppressed in our society, while making the rich richer” (2016: 3).

Without overcoming the problem of biases in machine learning and algorithms, there is no way out of this ‘vicious circle’. Because of the existing discriminatory practices, society calls for more transparency, whether through algorithmic auditing or greater legal regulation. Further, people demand more diversity in engineering and design teams and the design of inclusive, ethical technologies (O’Neil 2016).
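What a very basic algorithmic audit could look like is sketched below, in the spirit of ProPublica’s COMPAS analysis: comparing false positive rates across demographic groups. This is only an illustrative sketch; the synthetic data, the risk score and the 0.5 threshold are assumptions, not the actual COMPAS methodology.

```python
# Hypothetical audit sketch: compare false positive rates between two groups.
import numpy as np

def false_positive_rate(actual, flagged):
    """Share of people who did NOT reoffend but were still flagged as high risk."""
    non_reoffenders = (actual == 0)
    return flagged[non_reoffenders].mean()

rng = np.random.default_rng(1)
n = 10000
group = rng.integers(0, 2, n)          # two demographic groups (synthetic)
reoffended = rng.integers(0, 2, n)     # synthetic ground truth

# An invented risk score that is systematically higher for group 1.
score = rng.random(n) + 0.15 * group
flagged = (score > 0.5).astype(int)

for g in (0, 1):
    members = (group == g)
    fpr = false_positive_rate(reoffended[members], flagged[members])
    print(f"group {g}: false positive rate = {fpr:.2f}")
# A large gap between the two printed rates is exactly the kind of disparity
# an audit is meant to surface.
```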

Bibliography:
Angwin, J., Larson, J., Mattu, S. and Kirchner, L., 2016. Machine Bias, [online] https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. [December 3, 2019].
Baer, T., 2019. Understand, Manage, and Prevent Algorithmic Bias: A Guide for Business Users and Data Scientists, 1st ed., Berkeley, CA: Apress.
O’Neil, C., 2017. Weapons of math destruction: how big data increases inequality and threatens democracy, UK: Penguin Books.

Trust Matters

DLD Munich 19 - Sunday

At DLD (Digital Life Design), an innovation conference in Germany, Rachel Botsman, author of Who Can You Trust? How Technology Brought Us Together and Why It Might Drive Us Apart, highlights the importance of the concept of ‘trust’ (DLD 2019). In her speech at DLD, she defines trust as a “confident relationship to the unknown”. I thought it would be interesting to share some thoughts with you and hope that the presented ideas will make you curious about this topic as well. If you have further ideas, comments, or questions, let’s discuss!

Botsman (DLD 2019) states that all use of innovation requires trust: it takes trust-leaps to believe in new technologies. Trust-leaps, she explains, happen “when we take a risk to do something new or different to the way we have always done it” (Botsman 2019). They break down barriers, and once a few people take the trust-leap, others will follow. Continuing her talk, Botsman explains why transparency alone does not fix existing trust issues. She states that once trust is abused, it is tough to repair and can only be restored with competence, reliability, empathy and, most importantly, integrity. This leads us to a short case study: Lankton and McKnight (2011) identify trust issues as characteristic of tech companies that have had a meteoric rise.

In my opinion, Facebook is a great example of how not to deal with trust issues. I want to talk about my experiences on my way to regaining trust in Facebook. When Facebook was new, early users (like me) trusted and believed in the latest technologies behind it and managed the trust-leap with few concerns (Botsman 2019). The platform’s slogan ‘Facebook helps you connect and share with the people in your life’ was the spirit of the late 2000s. Since then, our relationship has made a 180-degree turn: privacy scandals, and especially the case of Cambridge Analytica, have encouraged many users to be more careful.

On my way to regaining trust in Facebook, out of curiosity, I downloaded all the information they have on me – a very transparent setting, one could say. The world’s biggest social network platform pretends to be collaborative, uses cooperative rhetoric and is open about the information it possesses. However, when you make such a request, Facebook simply floods you with your own non-objective, personalised and therefore ‘cooked’ data (Gitelman 2013).

In this context, Pasquale’s comment on transparency is quite fitting: “Transparency may simply provoke complexity that is as effective at defeating understanding as real or legal secrecy” (2015: 8), and it is, as Botsman (2019) explains, no solution for trust issues. My experience with Facebook strengthens both arguments: even though Facebook is transparent about my data, its objectives behind collecting that data in the first place remain opaque, and such methods of transparency cannot restore confidence.

Unfortunately, the latest hearings before the US House of Representatives Financial Services Committee do not fix Facebook’s severe trust problem either. Democrat Alexandria Ocasio-Cortez questioned Mark Zuckerberg on the fact-checking measures that have caused serious trust issues. To almost every question she asked, she had to be satisfied with an ignorant “I don’t know”, which gives the terrifying impression that the CEO of the world’s largest social media platform is neither reliable nor competent. This is all the more alarming because there seems to be no admission or understanding of the company’s great trust failure.

Trust is a valued but sadly underestimated currency in the digital world. In this context, the central message I want to share with you is that “we need a new way of talking about trust” (Botsman 2019). We need a more precise language. The relationship between trust and new technologies is still an under-explored domain of information research, and companies need to be aware of trust’s importance and value.

Rachel Botsman is a visiting lecturer at the University of Oxford and the author of the book Who Can You Trust? How Technology Brought Us Together and Why It Might Drive Us Apart.

Bibliography:
Botsman, R., 2019. The currency of Trust, [online] https://www.youtube.com/watch?v=-vbPXbm8eTw. [November 17, 2019].
Gitelman, L., 2013. “Raw data” is an oxymoron, Cambridge, Massachusetts: The MIT Press.
Lankton, N. and McKnight, D., 2011. What Does it Mean to Trust Facebook? Examining Technology and Interpersonal Trust Beliefs. Database for Advances in Information Systems, pp. 32–54.
Pasquale, F., 2015. The black box society: the secret algorithms that control money and information, Cambridge: Harvard University Press.

Why we should stop using the term ‘artificial intelligence’

“You have heard of software-as-a-service. Well, this is human-as-a-service!” That is how Jeff Bezos in 2006 announced Amazon Mechanical Turk, a platform mediating microwork interactions between task creators and task workers (Irani 2015b). It is worth explaining where the name Amazon Mechanical Turk (AMT) comes from and lifting the veil on the ubiquitous buzzword ‘artificial intelligence’.

‘The Turk’, also known as the ‘Mechanical Turk’, was a fake chess-playing machine invented in the 18th century. It was a mechanical illusion: a supposedly automated machine concealing the fact that a human being was hiding inside to operate it (Levitt 2000). The creators of AMT were inspired by this 18th-century ‘Turk’, and the comparison between the chess master in the machine and the microworkers behind AMT is crucial to demystifying artificial intelligence.

“AI is made by people and with human input”, says Lilly Irani (2016). Because there is little that is artificial behind artificial intelligence, a more suitable descriptor would be artificial ‘artificial intelligence’ or simply ‘machine learning’. The term ‘artificial intelligence’ still prevails in society, but the workers at AMT, informally known as ‘turkers’, are no longer keeping silent about their precarious conditions and the fact that their work is pure human labour. As Irani (2015a) states, there is nothing magical behind artificial intelligence.

In recent years, more and more turkers have been seeking the right to be recognised as living, breathing human beings, and current studies and reports (such as the Guardian’s “I am a human being, not an algorithm” or The New York Times article “I Found Work on an Amazon Website. I Made 79 Cents an Hour.”) unveil the miserable working conditions at Amazon. With their work of transcribing, writing and tagging photos, microworkers at AMT are feeding algorithms with data and contributing to a more powerful ‘artificial’ AI. AMT treats its digital workers as freelancers rather than permanent employees (Irani and Silberman 2013), taking no responsibility for pensions or insurance and offering no compensation for absence due to sickness. For task creators, digital workers are invisible, which illustrates even more drastically the non-existent interest in the rights of AMT’s digital workers (Gray et al. 2016). Most microworkers are based in the US and India (Gray et al. 2016), and on top of these precarious conditions for the human workers behind our so-called artificial intelligence, it is worth considering that ‘turkers’ often earn less than the minimum wage (Irani and Silberman 2013), which violates human rights and should be considered exploitation. Not only from a sociological but also from a long-term economic perspective, we need to start acknowledging the digital workers. They are more responsible for our future than we think; we need to value their work, since we base our algorithms and future artificial intelligence systems on their judgements.
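To illustrate just how directly this human labour becomes ‘intelligence’, here is a tiny, synthetic sketch: a classifier that can only tell photos of cats from photos of dogs because humans wrote the captions and assigned the labels in the first place. The captions, labels and library choice (scikit-learn) are assumptions made up for this example.

```python
# Hypothetical sketch: the model's "intelligence" is distilled entirely from
# labels that human workers provided.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Photo captions written by human workers ...
captions = [
    "a dog running on the beach",
    "a small dog sleeping on a sofa",
    "a cat sitting on a windowsill",
    "a cat playing with a ball of wool",
]
# ... and the tags those workers assigned to each photo.
human_labels = ["dog", "dog", "cat", "cat"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(captions, human_labels)

# Whatever the model "knows" about a new caption, it learned from human judgement.
print(model.predict(["a dog chasing a ball"]))   # -> ['dog']
```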

Continuing this dialogue would touch upon the concepts of ‘immaterial’ and ‘affective’ labour: in a way, all platform users are doing digital labour. With every click on Facebook, every search on Google and every like on Instagram, we are creating data that adds value, all voluntarily, and making significant contributions to the profits of Silicon Valley’s big tech companies as well as to their artificial intelligence. Further, this makes every internet user a consumer and a producer at the same time and supports the concept of the ‘prosumer’, coined by Toffler (1980) and further developed by Tapscott (1996). It is not artificial but human work that is strengthening the algorithms and simultaneously creating the foundation for what we call ‘artificial intelligence’.

There is no doubt that microwork at AMT, because of its exceptionally precarious working conditions, is comparable to ‘modern slavery’. First attempts to stand up against this oppression have been made: inspired by Foucault’s panopticon (Foucault 1977), a website called Turkopticon helps workers to watch out for each other and organise themselves against exploitation.

Bibliography:
Foucault, M., 1977. Discipline and punish: the birth of the prison, London: Allen Lane.
Gray, M.L., Kulkarni, D., Shoaib Ali, S. and Suri, S., 2016. The Crowd is a Collaborative Network, [online] http://www.inthecrowd.org/wp-content/uploads/2015/10/collab_paper21.pdf [November 18, 2019].
Levitt, G. M., 2000. The Turk, Chess Automaton. McFarland & Co.
Irani, L. and Silberman, S., 2013. Turkopticon: Interrupting Worker Invisibility in Amazon Mechanical Turk. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, [online] http://crowdsourcing-class.org/readings/downloads/ethics/turkopticon.pdf [November 15, 2019].
Irani, L., 2015a. Difference and Dependence among Digital Workers: The Case of Amazon Mechanical Turk. South Atlantic Quarterly, 114(1), pp. 225–234.
Irani, L., 2015b. The cultural work of microwork. New Media & Society, 17(5), pp. 720–739.
Irani, L., 2016. The labour that makes AI “magic” at AI Now 2016, [online] https://www.youtube.com/watch?v=5vXqpc2jCKs [November 19, 2019].
Tapscott, D., 1996. The Digital Economy: Promise and Peril in The Age of Networked Intelligence, NY: McGraw-Hill.
Toffler, A., 1980. The third wave, London: Collins.

MILLENNIFEST EDINBURGH

26 October 2019 in Edinburgh @179a Canongate, EH8 8BN

“People tend to say that Millennials are too sensitive. Let’s make that our skill and build a new economy on that!”

Paul Bradley, Obama Foundation emerging leader, SDG specialist, and Policy Officer, SCVO @PaulMBradley

 
