Dana’s Blogs

Knowledge Integration and Project Planning: Data and Artificial Intelligence Ethics (2023-24)

Line of Reasoning to my Provisional Research Topic

After my initial brainstorm (see ‘Developing Project Idea’), I’ve been thinking about which research topic I would like to pursue further. I’m now in the fifth week of the degree, and it is safe to say I’ve already expanded my horizons! The project I’m considering now is very broad and will most likely have to be narrowed down in the coming months, but it covers an aspect of the various application areas I was initially thinking about.

Let me give you a bit of background context first. Over the past several weeks I’ve had classes covering many topics, such as: 1) How should AI be governed? This covered privacy concerns and current GDPR regulation (with its benefits and drawbacks), as well as the upcoming EU AI Act. 2) The ethics of robotics and autonomous systems, which covered a range of subjects (each worthy of a whole lifetime’s study), from autonomous weapons and the future of work to robots in care and the issues around race and culture.

Something that really intrigued me was the implications that the design of automation might have; to be more specific, the implications of our human biases and perspectives. Academic literature discusses how certain biases, when ‘translated’ into code, can have disastrous consequences: take, for example, Cave and Dihal’s (2020) paper on the ‘Whiteness of AI’ and the implications of the ‘White racial frame’ for such a technology; or Danaher and Nyholm’s (2021) and Knell and Rüther’s (2023) work on the future of work and what constitutes a meaningful life. Did you know that a female-gendered robot was subjected to twice as many dehumanising comments when presented as Asian or Black rather than White (Strait et al., 2018)? Finally, there is Birhane and van Dijk’s (2020) work – ‘Robot Rights? Let’s Talk about Human Welfare Instead’ – which, in my opinion, begins to refocus the debate. During Mr. Ramamoorthy’s lecture (on ‘Responsibility and Transparency in Autonomous Systems’) the concept of trade-offs was brought up many times; he concluded by stating that there needed to be a serious conversation about which trade-offs we want to make and, more importantly, which values and ethos we want to instill in such technologies.

What I’ve learnt so far in this degree, combined with my personal perception (formed by many variables: life experiences, culture, etc.), leads me to believe that what’s at stake may be even more significant. If automation and technology amplify their creator’s critical lens on the world, and inadvertently that person’s ‘humanity’, biases included, then through the creation of such technologies will humanity come to a point at which it has to choose which aspects of itself it wants to amplify?

I know this sounds quite profound (and abstract to some), but think about it: if AI (and similar) systems learn from big data – our big data – then how can there not be biases? We as humans are biased. So, what does this mean? Our global society – spread over hundreds of nations – contains thousands, if not millions, of cultural differences and inclinations, as well as values and morals. How do we escape a robot uprising – Terminator style – or is that even a question we should be asking? Or rather, is it the ‘nightmare of a guilty conscience’ (Donath, 2020, p. 73)? Could we make an AI like an iPhone, standardised? Is that the future we want?

So that’s my train of thought as to how I got here. The (provisional) question I’m thinking of proposing is: before automation, where do we want to go from here?


References

Birhane, A. and van Dijk, J. (2020) ‘Robot Rights? Let’s Talk about Human Welfare Instead’, Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society, pp. 1-7. Available at: https://doi.org/10.1145/3375627.3375855

Cave, S. and Dihal, K. (2020) ‘The Whiteness of AI’, Philosophy & Technology, 33, pp. 685-703. Available at: https://doi.org/10.1007/s13347-020-00415-6

Danaher, J. and Nyholm, S. (2021) ‘Automation, work and the achievement gap’, AI and Ethics, 1, pp. 227-237. Available at: https://doi.org/10.1007/s43681-020-00028-x

Donath, J. (2020) ‘Ethical Issues in Our Relationships with Artificial Entities’, in Dubber, M.D., Pasquale, F. and Das, S. (eds) The Oxford Handbook of Ethics of AI. Oxford: Oxford University Press. Available at: https://doi.org/10.1093/oxfordhb/9780190067397.001.0001

Knell, S. and Rüther, M. (2023) ‘Artificial intelligence, superefficiency and the end of work: a humanistic perspective on meaning in life’, AI and Ethics. Available at: https://doi.org/10.1007/s43681-023-00273-w

Strait, M., Ramos, A.S., Contreras, V. and Garcia, N. (2018) ‘Robots Racialized in the Likeness of Marginalized Social Identities are Subject to Greater Dehumanization than those Racialized as White’, Proceedings of the 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2018). Piscataway, NJ: IEEE, pp. 452-457.
