Week 7 – A Battle of Morals: Socrates vs GPT
This week I gave a lot of thought to perhaps the most interesting course of all: Ethical Data Futures. If the programme is a crossroads, then this course isn’t one of the paths but the signpost at the centre. Without it, Narrative Futures wouldn’t have the understanding and structure it does – especially given our main focus on AI and data in relation to storytelling. This is my perspective after completing four seminars/lectures of the course (writing in the future has its perks ^^).
I have learnt a lot from all courses, and can already feel the development I’ve undertaken to become a well-rounded student, but (if made to choose) I’d select this course as the one that has defined my learning experience so far and that has most closely matched up to my initial expectations of the programme. There are pages and pages of notes (35+!) I’ve taken for this course, so I will only detail some of the information that I either didn’t know or that stood out as particularly memorable. This doesn’t include the online discussion posts, which have also been an interesting experience (and super insightful!).
Descriptive vs Normative
- Descriptive: theories and claims about human social norms and values (how things are).
- Normative: theories, claims and value judgements about moral goodness or rightness (how things ought to be).
Ethical values that are highly relevant to data practices:
- Privacy
- Transparency
- Justice
- Equity
- Accountability
- Safety
- Sustainability
- Autonomy
Six Critical Data Skills
- Ethical Reflection (Case Study 1)
- Ethical Analysis (Case Study 1)
- Ethical Deliberation (Case Study 2)
- Ethical Evaluation (Case Study 3)
- Ethical Contestation (Case Study 4)
- Ethical Decision-making (Case Study 4)
Ethical Benefits of Data/AI Practices
- Human understanding
- Social, institutional and economic efficiency
- Predictive accuracy and personalisation
Ethical Harms of Data/AI Practices
- Harms to privacy and security
- Harms to fairness and justice
- Harms to transparency and autonomy
What is ‘Algocracy’ (John Danaher)?
The use of algorithms to govern. Eventually, greater reliance on algorithms reaches a point where we are ‘governed by the algorithm’. There are three types of algorithmic involvement: human in, on, or off the loop.
The Seven Principles of Data Feminism
- Examine Power
- Challenge Power
- Elevate Emotion & Embodiment
- Rethink Binaries & Hierarchies
- Embrace Pluralism
- Consider Context
- Make Labour Visible
Initial Ethical Considerations of AI Art Generation
- What is the carbon footprint? Is it worth it?
- When does art become an ethical issue?
- How does technology help or hinder creativity?
- Do AI-driven art tools make art more or less valuable (socially)?
- Who is not benefitting from these tools? Why (not)?
Large Language Models – Risks of Harm
- Environmental costs
- A large dataset is not necessarily diverse and unbiased
- Misinformation and disinformation
Data Visualisation is not apolitical or (ethically) neutral!
- Perceived objectivity and authority of visualisations.
- Quantification and visual abstraction separate the people and human experiences behind the data from the people ‘consuming’ the data visualisation.
- The persuasive power of visualisations, which leverage how our brains are hardwired to process visual information.
Remember: data is not neutral!
I am grateful that the assignment for this course isn’t due until mid-April, as this will let me take my time delving into the wealth of information and readings on this course. Plenty of information has been shared in the seminars/lectures, but this is definitely a course that will benefit from hours of self-study – I’m looking forward to it!