‘Outline your provisional project topic and format’
In my last post, I explained what inspired me to look into the following (provisional) research topic: Before automation, where do we want to go from here?
In this post I will outline the topics that could be included within the research’s narrative, as well as its potential format. As regards format, for the moment I think it will be essential to have a written piece – much like a dissertation or research project – and perhaps a further artefact which somehow depicts the concepts within, but let’s not get ahead of ourselves!
There are several topics which I think would be essential to cover, so let me break them down for you:
Master-Slave Dialectic
(Hegel’s master-slave dialectic and Nietzsche’s master-slave morality)
- What is it?
- Its consequences for the contextual and critical lens through which society is viewed
- Examples
Automation and Work
- History of Work (existing literature)
- In response to J. Danaher’s (2022) work (video?), the pursuit of ‘option B’ which Danaher describes as the pursuit of a ‘new social value system that replaces the work ethic’
- Not possible without breaking out of the master-slave dialectic paradigm – explain why
- Could this new social value system be based on value creation? Deriving from service logic(?) – Explain value creation
- This could bring a new meaning to a meaningful life
- Examples of cultures on Earth which have different work ethics compared to the West
- What is the technology for, if not to benefit us humans? Rather than the main objective being the pursuit of exponential profits. This aligns with Korinek & Stiglitz’s (2020) work, which states that we must steer technological progress in the direction we want to go
Education (?)
Inherently Biased Data
The previous section looked at the theoretical, but how could this be implemented in an applicable, practicable way? Especially given the following major obstacle to such a system: if AI (and similar) systems learn from big data – our big data – then how can there not be biases? We as humans are biased.
- How do we get to an ethical autonomous system if the data these models are trained on is inherently biased by its very nature? – Explore studied examples
- Is it even possible to create an AI with no discrimination or biases?
- A more realistic (/practical) approach perhaps, what can we do with the current data to address these concerns?
- Create an algorithm or AI that recognises the inherent (potential) discrimination within a dataset? Take the example of Amazon’s ‘sexist AI’ tool (2018). Addressing the potentially discriminatory variables from the outset may be a step in the right direction?
- Who is supposed to decide what trade-offs are acceptable in these systems? – Positives/ negatives
- How do we take into account cultural differences among different nations? – A modular approach?
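To make the idea of an algorithm that flags potential discrimination in a dataset slightly more concrete, here is a minimal sketch of one common fairness check: comparing the positive-outcome (“selection”) rates between two demographic groups, sometimes called the demographic parity difference. All the data and group labels below are made up purely for illustration; real audits would use established tooling and far richer metrics.

```python
def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap between two groups' selection rates.
    A value near 0 suggests parity; a large gap flags potential bias
    worth investigating (it is not proof of discrimination on its own)."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical hiring decisions (1 = shortlisted) for two demographic groups
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate: 0.625
group_b = [0, 1, 0, 0, 1, 0, 0, 0]  # selection rate: 0.25

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # prints 0.375
```

A check like this could run over each candidate variable in a dataset before training, which is roughly the spirit of “addressing the potentially discriminatory variables from the outset” – though deciding what gap counts as acceptable is exactly the trade-off question raised above.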
Regulation
Following the previous sections, how would we regulate such systems in our society?
- A transdisciplinary (new) governing body with the skills to oversee possible issues/dilemmas that derive from the use of such technologies
- Overseeing trade-offs, e.g. privacy vs transparency/explainability and accountability
- certifications and tests for developers (like lawyers or doctors)
- A modular approach, whereby each country has its own governing body (to take into account cultural differences) under the umbrella of a globally unified (and accepted) standard (?)
- A modular, bottom-up approach grounded in a humanistic and nature-centric perspective (?)