A possible ethical future with AI

The Ethical Data Futures course ran for twelve weeks, studying possible ethical futures for data. A central part of the course was learning the ethical data skills used to critically analyse current data practices: ethical reflection, evaluation, deliberation, contestation, and decision-making. Discussions were held in groups, applying these skills rigorously to analyse unethical current data practices and to steer towards more desirable ethical futures. In the process, a number of challenges with present data practices were identified (Vallor and Rewak, 2018), chief among them the absence of human-centric values such as respect, safety, empathy, and transparency in deployed technologies.

Reflecting on one of the case studies discussed in the groups, Generative AI (Gen AI) and Creative Art, points to many of the issues with AI in the present day. There are clear benefits to AI in the creative industry (Shaffi, 2023): these technologies have revolutionised art, democratising access to the software tools used to make artistic work and making it possible to produce impressive aesthetic artworks from simple text prompts. While this is a good thing, it has negative effects. It has accelerated the commodification of creative skills: as more people benefit, the cost of producing artistic illustrations is cheapened, the earnings that should go to professionals in the field are lost, and the ultra-rich AI providers earn even more from increasing usage.

There is also a lack of consent and of data owners’ rights. Training AI algorithms on billions of works scraped from the internet, copyrighted and non-copyrighted alike, is a common practice in tech innovation. This is done without the consent of the owners, a clear ethical breach of people’s rights by tech companies: it infringes on people’s fundamental rights and denies data owners the agency to agree or decline. While there are regulations against these infringements, they appear not to discourage such acts. The US fair use doctrine, one such provision, makes exceptions for the use of copyrighted works in AI development instead of providing solutions. Similarly, the EU AI Act requires transparency from providers on the training datasets used, but this is impracticable due to factors like the “low threshold of originality and absence of comprehensive rights ownership metadata”.

The use of copyrighted materials in the training of AI algorithms raises the question of whether the outputs are free of infringement. I recall that during the group discussions there was broad agreement that the outputs amount to plagiarism, since AI does not improve on the work it copies. One participant argued, however, that replicating existing work has always been part of what artists do, so creative art should not be granted some epistemic privilege when it comes to AI consumption. The counterpoint was that human art involves emotion, improvement, and even failure in ways Gen AI cannot replicate, and that automated imitation undermines the satisfaction artists derive from their work. Amidst this, some artists have taken AI providers to court, and there have been some changes in the field: systems like ChatGPT now restrict the generation of copyrighted art, demonstrating that a collective demand for accountability can go a long way.

Copyright infringement in the training of AI models remains unresolved, and no solution is in sight. The question now becomes: what kind of future do we want to build with regard to AI?

Well, an ethical future requires the inclusion and co-creation of all stakeholders and different moral perspectives. It requires framing transparency as a matter of ethical good faith on the part of providers, enabling “course-correcting.” Data owners should have actionable insight into data flows and use cases, with the ability to opt out where their rights are infringed. In addition, I think there should be incentive structures that allow proactive collaboration between AI developers and data sources to establish lawful consent pathways. Essentially, an ethical future must uphold human rights so that people can live with dignity, creativity, and self-actualisation.

 

References

Vallor, S. and Rewak, W. J. (2018). An Introduction to Data Ethics. Markkula Center for Applied Ethics, pp. 7–28.

Shaffi, S. (2023). ‘It’s the opposite of art’: why illustrators are furious about AI. The Guardian. Available at: https://www.theguardian.com/artanddesign/2023/jan/23/its-the-opposite-of-art-why-illustrators-are-furious-about-ai
