
1.5.1 KIPP meeting 1

This week I attended the first KIPP group meeting. Questions from peers helped me identify some thoughts on my project that I had left out of my week 5 blog post. Crucially, I didn't explain that my proposal (to be in favour of a 98% corporation tax on AI companies) would include the notion that the newly generated tax revenue would be used to offset the costs of AI in some way. My initial idea was that the newly unemployed could claim financial support, whether directly, through a dedicated 'AI unemployment fund', or indirectly, through increased funding for the benefits system.

During the meeting I also received some helpful feedback from my peers, which is summarised below:

Critiques of my thesis

  • I justified my thesis by suggesting it will prove beneficial in the long term, despite having quite serious consequences in the short term. A peer responded by noting that long-term outlooks can prompt bad planning strategies. In light of this comment I plan to research the benefits and shortfalls of different planning philosophies
  • A peer noted my chosen subject was rather broad. I plan to reflect on my approach to the question, which could be e.g. through philosophical theory or the use of financial data
  • A peer suggested my thesis seemed extreme, and referenced Aristotle’s golden mean. I agreed that my policy is extreme from a standard viewpoint, but I personally believe it is justifiable when considering the magnitude of the problem. I will ensure I clarify this further in my project and examine my belief in closer detail
  • In response to questions on the ethics of a high tax rate, I clarified that I know my proposal isn't considered ethical in Kantian/deontological ethics – it will cause widespread economic harm, e.g. unemployment and a stock market crash. But this solution means choosing to deliberately take action now in order to avoid an AI-for-profit future, which I think would cause tenfold the unemployment and economic harm of the high tax rate (amongst other, scarier things). So my intervention can be considered ethical, but only from a consequentialist/utilitarian point of view. It's almost a trolley problem. I was previously oversimplifying my ideas as 'logical vs illogical', 'right vs wrong', but I now see these as opposing solutions which represent different ethical theories
  • I was asked why I propose to increase taxes only for AI companies, excluding companies which contribute to e.g. climate change. I answered that I do think applying high taxes to companies which contribute to other existential threats could be a reasonable policy proposal. I focus my efforts on AI because I believe it is the most urgent threat currently facing humanity, so measures to slow it should be prioritised above other issues. I expressed difficulty articulating my reasons for this opinion, so peers recommended The Precipice: Existential Risk and the Future of Humanity by Toby Ord, which sets out an argument in line with my own beliefs on this issue

Other feedback

  • I was under the impression that the use of data in our project was a requirement, but received feedback that this most likely isn’t the case. I will look into this before further clarifying my final project idea
  • Reading recommendation: 'Should We Have a Robot Tax?' (Forbes)

══════════════════

The meeting prompted further reflection on the role of capitalism in the dangers of AI. It seems that a key danger – AI being used by humans as a weapon/for harm – is only present due to the commodification of AI programs.

LLMs are profitable due to user demand. AI has therefore been shared by companies with the general public before its safety for public use can be guaranteed. In a non-capitalist system, interest in AI would lie not in its profit-generating capabilities but in its other, much more significant offerings. There would be no 'AI race', so AI wouldn't need to be commodified at the expense of public safety.

I may therefore slightly change my project's creative output, which will present a potential 'AI utopia' and a potential 'AI dystopia', to contrast a capitalist world with a non-capitalist/socialist/Marxist world, showing how AI is perceived, treated, and used in each economic system.

1.5.1 KIPP meeting 1 / Alexandra Brown
