Should a 98% corporation tax be introduced for companies which primarily use AI to generate profit, or to help others generate profit? My project will investigate the implications of this policy proposal, arguing in favour of this rather radical measure.
The Economic Argument
To support the case for a 98% corporate tax on AI companies, we turn to economics. This proposal has far-reaching implications, and delving into macro- and micro-economic theories helps us understand the potential impact.
Philosophical and Ethical Underpinnings
The ethical dimension of this debate cannot be ignored. What are the moral implications of taxing AI companies at such a high rate? How does this choice reflect society’s values and priorities? I will explore these questions drawing from philosophical and ethical frameworks.
Technological and Societal Consequences
The technology behind AI has already transformed our lives. By imposing a 98% tax, what technological changes can we anticipate? How will it affect AI development, application, and the larger tech ecosystem?
Political and Policy Considerations
Shaping the debate also requires an understanding of the political landscape. I will analyse the feasibility of introducing this tax and the potential policy changes that would accompany it.
Data-Driven Insights
I will collect data on the impact of high taxation on companies and their development, as well as on the (initial) adverse effects on these companies' employees and the communities entangled with them. My data collection will be financially focused in order to appease the interests of our myopic, money-obsessed governments.
![](https://blogs.ed.ac.uk/s2618554_knowledge-integration-and-project-planning-data-and-artificial-intelligence-ethics-202/wp-content/uploads/sites/8689/2023/10/Rishi-Sunak-with-money_trans_NvBQzQNjv4BqqVzuuqpFlyLIwiB6NTmJwfSVWeZ_vEN7c6bHu2jJnT8-1.jpeg)
Mode of representation
I recently pitched a creative intervention as part of the Narratives of Digital Capitalism course. I became convinced that this kind of artistic critique can be one of the most effective ways of alerting the public to important issues: the spectacle attracts the public, facilitating the communication of an idea.
I would like to utilise this form of output, so my mode of representation will be a substantial creative output accompanied by a 4,000-word written component.
Creative intervention
My project will paint a picture of two divergent worlds…
World 1: Utopia
In this world, we imagine a future where AI companies are heavily taxed, the growth of AI is deliberately slowed down, and the technology is harnessed only for the greater good after rigorous testing. Economic systems adapt to this shift, and society embarks on a new, ethical era of AI usage.
World 2: Dystopia
Conversely, we visualise a world where AI companies continue on their current trajectory. The wealth gap widens, blue-collar jobs disappear, human health deteriorates due to excessive screen (and sofa) time, and the societal cost of diminished attention spans (due to Silicon Valley’s constant toddler-esque tantrums for our attention) becomes unbearable.
I have not yet chosen an idea for the intervention, but I know that it will a) explore my ideas above of utopia vs dystopia and b) incorporate financial data as one of its key themes.
![](https://blogs.ed.ac.uk/s2618554_knowledge-integration-and-project-planning-data-and-artificial-intelligence-ethics-202/wp-content/uploads/sites/8689/2023/10/14886308_picsart-09-27-01-45-31.jpg)
I’m considering three potential modes of output: a film to depict my imaginary worlds, a video game exploring these worlds, or a more abstract piece which will utilise data, poetry, and visual techniques to provoke comparison between each world.
══════════════════
Radical problems require radical solutions.
Curated using ChatGPT 3.5 by OpenAI. For more information visit chat.openai.com.
This is pretty interesting. Rather than creating an intervention to protect people from the AI companies, you’re pitching an intervention to protect the AI companies from what could be perceived as unethical taxation.
Very creative take on the situation!
Thank you so much for this comment Angel!! It’s provoked a lot of thought. I am advocating for the high tax rate even though I know it isn’t considered ethical in Kantian/deontological terms: it will cause widespread economic harm, i.e. unemployment and a stock market crash. But this solution deliberately takes action now in order to avoid an AI-for-profit future, which I think would cause tenfold the unemployment and economic harm (amongst other, scarier things) of the high tax rate. So my intervention can be considered ethical, but only from a consequentialist/utilitarian POV. It’s almost a trolley problem.
Anyway, prior to your comment I was oversimplifying this as ‘logical vs illogical’, ‘right vs wrong’, and I’m now seeing it as opposing solutions which represent different ethical theories, so thanks!
This is so fascinating! I instantly think of the various ways corporations may push to slice and dice such a tax based on the “percentage” of their system/platform/application that relies on AI, and the challenges involved in making such assessments (and accounting for real-time variability in such reliance).
What are your thoughts on a tax that goes to support regulatory/enforcement infrastructure around development of responsible systems, as a justification for the tax (not sure if that’s where you are headed with it, but it’s been an interesting concept)?
Thank you 🙂 Re your question, I will propose that the extra revenue is used to offset the cost of AI in some way. My initial idea was that the newly unemployed could claim financial support, whether directly through a specific ‘AI unemployment fund’ or indirectly, i.e. more cash put towards the benefits system. I’m not 100% sure yet; hopefully clarity will come with further research. I’m now realising this is a key point I should have included in the blog, so I’m grateful for your comment, thanks!
(I’ve also been thinking a little about how to define who should be taxed and who shouldn’t, and by how much: good vs bad AI companies. It’s definitely a messy issue, but I think the purpose of the use of AI is key. If the purpose is to discover new cures for diseases, then great, carry on; if the purpose is to cut down your workforce and appease your shareholders, then bye bye. The categorisation system would need to be thought about really carefully and would probably work on a case-by-case basis. The point of the tax is ultimately to discourage and reduce the use of AI for profit, so my categorisation system would be on the stricter side.)
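To make the purpose-based scheme concrete, here is a minimal toy model of how it might compute a company’s bill. Everything in it (the exempt purposes, the 25% baseline rate, the idea of apportioning profit by the share attributable to AI) is an illustrative assumption of mine, not a settled part of the proposal:

```python
# Toy model of the proposed purpose-based AI tax (illustrative only).
# The categories, rates, and apportioning rule below are assumptions
# for the sake of the sketch, not part of any real tax code.

STANDARD_RATE = 0.25   # assumed baseline corporation tax rate
AI_PROFIT_RATE = 0.98  # the proposed rate for profit-driven AI use

# Hypothetical purposes that would escape the higher rate.
EXEMPT_PURPOSES = {"medical research", "climate modelling", "accessibility"}

def corporation_tax(profit: float, ai_purpose: str, ai_profit_share: float) -> float:
    """Return tax owed, applying the 98% rate only to the share of
    profit attributable to non-exempt, profit-driven AI use."""
    if ai_purpose in EXEMPT_PURPOSES:
        return profit * STANDARD_RATE
    ai_profit = profit * ai_profit_share      # profit attributed to AI
    other_profit = profit - ai_profit         # taxed at the normal rate
    return ai_profit * AI_PROFIT_RATE + other_profit * STANDARD_RATE

# A company making £10m, 80% of it via AI used to cut its workforce:
bill = corporation_tax(10_000_000, "workforce reduction", 0.8)
```

Even this toy version shows where the case-by-case judgement would bite: someone has to decide what counts as an exempt purpose and how to measure the AI-attributable share, which is exactly the “slice and dice” problem raised in the comment above.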
Hey Alexandra, this is super interesting! It reminded me of Bill Gates discussing a ‘Robot Tax’ in 2017: https://qz.com/911968/bill-gates-the-robot-that-takes-your-job-should-pay-taxes#:~:text=%E2%80%9CYou%20ought%20to%20be%20willing,able%20to%20manage%20that%20displacement.
I really like the idea of a creative output, look forward to hearing more.
Yes exactly, from the tech optimist himself! Thanks Tom