
Tech for life? AI, Suicide Research, and the Right to a Liveable Future

Artificial Intelligence (AI) is a catch-all term for technologies that process information and appear, at least on the surface, to mimic human thought. In suicide research, AI is increasingly being explored as a way to analyse large datasets, especially to identify people considered “at risk” of suicide (Bernert et al., 2020). Much of the existing literature praises the efficiency of AI in handling big data, with particular emphasis on its effectiveness in predicting suicide attempts and risk (Abdelmoteleb et al., 2025). Discussions of the challenges of using AI usually centre on data privacy, security, and algorithmic bias. But where does the data used by AI come from? Who is represented in it? And most importantly, what does it cost to use AI on such a large scale?

Reviews of the literature show that most studies examining the role of AI in suicide prevention are conducted in high-income countries (HICs) (Khan & Javed, 2022). This follows a broader pattern in suicide research, where low- and middle-income countries (LMICs) are usually under-represented (Itua et al., 2025). Even within HICs, the data that feed AI systems often come from more socioeconomically privileged populations, resulting in the promotion of AI tools that primarily serve groups whose clinical data is more readily available. When AI tools are shaped by data that excludes large parts of the global population, whole communities are left out of potentially “life-saving” research, raising questions about whose lives are considered saveable. This also leads to another ethical issue: who benefits from this research, and who is negatively impacted by it? The answers to these questions - the environmental and social costs of using AI - remain under-explored in suicide research.

AI systems are energy-intensive. Training these models and storing their data requires vast computing power, consuming large amounts of electricity and water and generating e-waste, all of which add up to a significant carbon footprint (Frimpong, 2025). Many data centres also rely on rare minerals mined under exploitative conditions, typically in LMICs. The irony here is hard to miss: while AI is often pitched as a tool to improve liveability for some, its environmental consequences disproportionately harm the very communities already facing economic and climate-related hardship.

Marginalised groups are often left out of AI’s benefits and the conversations around it, even though they are just as impacted - if not more so - by both suicide and environmental degradation. Take, for instance, a rural, historically Black community in Memphis, Tennessee, where residents raised the alarm about water pollution linked to the construction of a new AI data centre (Okoi, 2025). But if you search online for the keywords “AI” and “water pollution,” you’ll most likely find optimistic articles about AI being used to monitor pollution, with very few stories about how AI infrastructure itself might be causing harm.

Then there’s the human labour behind the machines. AI relies on low-paid workers - again, often based in LMICs - for essential tasks such as content moderation and data labelling (Regilme, 2024). These workers frequently endure poor conditions, limited labour protections, and little recognition for their contributions. Thus, the profits and advancements of AI tend to stay in HICs, while marginalised communities within those countries and in LMICs continue to bear the costs of powering AI. So, while AI might hold promise in suicide research, we need to pause and ask a harder question: can we really claim that AI is a potentially life-saving tool if the same technology is creating unliveable conditions for so many?

AI is not a neutral tool - it reflects the social and political structures of the world around us. If we want suicide research to truly help people, it needs to be socially just. That means going beyond merely improving algorithms: acknowledging the broader costs of using AI, and committing to research practices that don’t ignore the negative social, economic, and environmental effects of promoting its use. We can no longer afford to treat either the climate crisis or suicide as a “data problem” to be solved by more efficient algorithms without considering the human and social costs of AI - and research must reflect that reality.

By Dr Paro Ramesh

References

Abdelmoteleb, S., Ghallab, M. and IsHak, W.W., 2025. Evaluating the ability of artificial intelligence to predict suicide: A systematic review of reviews. Journal of Affective Disorders. https://doi.org/10.1016/j.jad.2025.04.078

Adom, P.K., 2024. The socioeconomic impact of climate change in developing countries over the next decades: A literature survey. Heliyon, 10(15). https://doi.org/10.1016/j.heliyon.2024.e35134

Bernert, R.A., Hilberg, A.M., Melia, R., Kim, J.P., Shah, N.H. and Abnousi, F., 2020. Artificial intelligence and suicide prevention: a systematic review of machine learning investigations. International Journal of Environmental Research and Public Health, 17(16), p.5929. https://doi.org/10.3390/ijerph17165929

Frimpong, V., 2025. The Sustainability Paradox of Artificial Intelligence: How AI Both Saves and Challenges Resource Management Efforts. Available at SSRN 5176930. https://dx.doi.org/10.2139/ssrn.5176930

Itua, I., Shah, K., Galway, P., Chaudhry, F., Georgiadi, T., Rastogi, J., Naleer, S. and Knipe, D., 2025. Are we using the right evidence to inform suicide prevention in low- and middle-income countries? An umbrella review. Archives of Suicide Research, 29(1), pp.290-308. https://doi.org/10.1080/13811118.2024.2322144

Khan, N.Z. and Javed, M.A., 2022. Use of artificial intelligence-based strategies for assessing suicidal behavior and mental illness: A literature review. Cureus, 14(7). https://doi.org/10.7759/cureus.27225

Levy, B.S. and Patz, J.A., 2015. Climate change, human rights, and social justice. Annals of Global Health, 81(3), pp.310-322. https://doi.org/10.1016/j.aogh.2015.08.008

Okoi, O., 2025. Artificial Intelligence, the Environment and Resource Conflict: Emerging Challenges in Global Governance. Balsillie Papers, 7(3). https://balsilliepapers.ca/bsia-paper/artificial-intelligence-the-environment-and-resource-conflict-emerging-challenges-in-global-governance/

Regilme, S.S.F., 2024. Artificial intelligence colonialism: Environmental damage, labor exploitation, and human rights crises in the Global South. SAIS Review of International Affairs, 44(2), pp.75-92. https://dx.doi.org/10.1353/sais.2024.a950958
