
WP4 of Discovering Liveability focuses on the actions of the state – how government action around suicide prevention, and in areas such as welfare policy, housing, and the criminal justice and immigration systems, makes lives more or less liveable. For the past few months, I have been focusing on the suicide prevention policies of Northern Ireland, Wales, Scotland and England, and considering the politics of these policies.

Certain types of knowledge have traditionally been privileged in suicide prevention – largely epidemiological data and quantitative research from the psychological, psychiatric and clinical sciences. This, in turn, shapes the kinds of actions suicide prevention policies decide to take. Examining the evidence used, how it functions, and how other kinds of evidence (such as qualitative co-production and engagement with lived/living experience) might change the resulting actions is key to the Discovering Liveability project. In this blog I focus on one small example of this work: a single statistic used within the English policy.

 

“A 35% Fall in Suicides in Inpatient Settings Between 2010 and 2020” 

Throughout the English suicide prevention policy, one statistic keeps being mentioned: that there was a 35% fall in suicides in inpatient settings between 2010 and 2020. The Executive Summary begins:

We have seen one of the lowest ever suicide rates (in 2017) and collective efforts to improve patient safety led to a 35% fall in suicides in mental health inpatient settings in England between 2010 and 2020. (DHSC, 2023: 6) 

This “35% fall” statistic is also mentioned at the beginning of the ‘Introduction’, as well as at the start of the section that examines mental health patients as an ‘at-risk’ group. It is here, in the section on patients, that it is most fully explained, as resulting from “safer physical environments” and “staff vigilance”:

Between 2010 and 2020, there was a 35% fall in the number of suicides in inpatient settings in England when taking into account the number of admissions. This fall is likely due to safer physical environments (including the removal of ligature points), staff vigilance, and wider improvements in mental health inpatient settings. (DHSC, 2023: 29) 

The figure is once again mentioned in the ‘means and methods’ section: 

Successful intervention is possible […] there have been significant steps taken to improve the physical safety of specific settings, such as removing ligature points from the wards of inpatient mental health facilities over recent years. This has contributed to a 35% fall in the number of inpatient suicides in England between 2010 and 2020. (DHSC, 2023: 57) 

By repeating this statistic, the policy links success at preventing inpatient suicide to physical measures of intervention (ligature points and staff vigilance). This highlights how certain types of (often quantitative) knowledge are used to support and justify certain types of suicide prevention intervention.

Clicking through to the linked source of this statistic, we see it comes from the Annual Report 2023 of the National Confidential Inquiry into Suicide and Safety in Mental Health (NCISH). This report also names the removal of ligature points as the key factor in the reduction, just as the English policy does – specifically citing the decrease in deaths by hanging (2023: 21).

You might ask, then: if there is data to support the English policy’s use of the “35% fall” figure to justify certain types of intervention, why am I writing a blog questioning it?

 

Is the Data as Clear as it First Appears? 

The NCISH report is clear that there has been success in reducing inpatient suicides. Importantly, however, the paragraph detailing the fall comes with further context: “the fall in the number of in-patient suicides seems to have slowed in recent years” (2023: 21). Indeed, looking at NCISH’s report, between 2017 and 2020 there appears to have been no fall at all; and when I looked into the updated numbers, including the most recent data for 2022, there has even been a slight rise in the rate of inpatient suicides per 10,000 admissions (NCISH, 2025).
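A brief note on the denominator, since “taking into account the number of admissions” is doing real work here. NCISH reports inpatient suicide as a rate rather than a raw count – per 10,000 admissions – which, as I read their measure (my paraphrase, not an official formula), amounts to:

\[
\text{rate per 10,000 admissions} = \frac{\text{inpatient suicides in a year}}{\text{admissions in that year}} \times 10{,}000
\]

This matters because the raw count of deaths can fall simply because admissions have fallen; it is the rate that indicates whether inpatient settings have actually become safer, and it is the rate that shows the slight rise by 2022.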

A further qualification is that the rate of patient suicide overall (not just inpatients, but all those recently in contact with mental health services) has remained consistent throughout the 2010 to 2022 period. The largest proportion of patients dying by suicide were individuals considered to be at “low/no” risk, and inpatient suicide makes up only 6% of all patient suicide (NCISH, 2023; 2025).

It appears, then, that the 35% figure used in the English suicide prevention policy, and its explanation of the reduction as resulting from “safer physical environments” and “staff vigilance”, needs significant qualification. Any simple causal linkage is called into question when “staff vigilance” is highlighted, given that most patient suicides occurred in people assessed as “low/no” risk; and, if a focus on ligature points was once effective, any such effect appears to have stalled since 2017.

This is crucial because, in my analysis of the current English suicide prevention policy (a full report is coming soon!), I found that its actions still prioritise physical interventions such as means restriction, hotspots, and training staff in risk assessment and signposting. This continues a pattern identified in earlier UK suicide prevention policies by some of my colleagues (Marzetti et al., 2022) and elsewhere (Pirkis et al., 2023; Stoor et al., 2021; East et al., 2021; Wisbech et al., 2025). This continued focus on means restriction and training in spotting the signs of suicide is, I think, significantly challenged by the broader context of the “35% reduction” statistic. If suicide rates among people in inpatient settings are stagnating, and potentially even increasing, and if the vast majority of patient suicides occur not in inpatient settings but among those assessed as low risk, this points to the need for a change in strategy. This narrow focus, justified by the use of this statistic, needs to be expanded to include priorities often missing from suicide prevention policy, such as lived/living experience inclusion and the social determinants of suicide.

 

Co-Production, Lived/Living Experience, and Liveability – Change is Happening! 

Some change is already occurring across the broader suicide prevention policy landscape. The suicide prevention policies of Scotland and Wales have begun to shift towards the language of liveability, highlighting social determinants and lived experience inclusion as priorities – a significant shift in focus. Additionally, NHS England guidelines on mental health inpatient care have begun to propose moving away from risk assessment.

In both NHS England’s Culture of Care Standards for Mental Health Inpatient Services (2024) and Staying Safe from Suicide: Best Practice Guidance for Safety Assessment, Formulation and Management (2025), the advice is not to use risk assessment tools and scales to predict future suicide or to determine who should be offered treatment or discharged. Guidance instead suggests moving to a “relational”, “psycho-social assessment” that focuses on “the person’s needs”.

I cite these documents because, whilst the “35% fall” figure is based in empirical data, the bigger takeaway of both the inpatient and all-patient statistics seems to be the need to move away from the narrow risk-based, physical means restriction techniques of previous policies. Whilst ligature points and staff vigilance are important issues, past and contemporary policies’ prioritisation of them crowds out broader questions of how policies and service provision often make lives unliveable, and what governments might do to make lives more liveable in the first place.

  •  By Tom Wadsworth (Research Fellow)

References

DHSC, 2023, Suicide Prevention in England: 5-Year Cross-Sector Strategy, Available at: Suicide prevention in England: 5-year cross-sector strategy - GOV.UK

East, L., Dorozenko, K.P., Martin, R., 2021, The construction of people in suicide prevention documents, Death Studies, Vol. 45, https://doi.org/10.1080/07481187.2019.1626938 

Marzetti, H., Oaten, A., Chandler, A., Jordan, A., 2022, Self-Inflicted, Deliberate, Death Intentioned: A Critical Policy Analysis of UK Suicide Prevention Policies 2009-2019, Journal of Public Mental Health, Vol. 21, Issue 1

NCISH, 2023, Annual Report 2023: UK Patient and General Population Data 2010-2020, Available at: NCISH | Annual report 2023: UK patient and general population data 2010-2020 

NCISH, 2025, Annual Report 2025: UK Patient and General Population Data 2012-2022, Available at: NCISH | Annual report 2025: UK patient and general population data, 2012-2022 

NHS England, 2024, Culture of Care Standards for Mental Health Inpatient Services, Available at: NHS England » Culture of care standards for mental health inpatient services 

NHS England, 2025, Staying Safe from Suicide: Best Practice Guidance for Safety Assessment, Formulation and Management, Available at: NHS England » Staying safe from suicide: Best practice guidance for safety assessment, formulation and management 

Pirkis, J., Gunnell, D., Hawton, K., Hetrick, S., Niederkrotenthaler, T., Sinyor, M., Yip, P.S.F., Robinson, J., 2023, A Public Health, Whole-of-Government Approach to National Suicide Prevention Strategies, Crisis, Vol. 44, https://doi.org/10.1027/0227-5910/a000902 

Stoor, J.P.A., Eriksen, H.A., Silviken, A.C., 2021, Mapping suicide prevention initiatives targeting Indigenous Sámi in Nordic countries, BMC Public Health, Vol. 21, https://doi.org/10.1186/s12889-021-12111-x 

Wisbech, J., Jønsson, A.B.R., Holen, M., 2025, Threatened by an individual double-edged risk? Representations of suicidal behavior and people who attempt suicide in prevention policies in Denmark. A poststructural policy analysis, Health, https://doi.org/10.1177/13634593251389636 

Artificial Intelligence (AI) is a catch-all term for technologies that process information and appear, at least on the surface, to mimic human thought. In suicide research, AI is increasingly being explored as a way to analyse large datasets, especially to identify people considered “at risk” of suicide (Bernert et al., 2020). Much of the existing literature praises the efficiency of AI in handling big data, with particular emphasis on its effectiveness in predicting suicide attempts and risk (Abdelmoteleb et al., 2025). Discussions of the challenges of using AI usually centre on data privacy, security, and algorithmic bias. But where does the data used by AI come from? Who is represented in it? And, most importantly, what does it cost to use AI on such a large scale?

Reviews of the literature show that most studies examining the role of AI in suicide prevention are conducted in high-income countries (HICs) (Khan & Javed, 2022). This follows a broader pattern in suicide research, where low- and lower-middle-income countries (LMICs) are usually under-represented (Itua et al., 2025). Even within HICs, the data that feed AI systems often come from more socioeconomically privileged populations, resulting in AI tools that primarily serve groups whose clinical data are more readily attainable. When AI tools are shaped by data that exclude large parts of the global population, whole communities are left out of potentially “life-saving” research, raising questions about whose lives are considered saveable. This leads to a further ethical issue: who benefits from this research, and who is negatively impacted by it? The answers to these questions - the environmental and social costs of using AI - remain under-explored in suicide research.
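To make the representation problem concrete, here is a minimal, purely illustrative sketch in Python - synthetic data and invented groups, not any real clinical dataset, feature set, or deployed risk tool - of the kind of subgroup audit that exposes this failure: a classifier trained on data dominated by one population can score well overall while performing close to chance for the under-represented group.

```python
# Hypothetical illustration only: synthetic data, no real clinical features.
# Shows how a model trained on skewed data can look accurate for the
# majority group while failing the under-represented one.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic "features"; the feature-outcome relationship differs by group.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 3))
    y = (X[:, 0] - shift + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Training data: 95% from group A, 5% from the differently-distributed group B.
Xa, ya = make_group(1900, shift=0.0)
Xb, yb = make_group(100, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# The audit: evaluate on fresh samples from each group *separately*.
for name, shift in [("group A (well represented)", 0.0),
                    ("group B (under-represented)", 2.0)]:
    X_test, y_test = make_group(1000, shift)
    print(f"{name}: accuracy = {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

A single overall accuracy figure would average these two numbers and hide the disparity - which is exactly why per-group evaluation, and more representative data, matter.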

AI systems are energy-intensive. Training these models and storing their data require vast computing power, which results in heavy use of electricity and water and the generation of e-waste, all of which contribute to a significant carbon footprint (Frimpong, 2025). Many data centres also rely on rare minerals mined under exploitative conditions, typically in LMICs. The irony here is hard to miss: while AI is often pitched as a tool to improve liveability for some, its environmental consequences disproportionately harm the very communities already facing economic and climate-related hardship.

Marginalised groups are often left out of AI’s benefits and of the conversations around it, even though they are just as impacted - if not more so - by both suicide and environmental degradation. Take, for instance, a rural, historically Black community in Memphis, Tennessee, where residents raised the alarm about water pollution linked to the building of a new AI data centre (Okoi, 2025). Yet if you search online for the keywords “AI” and “water pollution”, you’ll most likely find optimistic articles about AI being used to monitor pollution, with very few stories about how AI infrastructure itself might be causing harm.

Then there’s the human labour behind the machines. AI relies on low-paid workers - again, often based in LMICs - for essential tasks such as content moderation and data labelling (Regilme, 2024). These workers frequently work in poor conditions, with limited labour protections and little recognition for their contributions. Thus, the profits and advancements of AI tend to stay in HICs, while marginalised communities within those countries, and in LMICs, continue to bear the costs of powering AI. So, while AI might hold promise for suicide research, we need to pause and ask a harder question: can we really claim that AI is a potentially life-saving tool if the same technology is creating unliveable conditions for so many?

AI is not a neutral tool - it reflects the social and political structures of the world around us. If we want suicide research to truly help people, it needs to be socially just. That means going beyond improving algorithms: acknowledging the broader costs of using AI, and committing to research practices that don’t ignore the negative social, economic, and environmental effects of promoting its use. We can no longer afford to treat either the climate crisis or suicide as a “data problem” to be solved by more efficient algorithms, without considering the human and social costs of AI - and research must reflect that reality.

  • By Dr Paro Ramesh

References

Abdelmoteleb, S., Ghallab, M. and IsHak, W.W., 2025. Evaluating the ability of artificial intelligence to predict suicide: A systematic review of reviews. Journal of Affective Disorders. https://doi.org/10.1016/j.jad.2025.04.078

Adom, P.K., 2024. The socioeconomic impact of climate change in developing countries over the next decades: A literature survey. Heliyon, 10(15). https://doi.org/10.1016/j.heliyon.2024.e35134

Bernert, R.A., Hilberg, A.M., Melia, R., Kim, J.P., Shah, N.H. and Abnousi, F., 2020. Artificial intelligence and suicide prevention: a systematic review of machine learning investigations. International journal of environmental research and public health, 17(16), p.5929. https://doi.org/10.3390/ijerph17165929

Frimpong, V., 2025. The Sustainability Paradox of Artificial Intelligence: How AI Both Saves and Challenges Resource Management Efforts. Available at SSRN 5176930. https://dx.doi.org/10.2139/ssrn.5176930

Itua, I., Shah, K., Galway, P., Chaudhry, F., Georgiadi, T., Rastogi, J., Naleer, S. and Knipe, D., 2025. Are we using the right evidence to inform suicide prevention in low-and middle-income countries? An umbrella review. Archives of suicide research, 29(1), pp.290-308. https://doi.org/10.1080/13811118.2024.2322144

Khan, N.Z. and Javed, M.A., 2022. Use of artificial intelligence-based strategies for assessing suicidal behavior and mental illness: A literature review. Cureus, 14(7). https://doi.org/10.7759/cureus.27225

Levy, B.S. and Patz, J.A., 2015. Climate change, human rights, and social justice. Annals of global health, 81(3), pp.310-322. https://doi.org/10.1016/j.aogh.2015.08.008

Okoi, O., 2025. Artificial Intelligence, the Environment and Resource Conflict: Emerging Challenges in Global Governance. Balsillie Papers, 7(3). https://balsilliepapers.ca/bsia-paper/artificial-intelligence-the-environment-and-resource-conflict-emerging-challenges-in-global-governance/

Regilme, S.S.F., 2024. Artificial intelligence colonialism: Environmental damage, labor exploitation, and human rights crises in the Global South. SAIS Review of International Affairs, 44(2), pp.75-92. https://dx.doi.org/10.1353/sais.2024.a950958
