
Safe space or digital cage? The new rules of researching lived experience online

Short summary: The implementation of the Online Safety Act has restricted access not only to lived experience community/support spaces, but also to the types of knowledge about suicide that are allowed and accessible, and thereby reproducible and sayable. This may also have an impact on suicide research: researchers may have to choose to provide their identity to unreliable third parties to carry out their work, which in turn will have repercussions for the types of knowledge about suicide that they produce and circulate.

On 25 July 2025, a user on the website 4chan posted a link to a database containing over 70,000 verification photographs and government-issued identification documents. These documents belonged to users of US-based Tea, a ‘dating safety app’ designed to allow women to share reviews of men they had dated, flag potentially concerning behaviours, and identify whether they had criminal records. Tea’s privacy policy had stated that photographs and IDs would be deleted immediately following authentication. Within days of the information being made public on 4chan, various apps and websites were created; one mapped the geographic locations of Tea users, while another platform ranked the appearance of women using their verification images. 

On the same day that the link was posted on 4chan, the UK’s Online Safety Act (OSA) became enforceable. This Act introduced age verification requirements for individuals seeking access to adult content online, particularly pornography. The legislation also extends to online content related to suicide and self-harm, requiring platforms such as Reddit and X to implement age verification mechanisms - often involving the submission of photographs or government ID, such as passports and driving licences. Platforms often state that verification data will be deleted after a short retention period - for example, Reddit relies on a US-based verification provider that promises to discard data within a week. However, the Tea incident raised concerns about the reliability and safety of third-party age verification systems, which have already been shown to exhibit racial and gender biases. These concerns were heightened after a data breach affecting Discord on October 9th, in which government IDs of approximately 70,000 users were exposed. These incidents highlight the risks involved in normalising identity verification as a prerequisite for accessing sensitive content online. 

These issues are also particularly important for users participating in communities that discuss suicide and self-harm. Ofcom, the UK’s communications regulator, has stated that the relevant provisions of the OSA are intended to restrict access to content that “encourages, promotes, or provides instructions” for suicide and self-harm. However, it remains unclear how personal narratives of self-harm or suicidal ideation are addressed under this framework - particularly if the distinction between “harmful” and “helpful” content is constructed by regulatory bodies with limited or no input from those with lived experience. Third sector organisations such as Samaritans have raised similar concerns about the lack of inclusive engagement with lived experience during the consultation process for the OSA. As Marsh, Winter and Marzano (2022) have argued, suicide prevention initiatives have frequently conflated “pro-choice” suicide forums with content that actively promotes suicide. In practice, these spaces often function as communities of support, particularly for users who have found mainstream healthcare services or charity helplines inaccessible or ineffective. Enforcing a rigid, pre-determined binary between “harmful” and “helpful” content (Wadsworth, 2025), and restricting access to one side of that divide at the expense of user privacy, risks silencing perspectives that challenge mainstream suicide prevention narratives and denying access to peer support.

It is difficult to assess the extent to which lived experience is amplified or obscured by age-restriction. For example, Reddit may allow users to view thread titles, but requires an age-verified account to access full discussions, whereas Discord prevents users from seeing age-restricted communities unless verification has already been completed. Cursory searches I conducted on Google brought up Reddit posts that illustrated inconsistencies: posts asking what to expect when calling a suicide prevention hotline were accessible, as were discussions of suicide in fiction, or philosophical debates about the ethics of suicide. However, when I attempted to view comments or posts with titles that challenge dominant prevention narratives without an age-verified Reddit account, these were all inaccessible. Inaccessible posts included critiques of suicide prevention as virtue signalling, recommendations of texts in critical suicidology, and personal accounts of negative experiences with suicide helplines. As it has only been a few months since the implementation of the OSA, platforms may evolve their policies over time. It is also possible that currently accessible communities/posts/comments may later become restricted, and vice versa. 

Activists frequently rely on blogs, zines, artwork, short-form media, and other non-traditional forms of communication, such as forums and discussion threads, to articulate experiences that are left out of mainstream suicide prevention discourse. By placing restrictions both on the content of these discussions and on who can participate in these conversations, these new regulations risk ensuring that only ‘approved’ perspectives about suicide and self-harm are produced and circulated, both within academic literature and on social media. Although research has shown that age verification systems can be and are bypassed (Woodley et al. 2025), academic researchers must work within legal, institutional, and ethical frameworks.

For example, the enforcement of the OSA was an area of concern when conducting the campaign analysis for Work Package 3 on the ‘Discovering Liveability’ project. The aim of the campaign analysis is to address key areas of activism on suicide by collecting campaign materials from 20 UK-based groups who are organising or campaigning around suicide prevention and/or liveability. Due to privacy concerns about third-party verification of personal ID, the research team decided not to submit team members’ ID to social media sites for age verification. Although efforts were made to identify both prominent and less visible campaigns to capture a range of views for this part of the project, restrictions may have resulted in the omission of campaigns or campaign materials that are age-restricted. This example indicates that with the enforcement of the OSA, researchers may be left with a limited set of options: either submit personal identification for professional purposes, trusting that third-party verification providers will handle this data responsibly, or avoid age verification altogether and risk losing access to valuable content about lived experience that falls outside what regulators consider acceptable.

  • By Dr Paro Ramesh
