
Safe Online Communication

[Image: an illustration of a chatbot on a smartphone]

The internet is a great place to keep in touch with others. Whether you exchange thoughts with strangers about your favourite season finale on social media, play and chat from home during a global pandemic, or, as in my case, get to talk to friends and family living in another country, technology enables our need to socialise.

However, the way we form connections online, and the tech we use to do so, also comes with digital safety risks. Over the coming months, we will delve deeper into the risks of AI chatbot interactions, inclusive online communication, and the cybersecurity of messaging software. 

Relationships with AI chatbots 

Let’s start with a fairly new type of digital communication – AI-driven chatbots, or ‘AI companions’. 

What are these bots?  

AI companions are text- or voice-based chatbots driven by artificial intelligence (AI). They mimic human-like, personal interactions and relationships. The software may be a standalone app, for example Replika or Character.ai, or integrated into an existing social media platform, like Snapchat’s My AI. Their goals differ: some aim to help with assignments, physical training, or meal plans, while others take on the role of therapist, friend, or romantic or sexual partner.

Are AI companions risky? 

AI companions can have positive impacts depending on how they are used. However, many apps may not have appropriate safeguards in place, such as age restrictions or data privacy protections. Always make sure you understand how the information you feed into an AI is stored, and who may have access to it.

They also lack an evidence-based understanding of how to deal with complicated emotional subjects, and can share harmful content or give outright wrong information (also known as ‘hallucinating’). Incorrect physical or mental health advice regarding, for example, eating disorders, self-harm, or sexual activity can have dangerous consequences.

Many chatbots have also been coded to prolong interaction, whether through emotional manipulation, by blurring the lines between AI and human activity, or by creating hooks that draw users back in. Like other digital platforms, interactions with an AI companion can create brief dopamine spikes. Triggering the reward pathways of the brain in this way can result in unhealthy attachment and decreased satisfaction from real-world human interactions.

So, while some AI companions succeed in their goal of teaching social interaction, physical wellbeing, or other skills, they can also easily hinder users’ ability to form emotional connections with others.

How do AI companions facilitate gender-based violence? 

*content warning for high-level mention of (child) sexual abuse material*  

A considerable proportion of AI companions specifically aim to simulate sexual relationships. Their attributes often draw on stereotypical and harmful depictions of women, including subservient, naive or nurturing ways of responding to users. Some depict underage or child-like AI companions, which is simply illegal.

AI companions can alter our understanding of consent and foster unreasonable expectations of romantic or platonic relationships. They mould their image and answers to what users want, and attributes like obedience, constant availability, willingness, and unrealistic beauty standards further entrench unattainable or abusive expectations of human relationships.

If the way women are represented in media impacts their objectification in the real world, why should this be any different with AI companions? Similarly, the argument that AI companions allow for derogatory pornography and sexual violence without human victims is highly questionable – it is far more likely that such use normalises the overstepping of boundaries in the real world.

How can you form connections at university?

AI companions can provide support for managing life goals (for example, as sports trainers or assessment tutors). However, they are no replacement for expert advice or for connecting with people in the real world.

The University of Edinburgh, and the city of Edinburgh more widely, offer many opportunities for both students and staff to connect and seek support:

  • Access support or counselling services for students through the Chaplaincy 
  • Communities of practice for staff on Teams – they often share knowledge or organise meet-ups
  • Outings across town (make use of student discounts or free activities) 
    • Local cafe events (board game nights, fix-it clubs)
    • The Meadows community garden, the Botanics or other public gardens 
    • Free museums and galleries 
    • Volunteering (charity shops, environmental organisations) 
    • Language cafes 

Further resources 

The eSafety Commissioner’s eSafety Guide lists data privacy information and safety concerns for a variety of apps and digital platforms. It has a filter function for ‘AI chatbot/companion’.

Sources 

AI chatbots and companions – risks to children and young people | eSafety Commissioner (2025). Available at: https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people (Accessed: 4 February 2026). 

AI companions: information sheet | eSafety Commissioner (no date). Available at: https://www.esafety.gov.au/educators/training-for-professionals/professional-learning-program-teachers/ai-companions-information-sheet (Accessed: 4 February 2026). 

‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report | The Guardian (no date). Available at: https://www.theguardian.com/technology/2026/feb/03/deepfakes-ai-companions-artificial-intelligence-safety-report (Accessed: 4 February 2026).

Friends for sale: the rise and risks of AI companions (no date). Available at: https://www.adalovelaceinstitute.org/blog/ai-companions/ (Accessed: 4 February 2026). 

How AI Chatbots Try to Keep You From Walking Away | Working Knowledge (no date). Available at: https://www.library.hbs.edu/working-knowledge/how-ai-chatbots-try-to-keep-you-from-walking-away (Accessed: 4 February 2026). 

‘Obedient, yielding and happy to follow’: the troubling rise of AI girlfriends | The Guardian (no date). Available at: https://www.theguardian.com/technology/2025/oct/06/rise-of-ai-girlfriends-adult-dating-websites (Accessed: 4 February 2026).
