Any views expressed within media held on this service are those of the contributors, should not be taken as approved or endorsed by the University, and do not necessarily reflect the views of the University in respect of any particular issue.

Personal Blog: Coding the Distance Sensors

This week, I focused on setting up a distance sensor. We envisioned this as the primary source of user input for our installation. The idea is to place sensors throughout the room and modulate the audio and visuals based on where the user is. All the user needs to do is move through the space, and their surroundings will noticeably shift in response.

To accomplish this, we’re using HC-SR04 ultrasonic distance sensors, which have four pins: GND, 5V, Echo, and Trig. When the Trig pin is pulsed HIGH for at least 10 microseconds, the sensor emits a burst of ultrasonic sound (around 40 kHz, well above human hearing). If there’s an object in front of the sensor, the sound reflects back; the Echo pin goes HIGH while the sensor waits for the reflection and drops back LOW when the echo returns. The length of that HIGH pulse is the time it took for the sound to travel to the object and back—which we can then convert into physical distance.

Wiring the Sensor

I followed Joe Hathaway’s guide for wiring and programming the sensors. I used female-to-male jumper wires to connect the SR04 to the Arduino Uno as follows:

Wiring Diagram
Joe Hathaway, Edinburgh College of Art, 2023

Coding the Sensors

First, I defined the trigger and echo pins in my Arduino sketch:

int trigPin = 3;
int echoPin = 2;

void setup() {
  Serial.begin(9600);
  pinMode(trigPin, OUTPUT); 
  pinMode(echoPin, INPUT);  
}

In the main loop, I began by clearing the trigger pin (setting it to LOW), then pausing for 2 microseconds to stabilize the signal:

digitalWrite(trigPin, LOW);
delayMicroseconds(2);

Then I triggered the ultrasonic burst by setting the pin HIGH for 10 microseconds:

digitalWrite(trigPin, HIGH);
delayMicroseconds(10);
digitalWrite(trigPin, LOW);

To measure how long it took for the sound to return, I used the pulseIn() function:

float duration = pulseIn(echoPin, HIGH);

This gives the round-trip travel time in microseconds. To convert that into distance:

  • Sound travels at approximately 0.034 cm per microsecond, or 0.0135 inches per microsecond.
  • Since the sound travels to the object and back, we divide by 2.

So the distance calculations look like this:

float cm = (duration * 0.034) / 2.0;
float inches = (duration * 0.0135) / 2.0;
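As a quick sanity check on the arithmetic (my own addition, not part of the original sketch), the conversion can be wrapped in small helpers and tested off the board: a 500 µs round trip should come out to (500 × 0.034) / 2 = 8.5 cm.

```cpp
// Same conversion as the sketch: sound travels ~0.034 cm (~0.0135 in)
// per microsecond, and the measured pulse covers the distance twice.
float echoToCm(float durationUs) {
    return (durationUs * 0.034f) / 2.0f;
}

float echoToInches(float durationUs) {
    return (durationUs * 0.0135f) / 2.0f;
}
```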

Finally, I printed the results to the serial monitor:

Serial.print("Distance in cm = ");
Serial.print(cm);
Serial.print(", inches = ");
Serial.println(inches);

Observations and Next Steps

The sensor was pretty accurate at short distances—when I moved my hand back and forth in front of it, the values responded clearly and consistently.

At longer distances, the values became more erratic, which aligns with what Joe notes in the documentation: ultrasonic sensors can be noisy, especially over larger ranges. To improve stability, I plan to implement some smoothing—possibly by averaging several readings—and maybe add a max range limit.
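The smoothing I have in mind is a simple moving average over the last few readings, plus a cut-off that discards anything beyond a maximum range. Here is a minimal sketch of that logic, written as plain C++ rather than a full Arduino sketch so it can be tested off the board (the 400 cm cap in the usage example is my assumption based on the HC-SR04’s rated range, not something we’ve tuned yet):

```cpp
#include <cstddef>

// Moving-average smoothing with a max-range cut-off. Readings beyond
// maxCm (or non-positive ones) are treated as noise and skipped;
// returns -1.0f if no reading in the window was usable.
float smoothedDistance(const float* readings, std::size_t count, float maxCm) {
    float sum = 0.0f;
    std::size_t used = 0;
    for (std::size_t i = 0; i < count; ++i) {
        if (readings[i] > 0.0f && readings[i] <= maxCm) {
            sum += readings[i];
            ++used;
        }
    }
    return used > 0 ? sum / static_cast<float>(used) : -1.0f;
}
```

On the Arduino, each new pulseIn() reading would go into a small buffer, and something like smoothedDistance(buffer, 5, 400.0f) would replace the raw value before it’s printed to serial.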

Just like in my last blog (where I used a button input), I’ll bring this sensor data into TouchDesigner using a Serial DAT. I’m testing this with my group soon, and we’ll explore how to map the sensor input to real-time audio changes.

Meeting and Recording on 25 FEB

On the sound design side, Lidia and I confirmed the audio materials for our first sound design pass today. We borrowed several different microphones from the music store to record ambient sound and sound effects, which we included in the first stage of the sound design. During the recording process, Kyra helped me capture a large amount of sound material at Alison House, and Leo gave a lot of useful and valuable advice beforehand, which made the recording process much easier. Lidia also worked with Leo in the afternoon to study sensor and technology issues, and we took her advice on sound as well.

Visual Research and Inspiration: Visualization Mood boards

In conceptualizing this installation, we developed storyboards to explore how visual elements can convey the five stages of grief. Our focus was on creating immersive, evocative visual experiences that capture the essence of each emotional stage by curating images, colour palettes, textures, and abstract representations.

For each stage of grief – denial, anger, bargaining, depression, and acceptance – we created individual mood boards. These compilations serve as visual anchors going forward, ensuring a cohesive yet distinct representation of each emotional phase. The mood boards incorporate a range of visual elements, including:

  • Colour schemes that reflect the emotional tone of each stage
  • Textures and patterns that evoke specific sensations or feelings
  • Abstract and representational imagery that symbolizes key concepts

(https://miro.com/app/board/uXjVLixa9bM=/)

1. Denial: A thin veil shielding from harsh reality.

Colour scheme: Muted greys and whites for numbness and disbelief.

Textures: Fabric-like patterns, semi-transparent cloth or veil texture.

Imagery: A scene viewed through a textured veil, blurred shapes suggesting an obscured view, or someone partially hidden behind a cloth. The veil could symbolize a ‘preferable reality’.

image sources:
1.https://pin.it/6tKUtohzu
2.https://pin.it/7bgzGUoXV
3.https://pin.it/5sVXEn7fF
4.https://pin.it/4Ng3FQue6
5.https://pin.it/5c1swarlw
6.https://pin.it/2908WJ3Ln

 

2. Anger

Colour Scheme: Vibrant reds, oranges, and deep browns; a sense of burning and intensity.

Textures: Turbulent smoke, distorted glass or warped metal, creating a sense of chaos.

Imagery: Swirling smoke obscuring objects, jagged edges piercing the air, or distorted views through broken mirrors, symbolizing frustration and rage.

image sources:
1.https://pin.it/2wanXLXzE
2.https://pin.it/6KplPdwCK
3.https://pin.it/4bEmssVxS
4.https://pin.it/5pPvOTJjV
5.https://pin.it/Jh4HzCkEQ
6.https://pin.it/Krae5xifo

 

3. Bargaining: A futile grasp, everything slips away.

Colour Scheme: Pale yellows and soft blues, symbolizing fleeting hope and fragility.

Textures: Flowing, liquid-like patterns; smooth but uncontrollable surfaces.

Imagery: Liquid dripping or running through open hands, symbolic of time or opportunities slipping away, reinforcing the sense of helplessness and loss of control.

image sources:
1.https://pin.it/1wfk5fLBu
2.https://pin.it/2zZSOpM6i
3.https://pin.it/2wanXLXzE
4.https://pin.it/7gBz254kQ
5.https://pin.it/28BFFqNOK
6.https://pin.it/1i5oIvl9a

4. Depression: A spiral of thoughts bending into unbearable shapes.

Colour Scheme: Primarily blacks, dark grays, and deep blues, with minimal light and desaturated colors, heavy shadows.

Textures: Distorted gouache texture, thick and uneven layers, rough and clotted surfaces, symbolizing emotional stagnation.

Imagery: Abstract shapes submerged in darkness, distorted figures struggling against the weight in heavy, dark pigments, reflecting despair and hopelessness.

 

image sources:
1.https://pin.it/1o4N3Dfh1
2.https://pin.it/7uljdZqUp
3.https://pin.it/7my5pn7jV
5.https://pin.it/6a8p2261a
6.https://pin.it/4WIGVj0Sp
7.https://pin.it/5eUzBKMc1

5. Acceptance: From shadow to light

Colour Scheme: Soft purples transitioning to lighter hues, symbolizing transformation.

Textures: Smooth gradients, gentle curves.

Imagery: Sunrise, open spaces, balanced compositions.

 

image sources:
1.https://pin.it/6CW58fcqH
2.https://pin.it/3YWkJy4Yj
3.https://pin.it/745npWwI9
4.https://pin.it/79KAq3H8E
5.https://pin.it/1cYJpkZeF
6.https://pin.it/6I1afqwSd

Personal Blog: Arduino Integration with Touch Designer

For our project, we wanted to create a fully immersive experience by integrating user interactions as much as possible. To achieve this, we chose Arduino to connect a variety of sensors and bring real-time interaction to the installation. Our goal is to make users feel present in the scene, with data influencing both the audio and visual elements dynamically.

We plan to use several sensors, including pulse, proximity, humidity, and light sensors. But before diving into those, we decided to start with a simple task: getting button input to work on an Arduino Uno board. This allowed us to familiarize ourselves with the hardware, wiring, and code framework before scaling up.


Day 1: Installing Arduino and Wiring the Button Circuit

We started by installing the Arduino IDE, writing basic code, and wiring a simple button circuit. Here’s a picture of that setup:

Unfortunately, we didn’t get it working on the first day. However, I went back later, referenced a wiring diagram from Arduino’s official tutorials, and got it running.


Wiring and Code

Here’s the wiring diagram I followed:

 

This circuit diagram is sourced from Arduino’s official documentation on wiring and programming a button (Arduino, 2024).

 

And here’s the simple code I wrote to print the button state from pin 2 in the Arduino IDE:

 

 

With this setup, I was able to successfully read button input. Here’s a quick demo video:

 

 


Exploring OSC Integration and Data Broadcasting

Initially, I explored the idea of using Open Sound Control (OSC) to broadcast sensor data over a network via Wi-Fi or Ethernet. The plan was for the audio and visual teams to pick up the input data from other computers. To test this, I installed Unity and worked on some integration options.

However, our team decided to simplify the setup by using Touch Designer as the central hub to handle all data, visuals, and sound. With this approach, a single computer could run the Touch Designer project and read sensor data directly from the Arduino’s serial port.


Connecting Arduino to Touch Designer

I updated my Arduino code to print button data to the serial port. In Touch Designer, I added a Serial DAT to the template and connected it to the Arduino. This allowed me to read the button state in real time within the project.

Here’s a demo of the button input working in Touch Designer:

 

 

 


Next Steps

My next steps involve adding multiple sensors to the circuit, printing the data in a structured format, and interpreting it in Touch Designer to control various parameters. For now, we’re simulating the data to keep the creative design process moving.

Here’s a look at the simulated button input set up to control a black-and-white filter in Touch Designer—and me smiling because it’s all working!

 

 


This progress has been a solid foundation for integrating real-time sensor data with our immersive project. More updates to come as we expand the system!

Progress of meetings

The First Meeting

Rough Idea 1 :

We wanted to create an immersive VR museum tour that combines sight, sound, and interaction, allowing visitors to explore exhibits in a more intuitive and sensory way and breaking down the physical limitations of traditional museums. It would incorporate interactive exhibits: 360° views of artifacts, which visitors could even disassemble and zoom into to understand their internal structures. Narrative design with sound: dynamic ambient sound and spatialized audio would make the exhibition content not only visible but also “heard”.

Rough Idea 2 :

We wanted to make an immersive urban wandering experience about Edinburgh, giving participants the freedom to explore the city from a first-person perspective and enhancing their perception of the city space through real-time soundscapes, interactive visual elements, and dynamic data. A guided tour with AR/VR: use AR glasses or a phone to see hidden stories and historical layers of the city. Sound roaming could also be added: put on a headset, and ambient sounds and AI narration change depending on your location to enhance immersion. We want to let people experience the city in new ways, not just pass by it.

The Second Meeting

The concept of using VR and AR was changed to an audio-visual installation because we lack the technical skills needed for implementation. Moreover, since visuals dominated those project ideas and our group has more sound designers and audio technicians, we wanted to create something better suited to our skills.

About the first concept draft:

Adapting the 5 stages of grief to create an audio-visual multisensory experience. [https://en.wikipedia.org/wiki/Five_stages_of_grief]

Ideally, the project would use 5 separate and isolated rooms to present each emotion, [denial, anger, bargaining, depression, acceptance]. The emotions are presented in this order, but since the artwork is generative, the visitors can freely move to the next room anytime.
(version B: the visitors need to achieve/find/complete something to be able to progress to the next room)

Preferably, the sound would be presented through multiple monitors (around the sides and at random locations within the space), some equipped with proximity sensors. Some monitors would play an ambient audio track providing the base of the mood, while the sensor-equipped monitors would trigger an impact, sfx, …

At the same time, these sensors would trigger visual effects, filters, and presets, which would be added to the main visual.

Main Visual:

Media displayed with projectors/screens and lights, brightening up the otherwise dark space. The imagery would revolve around the theme of humanoid characters, faces, etc. It would be a mixture of real-time footage (perhaps heavily edited) taken from the exhibition rooms with cameras, plus additional digital characters, drawings, and effects. We are also considering using thermal cameras.

(reference project for visuals: Professor Jules’ A Requiem for Edward Snowden)

Questions for Leo:

– Is it possible to have 5 rooms next to each other that we can fully soundproof/isolate? (They should still be quite connected to allow smooth transitions across the rooms.) If not, we have alternative solutions, such as using headphones instead and creating a binaural mix.

– How many speakers can we possibly get? If we are planning to build the installation across 5 rooms, we would need quite a lot (minimum 25?).

– How many sensors can we get/buy?

– For the visuals, would projectors or screens be better (in terms of quality and sense of immersion)? Ideally we would love to do surround visuals, but that might require many more projectors.

Quick mood board pictures for desired visuals/ setup:

Meeting picture:

The Third Meeting

About the second concept draft:

Introduction

Emotions are intangible, and everyone defines them differently. For some, sadness flows like water, while for others, the feeling of flow brings joy. Since emotions are difficult to define, we chose not to restrict them but instead to create an emotional garden that can change infinitely, allowing everyone who enters to shape their own emotional garden through the control panel.

Inspiration

In modern society, emotional labor refers to individuals regulating their emotional displays in order to meet social expectations, especially in social and workplace settings. Our daily facial expressions and emotional responses are often adjusted to conform to external expectations, and behind this often lies the suppression of personal emotions.

Goal

To reflect on and discuss the phenomenon of emotional labor and its influence on individuals.

How to express the theme

Emotion Garden plan A

Using heart rate to monitor people’s real emotions and facial recognition to detect the fake emotions on people’s faces: the greater the difference between the two, the more withered the visualized plants become; the smaller the difference, the more robust the plants.

(P.S.: Visualizing with plants is just an idea I had, not fully researched yet; maybe there is a better way to express it.)

Emotion Garden plan B

Using facial recognition plus AR, different emotions on the face would produce different visual effects, so that people can “really” see and feel the emotions.

(P.S.: The idea was rejected because of the sensitivity of facial information.)

 

The Presence workshop

Refined concept:

Due to the limitations of available equipment, we reduced the scale of the project to one room that expresses the 5 stages together. This proved to be a better solution anyway, since the 5 emotions are not separated by a straight line, and one might experience more than one, or a mixture of several, at the same time.

Therefore the new project summary is:

1 room – 5-8 speakers – visual projection. We are creating an interactive space where the collective emotions and active presence of the people in the room are artistically expressed through audio-visual representation, generated by the visitors themselves using interactive devices around the room.

[Multiple sensors will be available to play with, which would trigger audio content, and would affect the visual presentation].

Sensors:

[Have to be compatible with ARDUINO]

Audio:
Heart rate sensor(s) = influences the low-frequency content
Light sensor(s) = influences the high-frequency content
Proximity sensor(s) = triggers random sound effects
Humidity sensor = influences the base, the ambience of the audio track (?) (it changes more slowly over time; the density of the crowd will influence the air)
(Temperature sensor = extra visual display (only if we can find one for a low price))

These sensors would individually have various value parameters assigned to them, so once a specific value is triggered, the system would employ real-time audio processing to modify the sound.
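The “value parameters” idea could work like Arduino’s built-in map() and constrain(): each raw reading is clamped to its expected range and rescaled into the parameter it drives. A minimal sketch, assuming the 0–1023 range of the Uno’s analogRead (per-sensor ranges are still to be decided):

```cpp
// Clamp a raw sensor reading to [inMin, inMax], then rescale it linearly
// into [outMin, outMax], so sensor noise can't push a parameter out of range.
float mapSensor(float value, float inMin, float inMax,
                float outMin, float outMax) {
    if (value < inMin) value = inMin;
    if (value > inMax) value = inMax;
    return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
}
```

For example, mapSensor(reading, 0.0f, 1023.0f, 200.0f, 8000.0f) could map a light-sensor reading onto a high-frequency filter cutoff in Hz.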

Visuals:

Perhaps it features a big mass of shape, and the data would control colours (like temperature in Thermal cameras), shapes, particles …
(We could also use Max/MSP Jitter to manipulate video output with sound.)

What will be the interactive visual system?
– How will the data be processed into an abstract visual representation?
– Which parameters trigger which visuals?

For both audio and visual content, we have to think about the two extreme ends: How does it look/ sound with no people in the room [no interactions with sensors], and at full capacity [with many people interacting with the sensors]?

TO DO LIST:

Since the assignment is due next Thursday, we decided to divide the tasks up:
Kyra = visuals: basic design in Unity or Touch Designer
Xiaole = time management, project planning & strategy
Evan = sound: at least 30 seconds of audio, ambience and/ or additional sound effects
Isha & Lydia = technical components: get OSC to work with Arduino, how to connect these systems triggering sound & visuals, what other tools we need,…
Lidia = writing the project introduction, concept, and aim. References to other similar artworks

Workshop picture:

Handwritten Notes:

By Lidia Huiber

The Fourth Meeting

At this meeting we finalized the final concept.

The project explores the themes of presence and grief through a multi-sensory audio-visual installation that presents the five stages of grief (denial, anger, bargaining, depression, acceptance). The installation combines real-time image processing and digital design with chaotic, distorted visuals and a subdued soundscape that expresses the flow of emotions. Viewers interact with devices such as heart rate sensors, light sensors, and knobs to experience a non-linear emotional journey from chaos to calm, revealing the randomness and complexity of grief. The project aims to engage the audience in reflecting on their own emotional state and that of those around them, encouraging deeper emotional connection and a focus on “authentic being”.

At the end of the day we had a detailed division of labor for the project.

Conceptual Foundations and User Interaction Flow

Conceptual Foundations: Defining Presence and the Exploration of Grief

The concept for our project developed through an iterative process of group discussions centered on the theme of “presence.” Our initial conversations revolved around the idea that presence goes beyond physical existence. It is a state of full awareness and engagement. From this, we began to discuss the contrast between presence and absence, particularly how the absence of someone or something can heighten the significance of being present. This interplay between presence and absence became a key inspiration for our work.

As we delved deeper into the technical aspects of the project, we explored how emotions manifest through sensory experiences, considering how abstract feelings can be externalized through sound, visuals, and other sensory stimuli. This led us to focus on one particularly profound emotional journey: Grief.

Grief became the central focus of our project as it perfectly captures the tension between presence and absence. Its unpredictable and non-linear nature aligned with our goal to move away from traditional, structured storytelling, while its universal relevance made it something participants could connect with on a personal level. We wanted to represent the emotional weight of grief in a way that allowed visitors to actively engage with the experience, influencing the emotional ‘weather’ of the space rather than simply observing it. The sensory design featuring chaotic visuals and layered audio will reflect the mental turbulence often felt during grief, while the interactive elements highlight how acts of empathy and connection can bring stability. By translating abstract emotions into tangible sensory experiences, the installation will encourage participants to reflect on how being present, even in small moments, can have a meaningful impact on emotional well-being.

User Interaction Flow 

Entry Point: Introduction to the Installation

Action: The user enters the dark, intimate space of Alison House’s Atrium. 

Experience: The room is in a chaotic state, representing the early stages of grief (denial, anger, depression). The visuals on the projection are fragmented and erratic, with dark tones and disjointed animations. The surround sound creates an unsettling atmosphere with low-frequency drones, sharp noises, and overlapping audio elements. 

Interaction: Initially, no interaction is required. The system reflects a “worst mental state” as it starts in chaos due to minimal activity. 

Exploration: Interaction Begins

Action: Visitors approach the center of the room where interactive sensors (heartbeat sensor, buttons, knobs, light sensors, proximity sensors) are placed. 

Experience: The sensors pick up on the visitors’ presence. 

  1. Heartbeat sensor: Detects a visitor’s pulse and translates it into rhythmic audio or visual elements (e.g., pulsating light or sound).
  2. Proximity sensors: Trigger subtle changes in visuals or sound as visitors move closer to or farther from specific areas. 
  3. Light sensors: React to changes in ambient light caused by visitors’ shadows or movements, influencing the visuals on the projection. 

System Response: As more interactions occur, the system begins to transition from chaos to calmer states. 

Dynamic Progression Through Stages of Grief

Action: As visitor activity increases (e.g., multiple people interacting simultaneously), the system dynamically evolves toward later stages of grief (bargaining and acceptance). 

Experience: The projected visuals become less fragmented and chaotic, shifting toward more cohesive imagery with lighter tones. Soundscapes evolve from dissonant noise to harmonious and soothing ambient music. Visitors feel a sense of collective impact as their interactions contribute to improving the “mental state” of the system. 

Interaction Feedback: Real-time feedback ensures users see their impact on the environment (e.g., visuals brighten or calm as more sensors are activated). 

Regression if Activity Stops  

Action: If visitors stop interacting with the sensors or leave the space entirely, the system begins to regress back into earlier stages (chaos and depression).  

Experience: Visuals become darker and more fragmented again. Soundscapes return to unsettling tones, reinforcing a sense of emotional instability. This regression emphasizes that grief is non-linear and requires ongoing engagement for resolution. 

 

Installation Features:

  • The installation is non-linear; users can enter at any time during its progression. 
  • Sensors are designed to be intuitive so that all interactions feel natural and engaging. 
  • The evolving system reinforces themes of grief as a shared experience that requires active participation for healing. 
  • This flow ensures an immersive journey where visitors feel both individual agency and collective responsibility in shaping the emotional narrative of grief.

Miro Board: https://miro.com/app/board/uXjVLixa9bM=/

DMPS Presence 25: Research for Audio

1. Introduction

Sound not only shapes emotions but also plays a crucial role in guiding immersive experiences. This project, centered on the concept of “Denial,” explores how sound represents the five stages of negative emotions and enhances emotional resonance through interactive installations. To establish a solid theoretical foundation for sound design, I conducted extensive research on relevant literature and case studies, analyzing how sound influences emotions, the key elements of immersive sound design, and the integration of interactive technology. Through the fusion of sound and visuals, this project aims to create a profound emotional journey for the audience.

2. Theoretical Background

2-1 Denial

Our project is based on the concept of “Denial,” commonly associated with Kübler-Ross’s (1969) Five Stages of Grief, introduced in On Death and Dying. This model describes the emotional responses individuals experience when facing death or major life changes, progressing through denial, anger, bargaining, depression, and acceptance. Denial is more than just temporary avoidance; it serves as a psychological defense mechanism that helps individuals cope with overwhelming emotional shock.

Stage | Overview | Emotional Manifestation
Denial | Rejecting reality, avoiding painful truths. | Numbness, indifference, refusal to acknowledge or face reality.
Anger | Feeling frustration and resentment when unable to escape reality. | Hostility, irritability, possible aggressive behavior.
Bargaining | Attempting to reduce pain through negotiation or compromise. | Self-consolation, hoping to change the outcome through “deals.”
Depression | Recognizing the unchangeable reality, leading to sadness and helplessness. | Loneliness, despair, loss of motivation and interest.
Acceptance | Ultimately accepting reality and facing the future with a calm mindset. | Inner peace, relief, gradually adapting to change.

These five stages describe the emotional changes individuals experience when facing significant loss or trauma. Psychologist Anna Freud (1936) identified denial as a primitive yet common defense mechanism that allows individuals to temporarily escape reality when emotions become overwhelming, helping to reduce psychological stress. When a person struggles to accept inevitable loss or change, denial may persist, manifesting as avoidance of facts or self-soothing to maintain a false sense of reality. Immersive sound installations can enhance this emotional experience, allowing individuals to more intuitively perceive and explore denial and its psychological impact.

2-2 Sound

Huang and Wu (2007) found a strong correlation between music selection and emotional responses. The impact of sound on emotions is a multidimensional process influenced by key factors such as pitch, intensity, rhythm, and sharpness. Research indicates that high-frequency noise and sharp sounds can trigger stress responses, whereas low-frequency vibrations may induce a sense of calmness or suppression (HEAD acoustics, n.d.). Additionally, fast and irregular rhythms are often linked to anxiety, while dissonant intervals—such as minor seconds, augmented fourths/diminished fifths, and major sevenths—stimulate the amygdala, a brain region responsible for processing emotions, particularly fear and distress (Pankovski, 2023). Musical scales also play a crucial role in emotional expression; major scales are generally associated with positive emotions, whereas minor scales tend to evoke negative feelings. Moreover, slow-tempo music is often linked to sadness (Sun, Liu, & Nan, 2009).

In terms of sound design, different frequencies elicit distinct emotional experiences. Low-frequency sounds (20 Hz to 250 Hz) can create physical resonance, which in turn provokes anxiety and fear. For instance, the low-frequency vibrations of thunder and earthquakes are commonly associated with danger and threat, intensifying feelings of unease. In contrast, high-frequency sounds (2,000 Hz to 20,000 Hz) are highly stimulating, heightening alertness and inducing emotions such as anger and anxiety (Wemore8888, n.d.). Additionally, high volume, fast-paced, and irregular rhythms can further amplify tension or aggression, making sound a powerful tool in shaping psychological experiences.

These findings highlight the critical role of sound characteristics in emotional modulation. Variations in pitch, musical scale, rhythm, and frequency can direct and enhance different psychological experiences, effectively influencing the emotional state of an audience in immersive environments.

 

3. Case Studies

3-1 Sound Installations or Immersive Art Cases


TeamLab’s Interactive Art Exhibition
https://www.teamlab.art/zh-hans/e/artsciencemuseum

Guqi Cultural Tourism Performing Arts (2022)

“THE DAY LEFT FIELD” is an immersive audiovisual installation, set within a 144-square-meter space.
https://www.bilibili.com/video/BV1CM411B7Ls/?utm_source=chatgpt.com

Aooooxixi. (2022, December 16).

3-2 Game Music, Film, and Other Sound Design

In The Last of Us, the protagonist, Joel, experiences deep emotional trauma after losing his daughter. The game uses low, slow-paced music and environmental sound effects to create an atmosphere of oppression and solitude.

[Source link: https://api.xiaoheihe.cn/maxnews/app/share/detail/2695066]

 

In the Silent Hill series, deep, slow-paced melodies contribute to an oppressive atmosphere. Below is the official soundtrack:

[Source link: Silent Hill Original Soundtrack]

 

William Basinski’s The Disintegration Loops

Composed in 2001, The Disintegration Loops by William Basinski was created when he attempted to digitize old tape loops and discovered that they gradually deteriorated due to aging. He recorded this process, capturing a sense of slow decay and irreversible tragedy, while simultaneously evoking a meditative tranquility and transcendence.

[Source link: https://b23.tv/mrp3x6y]

3-3 Interactive Technology Case:

Voice Tunnel

Created by Mexican artist Rafael Lozano-Hemmer, Voice Tunnel is a large-scale interactive installation that allows participants to engage through sound. As visitors vocalize into the central sound system of the tunnel, the installation responds in real-time by adjusting the brightness and flickering patterns of lights based on the intensity and characteristics of the sound. This interplay between sound and light creates a unique and immersive sensory experience.

[Source link: https://www.urbanlight.cn/newsdetail/e3c8d402-0ab1-4963-af3c-ac12de995c48?utm_source=chatgpt.com]

China Lighting Network (2022)

4. Draft of Sound Design for the Five Stages of Grief

4-1. Denial

Sound Design

  • Muffled Sounds:

Low-pass filtering to create a dull, suppressed auditory perception.

Out-of-focus ambient sounds, such as distant traffic noise or indistinct radio broadcasts.

  • Deep Droning:

Infrasound (below 20Hz) to induce bodily resonance and discomfort.

Sustained low-frequency humming, resembling industrial low-frequency resonance.

  • Hollow Echoes:

Long-tail reverb to create a vast and empty spatial impression.

Repetitive whispering, symbolizing stagnant thoughts and inner voices.

  • Tinnitus and Silence:

Subtle high-frequency ringing, simulating auditory shutdown after shock (e.g., the “ringing effect” after an explosion).

Sudden silence, representing moments of psychological paralysis or cognitive void.
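The low-pass filtering above can be sketched in a few lines. Below is a minimal one-pole filter in plain Python (our own illustrative code, not part of the installation's patch), showing how a high tone comes out muffled:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """One-pole low-pass filter: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    # Coefficient from the standard RC-filter approximation.
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

# A 2 kHz tone pushed through a 200 Hz low-pass comes out heavily muffled.
sr = 44100
tone = [math.sin(2 * math.pi * 2000 * n / sr) for n in range(sr // 10)]
muffled = one_pole_lowpass(tone, cutoff_hz=200, sample_rate=sr)
peak = max(abs(s) for s in muffled[sr // 100:])  # ignore the start-up transient
```

A real patch would of course use a steeper filter with a cutoff swept in real time, but the "dull, suppressed" character comes from exactly this attenuation of high frequencies.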

4-2. Anger

Sound Design:

  • Sharp and Piercing Sounds:

Sounds of glass shattering, screaming, and metal scraping to create extreme discomfort.

Continuously ascending Shepard tones to induce a seemingly never-ending sense of tension.

  • Dissonant Melodies:

Use of dissonant intervals such as the tritone (augmented fourth) and chromatic melodies to evoke anxiety.

Sharp, high-pitched string stabs inspired by Psycho (1960) to enhance unease. Example: Psycho’s iconic string sound.

  • Sudden Impact Sounds:

Abrupt volume bursts, such as industrial noise or electronic distortion, to create jump-scare moments.

Stuttering or “skipping” sounds, like a glitching vinyl record, mimicking the fractured nature of emotional outbursts.
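A Shepard tone gets its seemingly endless rise by stacking octave-spaced partials under a fixed loudness bell: as the stack slides up, the top partial fades out while a new one fades in at the bottom. A toy sketch in Python (parameter values are our own choices):

```python
import math

def shepard_grain(phase, n_octaves=8, f_min=30.0, sample_rate=44100, length=2048):
    """One short grain of a Shepard tone.

    `phase` in [0, 1] slides every partial up within its octave; amplitudes
    follow a raised-cosine bell across the stack, so partials fade out at the
    top exactly as new ones fade in at the bottom.
    """
    out = [0.0] * length
    for k in range(n_octaves):
        freq = f_min * 2.0 ** (k + phase)              # octave-spaced partials
        pos = (k + phase) / n_octaves                  # position in the stack
        amp = 0.5 - 0.5 * math.cos(2 * math.pi * pos)  # silent at both extremes
        for n in range(length):
            out[n] += amp * math.sin(2 * math.pi * freq * n / sample_rate)
    return out

g0 = shepard_grain(0.0)
g1 = shepard_grain(1.0)   # one full octave slide later
diff = max(abs(a - b) for a, b in zip(g0, g1))
```

Because the amplitude bell is zero at both ends of the stack, the grain at `phase = 1.0` is identical to the grain at `phase = 0.0`, so the rise can loop indefinitely.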

4-3. Bargaining

Sound Design:

  • Looping Familiar Sounds:

Childhood memories or dialogues, such as old radio broadcasts or repeated lullabies.

Old telephone recordings or home video clips with faint background noise.

  • Ambient Noise:

White noise, rain sounds, or train motion sounds to provide a sense of security and aid in escapism.

Urban ambiance, such as café chatter or distant television sounds, to enhance a dreamlike immersion.

  • Soft Whispers:

ASMR-like whispers that evoke intimacy but become unsettling when endlessly repeated.

Layered whispers with slight phase shifts, creating an eerie “multiple consciousness” effect.

  • Distorted and Unstable Sounds:

Tape degradation effects, making the audio gradually decay (inspired by The Disintegration Loops by William Basinski).

Example: William Basinski – The Disintegration Loops.
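A digital imitation of this tape decay can replay a loop while adding slightly more damage on each pass: gain loss, faint hiss, and occasional dropouts. The sketch below is our own toy approximation (Basinski's process was physical tape, and these parameter values are invented):

```python
import math
import random

def degrade_loop(loop, passes, decay=0.92, dropout_prob=0.002, hiss=0.003, seed=0):
    """Replay `loop` `passes` times, degrading it a little more each pass:
    gentle gain loss, faint hiss, and rare dropouts (oxide flaking away).
    Returns the concatenated playback."""
    rng = random.Random(seed)
    out, current = [], list(loop)
    for _ in range(passes):
        out.extend(current)
        current = [
            (0.0 if rng.random() < dropout_prob else s) * decay
            + rng.gauss(0.0, hiss)          # gain loss plus a touch of hiss
            for s in current
        ]
    return out

loop = [math.sin(2 * math.pi * 220 * n / 8000) for n in range(800)]
played = degrade_loop(loop, passes=5)
first_peak = max(abs(s) for s in played[:800])    # pass 1: pristine
final_peak = max(abs(s) for s in played[-800:])   # pass 5: audibly faded
```

Each playback is quieter and noisier than the last, which is the "gradually decaying" behaviour the bullet above asks for.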

4-4. Depression

Sound Design:

  • Irregular Rhythms:

Fragmented percussion with unstable beats, resembling chaotic jazz improvisation.

Phasing effects (inspired by Steve Reich’s techniques) where melodies gradually shift out of sync, creating a sense of disarray.

Example: Steve Reich – Phasing Technique.

  • Accelerating Heartbeat:

Low-frequency heartbeat simulation, starting faint and intensifying to induce breathlessness.

Sudden heartbeat cessation to create a peak moment of psychological tension.

  • High-Frequency Noise:

Sharp electronic static noise, similar to TV white noise.

High-frequency pulses mimicking tinnitus effects triggered by anxiety.

  • Breathing and Suffocation:

Heavy breathing in the background to reinforce panic and unease.

Gradually increasing reverb on breath sounds, making them feel distant and simulating a loss-of-control sensation.
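The Reich-style phasing mentioned above can be illustrated symbolically: two voices repeat the same pattern, but one slips a step further ahead on each repeat, drifting out of sync and eventually realigning. A step-sequencer sketch (our own simplification of audio-rate phasing):

```python
def phase_pattern(pattern, repeats, drift_per_repeat=1):
    """Reich-style phasing sketch: voice A repeats `pattern` verbatim while
    voice B starts `drift_per_repeat` steps further in on each repeat, so the
    voices drift apart and realign after len(pattern) repeats."""
    n = len(pattern)
    voice_a, voice_b = [], []
    for r in range(repeats):
        offset = (r * drift_per_repeat) % n
        voice_a.extend(pattern)
        voice_b.extend(pattern[offset:] + pattern[:offset])
    return voice_a, voice_b

# With a 4-step pattern the voices are back in unison on the 5th repeat.
a, b = phase_pattern(["C", "E", "G", "B"], repeats=5)
```

In the actual pieces the drift is continuous (one performer plays fractionally faster), but the in-and-out-of-sync disarray is the same mechanism.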

4-5. False Acceptance

Sound Design:

  • Stable Yet Slightly Distorted Tones:

Gentle piano or string melodies, slightly detuned to create a subtle sense of unease.

Off-key harmonies that induce subconscious discomfort.

  • Low-Frequency Noise Beneath Laughter:

Layering low-frequency droning beneath an otherwise harmonious soundscape to create hidden anxiety.

Example: Black Swan soundtrack reference – Black Swan (2010).

  • Looping Calm Melodies:

A serene melody that loops but gradually incorporates subtle noise or instability.

Extended melodic tails, making the sound feel like an intentional effort to maintain composure.
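The unease of "slightly detuned" tones has a simple acoustic basis: two tones a few cents apart beat at their frequency difference, a slow swell and fade. A quick sketch (the sample rate and detune amount are arbitrary choices of ours):

```python
import math

def detuned_pair(freq, detune_cents, seconds=1.0, sample_rate=8000):
    """Mix a tone with a copy detuned by `detune_cents`; the pair beats
    (swells and fades) at the difference between the two frequencies."""
    f2 = freq * 2.0 ** (detune_cents / 1200.0)   # cents -> frequency ratio
    n = int(seconds * sample_rate)
    mix = [0.5 * math.sin(2 * math.pi * freq * i / sample_rate)
           + 0.5 * math.sin(2 * math.pi * f2 * i / sample_rate)
           for i in range(n)]
    return mix, f2 - freq   # the signal, plus its beat rate in Hz

# 10 cents of detune on A4 (440 Hz) beats about 2.5 times per second,
# slow enough to read as a queasy wobble rather than as roughness.
sig, beat_hz = detuned_pair(440.0, 10.0)
```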

 

 References

(Academic Sources)

Freud, A. (1936). The ego and the mechanisms of defence. International Universities Press.

HEAD acoustics. (n.d.). Making noise and sound impact visible. Retrieved from https://www.head-acoustics.cn/blog/data/making-noise-and-sound-impact-visible

Huang, C.-F., & Wu, S.-W. (2007). 大學生音樂選曲與情緒反應之相關研究 [A study on the relationship between college students’ music selection and emotional responses]. 國際藝術教育學刊 [International Journal of Arts Education], 5(1), 54–70. Retrieved from https://ed.arte.gov.tw/uploadfile/periodical/1657_arts_education51_054070.pdf

Kübler-Ross, E. (1969). On death and dying. Macmillan.

Pankovski, P. (2023). Psychological connotations of harmonic musical intervals. Physics of Life Reviews, 46, 69. Retrieved from https://ui.adsabs.harvard.edu/abs/2023PhLRv..46...69P/abstract

Sun, Y., Liu, Y., & Nan, Y. (2009). The impact of music on emotion and its neural mechanisms. Progress in Natural Science, 19(1), 1–10. Retrieved from https://www.nsfc.gov.cn/csc/20345/22468/pdf/2009/音乐对情绪的影响及其脑机制的相关研究.pdf

Wemore8888. (n.d.). 聲音頻率與情緒反應:從低頻到高頻的心理影響 [Sound frequency and emotional response: The psychological impact from low to high frequencies]. Retrieved from https://wemore8888.com/%E8%81%B2%E9%9F%B3%E9%A0%BB%E7%8E%87%E8%88%87%E6%83%85%E7%B7%92%E5%8F%8D%E6%87%89%EF%BC%9A%E5%BE%9E%E4%BD%8E%E9%A0%BB%E5%88%B0%E9%AB%98%E9%A0%BB%E7%9A%84%E5%BF%83%E7%90%86%E5%BD%B1%E9%9F%BF/

(Video & Sound & PDF Sources)

Aooooxixi. (2022, December 16). THE DAY LEFT FIELD installation [Video]. Bilibili. BV1CM411B7Ls

China Lighting Network. (2022, December 9). Review | Eight interactive sound and light installations. URBANLIGHT.CN. Retrieved from https://www.urbanlight.cn/newsdetail/e3c8d402-0ab1-4963-af3c-ac12de995c48

Guqi Cultural Tourism Performing Arts. (2022). TeamLab: The Immersive Experience Creation of Art Exhibitions in the New Media Environment. URBANLIGHT.CN. https://www.urbanlight.cn/newsdetail/6bb52e34-5c2b-4f2a-8d4a-a69735df0cd0

himSeize. (2022, April 26). William Basinski – The Disintegration Loops [Video]. Bilibili. https://b23.tv/mrp3x6y

teamLab. (2016, March 12). teamLab at ArtScience Museum [Video]. https://www.teamlab.art/zh-hans/e/artsciencemuseum

Movieclips. (2011, May 27). The Shower – Psycho (1960) [Video]. YouTube. https://www.youtube.com/watch?v=0WtDmbr9xyY

Searchlight Pictures. (2010, August 17). Black Swan | Official Trailer [Video]. YouTube. https://www.youtube.com/watch?v=5jaI1XOB-bs

musicue, B. (2012, April 11). Steve Reich – Different Trains (Part 1) [Video]. YouTube.

TheSilentHillFan. (2012, May 22). Silent Hill Original Soundtrack [Video]. YouTube.


 

Presence 2025: Project outline and Submission 1 summary

Project Lead Professor: Leo Butt

Roles up until now:

Lydia – Arduino- Touch Designer integration, sensor programming

Isha – User interaction flow, storyboards, visual concept

Kyra – Visual design, Touch Designer

Xiaole – Audio aesthetic research

Evan – Sound design

Lidia – Conceptualisation, note taking

Introduction to concept and project aim:

The project is based on two core concepts: using the five stages of grief as a theme and creating an interactive audio-visual installation. Integrating these ideas into a cohesive and meaningful concept was not a straightforward journey, but we have now established a strong foundation:

Expressing the profound impact of social interactions on individuals experiencing grief and how these interactions, in turn, shape our own self-reflection.

But how did we get here?

In the first week, we focused on defining what the word ‘presence’ means to us: It is not only to be physically present in a space but to be consciously aware of being there, at that moment. Being present requires us to open our senses and pay attention to everything that surrounds us: every change of breeze in the air, every sound, every movement of light, and every change of sight. Moreover, it means to actively concentrate and pay attention to the other beings around us.

As the Buddhist monk Thích Nhất Hạnh once wrote, “The greatest gift we can make to others is our true presence” (Hạnh, 2023).

Coping with the loss of someone or something one loves is probably the biggest challenge in anyone’s life (Smith, 2018). It is a highly individual experience; what may connect us all, however, is the need for others who can offer support and comfort. A Swedish survey-based study found that participants going through such difficult times did report a need for emotional support. Although this is mostly “provided by family and friends”, kind gestures and a positive attitude from a stranger can bring an equally significant change (Benkel et al., 2024).

The project seeks to raise awareness of this topic, leading us to create an audio-visual multisensory installation portraying “The five stages of grief”. The installation will be ‘an exploration of being present in grief’, primarily from the perspective of an imaginary character’s inner mental world. Secondarily, we direct a rhetorical question back at the audience: do they see themselves in it, and are there perhaps feelings they have been hiding from themselves?

First introduced as “The Five Stages of Death” by the Swiss-American psychiatrist Elisabeth Kübler-Ross in 1969 (Kübler-Ross, 2014), the model proposes that those who experience sudden grief will most likely go through the following five stages and emotions: denial, anger, bargaining, depression, and acceptance. However, these are strict categorisations of humans’ complex range of emotions, and the lines between these feelings are blurred (Grief is Not Linear: Navigating the Loss of a Loved One, 2023). In our project, therefore, we will aim to present the audience with a general impression of grief as we imagine it from a creative point of view.

Our weekly progression towards this concept can be found in our ‘Progress of Meetings’ blog:

https://blogs.ed.ac.uk/dmsp-presence25/2025/02/10/progress-of-meetings/

The project in detail:

The installation will be a true sense-stimulating phenomenon, with powerful sound ambiences, low humming sounds and whispers, and an abstract visual design of distorted faces, muted screams, and chaotic shapes and forms, mixing real-time processed video footage with digital designs.

The showroom location will be in a closed space such as Alison House’s Atrium, to provide the necessary atmosphere of the intimate and dark characteristics of one’s innermost mind. An array of 6-8 speakers placed around the walls of the room will provide a surround sound environment, while a projector on the front will display the designed visuals.

There will be multiple (at least 5) sensors, such as a heartbeat sensor, buttons, knobs, light sensors, and proximity sensors. These will be placed around the centre of the room, for the visitors to interact with. The sensors will be built with Arduino, which will feed the data into Touch Designer to control the main audio ambience, additional SFX and the visual design.

Rather than moving through the five emotional stages in order, it will be a non-linear journey influenced by the activity of the visitors. The starting scenario: with no, or only a few, visitors, our imaginary grieving person (presented abstractly on the projection) will be in the worst mental state (the first of the five stages), in chaos, darkness, and pure depression. As the number of visitors and interactions with the sensors grows, the system will gradually become calmer and calmer, eventually evolving towards the last stage: acceptance.

However, if the interactions come to a halt, the negative emotions will come back once again.

The roles of the sensors:

Humidity sensor: as the crowd grows and shrinks, the change in the humidity of the air will provide a smooth, slow change over time. The sensor will most likely affect the sound ambience and the amount of distortion processing on the visuals.

Heartbeat sensor: when visitors scan their heartbeat, as a symbol of supporting another with care and love, it will be represented by a corresponding heartbeat sound and visuals.

Buttons/knobs, proximity sensors, light sensors: will trigger SFX and VFX, or change a set of real-time processing parameters.

The knobs and buttons do not have a fixed mapping, such as turning left for sadness and right for an improved mood; instead, the mapping is completely random. The goal is to get the audience involved, representing how randomly emotions can come and go. If the overall input is significant enough, though, the grief gradually becomes more controllable.
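One way to picture this grow-with-activity, relapse-when-idle behaviour is a single "calmness" value that sensor events push up and idleness pulls down. The sketch below is a deliberately oversimplified toy model with invented constants (`gain`, `decay`) and an invented linear stage order, not the actual Touch Designer logic (which is explicitly non-linear):

```python
class GriefState:
    """Toy model: interactions nudge the installation toward acceptance;
    with no input it drifts back toward the darker stages."""

    STAGES = ["depression", "anger", "bargaining", "denial", "acceptance"]

    def __init__(self, gain=0.15, decay=0.97):
        self.calmness = 0.0      # 0 = worst mental state, 1 = acceptance
        self.gain = gain         # how much each interaction helps
        self.decay = decay       # per-frame drift back toward darkness

    def tick(self, interactions):
        """Advance one frame, given the number of sensor events seen."""
        self.calmness = self.calmness * self.decay + self.gain * interactions
        self.calmness = min(1.0, max(0.0, self.calmness))
        idx = min(int(self.calmness * len(self.STAGES)), len(self.STAGES) - 1)
        return self.STAGES[idx]

state = GriefState()
first_stage = state.tick(0)     # empty room: worst state
for _ in range(40):
    state.tick(2)               # steady interaction on two sensors
calm_after_activity = state.calmness
for _ in range(200):
    state.tick(0)               # interaction halts
calm_after_idle = state.calmness
```

The exponential decay gives exactly the behaviour described: sustained interaction builds toward acceptance, and when it stops, the negative state gradually returns.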

A more detailed description of the technical concept and user-interface layout can be found here:

https://blogs.ed.ac.uk/dmsp-presence25/2025/02/10/conceptual-foundations-and-user-interaction-flow/

Audio components:

To create relevant sound-design aesthetics for the project, Xiaole did thorough research into audio design techniques on the theme of grief and related art projects:

https://blogs.ed.ac.uk/dmsp-presence25/2025/02/10/dmps-presence-25-research-for-audio/

Then, Evan created the first short sound-design samples: one for each of the five stages.

https://blogs.ed.ac.uk/dmsp-presence25/2025/02/09/sound-design/

In the next couple of weeks, we will aim to create more ambiences, effects, and effect-parameter changes.

The audio system will be built in Wwise, for the freedom to trigger and process multiple events, sound effects, and ambiences in real time simultaneously. This will be integrated into Unity, which will use the data collected from the sensors to trigger the events. Now that we have a strong conceptual foundation, experimenting with and building these systems will be our main goal for the coming weeks.

Visual:

The visuals will be designed in Touch Designer, where the various data sources will trigger different layers and effects.

First, Isha created a moodboard to summarise our ideas for textures, colours, saturations, and figures:

https://blogs.ed.ac.uk/dmsp-presence25/2025/02/10/visual-research-and-inspiration-visualization-mood-boards/

Then Kyra developed the first sketches for one of the emotions, ‘anger’:

https://blogs.ed.ac.uk/dmsp-presence25/2025/02/09/emotion-visualization-visual-exploration-of-the-angry-stage/

Technology:

After the workshop in week 4, Lydia has taken up the role of computer tech design. The sensors will be built using Arduino, which will send the data both to Touch Designer for the visuals and to Unity for the audio. In the coming week, the workload will be shared between her and Isha.

The technical research and progress can be found here:

https://blogs.ed.ac.uk/dmsp-presence25/2025/02/10/arduino-integration-with-touch-designer/

Summary:

By creating a constant connection between the visitors’ input and its effect on the art, the project aims to show people the impact they can have on someone’s mental well-being. By experiencing this through ‘someone else’s eyes’ while the piece reflects the visitors’ own bodies, we hope to make them ask themselves whether they are truly paying attention to their own and their loved ones’ struggles. We therefore aim to foster a deeper awareness of emotional connections, encouraging mindfulness and active presence in everyday life.

 

References:

Benkel, I. et al. (2024) ‘Understanding the needs for support and coping strategies in grief following the loss of a significant other: insights from a cross-sectional survey in Sweden’, Palliative Care and Social Practice, 18. Available at: https://doi.org/10.1177/26323524241275699.

Cohen, S. (2004) ‘Social relationships and health’, American Psychologist, 59(8), pp. 676–684. Available at: https://psycnet.apa.org/buy/2004-20395-002.

Grief is Not Linear: Navigating the Loss of a Loved One (2023) Veritas Psychotherapy. Available at: https://veritaspsychotherapy.ca/blog/grief-is-not-linear/.

Hạnh, T.N. (2023) ‘Dharma Talk: True Presence’, The Mindfulness Bell. Parallax Press. Available at: https://www.parallax.org/mindfulnessbell/article/dharma-talk-true-presence-2/ (Accessed: 9 February 2025).

Heaney, C.A. and Israel, B.A. (2008) Health Behaviour and Health Education. Jossey-Bass, pp. 190–193. Available at: https://www.medsab.ac.ir/uploads/HB_&_HE-_Glanz_Book_16089.pdf#page=227.

Kübler-Ross, E. (2014) On Death and Dying. Scribner. Available at: https://www.simonandschuster.com/books/On-Death-and-Dying/Elisabeth-Kubler-Ross/9781476775548 (Accessed: 9 February 2025).

Smith, M. (2018) Coping with Grief and Loss: Stages of Grief and How to Heal. HelpGuide.org. Available at: https://www.helpguide.org/mental-health/grief/coping-with-grief-and-loss.

Young, S.N. (2008) ‘The neurobiology of human social behaviour: an important but neglected topic’, Journal of Psychiatry & Neuroscience : JPN, 33(5), p. 391. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC2527715/.

Sound Design


My sound design is centered around the five emotional stages of the Kübler-Ross model, exploring the psychological changes that occur when people face a major loss. Each sound piece is designed to capture the core qualities of a specific emotional stage, bringing the listener into the emotional experience described by the model.

Figure 1: Kübler-Ross model (Source: Visual Paradigm, n.d.)
Figure 2: Psychological numbness (Source: Dame Magazine, 2021)
In the denial stage, individuals often show shock and psychological numbness, trying to protect themselves from emotional shock by avoiding reality. The core of this stage is “unreality”, as if one were in a dream detached from the real world. Here the sound creates a blurred auditory experience, making listeners feel as if they were in an isolated environment, symbolising the individual’s resistance to reality and psychological stagnation. The vague background sound and short stretches of blankness reinforce this stagnant emotional state. This “unreal” auditory experience is closely linked to the emotions of the denial stage, as if the listeners themselves were also trying to withdraw from the outside world.
Figure 3: Anger (Source: Ezra Counseling, n.d.)

Anger is a strong emotional outburst, unstable and confrontational. At this stage, individuals look for external outlets to fight their inner helplessness, and their emotions fluctuate violently. The sound therefore has a greater dynamic-range contrast, with sudden volume peaks enhancing the impact of anger. Dissonant intervals heighten the tense atmosphere, letting the audience hear the aggressiveness and instability of anger.

 

Figure 4: Individuals are wavering between hope and despair (Source: Cardiff University, n.d.)

The bargaining stage is an attempt to regain lost control, with the individual wandering between hope and despair. It reflects an inner contradiction: being unable to accept reality, yet also unable to change it. The sound design of this section symbolises these repeated psychological attempts and failures through a looping rhythm and a faintly modulated melody. The sound reinforces the slimness of hope and the impossibility of change, immersing the audience in an emotional “self-dialogue”. In this way, it expresses the core contradiction of the bargaining stage: the desire for change, paired with the inability to do anything about it.

 

Figure 5: Deep sense of despair and helplessness (Source: Vanourek, n.d.)

The depression stage is the deep despair and helplessness felt once the individual begins to face reality. This is the lowest emotional point, full of fear of the future and powerlessness about the present. The sound at this stage carries strong emotional oppression, as if an invisible weight were looming over the audience.

 

Figure 6: Trying to appear “normal,” but internally still
struggling to accept (Source: Psychology Today, 2017)

The acceptance stage is not simply calm, but a compromise of inner emotions and surrender to reality. This is a complex emotional state, including superficial peace and deep worries. The sound of the work at this stage shows a subtle sense of calm, but this calmness always carries a hint of uneasiness. The subtle fluctuations and potential low frequencies symbolize the emotional residue that has not been completely eliminated in the heart. The audience experiences a complex state of balance in this emotion: no longer struggling, but not completely letting go. This sound experience highlights the emotional core of the acceptance stage – accepting reality, but not forgetting the pain.

Links to all sounds used here: www.youtube.com/@yuxinzhang-og1gg

References
  1. Visual Paradigm. n.d. “Kubler-Ross Change Curve.” Visual Paradigm Blog. Accessed February 12, 2025. https://blog.visual-paradigm.com/what-is-the-kubler-ross-change-curve/
  2. Dame Magazine. 2021. “There’s a Reason You Feel Numb Right Now.” Dame Magazine. Accessed February 12, 2025. https://www.damemagazine.com/2021/02/10/theres-a-reason-you-feel-numb-right-now/
  3. Ezra Counseling. n.d. “Anger Management: Understanding and Navigating the Stages of Anger.” Ezra Counseling. Accessed February 12, 2025. https://ezracounseling.com/anger-management-understanding-and-navigating-the-stages-of-anger/
  4. Cardiff University. n.d. “On Hope and Despair, Part I.” Open for Debate Blog. Accessed February 12, 2025. https://blogs.cardiff.ac.uk/openfordebate/on-hope-and-despair-part-i/
  5. Vanourek, Gregg. n.d. “How to Overcome Helplessness.” Gregg Vanourek Blog. Accessed February 12, 2025. https://greggvanourek.com/how-to-overcome-helplessness/
  6. Psychology Today. 2017. “How and Why You Compromise Your Integrity.” Evolution of the Self Blog. Accessed February 12, 2025. https://www.psychologytoday.com/us/blog/evolution-of-the-self/201707/how-and-why-you-compromise-your-integrity

Emotion Visualization: Visual Exploration of the “Angry” Stage

We focus on the five stages of grief, exploring the “presence within sadness”, inviting the audience into a fully immersive experience of grief. At the same time, we pose a thought-provoking question: “How do you see yourself in it?” This encourages the audience to look inward and confront the grief they may have unconsciously ignored or suppressed.

Grief is an extremely universal emotion, yet we often choose to repress, overlook, or conceal it. Among the five stages, “anger” stands out as one of the most common and visually powerful emotional expressions. Therefore, I decided to start with this stage as the entry point for this exploration.

Step 1: Brainstorming

Image source:

  1. https://pin.it/3yGT9QwmD
  2. https://pin.it/32MxQO3eF
  3. https://pin.it/gQ65KZ7uy
  4. https://pin.it/5ycmtktoe
  5. https://pin.it/7qkzKyduC
  6. https://pin.it/q3eqQStHy
  7. https://pin.it/3HU6oQDO9
  8. https://pin.it/3pd1nipQ6
  9. https://pin.it/18tLP3exa
  10. https://pin.it/41Gwmyml5
  11. https://pin.it/LC65sPEDE
  12. https://pin.it/2KisD4i57
  13. https://pin.it/tAT1FiN6u
  14. https://www.behance.net/gallery/116096239/BLOCKCHAINS-ARE-BEAUTIFUL
  15. https://chirnside.studio/Frequencies
  16. https://pin.it/7EsmVNmRv

Anger is an intense, chaotic, and hard-to-control emotion. Smoke, with its shapeless and uncontrollable dynamic flow, perfectly aligns with the visual representation of anger. It can manifest as turbulent surges, sudden eruptions, or continuous diffusion. Each person enters this experience in their own way, and their emotional reactions are beyond our control. This sense of unpredictability is a defining characteristic of both flames and smoke.

Step 2: First Attempt

I experimented with TouchDesigner for visual presentation, as its dynamic adjustment capabilities allow for a more accurate representation of the diversity and fluidity of emotions.

I primarily used the Nvidia Flow Emitter to visualize smoke. By adjusting parameters such as smoke, fuel correction rate, and fuel values, I was able to manipulate the volume of the smoke. The human silhouette gradually deconstructs into a constantly shifting cloud of smoke, retaining its original shape while embodying a sense of fluidity and transformation.

Step 3: Further Effect Enhancement

Adding multiple noise types for layering: could this enhance the complexity and visual appeal of the smoke effect?
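One common answer from procedural graphics is fractal ("fBm") layering: summing several octaves of smooth noise, each finer and quieter than the last, to thicken a field like smoke. A 1-D sketch in plain Python (illustrative only; TouchDesigner's Noise operators expose a similar idea through their harmonics parameters):

```python
import math
import random

def smooth_noise_1d(x, seed=0):
    """Value noise: fixed random values at integer points, cosine-blended
    between them."""
    def value_at(i):
        return random.Random(i * 1_000_003 + seed).random()
    i0 = math.floor(x)
    t = x - i0
    t = 0.5 - 0.5 * math.cos(math.pi * t)     # ease the blend
    return value_at(i0) * (1.0 - t) + value_at(i0 + 1) * t

def layered_noise(x, octaves=4, lacunarity=2.0, gain=0.5):
    """Fractal layering: each octave is `lacunarity` times finer and `gain`
    times quieter than the previous one."""
    total, norm = 0.0, 0.0
    amp, freq = 1.0, 1.0
    for octave in range(octaves):
        total += amp * smooth_noise_1d(x * freq, seed=octave)
        norm += amp
        amp *= gain
        freq *= lacunarity
    return total / norm       # normalised back into [0, 1]

# Sample the layered noise along a line (hypothetical usage).
vals = [layered_noise(x / 10.0) for x in range(50)]
```

The coarse octave gives the smoke its large billows, while the finer octaves add the crinkled detail; animating `x` over time (or feeding the value into the Flow Emitter's parameters) would make the layers drift at different speeds.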

Step 4: Real-Time Interaction Optimization and Hardware Output

1. Attempt to connect Kinect to capture 3D body data and enhance spatial perception.
2. Heart rate sensor: Dynamically adjust the intensity of the smoke effect based on the audience’s physiological data, reflecting emotional fluctuations.

Reference:

https://pin.it/3yGT9QwmD

 

