
Sound Design

Contributions and Participation During the Project's Progress - Xiaole Liu

1. Early Recording and Sound Library Construction

After defining the sound style and expressive goals for the five emotional stages, I moved on to developing the preliminary recording plan and took charge of collecting and initially organizing the sound materials.

This phase was not just about gathering sounds — it was a process of conceptual sound creation and design, centered around the project’s emotional framework.
The goal of this task was to build a comprehensive sound library that would provide a rich and diverse selection of sounds for other teammates handling the final sound design, significantly boosting their efficiency and creative flexibility.

Categorization and Recording Planning

I first classified the five emotional stages and extracted their core sound characteristics. Combining my previous research and team discussions, I drafted dedicated recording lists and foley plans for each emotion. Here are a few examples:

  • Anger: Focused on high-frequency, sharp, and explosive sounds. I prepared metal rods, glassware, chains, and recorded creative foley through collisions, friction, and dragging to capture tension and confrontation.

  • Denial: Aimed to evoke blurriness, disorientation, and undefined spatiality. I recorded blurred voices, fabric friction, and reversed water sounds to express psychological avoidance and confusion.

  • Bargaining: Simulated psychological tug-of-war and indecision. I used paper tearing, cyclic breaking syllables, and unstable rhythmic vibrations to create the texture of psychological uncertainty.

  • Depression: Used low-frequency, slow, continuous sounds to convey oppression. Recordings included deep echoes from buckets, ambient noise, and breathing sounds to create a closed, silent space.

  • Acceptance: Represented gentleness, release, and continuity. I used soft metal friction, wind chimes, bells, and faint melodic fragments to simulate the smooth transition of emotions.

All recordings were independently completed by me.
Each week, I rented professional recording equipment and secured sampling locations, striving to ensure high-quality and diversified materials. I also experimented with various techniques (different gestures, force variations, and medium changes) to capture more expressive raw sounds.

Post-Processing and Sound Design

After recording, I imported the raw audio into Pro Tools for detailed post-production. To tailor the materials to each emotional stage, I applied various acoustic and stylistic transformations, including:

  • Reverb: Simulating spatial extension to evoke impressions of echo, loneliness, vastness, or relief.

  • Pitch Shifting: Lowering pitch for heavier emotions or raising it to induce unease and tension.

  • EQ (Equalization): Enhancing or attenuating specific frequency bands to sharpen, deepen, clarify, or blur the sound textures.

  • Delay and Time Stretching: Extending audio length, creating echoes, and simulating auditory time suspension.

  • Filtering: Applying high-pass or low-pass filters to make sounds feel distant, muffled, or veiled.

  • Reverse and Reconstruction: Reversing and rearranging audio clips to break naturalness and create surreal psychological effects.

  • Compression: Controlling dynamic range differences to enhance the emotional cohesion and impact.
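As an illustration of the filtering technique above, a one-pole low-pass filter is the simplest way to make a sound feel distant or muffled. This is a generic sketch in C++, not taken from my actual sessions; the cutoff and sample-rate values are placeholders:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One-pole low-pass filter: y[n] = y[n-1] + a * (x[n] - y[n-1]).
// A higher cutoff lets more treble through; a lower cutoff muffles the sound.
std::vector<float> lowPass(const std::vector<float> &in, float cutoffHz, float sampleRate) {
    const float PI = 3.14159265358979f;
    // Coefficient derived from the equivalent analog RC time constant.
    float a = 1.0f - std::exp(-2.0f * PI * cutoffHz / sampleRate);
    std::vector<float> out(in.size());
    float y = 0.0f;
    for (std::size_t i = 0; i < in.size(); ++i) {
        y += a * (in[i] - y);
        out[i] = y;
    }
    return out;
}
```

Feeding the same signal through with a 200 Hz cutoff versus an 8 kHz cutoff gives a clearly duller result, which is the effect used to push sounds "behind a veil".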

Processing Examples

  • Denial:
    When editing fabric friction sounds, I applied a low-pass filter to reduce high frequencies, making the sound blurrier. Then, I added slight reverb and reversed segments to enhance the feeling of spatial confusion and psychological escape.

  • Anger:
    For metal collision sounds, I pitch-shifted the recordings up by half an octave to sharpen the harshness, applied saturation to introduce distortion, and added light delay to create chaotic spatial echoes, enhancing the tension.

Through these techniques, I not only boosted the expressive power of the recordings but also made them highly adaptable for real-time triggering and transformation within the interactive system.
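For reference, pitch shifts like the half-octave boost above map to frequency (or playback-rate) ratios via the standard equal-temperament formula, ratio = 2^(semitones/12). A small helper, purely to make the numbers concrete:

```cpp
#include <cmath>

// Frequency ratio for a pitch shift of n semitones (equal temperament):
// ratio = 2^(n/12). Positive n shifts up, negative n shifts down.
float semitoneRatio(float semitones) {
    return std::pow(2.0f, semitones / 12.0f);
}
```

Half an octave up (+6 semitones) is a ratio of about 1.414, while a 5-semitone downward shift, as used later for the Depression materials, is about 0.749.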

The outcome of this phase was a well-organized Foundational Emotional Sound Library, allowing teammates to quickly and efficiently select materials based on the emotional scene they were designing.

2. Sound Design for Specific Emotional Stages

After completing the foundational sound library and preliminary editing, I took further responsibility for building the complete sound design for three emotional stages: Bargaining, Depression, and Acceptance.

At this stage, the work was no longer simply about recording or editing sounds.
It became a systematic design practice — exploring how sound and emotion interact and express together.

I needed to not only imagine sound reactions that would match the visual animations but also design dynamic sound scenes triggered by various sensors, ensuring that all sound elements fit together harmoniously, immersing the audience in a powerful emotional atmosphere.
This was not just sound creation — it was a process of translating sound into emotional language.

My Workflow

  • Refining sound style definitions: For each emotional stage, I clarified the desired sound characteristics, rhythmic logic, and spatial expressions.

  • Targeted recording and secondary creation: Based on sensor trigger types, I re-recorded critical materials and selected the best-fitting fragments from the sound library for deep processing.

  • Sound construction in Pro Tools: I completed multitrack mixing, rhythm deconstruction, sound field design, and dynamic layering to ensure adjustability and stability within the system.

  • Organized sound assets by functionality: Grouped materials by “background ambiance,” “behavioral triggers,” and “emotional transition responses” for easy system integration.

  • Established structured interactive sound libraries: Created clearly named and uniformly organized folders for each emotion, with usage notes (scenario, trigger method, dynamic range) to allow seamless integration by teammates working on Wwise, Unity, and Max/MSP.

Through this phase, I pushed the project from “sound materials” toward “systematic emotional sound expression,” ensuring cohesion, functionality, and artistic integrity within the interactive framework.


🎧 Sound Design Examples

Bargaining

To express the inner wavering and repetitive struggle, I designed multiple loopable sound units simulating hesitant and anxious emotional flows.

Example 1: The struggle between tearing and re-coiling

  • Foley materials: Paper tearing, fabric crumpling, wood scraping

  • Design techniques:
    Cut tearing sounds into rapid fragments, time-stretch selected parts, overlay slight reversed audio and high-frequency filtering to simulate psychological “fracture and repetition.”
    Layered with background friction sounds to create a tactile tension.

  • Emotional intent: Express the constant push-and-pull between hope and denial.

Depression

For this stage, I aimed to convey a deep emotional low: loss, immersion, and self-isolation, avoiding strong rhythms to create a "slow-time" atmosphere of emotional stagnation.

Example 1: Damp, Oppressive Interior Space

  • Foley materials: Water echoing inside metal buckets, slow palm movements across wood flooring, low-frequency ambient noise

  • Design techniques:
    Pitch-down metal water echoes by about 5 semitones; add long-tail reverb and room simulation; overlay low-frequency brown noise to create pressure.
    Palm sliding sound filtered to preserve only the low-mid range, maintaining subtle motion tension.

  • Emotional intent: Build a psychological space that’s damp, heavy, and hard to escape, reflecting the chaotic silence of depression.

Acceptance

As the most peaceful and open stage, the sound design for Acceptance needed to create a gentle, transparent, spatially flowing atmosphere — while maintaining emotional richness and avoiding flatness.

Example 1: Clear Ambiance of Wind Chimes and Metal Friction

  • Foley materials: Light metal taps, wind chimes, copper wire friction, glass resonances

  • Design techniques:
    Overlay wind chime sounds with fine metallic friction; EQ to emphasize the high-frequency clarity; set glass resonance as the background layer with long reverb; add subtle modulation to copper friction for liveliness.
    Control overall volume dynamics to maintain a slow, flowing texture.

  • Emotional intent: Create a “clear, peaceful, continuous but not hollow” emotional atmosphere, expressing release and inner stability.

Example 2: Fragmented Melodies and Shifting Harmonies

  • Foley materials: Finger-plucked music box, toy piano, breath sounds, small chime bells

  • Design techniques:
    Cut piano notes into fragments and reassemble into irregular melodic lines; add unstable synthetic harmonies and low-frequency fluctuations; convert breath sounds into airy resonances for delicate spatial textures.

  • Emotional intent: Express the idea that even under a calm surface, traces of emotional echoes persist.

These sounds were set to trigger dynamically based on audience proximity and movement, enhancing the feeling of flowing emotions across space.

Conclusion

By the end of this phase, all sound assets were meticulously categorized by emotional type, functionality, and acoustic features, ensuring that teammates could directly integrate them into the interactive system without further editing.

This work greatly improved the team’s sound integration efficiency while preserving the emotional consistency, controllability, and artistic completeness of the final installation experience.

Personal Blog, Weeks 10-11: Installation Testing and Exhibition Reflections

Prior to the exhibition we carried out three installation tests, on 26 March, 27 March, and 2 April. The first and second tests mainly resolved visual problems, while the third focused on the sound design and combined all the elements.

The first installation test

During the first round of testing, several visual issues were identified:

1. Placement of the Kinect: Due to the limited detection range of the Kinect, it’s important to position it in a spot where it can capture all visitors entering the room. At the same time, the device should not obstruct the audience’s line of sight.

2. Adjusting the intensity of certain visual effects: Since the exhibition relies on real-time visuals, performance optimization is key. Take the flame effect in the Anger stage as an example—while a more intense flame burst enhances the emotional impact, it also risks causing system lag. During this test, I repeatedly fine-tuned the effect strength to strike a balance between visual impact and system performance.

The second installation test

1. Since physical sensors are used, factors like lighting, air movement, and dust can interfere with the sensor’s data reception. This sometimes causes the visuals to change even when no one is near the sensor. To minimize this issue, I adjusted the input range within the Math CHOP to reduce sensitivity to such noise.

2. Some effects were not visually obvious: viewers approach the sensor with uneven body surfaces rather than a flat plane, so if the resulting changes are too subtle, the interaction can feel disappointing. Therefore, during testing, I removed certain effects, such as changes in transparency, that didn’t provide strong enough visual feedback.
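The Math CHOP range adjustment described in the first point is essentially a clamp-and-rescale: raw readings outside a trusted input range are clipped before being mapped to a 0–1 output, so sensor noise at the extremes no longer triggers visual changes. A minimal sketch of that idea in C++ (the range values are illustrative, not the ones used in the TouchDesigner patch):

```cpp
#include <algorithm>

// Remap a raw sensor value from [inLow, inHigh] to [0, 1], clamping outliers,
// mirroring the Math CHOP's "From Range" / "To Range" behaviour.
float remapClamped(float v, float inLow, float inHigh) {
    v = std::max(inLow, std::min(v, inHigh));
    return (v - inLow) / (inHigh - inLow);
}
```

Tightening `inLow`/`inHigh` around the range a real visitor actually produces is what suppresses the spurious changes caused by dust and air movement.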

Exhibition Reflection:

Sensor placement: When visitors try to interact directly with the sensor, they may not appear in the visual output (screen) at the same time, which diminishes the overall experience and sense of immersion.

Clarity of the theme: During conversations with the audience, one visitor mentioned that they only understood the exhibition was about emotions after my explanation. This made me reflect on whether the emotional theme could be made clearer from the start. Possible solutions include incorporating interactive emotional quiz questions before entering the space, or placing more visually explicit emotional cues or posters at the entrance of the exhibition.

Personal Blog: Final Integration with M5Sticks, OSC, and MAX

For the final phase of the project, I focused on refining and scaling our distance sensing setup. The goal was to clean up noisy sensor data, set a distance cap, connect everything to a shared network, and build a centralized system for routing and processing OSC messages in real time.

Hardware Refinement

I tested several M5StickC-Plus boards and HC-SR04 sensors, comparing consistency across units. Some sensors fluctuated too much or lost accuracy at mid-range distances. I ended up choosing the four most stable ones.

Each M5Stick was flashed with the same code, but I updated the OSC address string at the top so each sensor would send data to a different address:

String address = "/M55/distance";

Network Setup: Leo’s Router

Instead of using Joe’s mobile hotspot, I switched over to Leo’s router, which provided a more reliable connection. This was important for minimizing packet drops and keeping multiple sensors running smoothly.

const char *ssid = "LeoWiFi";
const char *password = "Presence";

The M5Sticks all send their messages to:

const IPAddress outIp(192, 168, 0, 255);
const unsigned int outPort = 8000;

Distance Measurement and Capping

The sensor code still uses the familiar trigPin/echoPin setup. After triggering and timing the ultrasonic pulse, I added a cap to prevent noisy long-range readings:

float cm = (duration * 0.034) / 2.0;

if (cm > MAX_DISTANCE) {
  cm = MAX_DISTANCE;
}

Averaging the Distance Values

To smooth out the data, I used a rolling average over the last 10 readings. Each new value is added to a buffer, and the average is recalculated every loop.

#define NUM_SAMPLES 10

float distanceBuffer[NUM_SAMPLES] = {0};
int bufferIndex = 0;  // position of the next write into the ring buffer

distanceBuffer[bufferIndex] = cm;
bufferIndex = (bufferIndex + 1) % NUM_SAMPLES;

float sum = 0.0;
for (int i = 0; i < NUM_SAMPLES; i++) {
  sum += distanceBuffer[i];
}

float avgDistance = sum / NUM_SAMPLES;

Normalization for OSC Output

The averaged distance is normalized to a 0–100% scale so it’s easier to use for modulating audio or visual parameters:

float normalizedDistance = (avgDistance / MAX_DISTANCE) * 100.0;

This gives us a value like “23.4” instead of “78 cm”—much easier to use directly in Unity or TouchDesigner.

Sending the OSC Message

Once the data is ready, the M5Stick sends it as an OSC message using the CNMAT OSC library:

OSCMessage msg(address.c_str());
msg.add(normalizedDistance);

udp.beginPacket(outIp, outPort);
msg.send(udp);
udp.endPacket();
msg.empty();

Centralized Processing in Max

Rather than having each sensor talk directly to Unity or TouchDesigner, we built a central Max patch to receive and clean all OSC data.

Here’s what the patch does:

  • Uses udpreceive to listen for all messages on port 8000 
  • Routes each message by OSC address (/M51/distance, /M52/distance, etc.) 
  • Compares each value to a threshold (e.g., < 30) using if objects 
  • Sends a 1 or 0 depending on whether someone is near that sensor 
  • If all sensors are triggered at once, it sends a /ChangeScene message to both Unity and TouchDesigner on port 8001 

This setup keeps the sensor logic modular and centralized—easy to debug, scale, and modify. We only need to change one patch to update the interaction logic for the entire system.
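The patch's threshold-and-combine logic can also be sketched outside Max. This C++ sketch assumes four sensors (matching the hardware section) and uses the example threshold of 30 from the bullet list; it is an illustration of the routing logic, not code that runs in the installation:

```cpp
#include <array>

const float NEAR_THRESHOLD = 30.0f;  // example "someone is near" cutoff
const int NUM_SENSORS = 4;           // assumed from the four chosen M5Sticks

// 1 if a visitor is near that sensor, 0 otherwise
// (mirrors the per-address if objects in the Max patch).
int nearFlag(float distance) {
    return distance < NEAR_THRESHOLD ? 1 : 0;
}

// True when every sensor is triggered at once, i.e. the condition
// under which the patch sends /ChangeScene on port 8001.
bool shouldChangeScene(const std::array<float, NUM_SENSORS> &distances) {
    for (float d : distances)
        if (nearFlag(d) == 0) return false;
    return true;
}
```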

Final Testing

We tested everything together, and it worked: scene and audio changes were successfully triggered in Unity, responding to movement in front of the sensors. I also captured a video of the audio modulating based on proximity.

This system is now:

  • Scalable (thanks to Wi-Fi and OSC) 
  • Cleanly routed (through MAX) 
  • Responsive (with smoothed, normalized data) 

It’s exciting to see everything running reliably after so many small iterations.

Personal Blog: Weeks 7-8

As planned, during these weeks I focused on sound implementation in Wwise and on creating new sound effects.

The progress of our sound team has been slower than planned because we had to resolve some communication issues around the soundscapes’ concept. However, Evan kindly offered to help with the Wwise session build, which drastically sped up the process.

I agreed to do the first and fourth stages (Denial and Depression), and will also do the sound effects for Acceptance.

Denial:

Since I already had some sound effects crafted for this stage, I grouped them into folders. Next, I had to build a logic system in Wwise:

  1. One stereo ambience, in which a low-pass filter is controlled by an RTPC (driven by a proximity sensor).
  2. Random containers of breath_female, breath_male, and breath_reversed, triggered by a Play event with delay and probability variations.
  3. An electromagnetic sound, with its LFO speed controlled by the RTPC (proximity sensor).
  4. A sequence container of high-tick SFX.
  5. A music section of drum loops and percussion ticks (60 bpm).

Link to the Wwise Denial soundcaster:

https://media.ed.ac.uk/media/t/1_kod3nzx7

Depression:

For Depression I wanted to create quite a dark atmosphere as a base layer, and use many human voices to evoke memories of shared moments with friends and family, of social times.

Since the provided visual design sample looked like a person behind a wall that separates them from the present, I wanted to replicate this in the audio by filtering out high frequencies:

The base atmo layer therefore gets heavy high-cut filtering after the trigger starts (this was applied in Ableton, before importing into Wwise), and a second layer of filtered storm ambience is occasionally triggered to add weight and a “clouded” emotion to the scene’s soundscape.

Apart from the unmodified “Vox_x” files (which only have reverb to place them at a distance), an additional random container of transposed voices is used to enhance the dark sense of passing time and bittersweet memories.

The footsteps personally represent a sort of hallucination for me, as if someone else were still around us, watching from close by.

Link to the Wwise Depression soundcaster:

https://media.ed.ac.uk/media/t/1_8el8h85o

 

Technical Development:

We created a Unity project and successfully received OSC data via Lydia’s proximity sensor.

 

Next week we aim to crossfade and move between the five stages, triggered by the sensor data. However, we are still having difficulties with how to approach the switch between stages, and how to specify and limit the data to achieve smooth transitions.
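One common approach to that switch, which we may or may not end up using, is an equal-power crossfade driven by the normalized sensor data: as a fade position moves from 0 to 1, the outgoing stage's gain follows a cosine curve and the incoming stage's gain follows a sine curve, so combined power stays roughly constant and the transition sounds smooth. A generic technique sketch, not a final implementation:

```cpp
#include <cmath>

// Clamp the fade position to [0, 1].
static float clamp01(float t) {
    return t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
}

// Equal-power crossfade gains: out^2 + in^2 == 1 for every fade position t,
// so perceived loudness stays constant while one stage replaces another.
float fadeOutGain(float t) { return std::cos(clamp01(t) * 1.57079632679f); }
float fadeInGain(float t)  { return std::sin(clamp01(t) * 1.57079632679f); }
```

Mapping the smoothed proximity value onto the fade position t would let the audience’s movement itself drive the transition between stages.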

Personal Blog: Switching to M5Sticks, OSC, and Unity

In our team meeting this week, we discussed the technical direction of our project. Up until now, I had been oversimplifying things by using a single Arduino Uno board, physically connected to my computer and sending distance data over the serial port into TouchDesigner. This worked for early tests, but it wasn’t going to scale.

We needed a setup that could support multiple sensors sending data to multiple computers: one machine running TouchDesigner for visuals, and another running Unity, integrated with Wwise, to handle spatial audio. The two systems would be kept in sync using Open Sound Control (OSC)—a protocol built for fast, real-time communication between creative applications.

After that, I had a meeting with Joe Hathaway, who pointed out that the Arduino Uno doesn’t support Wi-Fi. He recommended switching to M5StickC-Plus boards, which have built-in Wi-Fi and are well-suited for sending OSC messages wirelessly over a local network. We worked together to adapt my existing Arduino code to the M5Stick. Rather than printing values to the serial monitor, the device now connects to a personal hotspot and sends real-time OSC messages over UDP.

Code Walkthrough: M5Stick + OSC

Here’s a breakdown of the changes and additions we made in code.

1. Include Libraries and Setup Pins

We import the required libraries for the M5Stick hardware, Wi-Fi, UDP, and OSC, declare the UDP socket used to send packets, and define the trigger and echo pins for the HC-SR04 distance sensor.

#include <M5StickCPlus.h>
#include <WiFi.h>
#include <WiFiUdp.h>
#include <OSCMessage.h>

WiFiUDP udp;  // used later by setup() and loop() to send OSC packets

int trigPin = G0;
int echoPin = G26;

2. Wi-Fi and OSC Setup

We define the OSC address, SSID and password of the Wi-Fi network, the IP address of the receiving machine (e.g. a laptop running Unity), and the port number.

String address = "/M121/distance";

const char *ssid = "JoesPhone";
const char *password = "12345678";

const IPAddress outIp(10, 42, 218, 255);  // Receiving computer IP
const unsigned int outPort = 8000;        // OSC port

3. Setup Function

The setup() function initializes the M5Stick screen, connects to Wi-Fi, and begins listening on the network.

void setup() {
  M5.begin();
  Serial.begin(115200);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);

  while (!connectToWiFi()) {}
  udp.begin(outPort);

  M5.Lcd.println("Ready\n");
  M5.Lcd.println("Sending to:");
  M5.Lcd.print("IP: ");
  M5.Lcd.println(outIp);
  M5.Lcd.print("Port: ");
  M5.Lcd.println(outPort);
}

4. Loop: Distance Measurement + OSC Sending

This is the main loop that measures distance and sends it as an OSC message.

void loop() {
  // Trigger the ultrasonic pulse
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  // Measure echo time
  float duration = pulseIn(echoPin, HIGH);
  float inches = (duration * 0.0135) / 2.0;

  // Send as OSC message
  OSCMessage msg(address.c_str());
  msg.add(inches);
  udp.beginPacket(outIp, outPort);
  msg.send(udp);
  udp.endPacket();
  msg.empty();

  delay(50);  // Small pause to prevent flooding
}

5. Wi-Fi Connection Helper

This function connects the M5Stick to the defined Wi-Fi network and prints status updates to the screen.

bool connectToWiFi() {
  M5.Lcd.print("Connecting");
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);

  unsigned long startAttemptTime = millis();
  while (WiFi.status() != WL_CONNECTED && millis() - startAttemptTime < 30000) {
    M5.Lcd.print(".");
    delay(400);
  }

  if (WiFi.status() != WL_CONNECTED) {
    M5.Lcd.println("\nErr: Failed to connect");
    delay(2000);
    return false;
  } else {
    M5.Lcd.println("\nConnected to:");
    M5.Lcd.println(ssid);
    M5.Lcd.println(WiFi.localIP());
    delay(2000);
    return true;
  }
}

Next Steps

Now that the M5Stick is sending OSC messages over the network, I plan to test this with my team and work through how to receive those messages in both Unity (for Wwise audio control) and TouchDesigner (for visuals). We’ll also explore setting up multiple M5Sticks on the same network and assigning each one a unique OSC address to keep things organized.

Code and diagrams adapted from Joe Hathaway, Edinburgh College of Art, 2024, used under the MIT License.

Visual Research and Inspiration: Visualization Mood boards

In conceptualizing this installation, we developed storyboards to explore how visual elements can convey the five stages of grief. Our focus was on creating immersive, evocative visual experiences that capture the essence of each emotional stage by curating images, colour palettes, textures, and abstract representations.

For each stage of grief – denial, anger, bargaining, depression, and acceptance – we created individual mood boards. These compilations serve as visual anchors for the future to ensure a cohesive yet distinct representation of each emotional phase. The mood boards incorporate a range of visual elements including:

  • Color schemes to reflect the emotional tone of each stage
  • Textures and patterns that evoke specific sensations or feelings
  • Abstract and representational imagery that symbolizes key concepts

(https://miro.com/app/board/uXjVLixa9bM=/)

1. Denial: A thin veil shielding from harsh reality.

Colour scheme: Muted greys and whites for numbness and disbelief.

Textures: Fabric-like patterns, semi-transparent cloth or veil texture.

Imagery: A scene viewed through a textured veil, blurred shapes suggesting an obscured view, or someone partially hidden behind a cloth. The veil could symbolize a ‘preferable reality’.

image sources:
1.https://pin.it/6tKUtohzu
2.https://pin.it/7bgzGUoXV
3.https://pin.it/5sVXEn7fF
4.https://pin.it/4Ng3FQue6
5.https://pin.it/5c1swarlw
6.https://pin.it/2908WJ3Ln

 

2. Anger

Color Scheme: Vibrant reds, oranges, and deep browns; a sense of burning and intensity.

Textures: Turbulent smoke, distorted glass or warped metal, creating a sense of chaos.

Imagery: Swirling smoke obscuring objects, jagged edges piercing the air, or distorted views through broken mirrors, symbolizing frustration and rage.

image sources:
1.https://pin.it/2wanXLXzE
2.https://pin.it/6KplPdwCK
3.https://pin.it/4bEmssVxS
4.https://pin.it/5pPvOTJjV
5.https://pin.it/Jh4HzCkEQ
6.https://pin.it/Krae5xifo

 

3. Bargaining: A futile grasp, everything slips away.

Colour Scheme: Pale yellows and soft blues, symbolizing fleeting hope and fragility.

Textures: Flowing, liquid-like patterns; smooth but uncontrollable surfaces.

Imagery: Liquid dripping or running through open hands, symbolic of time or opportunities slipping away, reinforcing the sense of helplessness and loss of control.

image sources:
1.https://pin.it/1wfk5fLBu
2.https://pin.it/2zZSOpM6i
3.https://pin.it/2wanXLXzE
4.https://pin.it/7gBz254kQ
5.https://pin.it/28BFFqNOK
6.https://pin.it/1i5oIvl9a

4. Depression: A spiral of thoughts bending into unbearable shapes.

Colour Scheme: Primarily blacks, dark grays, and deep blues, with minimal light and desaturated colors, heavy shadows.

Textures: Distorted gouache texture, thick and uneven layers, rough and clotted surfaces, symbolizing emotional stagnation.

Imagery: Abstract shapes submerged in darkness, distorted figures struggling against the weight in heavy, dark pigments, reflecting despair and hopelessness.

 

image sources:
1.https://pin.it/1o4N3Dfh1
2.https://pin.it/7uljdZqUp
3.https://pin.it/7my5pn7jV
4.https://pin.it/6a8p2261a
5.https://pin.it/4WIGVj0Sp
6.https://pin.it/5eUzBKMc1

5. Acceptance: From shadow to light

Colour Scheme: Soft purples transitioning to lighter hues, symbolizing transformation.

Textures: Smooth gradients, gentle curves.

Imagery: Sunrise, open spaces, balanced compositions.

 

image sources:
1.https://pin.it/6CW58fcqH
2.https://pin.it/3YWkJy4Yj
3.https://pin.it/745npWwI9
4.https://pin.it/79KAq3H8E
5.https://pin.it/1cYJpkZeF
6.https://pin.it/6I1afqwSd