
Interview with the audience

On March 8th, I interviewed Yilin, a medical student visiting Edinburgh, as part of our project exploring the visualization of breathing through interactive art. Her responses provided valuable insights into how breathing is perceived and experienced. Here’s a summary of the key points from our conversation:

Key Insights

  1. Awareness of Breathing:
    Yilin mentioned that her awareness of breathing increased after the pandemic, especially as a medical student focused on lung health.

  2. Noticing Breathing:
    She notices her breathing most during moments of anxiety or tension, when her breath becomes faster.

  3. Emotional and Physical Differences:

    Steady breathing makes her feel calm, while rapid breathing increases anxiety.
    Deep breathing slows her heartbeat, while shallow breathing makes it faster.

  4. Visualizing Breathing:
    Yilin imagines her breathing as similar to a cardiogram—sometimes stable, sometimes fluctuating.

  5. Colors and Relaxation:
    She finds natural colors like sky blue and grass green relaxing, while black and white evoke tension, symbolizing death.

  6. Sound and Breathing:
    Yilin feels that nature sounds (e.g., birds, leaves) help stabilize her breathing and make her feel calm. She compared her breath to the sound of wind through leaves.

Key Takeaways for Our Project

  • Visuals: Incorporate cardiogram-like patterns to represent breathing, with smooth transitions for calm states and fluctuations for tension.

  • Colors: Use natural, calming colors like blue and green to evoke relaxation.

  • Sound: Integrate nature-inspired sounds to enhance the immersive experience.

Designing a Visual Guide for 5-5 Breathing

In our previous research, we discovered that the 5-5 breathing technique—inhaling for 5 seconds and exhaling for 5 seconds—can help people achieve a state of calm and relaxation. To enhance this experience, we decided to incorporate a visual guide into our interactive installation. The visual design is divided into two parts:

1. A fixed animation that guides the 5-5 breathing rhythm.

2. A dynamic graphic that responds to the user’s actual breathing patterns.
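Whatever form the fixed animation finally takes, its timing reduces to a simple periodic phase calculation. As a rough sketch (the function names are mine, purely illustrative, and not part of our TouchDesigner network), the 5-5 rhythm can be modeled in Python like this:

```python
import math

def breath_phase(t, inhale=5.0, exhale=5.0):
    """Return ('inhale' or 'exhale', progress 0..1) for time t in seconds."""
    period = inhale + exhale
    cycle_t = t % period
    if cycle_t < inhale:
        return "inhale", cycle_t / inhale
    return "exhale", (cycle_t - inhale) / exhale

def guide_value(t, inhale=5.0, exhale=5.0):
    """Smooth 0..1 driver for any animation: rises over the inhale, falls over the exhale."""
    phase, p = breath_phase(t, inhale, exhale)
    # Half-cosine easing so the turn-around points feel gentle rather than abrupt.
    eased = (1 - math.cos(math.pi * p)) / 2
    return eased if phase == "inhale" else 1 - eased
```

The `guide_value` output could then drive any visual parameter (a bar length, a particle flow speed) so the guide stays in lockstep with the 5-5 rhythm.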

My role is to design the fixed animation, which serves as a clear and intuitive guide for users to follow the 5-5 breathing rhythm. However, this task has proven to be more challenging than expected.

Initial Attempts and Challenges

1. Rotating and Expanding Dots:

I started by experimenting with rotating and expanding dots, hoping to create a sense of rhythm and flow. However, this design conflicted with the dynamic sphere in the second part of the visual, creating visual clutter and confusion.

2. Expanding and Contracting Circles with Halos

Next, I tried using a circular design that expanded outward during inhalation and contracted inward during exhalation, accompanied by a glowing halo effect. While this was visually appealing, it still overlapped and competed with the central dynamic sphere, making it difficult for users to focus on either element.

Key Insight: These attempts revealed that centralized and circular designs are not suitable, as they conflict with the dynamic sphere that occupies the center of the screen.

3. Background Color Transitions

To avoid overlapping with the dynamic sphere, I shifted my focus to the background, using color transitions to indicate the breathing phases. For example:

Inhale: Transition from a cool color (e.g., blue) to a warm color (e.g., orange).

Exhale: Transition back to the cool color.
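The transition itself is a linear blend between two colors, driven by the breathing phase. A minimal Python sketch of the idea, assuming simple RGB interpolation (the specific color values are illustrative):

```python
# Illustrative RGB endpoints for the cool and warm background states.
COOL = (0.2, 0.4, 0.9)   # blue-ish
WARM = (0.95, 0.6, 0.2)  # orange-ish

def lerp_color(c1, c2, t):
    """Linearly blend two RGB colors; t in [0, 1]."""
    return tuple(a + (b - a) * t for a, b in zip(c1, c2))

def background_color(t, inhale=5.0, exhale=5.0):
    """Cool-to-warm over the inhale, warm-to-cool over the exhale."""
    period = inhale + exhale
    cycle_t = t % period
    if cycle_t < inhale:
        return lerp_color(COOL, WARM, cycle_t / inhale)
    return lerp_color(WARM, COOL, (cycle_t - inhale) / exhale)
```
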

However, this approach introduced two new challenges:

      1. Color Interpretation: Users might not intuitively associate specific colors with inhaling or exhaling, leading to confusion.

      2. Transition Smoothness: A sharp transition between colors felt abrupt and unnatural, while a slow, gradual transition made it difficult for users to distinguish the exact moments to inhale or exhale.

Current Direction and Next Steps

Based on these challenges, I’ve realized that the fixed animation needs to:

Avoid Centralized and Circular Designs: To prevent visual conflict with the dynamic sphere.

Provide Clear Timing Cues: Ensure users can easily identify when to inhale and exhale.

Maintain Visual Harmony: Complement the dynamic sphere without overwhelming it.

Moving forward, I plan to explore the following solutions:

Edge-Based Animations: Create animations that occur at the edges of the screen, such as a progress bar or waveform, to guide breathing without interfering with the central sphere.

Text or Symbol Indicators: Use simple text (e.g., “Inhale” and “Exhale”) or symbols (e.g., arrows) to provide clear instructions.

Subtle Motion Cues: Incorporate gentle, non-intrusive motion (e.g., flowing lines or particles) that aligns with the breathing rhythm.

Conclusion

Designing a visual guide for the 5-5 breathing technique has been a process of trial and error. While my initial attempts faced challenges, they provided valuable insights into what works and what doesn’t. By focusing on edge-based designs and clear timing cues, I’m confident that the final fixed animation will effectively guide users toward a calm and balanced breathing experience.

1st Integration & Troubleshooting

Overview: Integrate the existing visuals and audio to review the overall effect, and troubleshoot any unresolved issues.

Dates of Development | March 15, 2025

Process Group | TouchDesigner & Max

  1. This integration began by incorporating the cyclic breathing-inspired visuals created by Yixuan into Hefan’s breathing-responsive visuals. The combined visuals served as a background to guide the breathing rhythm. We also tested how well it aligned with Roulin’s background music.

    Issues Identified:

    • The breathing-responsive visuals felt somewhat monotonous, as they only involved scaling.
    • The cyclic background visuals lacked variation.
  2. I brought the borrowed DPA microphone and RME audio interface, and the team conducted a breathing test to observe how airflow influenced the visual changes. We found that the visuals were highly unstable, sometimes fluctuating excessively. Yixuan adjusted the range settings, making the visual transitions smoother.

    Meanwhile, Ashley brought a router to transfer TD data from my computer to her Max setup. Yixuan and Ashley worked on resolving the connection and successfully connected the router to my computer, which allowed Ashley’s Max patch to receive data directly from TD.

  3. In parallel, we discussed Li’s installation design. We explored two potential plans:

    • Plan 1: Creating a non-contact breathing mask with a hidden microphone. This design would better capture the exhaled airflow and create a recirculation effect.
    • Plan 2: Designing a potted plant installation, where the microphone would be embedded as a branch of the plant.

    The final decision is still pending.

 


TouchDesigner Breath Detection Testing

Overview: This test is based on Hefan’s TD testing, focusing on refining the interaction between breath detection and visual output. The goal was to optimize microphone placement, enhance the sensitivity of airflow detection, and improve the overall experience of guided breathing. Key areas of experimentation included adjusting audio analysis parameters, refining the response to different breathing intensities, and ensuring smooth visual transitions.

Findings & Adjustments

Breath Sensitivity & Microphone Positioning

  • Issue: When blowing gently, the system did not detect sufficient airflow, resulting in weak visual feedback.
  • Solution: The microphone was found to be most sensitive when placed horizontally, 3-4 cm below the mouth. This position maximized airflow detection while minimizing background noise.
  • Next Steps: Implement a fixed microphone stand or position guide to ensure all participants use the optimal placement without manual adjustments.

 

Breath Decay & Visual Transition

  • Issue: During guided breathing (5s inhale, 5s exhale), the airflow naturally decreases over time. This caused a sudden drop in visual output, making the transition feel abrupt.
  • Solution: Adjust audio smoothing settings and refine the visual decay curve to create a more natural fade-out effect as breath weakens.
  • Next Steps: Experiment with lag filters and dynamic scaling to create a gradual transition when airflow weakens.
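A lag filter of the kind we plan to test behaves like a one-pole smoother with separate rise and fall times. A rough Python model of that behavior (the class and coefficient choices are mine, for illustration only, not the actual TD settings):

```python
class LagFilter:
    """One-pole smoother with asymmetric attack/release, mimicking a lag-style filter."""
    def __init__(self, lag_up=0.3, lag_down=2.0, sample_rate=60.0):
        # Convert lag times (seconds) into per-sample smoothing coefficients.
        self.a_up = 1.0 / (lag_up * sample_rate)
        self.a_down = 1.0 / (lag_down * sample_rate)
        self.value = 0.0

    def step(self, target):
        # Rise quickly toward louder input, fall slowly when the breath fades.
        a = self.a_up if target > self.value else self.a_down
        self.value += (target - self.value) * min(a, 1.0)
        return self.value
```

Because `lag_down` is longer than `lag_up`, a sudden drop in breath level decays gradually instead of snapping the visuals to zero, which is exactly the fade-out behavior we want.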

High-Frequency Sensitivity & Detection Consistency

  • Observation: The system responded to breath but not to spoken words, confirming that it effectively isolates airflow instead of voice.
  • Adjustment: The high-frequency range was the most responsive, while the low-frequency range was unnecessary for detecting breath.
  • Next Steps: Fine-tune gain and threshold settings to ensure consistent response across different breath intensities.
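The gain/threshold tuning above amounts to a simple noise gate on the detected breath level. A minimal Python sketch of that idea, with illustrative threshold and gain values (not our calibrated settings):

```python
def gate(level, threshold=0.02, gain=4.0):
    """Simple noise gate: ignore levels below the threshold, amplify the rest.

    Returns a normalized 0..1 control value for driving visuals.
    """
    if level < threshold:
        return 0.0
    # Amplify what remains above the threshold, clamped to 1.0.
    return min((level - threshold) * gain, 1.0)
```

Raising `threshold` rejects more room noise at the cost of missing very gentle breaths, while `gain` controls how quickly the visuals reach full response.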

Next Steps & Refinements

  • Implement a fixed microphone position to avoid user adjustments (possibly by mounting it within the installation).
  • Optimize audio analysis parameters to ensure smooth transitions when breath intensity decreases.
  • Conduct further tests to validate sensitivity across different users and refine visual-matching dynamics.

Development Blog – March 12th, 2025


Location – Alison House Music Store

Participants – Xinyi Qian, Roulin Liu

Microphone and TouchDesigner Troubleshooting

Equipment

Microphones:

  • Rode smartLav+ Lavalier Mic
  • DPA 4061 Lavalier Mic

Audio Interface:

  • RME FireFace UCX

Software:

  • TouchDesigner
  • RME FireFace UCX Driver

Process of Development

Issue Identification
On Tuesday, we attempted to connect the Rode smartLav+ Lavalier Mic to the computer, but it failed to capture audio.
It was unclear whether the issue was due to hardware failure, software incompatibility, or another technical limitation.

Equipment Borrowing & Alternative Testing
Xinyi re-borrowed the Rode smartLav+ Lavalier Mic for further testing.
Additionally, we borrowed the DPA 4061 Lavalier Mic, as research suggested it offers superior audio quality.
Connection Issue with DPA 4061
The DPA 4061 uses an XLR adapter, which cannot be directly connected to a computer.
To resolve this, we borrowed the RME FireFace UCX audio interface.

Setup & Software Installation
With the help of the Music Store staff, we successfully connected the microphones to the computer using the RME FireFace UCX.
We installed the necessary drivers for the FireFace UCX to ensure proper functionality.

Testing & Results
Both microphones were able to function properly after the setup.
After conducting audio tests, we found that:
  • The DPA 4061 captured more subtle details, such as breathing sounds.
  • It was more sensitive and had lower latency than the Rode smartLav+.
Based on these results, we decided to proceed with the DPA 4061 for our project.

Upcoming Goals
We will fine-tune parameters in RME and TouchDesigner to achieve the best audio quality.

On the Sound Design Front – Max MSP – Reverb, Delay, & Stutter (Ashley Loera)

Dates of Development | March 10th, 2025

Sound Design Director | Max MSP: Ashley Loera

Process of Development

Equipment

Software:

  • Max MSP Software

Hardware:

  • MacBook Air Laptop

Max MSP – Audio FX

  • Delay
  • Reverb (Dark Hall Reverb)
  • Stutter
Progress Notes

Accomplished the following goals:

  • My aim this week was to develop three unique audio FX that change in real time with the audio input, in conjunction with the TD output data.
  • As shown below, I successfully accomplished this by creating simple audio FX (Delay, Reverb, Stutter).
  • To control feedback in the audio output, I created a limiter on each gain slider by multiplying the signal by 0.5, decreasing the volume before it reaches the gain slider and reducing the possibility of feedback.
Troubleshooting Software
    • Positives: Successfully developed three unique audio FX (Delay, Reverb, Stutter) by creating three separate value ranges using TD output data.
    • Issues: My concern is that I currently have to be very mindful of the volume levels, as I have experienced feedback even with the set limiters. This can hinder the process, as the interference is quite jarring and distracting.
    • Approach: Create a buffer section and smooth out the data output from TD to filter the rate of change for the audio FX (as also suggested by our tutor, Philly, in the March 11th tutorial session).
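One simple way to smooth control data of this kind is a small moving-average buffer that slows the rate of change before it drives the effect parameters. A Python sketch of the concept (illustrative only; the actual implementation would live inside the Max patch):

```python
from collections import deque

class DataSmoother:
    """Moving-average buffer that slows the rate of change of control data."""
    def __init__(self, size=10):
        # Fixed-length buffer: old samples fall off automatically.
        self.buf = deque(maxlen=size)

    def step(self, x):
        """Push a new raw value and return the smoothed average."""
        self.buf.append(x)
        return sum(self.buf) / len(self.buf)
```

A sudden jump in the incoming TD value is spread over `size` samples, so the effect parameters glide rather than snap, which should reduce the jarring changes described above.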
Upcoming Goals:
  • Develop an oscillator that changes in real time with the audio input. To accomplish this, I located the following YouTube tutorial:

“Max/MSP: Using Peakamp~ to Control Frequency of an Oscillator.”

Youtube Video: www.youtube.com/watch?v=tPP2VpRiwqU

  • Develop a gain buffer to troubleshoot the feedback issues. I located the following Max MSP tutorial, which analyzes the audio input and provides very helpful tips on audio analysis.

Youtube Video: https://www.youtube.com/watch?v=oUi9UYW8cKY&t=20s

Tutorial Updates (March 11, 2025)

Feedback:

  • Concerns regarding unpredictable audio feedback
  • Suggestions to further develop the Max Patch by doing the following:
        • Create a Gain Buffer to eliminate possibility of feedback during installation.
        • Smooth out the TD data output – to avoid abrupt changes in the Audio FX
        • Ensure that the Max MSP Patch serves as an accompaniment to the fixed composition created by Sound Director Ruolin, to ensure cohesiveness in audio presentation.
        • Consider including different types of Reverbs – via VST Plugins or by creating multiple.
                  • Schroeder Reverb:

                    Youtube Video: https://www.youtube.com/watch?v=SJiOXhbF810

Bibliography:

Bolaños, Gabriel. “Max/MSP: Using Peakamp~ to Control Frequency of an Oscillator.” YouTube, YouTube, 19 Jan. 2022, www.youtube.com/watch?v=tPP2VpRiwqU.

Max, Learning. “Max/MSP – Microphone_Analysis.” YouTube, YouTube, 28 Aug. 2018, www.youtube.com/watch?v=oUi9UYW8cKY&t=20s.

Hawthorne, Michael. “Implementing Schroeder Reverb in Max/MSP.” YouTube, YouTube, 13 June 2018, www.youtube.com/watch?v=SJiOXhbF810.

Interview Outline & Analysis (along with Ethics Form & Interview Consents)

Overview: Our group conducted a series of interviews to explore how people perceive breathing and its connection to emotions, physical state, visuals, and sound, supporting our project.

Interview Goals

  • Understand when people become aware of their breathing.
  • Explore how different breathing states (deep vs. rapid) affect emotions and the body.
  • Gather insights on visual and auditory representations of breathing for interactive design.

Interview Participants

Participants included but were not limited to:

  • Meditation practitioners: highly aware of breathing patterns.
  • Medical students: providing a physiological perspective.
  • General participants: offering everyday experiences.

The interview findings provide diverse user perspectives for our interactive design.

Interview Outline

Interview Analysis

This interview focused on three core aspects: perception of breathing, emotional & physiological experiences, and audio-visual associations. The interview consisted of nine questions.

Interview Analysis English Version

Ethics Form & Interview Consents


Exploring Audio-Driven Visuals in TouchDesigner: From Shape Transformations to Microphone Interaction


Recently, I have been experimenting with TouchDesigner to explore dynamic shape transformations and audio-driven visual effects. Initially, I focused on modifying basic geometric forms using Noise, Output-Derivative (Slope), Threshold, Level (Gamma Adjustment), and Bloom effects. Later, I integrated microphone input to control shape size and color, using a Transform SOP for scaling and translation, with two Null CHOPs managing position and color separately.

TouchDesigner learning notes and analysis


1. Initial Shape Transformations and Visual Experiments

① Noise + Output-Derivative (Slope): I started by applying a Noise CHOP to introduce organic movement into the shape. To enhance the natural transitions, I used an Output-Derivative (Slope CHOP) to smooth out the rate of change, preventing sudden spikes in movement.

② Threshold + Level (Gamma Adjustment): A Threshold CHOP was used to create high-contrast effects, transforming smooth gradients into distinct binary patterns. A Level CHOP (Gamma Adjustment) helped fine-tune the brightness curve, making the visuals either softer or more dramatic.

③ Bloom Effect: Finally, I applied a Bloom effect, enhancing the highlights and adding a glowing aura to the shape, making it more visually engaging.


2. Using Transform SOP for Background, Scaling, and Positioning

To better organize the visuals, I used the Transform SOP to:

  • Add a background (either static or gradient-based).
  • Control scaling dynamically.
  • Apply translation effects to move the shape across the screen.
  • Link transformations to audio input, so that sound influences both size and position.

Additionally, I created two Null CHOPs: one for position (to control movement based on audio input) and one for color (to change the color dynamically according to the audio intensity).


3. Integrating Microphone Input with Audio Device In CHOP

After experimenting with basic shape transformations, I moved on to controlling these parameters with real-time audio input.

① Capturing Microphone Input: Using an Audio Device In CHOP, I connected my microphone to feed real-time audio data into TouchDesigner. The raw audio data fluctuates too quickly, so direct mapping would result in erratic visual behavior. To ensure a smooth transformation, I applied additional processing.

② Smoothing Audio Input: Filter CHOP + Lag CHOP. Filter CHOP: set Filter Width = 2.0 to smooth out the fluctuations, reducing rapid jumps. Lag CHOP: applied a gradual transition effect with Lag Up = 1.2 (a slower increase when the volume rises) and Lag Down = 3.0 (an even slower decrease when the volume drops).

③ Mapping Audio Data to Shape Scale with Math CHOP: Mapped the volume range (0.01 ~ 0.3) to the shape scale (0.5 ~ 2.0). This ensures that louder sounds gradually enlarge the shape, while softer sounds slowly shrink it, avoiding sudden jumps.
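The Math CHOP range mapping amounts to a clamped linear remap. A minimal Python sketch of the same calculation (the function name is mine, purely illustrative, not part of the TD network):

```python
def remap(x, in_lo=0.01, in_hi=0.3, out_lo=0.5, out_hi=2.0):
    """Map a volume reading to a scale factor, clamping to the input range."""
    # Clamp so out-of-range volumes never produce out-of-range scales.
    x = max(in_lo, min(in_hi, x))
    t = (x - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)
```

Clamping is what prevents a loud cough from blowing the shape up beyond the 2.0 ceiling.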


4. Connecting Audio Data to Transform SOP

Mapped the output of the Math CHOP to the Transform SOP’s Uniform Scale, enabling shape size changes based on audio intensity. Connected the Null CHOP (Position) to the Translate parameters so that the shape moves dynamically with the sound. Linked the Null CHOP (Color) to the color channels, allowing the shape’s color to shift depending on volume levels.

 

