
Reflection and Process | Plastic Flowers | What do Lilies Symbolize? (Ashley Loera)

Dates of Development | Friday, March 21st, 2025 & Friday, March 28th, 2025

Sound Design Director | Max MSP: Ashley Loera

Reflection and Process | Plastic Flowers 

What do Lilies Symbolize? 

Plastic Lilies

On Friday, March 21st and Friday, March 28th, our Process group crafted a beautiful sculpture of lungs and flowers with lights. I got to help create the plastic lilies with plastic water bottles and hot glue.

Process:
  • I cut oval shapes in the water bottles and bent/shaped the ovals into petal shapes with a candle flame. During this process, I had a wonderful time creating petals that felt natural, as though they were moving in the wind, capturing the movement of lilies as a form of breath.

A. Depiction of oval plastic petal – with a curve melted on the petal to mimic movement in the flower.

B. We used candles to melt the plastic.

C. Example of a completed flower petal.

D. Glued 6 petals together to create the structure of a Lily.

Symbolism:
  • In various cultures, especially Western culture, Lilies represent forgiveness. Floral design is one of my most passionate hobbies, and in my journey as a florist, I have learned the symbolism behind many flowers.
  • Oftentimes, in spiritual practices, faiths and organizations, forgiveness is shared as a value that leads to peace and healing. For me, the symbolism behind the Lily and our use of it in this presentation holds a very special and powerful connection to the Process of breath.
How does this tie into our installation?:
  • Through the process of forgiveness, we learn to let go. Not just emotionally or mentally. We learn to let go in the body. My education background is in Psychology and Substance Abuse Treatment and Prevention. During my undergraduate education I took courses in Trauma Psychology. One important concept and truth I learned was that the body holds onto Trauma, Resentment and Pain. Oftentimes through the healing process, people must learn to work in the body to release trauma, because the body internalizes pain, which later results in higher levels of cortisol, ultimately manifesting as anxiety, depression, chronic pain and illness. Meditation and Mindfulness have been used as powerful therapeutic tools to help treat the body.
  • As forgiveness is a powerful step of letting go, this process spiritually, psychologically and physically relates to the process of inhaling and exhaling in meditation.

Final Max Patch | Presentation Notes & Feedback (Ashley Loera)

Dates of Development | April 1st -3rd, 2025

Sound Design Director | Max MSP: Ashley Loera

Final Max Patch | Presentation Notes & Feedback

Software:

  • Max MSP Software
  • Touch Designer

Hardware:

  • Xinyi’s Laptop
Final Patch:

Please refer to the video below, showcasing the final patch, and how it operates.

Presentation Process

We opted out of using the router, as Xinyi’s laptop had the RAM and processing efficiency to run both TD and Max MSP at the same time. Just in case, I made sure to bring my laptop and the Router. There were no issues on this front during the presentation.

During the presentation set-up, I ran into some issues with the Audio playback of the Max Project in the Atrium. Philly was able to assist us in adjusting the settings of the hardware and we were able to get the Meditation music playing. Once the Meditation music began to play, I noticed that the Microphone was not connecting to Max MSP.

During this time, a group of people entered the room before the installation presentation was set to begin (at 11:00AM) and one of our Process group members asked us to allow the group to try the installation with audio. After I explained that we were still setting up and were having issues with the microphone audio, our Process group member continued to ask if her friends could experience the presentation without the microphone, as they were short on time.

Unfortunately, the absence of microphone playback did not present well to the group that came to experience the installation and I had to take the initiative to stop the presentation and finish the audio set-up. The group was understanding, however, I felt that it would have been better to finish setting up before allowing others to experience our presentation, especially as this interruption caused us to delay our installation by 30 minutes. I also believe that we may have needed more time to set up due to the unforeseen audio technical issues that arose last minute.

I was able to work with Philly to adjust Max MSP settings, restart the program and connect the Microphone to Max MSP. Once Audio Technical Issues were addressed we began the presentation of our installation at 11:30AM.

During the presentation, one of our group members asked us to turn down the audio input as it was too loud. I originally obliged; however, I noticed that TD was not able to process the audio information coming into the microphone because the audio input gain had been decreased significantly. When I turned the gain up again, slightly, TD was able to process the audio input information, directly impacting the color of the visual presentation of our installation. However, the audio input gain needed to be increased further still to make TD sensitive enough to the audio input. I believe that by creating another gain buffer here, I would have been able to address both volume control and data sensitivity in TD.

Feedback:
  • Teaching fellow Roderick Dunlop brought it to my attention that he felt the installation didn’t have much of an interactive impact and could have been improved in this respect.
  • When speaking with our audience, we received feedback that the space we developed was calm, well prepared and well executed. One gentleman suggested that the audio playback almost resonated with him as a thunderstorm, and he found it interesting when considering it as a comparison to the process of breath. I found this insight unique and creative.
  • We also received the following feedback to improve our installation from Jules and Philly:
      • The Meditation Music was clipping. I later addressed this by creating faders in the audio samples to allow for clean transitions between them. When the clipping continued, I realized that the additional splicing of the 6 audio samples was itself causing clipping, because playback was starting halfway through a track.
      • The Audio Presentation and Visual Presentation were well prepared, however, still static.
      • Reiterated that the installation did not have much of an interactive impact.
      • Suggested diversifying the visual presentation.
      • Suggested removing the audio playback of the audio input completely. In its place, use the audio input to manipulate an audio sample of breath, giving the illusion of breathing while maintaining sample clarity and consistency.
      • Suggested adding a synthesizer to the Max patch to create a varied element in the arrangement, allowing the experiencer to hear a change in sound over time and adding movement to the music and audio presentation.
  • During the Installation, Philly was able to assist me in creating a basic synthesizer, adding an additional element presenting movement and a unique perspective on the sound front. The following patch was added during the presentation.

Please refer to the snapshot below:

Final Max Patch Goals:
  • I will develop a sample envelope of a breath audio sample that is directly manipulated by the audio input and audio FX, to allow for clarity and consistency in the Max patch.
  • I will develop a fader chain for the spliced samples (see the sketch below).
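As a rough illustration of that fader chain, here is a minimal Python sketch of an equal-power crossfade joining spliced clips. The clip format (mono NumPy arrays), fade length and names are all placeholders rather than details of the actual Max patch, which will use fader objects instead.

```python
import numpy as np

def equal_power_fade(length):
    """Equal-power fade-in / fade-out curves for a crossfade of `length` samples."""
    t = np.linspace(0.0, 1.0, length)
    return np.sin(t * np.pi / 2), np.cos(t * np.pi / 2)  # fade-in, fade-out

def chain_with_faders(clips, fade_samples=22050):
    """Join spliced clips end to end, crossfading each boundary (~0.5 s at 44.1 kHz)
    so transitions stay smooth and the splice points do not clip or pop.
    Assumes each clip is a mono float array longer than fade_samples."""
    fade_in, fade_out = equal_power_fade(fade_samples)
    out = clips[0].astype(np.float64)
    for clip in clips[1:]:
        clip = clip.astype(np.float64)
        # overlap the tail of what we have with the head of the next clip
        out[-fade_samples:] = out[-fade_samples:] * fade_out + clip[:fade_samples] * fade_in
        out = np.concatenate([out, clip[fade_samples:]])
    return out
```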

Add Effects

 

I’ve been playing with TouchDesigner recently, exploring ways to connect music and visuals: using the mid and high frequencies of a background track to drive dynamic changes in visuals.

I started by using the Audio Spectrum CHOP to split the audio into different frequency bands. Then I extracted the mid and high values as control parameters to manipulate certain layers in the scene—like scaling, position shifting, and even shader-based distortion. The result is a visual that “dances” with the music in real time.
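Outside TouchDesigner, the same idea can be sketched in plain Python. This is only an illustration of what the Audio Spectrum CHOP provides and how band values can drive parameters; the band edges and parameter ranges below are my own assumptions, not values from the project file.

```python
import numpy as np

def band_energies(buffer, sample_rate=44100):
    """Rough low / mid / high energy of one audio buffer, analogous to reading
    bands off an Audio Spectrum CHOP (band edges here are illustrative)."""
    spectrum = np.abs(np.fft.rfft(buffer)) / len(buffer)
    freqs = np.fft.rfftfreq(len(buffer), d=1.0 / sample_rate)
    low = spectrum[freqs < 250].mean()
    mid = spectrum[(freqs >= 250) & (freqs < 4000)].mean()
    high = spectrum[freqs >= 4000].mean()
    return low, mid, high

def drive_parameters(mid, high):
    """Map band energies onto hypothetical visual parameters (scale, offset, distortion)."""
    scale = 1.0 + 0.5 * np.tanh(mid)         # mid band breathes the layer's size
    offset = 0.2 * np.tanh(mid)              # and nudges its position
    distortion = float(np.clip(high * 0.1, 0, 1))  # high band drives shader distortion
    return scale, offset, distortion
```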

One of my favorite additions is a little boom glow effect—when the high frequencies suddenly spike, it triggers a burst of glowing light in the visuals, like a tiny explosion. It adds a sense of rhythm and punch to the whole piece, emphasizing certain beats and creating a more immersive, concert-like feel.
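Conceptually, the boom glow is a one-shot envelope that fires when the high band jumps above its recent baseline and then decays. A minimal sketch, with the threshold ratio and decay rate as assumed values rather than the ones in my network:

```python
class BoomGlow:
    """Fires a glow burst when high-frequency energy spikes, then lets it fade."""
    def __init__(self, ratio=1.8, decay=0.92):
        self.baseline = 1e-6   # running estimate of "normal" high-band energy
        self.ratio = ratio     # how far above baseline counts as a spike
        self.decay = decay     # per-frame fade of the glow
        self.glow = 0.0

    def update(self, high_energy):
        if high_energy > self.baseline * self.ratio:
            self.glow = 1.0                                       # trigger the burst
        self.glow *= self.decay                                   # fade every frame
        self.baseline = 0.95 * self.baseline + 0.05 * high_energy # track the baseline
        return self.glow   # feed this into the bloom/glow strength
```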

Next step, I’m thinking about incorporating the low frequencies to control more grounded movements—like global structure wobble or a soft, breathing motion in the background.

Refining the 5-5 Breathing Background Visuals

After the last class, I focused on improving the background visuals for our 5-5 breathing guide, addressing feedback from earlier critiques. The previous version had two main issues:
1. After its recent adjustments, the visuals didn’t fully align with the dynamic sphere animation.
2. The repetitive patterns felt monotonous over time.

To solve this, I revisited the original particle effect style but expanded its variety. I created eight distinct 5-second animations, each with unique motion paths—some resembling slow upward drifts, others mimicking gentle pulses or radial expansions. By stitching these clips into a seamless loop, the background now feels dynamic and non-repetitive, subtly shifting to keep viewers engaged without distracting from the central breathing guide. I also fine-tuned the color palette and blending modes to better harmonize with the main dynamic sphere.
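Under the hood, the looping comes down to choosing which of the eight 5-second clips is active at any moment. A tiny sketch of that selection logic, assuming the clips simply cycle in order:

```python
CLIP_LENGTH = 5.0   # seconds per background animation
NUM_CLIPS = 8

def active_clip(t):
    """Return the index of the background clip playing at time t, plus the local
    playhead within that clip, so the eight animations loop seamlessly."""
    index = int(t // CLIP_LENGTH) % NUM_CLIPS
    local_time = t % CLIP_LENGTH
    return index, local_time
```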

The updated background not only complements the breathing rhythm but also adds depth to the installation’s atmosphere. Next, I’ll test the visuals with real-time breathing data to ensure the transitions feel intuitive and calming.

TouchDesigner Visual Update

In this test, I explored using microphone input to drive real-time visual changes. While the initial interaction worked, I believe the main visuals could be more expressive and exaggerated. For example, the mic input could control a wider range of parameters — such as diffusion, distortion, brightness, and more dramatic rotations — to create a more immersive, reactive effect. I’m open to experimenting with any visual responses that feel dynamic and playful.

Additionally, when I’m not blowing into the mic, the visual decay happens a bit too fast. Slowing down the rate of change when idle could help maintain a smoother and more organic visual flow.

Test Video in TouchDesigner

Updated video

TouchDesigner Visual Update – Version 2

In this iteration, I moved away from the original circular visual form and transitioned to a more organic, irregular shape. This change allowed the visual to feel more fluid and less constrained, resembling something between flowing matter and an energetic field.

Additional enhancements include:

  • Increased Mic Interactivity: The microphone input now drives multiple parameters simultaneously, including diffusion intensity, mesh distortion, brightness shifts, and rotation amplitude. These changes respond more dramatically to volume spikes, making the interaction more expressive.

  • Particle Responsiveness: The internal texture now breaks apart and swirls with more visible motion, suggesting a kind of sonic turbulence.

  • Visual Decay Tweaks: The decay time when there’s no input has been lengthened, allowing the form to fade out more gracefully rather than collapsing too fast.

  • Color Feedback: There is subtle hue shifting based on sound amplitude, helping the piece feel alive and emotionally responsive.
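Putting the list above together, the interaction can be thought of as one smoothed mic level fanned out to several parameters at once. The sketch below is only an approximation of that mapping; the parameter names and ranges are placeholders, not the values wired up in TouchDesigner.

```python
import colorsys

def fan_out(level):
    """Fan one smoothed mic level (0..1) out to several visual parameters at once:
    diffusion, distortion, brightness, rotation amplitude and a subtle hue shift."""
    level = max(0.0, min(1.0, level))
    params = {
        "diffusion":  0.1 + 0.9 * level,     # more spread as the input gets louder
        "distortion": 0.5 * level ** 2,      # distortion ramps in more dramatically
        "brightness": 0.4 + 0.6 * level,
        "rotation":   45.0 * level,          # degrees of extra rotation amplitude
    }
    hue = (0.55 + 0.15 * level) % 1.0        # subtle hue shift with amplitude
    params["color_rgb"] = colorsys.hsv_to_rgb(hue, 0.6, params["brightness"])
    return params
```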

2nd Integration & Troubleshooting

Overview: Successfully connected TouchDesigner and Max, and optimized visual parameters in TouchDesigner.

Dates of Development | March 26, 2025

Process Group | Xinyi Qian, Ashley Loera, Hefan Zhang

 

Part 1:

Ashley and I went to Alison House this afternoon to meet with Philly. While Ashley consulted Philly about audio processing issues in Max, I prepared for the connection test. After resolving her problem, we successfully connected Max and TD.

However, when I later returned to test the setup with the DPA microphone, I encountered a new issue: the airflow was too concentrated, resulting in a harsh, noisy sound. Ashley is currently working on refining this by adjusting the audio input processing to reduce the rough airflow noise.


Part 2:
In parallel, Hefan optimized the visuals in TD based on feedback from Philly and Jules, adding richer and more detailed effects. I conducted breathing tests with the new visuals and collaborated with Hefan to debug and refine the following aspects:

  1. Math Mapping: We adjusted the mapping so that the airflow data captured by the microphone results in smoother and more continuous visual transitions.

  2. Color Saturation & Noise: The previous version of the visuals was overly saturated. We adjusted the Levels and Noise parameters to create a more subtle, natural effect.
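For reference, the mapping adjustment in point 1 boils down to remapping the raw airflow reading through a smoothing curve so the visuals never jump abruptly. A minimal sketch of that idea, with illustrative input ranges rather than the actual Math settings:

```python
def smooth_map(value, in_lo=0.01, in_hi=0.3):
    """Remap a raw airflow reading to 0..1 with a smoothstep curve, so small
    fluctuations near the ends of the range don't cause abrupt visual jumps."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)   # smoothstep: eases in and out
```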

 

Progress on Designing a Visual Guide for 5-5 Breathing

Following my previous exploration of visual designs for the 5-5 breathing guide, I’ve continued experimenting with new ideas to create an animation that complements the central dynamic sphere without causing visual conflict.

This time, I tried creating an animation in After Effects using blue particle lines that grow upward, then fall and dissipate. The idea was to simulate a natural, flowing motion that aligns with the rhythm of breathing:

Inhale (5 seconds): The particles grow upward, symbolizing the intake of breath.
Exhale (5 seconds): The particles fall and dissipate, representing the release of breath.

After sharing the animation with the team, we identified an issue:

Problem: The falling motion of the particles wasn’t prominent enough, making it difficult to distinguish between the inhale and exhale phases.

Solution: I’m currently working on enhancing the falling effect by increasing the speed and visibility of the particles as they fall, and adjusting the timing so the transition between growth and fall is smooth yet noticeable.

Designing a Visual Guide for 5-5 Breathing

In our previous research, we discovered that the 5-5 breathing technique—inhaling for 5 seconds and exhaling for 5 seconds—can help people achieve a state of calm and relaxation. To enhance this experience, we decided to incorporate a visual guide into our interactive installation. The visual design is divided into two parts:

1. A fixed animation that guides the 5-5 breathing rhythm.

2. A dynamic graphic that responds to the user’s actual breathing patterns.

My role is to design the fixed animation, which serves as a clear and intuitive guide for users to follow the 5-5 breathing rhythm. However, this task has proven to be more challenging than expected.

Initial Attempts and Challenges

1. Rotating and Expanding Dots:

I started by experimenting with rotating and expanding dots, hoping to create a sense of rhythm and flow. However, this design conflicted with the dynamic sphere in the second part of the visual, creating visual clutter and confusion.

2. Expanding and Contracting Circles with Halos

Next, I tried using a circular design that expanded outward during inhalation and contracted inward during exhalation, accompanied by a glowing halo effect. While this was visually appealing, it still overlapped and competed with the central dynamic sphere, making it difficult for users to focus on either element.

Key Insight: These attempts revealed that centralized and circular designs are not suitable, as they conflict with the dynamic sphere that occupies the center of the screen.

3. Background Color Transitions

To avoid overlapping with the dynamic sphere, I shifted my focus to the background, using color transitions to indicate the breathing phases. For example:

Inhale: Transition from a cool color (e.g., blue) to a warm color (e.g., orange).

Exhale: Transition back to the cool color.
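As a sketch of this idea, the whole 10-second cycle can be reduced to one phase value blending between a cool and a warm color; the RGB values here are placeholders rather than the palette I actually tested.

```python
COOL = (0.2, 0.4, 0.9)   # placeholder blue
WARM = (0.95, 0.6, 0.2)  # placeholder orange

def background_color(t):
    """Blend from cool to warm over the 5 s inhale, then back over the 5 s exhale."""
    phase = t % 10.0
    if phase < 5.0:
        mix = phase / 5.0                  # inhale: 0 -> 1
    else:
        mix = 1.0 - (phase - 5.0) / 5.0    # exhale: 1 -> 0
    return tuple(c + (w - c) * mix for c, w in zip(COOL, WARM))
```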

However, this approach introduced two new challenges:

      1. Color Interpretation: Users might not intuitively associate specific colors with inhaling or exhaling, leading to confusion.

      2. Transition Smoothness: A sharp transition between colors felt abrupt and unnatural, while a slow, gradual transition made it difficult for users to distinguish the exact moments to inhale or exhale.

Current Direction and Next Steps

Based on these challenges, I’ve realized that the fixed animation needs to:

Avoid Centralized and Circular Designs: To prevent visual conflict with the dynamic sphere.

Provide Clear Timing Cues: Ensure users can easily identify when to inhale and exhale.

Maintain Visual Harmony: Complement the dynamic sphere without overwhelming it.

Moving forward, I plan to explore the following solutions:

Edge-Based Animations: Create animations that occur at the edges of the screen, such as a progress bar or waveform, to guide breathing without interfering with the central sphere.

Text or Symbol Indicators: Use simple text (e.g., “Inhale” and “Exhale”) or symbols (e.g., arrows) to provide clear instructions.

Subtle Motion Cues: Incorporate gentle, non-intrusive motion (e.g., flowing lines or particles) that aligns with the breathing rhythm.

Conclusion

Designing a visual guide for the 5-5 breathing technique has been a process of trial and error. While my initial attempts faced challenges, they provided valuable insights into what works and what doesn’t. By focusing on edge-based designs and clear timing cues, I’m confident that the final fixed animation will effectively guide users toward a calm and balanced breathing experience.

Exploring Audio-Driven Visuals in TouchDesigner: From Shape Transformations to Microphone Interaction

TouchDesigner Audio-Driven Graphics Experiment: From Shape Transformations to Microphone Input Control

Recently, I have been experimenting with TouchDesigner to explore dynamic shape transformations and audio-driven visual effects. Initially, I focused on modifying basic geometric forms using Noise, Output-Derivative (Slope), Threshold, Level (Gamma Adjustment), and Bloom Effects. Later, I integrated microphone input to control shape size and color, using Transform SOP for scaling and translation, with two Null CHOPs managing position and color separately.

TouchDesigner learning notes and analysis

test

 


1. Initial Shape Transformations and Visual Experiments

① Noise + Output-Derivative (Slope) I started by applying a Noise CHOP to introduce organic movement into the shape. To enhance the natural transitions, I used Output-Derivative (Slope CHOP) to smooth out the rate of change, preventing sudden spikes in movement.

② Threshold + Level (Gamma Adjustment) Threshold CHOP was used to create high-contrast effects, transforming smooth gradients into distinct binary patterns. Level CHOP (Gamma Adjustment) helped fine-tune the brightness curve, making the visuals either softer or more dramatic.

③ Bloom Effect Finally, I applied a Bloom effect, enhancing the highlights and adding a glowing aura to the shape, making it more visually engaging.
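To summarise the chain in ①–③, here is a rough NumPy/SciPy stand-in for the processing. It is not how the TouchDesigner network is literally built, and the parameter values are assumptions chosen just to show the order of operations.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def process_frame(noise_field, gamma=0.8, threshold=0.6, softness=0.1,
                  bloom_sigma=4.0, bloom_gain=0.5):
    """Rough stand-in for the Noise -> Slope -> Threshold -> Level (gamma) -> Bloom chain."""
    slope = np.gradient(noise_field, axis=0)                    # rate of change, like a Slope CHOP
    slope = (slope - slope.min()) / (np.ptp(slope) + 1e-9)      # normalise to 0..1
    # soft threshold: pushes values toward 0 or 1 for a high-contrast look
    contrast = np.clip((slope - threshold) / softness + 0.5, 0.0, 1.0)
    shaped = contrast ** gamma                                  # gamma tweak of the brightness curve
    highlights = gaussian_filter(shaped, bloom_sigma)           # blur the bright areas
    return np.clip(shaped + bloom_gain * highlights, 0.0, 1.0)  # add the glow back on top
```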


2. Using Transform SOP for Background, Scaling, and Positioning

To better organize the visuals, I utilized Transform SOP to:

  • Add a background (either static or gradient-based).
  • Control scaling dynamically.
  • Apply translation effects to move the shape across the screen.
  • Link transformations to audio input, so that sound influences both size and position.

Additionally, I created two Null CHOPs: one for Position (to control movement based on audio input) and one for Color (to change the color dynamically according to the audio intensity).


3. Integrating Microphone Input with Audio Device In CHOP

After experimenting with basic shape transformations, I moved on to controlling these parameters with real-time audio input.

① Capturing Microphone Input Using Audio Device In CHOP, I connected my microphone to feed real-time audio data into TouchDesigner. The raw audio data fluctuates too quickly, so direct mapping would result in erratic visual behavior. To ensure a smooth transformation, I applied additional processing.

② Smoothing Audio Input: Filter CHOP + Lag CHOP. Filter CHOP: set Filter Width = 2.0 to smooth out the fluctuations, reducing rapid jumps. Lag CHOP: applied a gradual transition effect with Lag Up = 1.2 (slower increase when the volume rises) and Lag Down = 3.0 (even slower decrease when the volume drops).

③ Mapping Audio Data to Shape Scale with Math CHOP Mapped the volume range (0.01 ~ 0.3) to shape scale (0.5 ~ 2.0). This ensures that louder sounds gradually enlarge the shape, while softer sounds slowly shrink it, avoiding sudden jumps.
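Outside TouchDesigner, the Filter/Lag/Math behaviour in ② and ③ boils down to an asymmetric smoother followed by a range remap. The sketch below mirrors the settings above (Lag Up 1.2, Lag Down 3.0, volume 0.01–0.3 mapped to scale 0.5–2.0), but it is only an approximation of what the CHOPs actually compute.

```python
class LagSmoother:
    """Asymmetric smoothing in the spirit of a Lag CHOP: the value rises with a
    1.2 s time constant and falls with a slower 3.0 s time constant."""
    def __init__(self, lag_up=1.2, lag_down=3.0):
        self.lag_up = lag_up
        self.lag_down = lag_down
        self.value = 0.0

    def update(self, target, dt):
        lag = self.lag_up if target > self.value else self.lag_down
        alpha = min(1.0, dt / lag)            # fraction of the gap to close this frame
        self.value += (target - self.value) * alpha
        return self.value

def volume_to_scale(volume):
    """Math-CHOP-style remap of mic volume 0.01..0.3 onto shape scale 0.5..2.0."""
    t = (volume - 0.01) / (0.3 - 0.01)
    t = max(0.0, min(1.0, t))
    return 0.5 + t * (2.0 - 0.5)
```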


4. Connecting Audio Data to Transform SOP

Mapped the output of Math CHOP to Transform SOP’s Uniform Scale, enabling shape size changes based on audio intensity. Connected Null CHOP (Position) to the Translate parameters so that the shape moves dynamically with the sound. Linked Null CHOP (Color) to the color channels, allowing the shape’s color to shift depending on volume levels.
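Inside TouchDesigner this wiring is done with CHOP exports and parameter references rather than code, but the same bindings could be expressed from a CHOP Execute DAT roughly as below. The operator, channel and parameter names are assumptions, so treat this purely as a sketch of the routing.

```python
# Sketch of a CHOP Execute DAT callback (operator/channel/parameter names are assumptions).
def onValueChange(channel, sampleIndex, val, prev):
    xform = op('transform1')                      # the Transform SOP
    scale = op('math_scale')['scale'].eval()      # Math CHOP output, already mapped to 0.5..2.0
    pos = op('null_position')                     # Null CHOP holding position channels
    col = op('null_color')                        # Null CHOP holding color channels

    xform.par.scale = scale                       # Uniform Scale follows audio intensity
    xform.par.tx = pos['tx'].eval()               # translate with the sound
    xform.par.ty = pos['ty'].eval()

    # push the color channels onto the operator that colors the shape (hypothetical Constant TOP)
    op('constant_color').par.colorr = col['r'].eval()
    op('constant_color').par.colorg = col['g'].eval()
    op('constant_color').par.colorb = col['b'].eval()
    return
```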

 

Group Meeting | Sound Design Notes | Max MSP & Touch Designer (Ashley Loera)

Meeting Date: February 28th, 2025

Sound Design Director | Max MSP | Ashley Loera

On the Sound Design Front:

During the Group Meeting on Friday, I inquired with the Visual Design Team about their Touch Designer TOE to confirm how many data outputs would be sent to my Max Patch.

Goal:

  • The goal is to prepare the Max MSP Patch to receive the correct # of data outputs from Touch Designer.

Progress During Meeting: 

  • The Visual Team indicated that there may be an opportunity for 2-3 data outputs from the Touch Designer TOE to be fed into the Max MSP Patch via OSC.
  • To put this to the test, the Visual team shared their current Touch Designer TOE with me, and I found that there was only 1 data output coming from Touch Designer into the Max MSP Patch via OSC.
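For reference, sending two or three values per frame would be possible either by adding channels to an OSC Out CHOP in Touch Designer, or from a small script along these lines; the host, port and addresses are placeholders, not the ones used in our setup.

```python
from pythonosc.udp_client import SimpleUDPClient

# Placeholder host/port: Max would listen on the matching port with [udpreceive].
client = SimpleUDPClient("127.0.0.1", 7400)

def send_frame(brightness, motion, breath_level):
    """Send three data outputs per frame, each on its own OSC address,
    so the Max patch can route them to separate Data Groups / effects."""
    client.send_message("/td/brightness", float(brightness))
    client.send_message("/td/motion", float(motion))
    client.send_message("/td/breath", float(breath_level))
```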

Issues: 

  • Due to my limited knowledge, I was unsure how to output multiple data points from Touch Designer. Furthermore, we had a limited amount of time to find solutions during the meeting, so I was unable to dive in and see how many data points I could work with.
  • Running the Touch Designer TOE & the Max MSP Patch caused my laptop to heat up quickly and caused Max MSP to crash out (twice).

Possible Solutions:

  • Create 2-3 Data Groups in the Max Patch and use the data output from Touch Designer to select which Data Group processes the Audio Input, with a unique effect connected to each Data Group (e.g. Reverb, Delay, Glitch).
  • I suggested that the group purchase a router so that I can receive information from the Laptop hosting the Touch Designer TOE during the Installation (as suggested by Philly during our Tutorial).
        • We are considering purchasing the following router.

TP-Link 300 Mbps Wireless N Access Point, Passive PoE Power Injector, 10/100M Ethernet Port (TL-WA801N)

Amazon Link: https://www.amazon.co.uk/TP-LINK-TL-WA801N-Wireless-Injector-Ethernet/dp/B085M4ZJ2L/ref=asc_df_B085M4ZJ2L?mcid=8d5f95930ef93bfd8dfe419c344bab62&th=1&hvocijid=14358773039682378152-B085M4ZJ2L-&hvexpln=74&tag=googshopuk-21&linkCode=df0&hvadid=696285193871&hvpos=&hvnetw=g&hvrand=14358773039682378152&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9046887&hvtargid=pla-2281435177418&gad_source=1

Ideas to Consider: 

  • After further discussions with Sound Design Director, Ruolin Liu, I will be attempting to include a synthesizer in the Max Patch to assist in creating music using the Max Patch to accompany Sound Design Director, Ruolin Liu’s, meditation compositions.
        • The idea that stemmed from this conversation was to further develop the Delay Effect in the Max MSP Patch to slowly decrease the volume of the audio input over time to seamlessly fuse into the Synthesizer.
                • Idea: To sonically represent the process of presence with one’s own voice. From being conscious of the breath, to being conscious of one’s center and balance.
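A minimal sketch of that handoff idea: one clock drives two gains, so the delayed voice slowly ducks while the synthesizer rises. The ramp length is arbitrary here, and the actual Max patch would use line~/gain objects rather than Python.

```python
def handoff_gains(elapsed, ramp_seconds=60.0):
    """Return (input_gain, synth_gain) for a slow crossfade from the delayed
    voice/breath input into the synthesizer over `ramp_seconds`."""
    t = max(0.0, min(1.0, elapsed / ramp_seconds))
    return 1.0 - t, t
```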
