
Presentation Mechanism Design

Our project is divided into four parts, illustrating the progressive intensification of a patient's memory disorder symptoms, so the parts must be played in sequence. To ensure each section plays automatically during the formal presentation and to minimize manual control, I designed a Max patcher to handle the playback of each part, as illustrated below.

The patcher handles the four parts as follows:

1. Video Playback (Parts 1 & 2): Uses Max's jit.playlist, which outputs a "done" message when a video finishes; this message automatically triggers the next video, ensuring seamless, sequential playback without manual intervention.

2. Audio-Visual Interaction (Part 3): Uses Vizzie for audio-visual interaction. The "done" message from the previous video flips a toggle that activates both microphone and camera capture.

3. Interactive Segment (Part 4): Entered via a manual click, because Part 3 has no fixed duration. This segment is built in TouchDesigner and uses the NDI protocol to send its video to Max on the same computer, so the start and end of the display can still be controlled from Max.

4. Loop Initiation: A Bang button closes Part 4 and triggers Part 1's video, starting the cycle anew. This setup ensures that all parts play in order and that transitions between the different technologies and formats stay smooth (the sequencing logic is sketched below).
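For clarity, the patcher's sequencing logic can be summarized in a minimal Python sketch. The class and method names here are illustrative only; in the actual project these transitions are wired with Max messages ("done" from jit.playlist, a manual click, and the Bang button) rather than code.

```python
# A minimal, illustrative sketch of the sequencing logic; all names are
# hypothetical, and the real transitions are wired inside the Max patcher.

class Presentation:
    def __init__(self):
        self.part = 1

    def start(self, part):
        print(f"starting Part {part}")

    def on_video_done(self):
        # jit.playlist "done" events chain Part 1 -> Part 2 -> Part 3
        if self.part in (1, 2):
            self.part += 1
            self.start(self.part)

    def on_manual_click(self):
        # Part 3 has no fixed duration, so the jump to Part 4 is manual
        if self.part == 3:
            self.part = 4
            self.start(self.part)

    def on_bang(self):
        # the Bang button closes Part 4 and restarts the cycle at Part 1
        if self.part == 4:
            self.part = 1
            self.start(self.part)
```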

Hardware Control

To help the audience quickly understand how to engage with the exhibit, I mapped all operations to hardware controls and created a guidance display for the computer screen. I used a Korg nanoKontrol and assigned two of its buttons for this purpose, which keeps the interaction simple and improves the user experience during the exhibition.
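As a rough illustration of how two buttons can drive the whole flow, the sketch below listens for the mapped controls with the mido library. The port name and CC numbers are assumptions for illustration; the actual mapping is done inside Max.

```python
import mido

# Illustrative only: the real mapping lives in Max. The port name and CC
# numbers are assumptions, not the exhibit's actual values.
PORT_NAME = "nanoKONTROL"
CC_ADVANCE = 41   # button mapped to "advance to Part 4"
CC_RESTART = 42   # button mapped to "restart the loop"

with mido.open_input(PORT_NAME) as port:
    for msg in port:
        if msg.type == "control_change" and msg.value > 0:
            if msg.control == CC_ADVANCE:
                print("advance to Part 4")
            elif msg.control == CC_RESTART:
                print("restart from Part 1")
```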

Guidance

Hardware Setup


Screen Recording

A Max patcher screen recording for a complete presentation loop.

Part 3 – Fade Interaction Design Process

Previous blog posts about this section

Mid-term Project Report: Fade (Han)

Part 3 Modifications for the Formal Presentation

This section of the project takes an audio-visual approach, using both a microphone and a camera as inputs. The volume of the incoming sound is measured and analyzed, and the live camera feed is manipulated so that the portrait distorts in response to the varying volume of speech, creating a dynamic, interactive visual that mirrors fluctuations in the audio input.


DSP

For the audio input, I first pass it through a gate plugin to remove background noise, preserving only the sounds made by visitors. I then apply a reverb to keep the input level consistently high. After obtaining the volume levels, I scale these values to between 0 and 1, the range accepted by the Vizzie objects. The scaled values adjust the video's hue, the amount of feedback, and the probability of noise, dynamically altering the visual output based on the audio input.
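The level-to-parameter mapping can be outlined in a short sketch. The smoothing factor and the floor/ceiling values below are assumptions; in the project itself this processing happens inside Max before the values reach the Vizzie objects.

```python
import numpy as np

def follow_envelope(samples, smoothing=0.99):
    """One-pole envelope follower: track the input volume over time."""
    level, levels = 0.0, []
    for s in samples:
        level = smoothing * level + (1.0 - smoothing) * abs(s)
        levels.append(level)
    return np.array(levels)

def scale_to_unit(levels, floor=0.01, ceiling=0.3):
    """Clip and rescale levels into the 0..1 range the Vizzie objects accept."""
    return np.clip((levels - floor) / (ceiling - floor), 0.0, 1.0)

# The resulting 0..1 control value then drives hue, feedback amount, and
# noise probability; per the design described below, silence (a low value)
# should push the image toward distortion.
```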

Goals and Outcomes

Through this interactive design, I aim to reflect the relationships between Alzheimer’s patients and their close contacts, while also emphasizing the crucial role of caregivers.

The final visual effect is such that when someone speaks into the microphone, the screen displays the original image captured by the camera. When there is silence, the image gradually distorts: the facial contours on the screen begin to blur and finally fade away. This effect artistically represents how Alzheimer's patients' memories of their loved ones fade. Continuous speaking into the microphone keeps the image clear, symbolizing how constant communication can help awaken the patient's memories and clarify their perception of the world. Conversely, a lack of engagement and conversation causes the patient's memories, and the facial outlines in the visual representation, to fade out.

Video Sound Design Final Edition

In our project, the first part (Prologue) and the second part (Blur) are presented as videos. To ensure a smooth experience for the audience, we decided to render the soundtrack directly into the video files.

The sound design focuses on glitch sounds to reflect the distortion in the memories of Alzheimer's patients and the visually distorted elements. No music was added to the videos' sound design; only sound effects were used, with tonal effects tuned to the same key as the music to ensure harmony. To contrast with the first part, where the voice is tightly synced with the visuals[1], the second part includes a freely read poem, generated by ChatGPT[2] and voiced with a TTS model, which blends with the edgy glitch sounds as well.

The final videos are as follows.

Music Composition Process

In this project, music serves as an element for shaping the atmosphere of the scene. I designed a continuously playing piece of generative music, built primarily in Ableton Live with plugins from Max for Live, and used Max/MSP to control transitions between musical sections.

Initial Idea

During the initial background research for the project, I found that stories of Alzheimer's disease evoked a persistent, underlying pain in me. Like the gradual distortion of a patient's memory, it is a slowly worsening process that affects the patients and causes deep-seated pain in their loved ones. I therefore wanted to create an ambient piece that is slow, steady, and subtly sad, conveying the feeling of slowly telling a story. While seeking inspiration, I found that a slightly detuned piano was perfect for this mood: the sound evokes the image of an aged person sitting at an old piano at home, gently recounting memories through their playing. Based on this, I composed a four-bar chord progression as the foundation of the piece and created the following demo to establish the overall mood of the music.

Further Arrangement

Initially, we planned to create separate musical pieces for each of the four sections of the project, each advancing progressively. However, for better coherence and production efficiency, we decided to develop a single piece of music that runs throughout.

A track that plays continuously for at least eight minutes needs constant development and variation. I therefore expanded the initial demo by adding more instruments and applying Max for Live MIDI effects extensively to each part, so that the music, built on a stable looped chord progression, randomly presents varying melodies, rhythms, and effects.

Chord Progression

For the random melodies, I used an AI text-to-music model to generate musical fragments[1], selected those that fit our project's atmosphere, and converted the audio into MIDI files, from which the plugins mentioned above generate further melodies.

Device Rack for Melody Track
Original Melody Clips

Utilizing AI-Generated Voice

This project aims to evoke the emotions of its audience, and in the creation of the audio, I found that the human voice is particularly effective in conveying emotions. Additionally, voice is a crucial element in topics related to memory[2]. Therefore, I decided to incorporate this element into the music. The process of generating the voice is described in more detail in this blog[3]. 

During production, I first arranged these audio clips in the Session view of Live.

Voice Clips

Afterwards, I created several effect chains and controlled the volume of each chain using the Macro knobs in Live.

Audio Effect Rack with Rand Button
Max Patcher for Sending MIDI to Trigger Randomize

In Live, a rack's "Randomize" button randomly changes the value of each Macro knob, effectively altering the volume of each effect chain; together, the chains add rich variation to the voice. To maintain ongoing variation in this section, I set up a Max patcher using metro and noteout to send timed MIDI notes, triggering the mapped "Randomize" button in Live at regular intervals (a rough code sketch follows the screenshot below).

MIDI Mapping in Live
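A rough Python equivalent of that metro-plus-noteout patcher is sketched below using the mido library. The virtual port name, note number, and interval are assumptions for illustration.

```python
import time
import mido

# Stand-in for the Max patcher ([metro] -> [noteout]): send a MIDI note
# at a fixed interval so the mapped "Randomize" button in Live fires
# periodically. Port name, note number, and interval are assumptions.
PORT_NAME = "IAC Driver Bus 1"   # virtual MIDI port routed into Live
NOTE = 60                        # the note MIDI-mapped to "Randomize"
INTERVAL = 4.0                   # seconds between triggers

with mido.open_output(PORT_NAME) as port:
    while True:
        port.send(mido.Message("note_on", note=NOTE, velocity=100))
        port.send(mido.Message("note_off", note=NOTE, velocity=0))
        time.sleep(INTERVAL)
```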

The Music Version at the Previous Stage

Week8 New Version of A Generative Music

Transition Control

After completing all the musical content, I noticed an issue: although every part of the music was continuously changing, the overall piece was too uniform, with all parts playing simultaneously. I therefore restructured the music into the following section scheme, letting each instrument alternate in taking prominence.

1 -> +vibe +vox
2 -> +pad -vibe
3 -> +organ
4 -> +harp +DB
5 -> +saw +lead
6 -> -harp
7 -> -lead
8 -> -pad -DB -saw +harp
9 -> -vox
10 -> +pad
11 -> -harp
12 -> -pad -organ +vibe +vox

I controlled the faders for each part in Live through MIDI CC messages linked to fader movements in Max, and set Max to change the fader values every 8 bars so that the music transitions smoothly into the next section (sketched after the screenshot below).

MIDI Mapping in Live
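The section switching can be sketched in the same spirit. The tempo, CC numbers, and the abbreviated section table below are illustrative stand-ins for the actual fader mappings shown above.

```python
import time
import mido

BPM = 80                            # assumed tempo
EIGHT_BARS = 8 * 4 * 60.0 / BPM     # eight bars of 4/4, in seconds

# one CC number per mapped fader in Live (illustrative values)
FADERS = {"vibe": 20, "vox": 21, "pad": 22, "organ": 23}

# target fader levels (0-127) for the first few sections of the scheme
SECTIONS = [
    {"vibe": 100, "vox": 100, "pad": 0,   "organ": 0},    # 1: +vibe +vox
    {"vibe": 0,   "vox": 100, "pad": 100, "organ": 0},    # 2: +pad -vibe
    {"vibe": 0,   "vox": 100, "pad": 100, "organ": 100},  # 3: +organ
]

with mido.open_output("IAC Driver Bus 1") as port:
    for section in SECTIONS:
        for name, value in section.items():
            port.send(mido.Message("control_change",
                                   control=FADERS[name], value=value))
        time.sleep(EIGHT_BARS)
```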

Music Final Edition

Final presentation video

We recorded a lot of footage during production and on the day of the final presentation, then organized and edited it into the video below to document the final result.

I used Adobe AE, Ar, and other editing software to composite the video.

This is the video link: https://media.ed.ac.uk/media/The+long+farewell/1_2h5zctr4
