This section of the project takes an audio-visual approach, using both a microphone and a camera as inputs. By analyzing the volume of the incoming sound, the real-time camera feed is manipulated so that the portrait distorts in response to changes in speech volume. The result is a dynamic, interactive visual effect that mirrors the fluctuations of the audio input.
DSP
For the audio input, I first pass the signal through a gate plugin to remove background noise, preserving only the sounds made by visitors. I then apply a reverb effect, whose sustained tail keeps the measured input level consistently high between utterances rather than dropping abruptly. After measuring the volume, I scale the values into the 0-to-1 range accepted by Vizzie objects. These scaled values drive the video's hue, the amount of feedback, and the probability of noise, dynamically altering the visual output based on the audio input.
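Outside of Max/MSP, the scaling step can be sketched in a few lines of Python. This is a minimal illustration, not the actual patch: the floor and ceiling thresholds, and the way one scaled level fans out to hue, feedback, and noise probability, are assumptions chosen to match the behavior described above (quiet input should increase distortion, loud input should keep the image clear).

```python
# Sketch of the volume-scaling step, assuming the raw level arrives
# as a peak amplitude in [0, 1] measured once per video frame.
# floor/ceiling are illustrative values, not the patch's real settings.

def scale_level(amplitude, floor=0.02, ceiling=0.5):
    """Map a raw amplitude into the 0-1 range expected by Vizzie parameters."""
    # Anything below the floor is treated as silence
    # (the gate plugin has already removed background noise).
    x = (amplitude - floor) / (ceiling - floor)
    return min(1.0, max(0.0, x))  # clamp into [0, 1]

def to_visual_params(level):
    """Derive the three visual controls from one scaled level.

    Hypothetical mapping: louder speech -> clearer image,
    silence -> more feedback smear and more noise.
    """
    return {
        "hue": level,
        "feedback": 1.0 - level,
        "noise_prob": 1.0 - level,
    }

# Example: a moderate amplitude of 0.26 lands at the midpoint of the range.
params = to_visual_params(scale_level(0.26))
```

In the patch itself, the equivalent work is done with Max objects before the values reach Vizzie, but the arithmetic is the same: subtract a floor, divide by the range, clamp, and invert for the parameters that should grow during silence.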
Goals and Outcomes
Through this interactive design, I aim to reflect the relationships between Alzheimer’s patients and their close contacts, while also emphasizing the crucial role of caregivers.
The final effect works as follows: when someone speaks into the microphone, the screen displays the original image captured by the camera. In silence, the image gradually distorts, and the facial contours on screen grow vague and finally fade away. This artistically represents an Alzheimer's patient's fading memories of their loved ones. Speaking continuously into the microphone keeps the image clear, symbolizing how constant communication can help awaken the patient's memories and sharpen their perception of the world. Conversely, a lack of engagement and conversation causes the patient's memories, and the facial outlines in the visual representation, to fade out.