In terms of sound design, Lidia and I finalized the audio materials for our first sound design pass today. We borrowed several different microphones from the music store to record ambient sound and sound effects, and incorporated them into the first stage of sound design. During the recording process, Kyra helped me record a large amount of sound material at Alison House, and Leo gave a lot of useful and valuable advice before recording, which made the recording process very smooth. Lidia also worked with Leo in the afternoon to look into the sensor and technology issues, and she and I took that advice on board for the sound as well.
Rough Idea 1:
We wanted to create an immersive VR museum tour experience that combines sight, sound, and interaction, allowing visitors to explore the exhibits in a more intuitive and sensory way and breaking down the physical limitations of traditional museums. Interactive exhibits: 360° views of artifacts, which could even be disassembled and zoomed into to reveal their internal structure. Narrative design with sound: dynamic ambient sound and spatialized audio make the content of the exhibition not only visible, but also “heard”.
Rough Idea 2:
We wanted to make an immersive urban wandering experience about Edinburgh, giving participants the freedom to explore the city from a first-person perspective and enhancing their perception of the city space through real-time soundscapes, interactive visual elements, and dynamic data. Guided tours with AR/VR: use AR glasses or your phone to see hidden stories and historical layers of the city. Sound roaming could be added: put on headphones and the ambient sounds and AI narration change depending on your location, enhancing immersion. We want to let people experience the city in new ways, not just pass through it.
The Second Meeting
The concept of using VR and AR was changed to an audio-visual installation because we lacked the technical skills to implement it. Moreover, since the visuals dominated those project ideas and our group has more sound designers and audio technicians, we wanted to create something better suited to our skills.
About the First Concept Draft:
Adapting the 5 stages of grief to create an audio-visual multisensory experience. [https://en.wikipedia.org/wiki/Five_stages_of_grief]
Ideally, the project would use 5 separate and isolated rooms to present each emotion, [denial, anger, bargaining, depression, acceptance]. The emotions are presented in this order, but since the artwork is generative, the visitors can freely move to the next room anytime.
(version B: the visitors need to achieve/find/complete something to be able to progress to the next room)
Preferably, the sound would be presented through multiple monitors (around the sides, and at random locations within the space), some of them equipped with proximity sensors. Some monitors would play an ambience audio track providing the base of the mood, while the sensor-equipped monitors would trigger an impact, sfx, …
At the same time, these sensors would trigger visual effects, filters, and presets, which would be layered onto the main visual.
Main Visual:
Media display with projectors/screens and lights, brightening up the otherwise dark space. The imagery would revolve around the theme of humanoid characters, faces, … It would be a mixture of real-time footage (perhaps heavily edited) taken from the exhibition rooms with cameras, plus additional digital characters/drawings/effects … We are also considering using thermal cameras.
(reference project for visuals: Professor Jules’ ‘A Requiem for Edward Snowden’)
Questions for Leo:
– Is it possible to have 5 rooms next to each other which we can fully soundproof/isolate? (They should still be quite connected, to allow smooth transitions across the rooms.) If not possible, we have alternative solutions, such as using headphones instead and creating a binaural mix
– How many speakers can we possibly get? If we are planning to build the installation across 5 rooms, we would need quite a lot (minimum 25?)
– How many sensors can we get/buy?
– For the visuals, would projectors or screens be better (in terms of quality and sense of immersion)? Ideally we would love to do surround visuals, but that might require a lot more projectors
Quick mood board pictures for desired visuals/ setup:
Meeting picture:
The Third Meeting
About the Second Concept Draft:
Introduction
Emotions are intangible, and everyone defines them differently. For some, sadness flows like water, while for others, the feeling of flow brings joy. Since emotions are difficult to define, we chose not to restrict them but instead to create an emotional garden that can change infinitely, allowing everyone who enters to shape their own emotional garden through the control panel.
Inspiration
In modern society, emotional labor refers to individuals regulating their emotional performance in order to meet social expectations, especially in social and workplace settings. Our daily facial expressions and emotional responses are often regulated to conform to external expectations, and behind this often lies the suppression of personal emotions.
Goal
To reflect on and discuss the phenomenon of emotional labor and its influence on individuals.
How to Express the Theme
Emotion Garden Plan A
Using heart rate to monitor people’s real emotions, and facial recognition to detect the fake emotions shown on people’s faces: the greater the difference between the two, the more withered the visualized plants; the smaller the difference, the more robust the plants.
(PS: Visualizing with plants is just an idea I have, not fully researched yet; maybe there is a better way to express it)
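The Plan A mapping could be sketched in a few lines of Python. This is purely illustrative, since (as noted) the visualization is not fully researched yet: the normalised emotion scores and the linear wither rule are assumptions, not a worked-out design.

```python
# Illustrative sketch of the Plan A mapping. Inputs are assumed to be emotion
# scores already normalised to [0, 1]: one estimated from heart rate ("real"
# emotion) and one from facial expression ("displayed" emotion).

def plant_vitality(inner_emotion: float, displayed_emotion: float) -> float:
    """Return plant vitality in [0, 1]: 1.0 = robust, 0.0 = fully withered.
    The greater the gap between inner and displayed emotion, the more
    withered the plant appears."""
    gap = abs(inner_emotion - displayed_emotion)
    return 1.0 - gap

# Matching inner and displayed emotion -> a healthy plant.
print(plant_vitality(0.8, 0.8))  # 1.0
# Maximum gap (fully suppressed emotion) -> a fully withered plant.
print(plant_vitality(1.0, 0.0))  # 0.0
```

The vitality value would then drive whatever plant rendering we settle on.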
Emotion Garden Plan B
Through facial recognition plus AR, different emotions on the face would produce different visual effects, so that people can “really” see the emotions and feel them.
(PS: This idea was rejected because of the sensitivity of facial information)
The Presence workshop
Refined concept:
Due to the limitations of the available equipment, we reduced the scale of the project to one room, which expresses the 5 stages together. This proved to be a better solution anyway, since the 5 emotions are not separated by a straight line, and one might experience more than one, or a mixture of several, at the same time.
Therefore the new project summary is:
1 room – 5-8 speakers – visual projection. Creating an interactive space where the collective emotions and active presence (of the people in the room) are artistically expressed by audio-visual representation. This is generated by the visitors themselves, with the use of interactive devices around the room.
[Multiple sensors will be available to play with, which would trigger audio content, and would affect the visual presentation].
Sensors:
[Must be compatible with Arduino]
Audio:
Heart rate sensor(s) = influences the low-frequency content
Light sensor(s) = influences the high-frequency content
Proximity sensor(s) = triggers random sound effects
Humidity sensor = influences the base, the ambience of the audio track (?) (it changes more slowly over time; the density of the crowd will influence the air)
(Temperature sensor = extra visual display (only if we can find one for a low price))
These sensors would each have various value parameters assigned to them; once a specific value is triggered, the system would employ real-time audio processing to modify the sound.
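As a starting point, the sensor-to-audio mapping above could be prototyped in plain Python before wiring anything up. The value ranges, parameter names, and the 30 cm proximity threshold below are illustrative assumptions, not agreed specs.

```python
# Hypothetical sketch of the sensor-to-audio parameter mapping listed above.
# Ranges and thresholds are placeholders to be replaced by measured values.

def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map a raw sensor reading into a parameter range, clamped."""
    value = max(in_min, min(in_max, value))
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

def map_sensors(readings):
    """Turn one frame of raw sensor readings into audio-engine parameters."""
    return {
        # Heart rate (assumed 40-180 bpm) drives the low-frequency content.
        "low_freq_gain": scale(readings["heart_rate"], 40, 180, 0.0, 1.0),
        # Light level (0-1023, a 10-bit analog read) drives the high frequencies.
        "high_freq_gain": scale(readings["light"], 0, 1023, 0.0, 1.0),
        # Humidity (0-100 %) slowly shifts the ambience mix.
        "ambience_mix": scale(readings["humidity"], 0, 100, 0.0, 1.0),
        # Proximity under an assumed 30 cm threshold triggers a random sfx.
        "trigger_sfx": readings["proximity"] < 30,
    }

frame = {"heart_rate": 110, "light": 512, "humidity": 50, "proximity": 12}
print(map_sensors(frame))
```

Each resulting parameter would then be sent on to the real-time audio processing stage.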
Visuals:
Perhaps it features a big mass of shapes, and the data would control colours (like temperature in thermal cameras), shapes, particles …
(We could also use Max/MSP Jitter to manipulate the video output with sound.)
What will the interactive visual system be?
– How will the data be processed into an abstract visual representation?
– Which parameters trigger which visuals?
– …?
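One tentative answer to the first question: a thermal-camera-style colour ramp could translate any normalised data value into colour. The palette below (cold blue through red to hot yellow) is an assumption for illustration, not a finalised design.

```python
# Hypothetical thermal-style colour ramp for the "data controls colour" idea.

def thermal_colour(value):
    """Map a normalised value in [0, 1] to an (R, G, B) tuple,
    fading from cold blue through red to hot yellow."""
    v = max(0.0, min(1.0, value))
    r = int(255 * min(1.0, v * 2))       # red ramps up over the first half
    g = int(255 * max(0.0, v * 2 - 1))   # green joins in over the second half
    b = int(255 * max(0.0, 1 - v * 2))   # blue fades out over the first half
    return (r, g, b)

print(thermal_colour(0.0))  # (0, 0, 255) - cold blue
print(thermal_colour(0.5))  # (255, 0, 0) - red
print(thermal_colour(1.0))  # (255, 255, 0) - hot yellow
```

The same ramp could feed projected shapes or particles, whichever we end up using.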
For both audio and visual content, we have to think about the two extreme ends: How does it look/ sound with no people in the room [no interactions with sensors], and at full capacity [with many people interacting with the sensors]?
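The two-extremes question above can be thought of as interpolating every parameter between an empty-room preset and a full-capacity preset, with occupancy estimated from the sensors. This is a sketch of an assumed design, not a decision:

```python
# Minimal sketch: blend any audio/visual parameter between its empty-room
# setting and its full-capacity setting. Occupancy estimation is assumed
# to come from the sensors (e.g. proximity triggers per minute).

def blend(empty_value, full_value, occupancy):
    """Linearly interpolate a parameter; occupancy runs from
    0.0 (nobody in the room) to 1.0 (full capacity)."""
    occupancy = max(0.0, min(1.0, occupancy))
    return empty_value + (full_value - empty_value) * occupancy

# e.g. ambience volume: quiet drone alone, dense texture at full capacity
print(blend(0.2, 1.0, 0.0))  # 0.2 - empty room
print(blend(0.2, 1.0, 1.0))  # 1.0 - full room
```

Designing the two endpoint presets first would then define everything in between.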
TO DO LIST:
Since the assignment is due next Thursday, we decided to divide the tasks up:
Kyra = visuals: basic design in Unity or TouchDesigner
Xiaole = time management, project planning & strategy
Evan = sound: at least 30 seconds of audio, ambience and/ or additional sound effects
Isha & Lidia = technical components: getting OSC to work with Arduino, how to connect these systems for triggering sound & visuals, what other tools we need, …
Lidia = writing the project introduction, concept, and aim. References to other similar artworks
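Since OSC messages are just UDP datagrams with a fixed binary layout, one stdlib-only way to prototype the Arduino-to-OSC link is to pack the message by hand on the computer that reads the Arduino over serial. The address pattern `/sensor/heart` and the port are placeholders, not an agreed naming scheme.

```python
import struct

# Minimal OSC 1.0 message packer (no external libraries) for forwarding
# Arduino sensor values over UDP to the sound/visual systems.

def osc_pad(b: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Build an OSC message carrying a single float32 argument."""
    return (osc_pad(address.encode("ascii"))
            + osc_pad(b",f")              # type tag string: one float
            + struct.pack(">f", value))   # big-endian 32-bit float

msg = osc_message("/sensor/heart", 72.0)
# Send with: socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, (host, port))
```

A receiver in Max/MSP ([udpreceive]) or TouchDesigner can then route these addresses to sound and visual parameters.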
Workshop picture:
Handwritten Notes:
By Lidia Huiber
The Fourth Meeting
At this meeting we finalized the concept.
The project explores the themes of presence and grief through a multi-sensory audio-visual installation presenting the five stages of grief (denial, anger, bargaining, depression, acceptance). The installation combines real-time image processing and digital design with chaotic, distorted visuals and a subdued soundscape that expresses the flow of emotions. Viewers interact with devices such as heart rate sensors, light sensors, and knobs to experience a non-linear emotional journey from chaos to calm, revealing the randomness and complexity of grief. The project aims to engage the audience in reflecting on their own emotional state and that of those around them, encouraging deeper emotional connection and a focus on “authentic being”.
By the end of the meeting we had agreed on a detailed division of labor for the project.