Work on the audio components has progressed: we combined our ambiences with Evan and built the system for the interactive environment changes.
A master Blend Container holds the individual stage Blend Containers on a 0-100 scale, so each stage [emotion] occupies a range of 20. The RTPC is driven by a Game Sync Parameter called FIVE_STAGES.
Screen recording of the FIVE_STAGES RTPC in action:
https://media.ed.ac.uk/media/DMSP%20week%209%205stages%20RTPC/1_jrzrfuxs
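To make the mapping concrete, here is a minimal sketch of how the stage bands could translate to FIVE_STAGES RTPC values. The stage names and ordering (the classic five stages of grief) and the `progress` parameter are my assumptions for illustration; only the 0-100 scale and the 20-unit bands come from our Wwise setup.

```python
# Hypothetical sketch: mapping the five stages onto the 0-100
# FIVE_STAGES RTPC scale, each stage occupying a 20-unit band.
STAGES = ["denial", "anger", "bargaining", "depression", "acceptance"]
BAND = 100 / len(STAGES)  # 20 units per stage

def stage_to_rtpc(stage: str, progress: float = 0.5) -> float:
    """Return an RTPC value inside the given stage's band.
    progress (0.0-1.0) positions the value within the band,
    which lets the Blend Container crossfade between stages."""
    index = STAGES.index(stage)
    return index * BAND + progress * BAND

print(stage_to_rtpc("denial"))      # centre of the first band -> 10.0
print(stage_to_rtpc("acceptance"))  # centre of the last band -> 90.0
```

Values at band edges (e.g. `stage_to_rtpc("anger", 0.0)` returning 20.0) would sit exactly on the crossfade boundary between two stages.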
Changes:
Up until now we had designed at most 1-2 ambiences for each stage; however, Leo suggested we could have an ambience for each speaker (each equipped with a proximity sensor), so the audience could mix the atmosphere themselves.
This required us to build a new logic system in Wwise and create a further 3-4 ambiences for each scene. Because of the speaker setup in the Atrium, which we will be using [4-speaker surround], each stage will have a 'Center', non-directional atmo playing from all speakers, plus four directional atmos, panned front-left, front-right, back-left, and back-right.
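The per-stage layout above can be sketched as a simple data structure. The naming scheme (`<stage>_atmo_<pan>`) is purely hypothetical; only the one-center-plus-four-directional-atmos structure comes from our design.

```python
# Hypothetical sketch of the per-stage ambience layout:
# one non-directional 'Center' atmo plus four directional atmos.
SPEAKER_PANS = ["front-left", "front-right", "back-left", "back-right"]

def stage_layout(stage: str) -> dict:
    """Build the five ambience slots for one stage of the installation."""
    layout = {"center": f"{stage}_atmo_center"}  # plays from all speakers
    for pan in SPEAKER_PANS:
        layout[pan] = f"{stage}_atmo_{pan.replace('-', '_')}"
    return layout

print(stage_layout("denial"))
```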
Once again, we shared the work with Evan; I agreed to do Denial, Anger, and half of Depression.
In Wwise, we created a Game Sync Parameter for each speaker's proximity sensor; we will assign the corresponding ambiences to these individually, in each stage.
We had a successful session connecting one sensor to a Wwise ambience, with the distance controlling a high-pass filter RTPC. Next Tuesday we are planning to test several sensors at the same time :))
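As a rough sketch of the kind of mapping we tested, the sensor reading can be normalised to a 0-100 RTPC value before being sent to Wwise. The sensor range in centimetres and the direction of the mapping (closer listener opens the filter) are assumptions for illustration, not measured values from our session.

```python
# Hypothetical sketch of a sensor-to-RTPC mapping: a proximity
# reading (in cm) is clamped and normalised to the 0-100 RTPC
# range that drives the high-pass filter on one speaker's ambience.
MIN_DIST, MAX_DIST = 10.0, 200.0  # assumed usable sensor range in cm

def distance_to_rtpc(distance_cm: float) -> float:
    """Map a proximity reading to a 0-100 RTPC value."""
    clamped = max(MIN_DIST, min(MAX_DIST, distance_cm))
    normalised = (clamped - MIN_DIST) / (MAX_DIST - MIN_DIST)
    # Close to the speaker -> 0 (filter open); far away -> 100.
    return normalised * 100.0

print(distance_to_rtpc(10))   # listener right at the speaker -> 0.0
print(distance_to_rtpc(200))  # at/beyond max range -> 100.0
```

In practice this value would be pushed to the per-speaker Game Sync Parameter on every sensor update; smoothing (e.g. RTPC interpolation in Wwise) would stop the filter from jittering.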
Here’s a short video of us testing:
dfb2af39-1f82-4075-983e-0f301c8bacdb
(More about this in our project blog)
On Wednesday (26th March) we will have a big testing session in the Atrium, to practice how the sound will move between the stages!

