
A reminder of the spectral whalesong approach (composition-Morrison, C.).

Although it has been discussed in the first submission and referenced throughout, I thought I should write a blog post making very clear how the music for the first layer was constructed. The chords are based on spectral analysis of both samples of whalesong and the sea creature sound effects made for the section. The spectral analysis was done using the programme SPEAR.

Figure 1: Analysis of a sea creature sound effect.


This is then converted into notation using an SDIF (SPEAR file) to score converter, as notating it manually would give a much less precise result.

Figure 2: Image of a spectrogram-chord generator in OpenMusic.
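As a rough illustration of the principle behind such a converter (this is a hedged sketch, not the actual SPEAR or OpenMusic code), each partial's frequency can be rounded to the nearest equal-tempered pitch; the example frequencies below are invented purely for illustration.

// Hypothetical sketch: map a partial's frequency (Hz) to the nearest
// equal-tempered MIDI note, the core step of any spectrum-to-score conversion.
#include <cmath>
#include <cstdio>

int frequencyToMidiNote(double freqHz) {
    // MIDI note 69 = A4 = 440 Hz; 12 semitones per octave.
    return static_cast<int>(std::lround(69.0 + 12.0 * std::log2(freqHz / 440.0)));
}

int main() {
    // Example partials (Hz) such as an analysis frame might contain.
    const double partials[] = {82.4, 164.8, 247.5, 330.1, 415.3};
    for (double f : partials) {
        std::printf("%.1f Hz -> MIDI note %d\n", f, frequencyToMidiNote(f));
    }
    return 0;
}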


Creating harmony from different sounds/timbres is a common technique in spectralism (Fineberg, 2000), a musical style that emerged in the 1970s in which compositions are based on the natural spectra and/or acoustics of certain sounds (Harvey, 2000). Gérard Grisey's work Partiels uses this technique specifically, though there the harmony is based on the sound spectrum of a low E on a trombone (Hasegawa, 2009). This is a more ‘literal’ interpretation of the sea than the other approaches, which I think is appropriate for establishing this layer: the fear of the sea creatures is represented by harmony based on the acoustic qualities of said creatures. The resulting sounds and notation can be found in the appendix.


Reference list.

Fineberg, J. (2000) Guide to the basic concepts and techniques of spectral music. Contemporary Music Review, 19(2), pp. 81–113.

Harvey, J. (2000) Spectralism. Contemporary Music Review, 19(3), pp. 11–14.

Hasegawa, R. (2009) Gérard Grisey and the “Nature” of Harmony. Music Analysis, 28(2/3), pp. 349–371.

 

Interaction_Feedback_Reflection

Improvable Parts

Although the final presentation was much better than the previous production, there is still room for improvement identified through on-site user feedback and my own experience.

  1. The filter effect is not obvious enough. This may be because the sound samples play for too short a time, so users cannot quickly perceive the filter changes.
  2. The range of the ultrasonic sensors could be set more precisely. Although this has almost no impact in actual operation, it can be improved from a numerical perspective.
  3. The interactive connection between sound and vision could be stronger, for example by adding other forms of interaction between the sound and visual parts in Max. This may take a significant amount of time to resolve, so I will continue to explore it towards the end of the course; it will also be a topic worth discussing after the semester finishes.

 

Personal Reflection

While producing this project over the semester, I found that I was not just completing a group assignment but also absorbing a large amount of knowledge from many angles and through many channels. I have improved greatly in both technical skills and teamwork. This was a meaningful and worthwhile experience for me.

Technically, I had no experience with sensors before this project. Although I knew it would be very challenging to pick up something brand new and implement it in practice, I still wanted to try hard. As our project is an interactive installation, interactivity is what I thought about most. At the beginning, my thinking was limited to optimizing it by modifying some code in Arduino. This did have some effect, but after many discussions with Jules, Leo and Joe, I gradually discovered that there are many other ways and angles to solve the problem, for example modifying the sensor connections or adding some transitional objects in Max. It gave me more ideas for learning technical skills.

In terms of teamwork, our team had a clear division of labour from the beginning, which made our subsequent work efficient. We did not focus only on our own parts; we also helped each other whenever someone ran into difficulties. My interaction part received strong support from Ruojing and Yuan during early testing and rehearsals. This was a great help, and I am very grateful to them.

I learned a lot while making this project. These 12 weeks of effort have been a very beneficial experience on my learning path. In the future, I plan to continue studying interaction-related skills in depth and strive to create more creative interactive works.

Thanks to the teachers and classmates who gave us strong support, special thanks to Jules, Leo and Joe for patiently guiding us with the technology, and finally thanks to my hardworking group members.




Interaction_Software

For the interactive software, I mainly use Arduino and MaxMSP (see Figure 1). Arduino is responsible for processing the readings from the sensors connected to the Uno board, and Max is responsible for receiving the data from Arduino and passing it into the Max patch for the audio and visual parts.

Figure 1: Arduino and MaxMSP Software.

 

Arduino

In the preliminary tests, I tested the ultrasonic sensor, the light sensor and the temperature & humidity sensor individually, and created a separate Arduino project for each sensor.

Here are the videos and pictures of the Arduino projects for each sensor.

1. Ultrasonic sensor (see Video 1 and Figure 2).

Video 1: Ultrasonic Sensor Connects to Arduino.

Figure 2: Ultrasonic Sensor Separate Arduino Project.

2. Light sensor (see Video 2 and Figure 3).

Video 2: Light Sensor Connects to Arduino.

Figure 3: Light Sensor Separate Arduino Project.

3. Temperature & humidity sensor (see Figure 4).

Figure 4: Temperature&Humidity Sensor Separate Arduino Project.
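As an illustration of what one of these separate test projects looks like, here is a minimal HC-SR04 sketch of the kind shown in the tutorials referenced later in this post; the pin numbers are assumptions for illustration rather than a record of our exact wiring.

// Minimal HC-SR04 ultrasonic test sketch (pin numbers are illustrative).
const int trigPin = 9;
const int echoPin = 10;

void setup() {
  Serial.begin(9600);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
}

void loop() {
  // Send a 10-microsecond trigger pulse.
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  // Measure the echo pulse and convert it to centimetres.
  long duration = pulseIn(echoPin, HIGH);
  float distanceCm = duration * 0.034 / 2.0;

  Serial.println(distanceCm);
  delay(100);
}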

After several weeks of adjustment and optimization, I divided the five sensors into two categories, Audio and Visual, according to the on-site layout, forming two larger Arduino projects. Here is the link to the Arduino project files: Arduino Project.

The Visual project receives the first ultrasonic sensor, which controls pattern type changes, the first light sensor, which controls pattern complexity, and the temperature and humidity values from the temperature & humidity sensor (see Figure 5). The Audio project receives the second ultrasonic sensor, which controls the x-axis of the panner, and the second light sensor, which controls the y-axis (see Figure 6).

Figure 5: Arduino Project That Combines the Visual Part Sensors.

Figure 6: Arduino Project That Combines the Audio Part Sensors.
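To give a sense of how such a combined project can read several sensors and send their values to Max over serial, here is a hedged sketch in the style of the Visual project; the pin assignments, the Adafruit DHT library and the message format are assumptions for illustration, and the actual files are in the Arduino Project link above.

// Hypothetical combined sketch: ultrasonic + light + DHT11 readings over serial.
#include <DHT.h>                      // Adafruit DHT sensor library (assumed)

const int trigPin = 9;
const int echoPin = 10;
const int lightPin = A0;
DHT dht(2, DHT11);                    // DHT11 data pin on digital pin 2 (assumed)

void setup() {
  Serial.begin(9600);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  dht.begin();
}

void loop() {
  // Ultrasonic distance in centimetres.
  digitalWrite(trigPin, LOW);  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH); delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  float distance = pulseIn(echoPin, HIGH) * 0.034 / 2.0;

  int light = analogRead(lightPin);            // 0-1023
  float temperature = dht.readTemperature();   // degrees Celsius
  float humidity = dht.readHumidity();         // percent

  // One space-separated line per loop, which Max can split into a list.
  Serial.print(distance);    Serial.print(" ");
  Serial.print(light);       Serial.print(" ");
  Serial.print(temperature); Serial.print(" ");
  Serial.println(humidity);

  delay(50);
}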

While testing and running Arduino throughout the semester, I referred to a lot of relevant material, such as the following three videos, which were a great help with my hardware and software setup.

  1. Using the HC-SR04 Ultrasonic Distance Sensor with Arduino.
  2. Light sensor Grove module.
  3. How to Use a DHT11 Humidity Sensor on the Arduino.

However, I still encountered many practical challenges throughout the Arduino learning process. The main issues that cost me the most thought were:

  1. How to connect multiple sensors to the same breadboard and run them successfully;
  2. How to modify the code in Arduino to increase the numerical sensitivity of the sensor;
  3. How to optimize the code in the Arduino project so that data can be sent to Max and both software can run smoothly.

After continuous research, breakthroughs and discussions with tutors, I gradually solved these problems and finally arrived at a more complete Arduino project system.
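As one concrete example of the kind of adjustment the second point involved (a hedged sketch rather than our exact code), averaging several analogue readings before sending them makes the values steadier, which in turn makes fine scaling of the sensor data more usable.

// Sketch: average several light-sensor readings to get a steadier value.
const int lightPin = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  long sum = 0;
  const int samples = 8;
  for (int i = 0; i < samples; i++) {
    sum += analogRead(lightPin);   // each raw reading is 0-1023
  }
  int averaged = sum / samples;
  Serial.println(averaged);
  delay(20);
}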

 

MaxMSP

The interactive part in Max mainly involves the data-transmission settings for the sensors and the MIDI controller.

1. Sensors Part in Max.

For the sensors, I started by studying how Arduino sends data to Max. I referred to the tutorial video provided by Leo and successfully created a basic “Sending data from Arduino into Max” patch (see Video 3 and Figure 7).

Video 3: Sending data from Arduino into Max Tutorial.

Figure 7: Sending Data From Arduino Into Max Patch.

After that, I successfully ran this patch with the light sensor and the ultrasonic sensor (see Video 4).

Video 4: Light Sensor and Ultrasonic Sensor Sending Data to Max.

After completing these preparations, I began to implement the specific mapping of each sensor to its corresponding parameter in the overall Max patch. I also added objects for sending data to the visual and audio sections of the patch according to the agreed categories. Here is the link to the final Max patch: Max Patch with required audio samples.

(1) Visual Section.

The visual section mainly uses one of the ultrasonic sensors and one of the light sensors. Since the temperature & humidity sensor is connected to the same breadboard as these two sensors, it has to be handled together with them in the visual section (see Figure 8).

Figure 8: Sending Data from Arduino to Visual Section of Max Patch.

For the humidity and temperature values, I used send objects and receive objects to send them to the filter part and master level part respectively.

The purpose of this filter is to let users experience changes in the sound’s frequency content by interacting with the humidity sensor. With Leo’s guidance, I discovered that the cascade~ object can implement the audio filtering; combined with the filtergraph~ object, the cutoff frequency can be changed by the value received from the sensor (see Figure 9).

Figure 9: Filter Part in Max Patch.

The master level part is set up to avoid the situation where many users in the room produce so much noise that some of them cannot hear the sounds emitted by the installation. The less obvious temperature value from the temperature & humidity sensor can therefore be used to control the master level, adjusting the overall volume invisibly according to conditions in the room: the more people there are, the higher the temperature, and the higher the master level of the installation. In the Max implementation, I used a live.gain~ object set to four channels, connected to the panner and the dac~ object, to achieve master level control (see Figure 10).

Figure 10: Master Level Part in Max Patch.

(2) Audio Section.

The audio section mainly uses the remaining ultrasonic sensor and the remaining light sensor (see Figure 11).

Figure 11: Sending Data from Arduino to Audio Section of Max Patch.

As can be seen in the figures above, I added a scale object after each sensor’s value to convert it to the applicable range of its parameter. I also added line objects in front of some parameters and set appropriate metro intervals for them, so that the values change more smoothly without jumps.
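For reference, the linear mapping that the scale object performs is straightforward; the sketch below shows the equivalent formula, with example ranges that are illustrative rather than our exact settings.

// Linear mapping equivalent to Max's [scale inLow inHigh outLow outHigh].
float scaleValue(float x, float inLow, float inHigh, float outLow, float outHigh) {
  return outLow + (x - inLow) * (outHigh - outLow) / (inHigh - inLow);
}

// Example: map a light-sensor reading in 0-1023 to a 0-127 parameter range.
// scaleValue(512, 0, 1023, 0, 127) is roughly 63.6.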

 

2. MIDI Controller Part in Max.

The MIDI controller in our project acts as a button for switching layers. During the rehearsal I found that the button we wanted to use on the MIDI controller corresponds to the value 67 in Max, so I placed a select object after the notein object to pick out this value (see Figure 12).

Figure 12: MIDI Controller Part in Max Patch.

However, I did encounter difficulties during setup. After first connecting all the objects, I found that pressing the button once made the counter count twice. I later found that this was because Max received the value 67 twice, once when I pressed the button and once when I released it, whereas I wanted Max to receive the value only when the button was pressed. I asked Leo for help, and he guided me to add a stripnote object after the notein, which filters out the note-off messages and leaves only the note-on messages, achieving the effect I wanted.

The final result is that when the user presses this button, Max receives a single value of 67, the counter counts up once, and the installation switches to the next layer.
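The logic of that chain (notein, then stripnote, then select 67, then the counter) can be summarised in code terms as follows; this is a sketch of the behaviour rather than literal Max code, with the four-layer wrap matching our four levels.

// Sketch of the MIDI button logic: react only to note-on messages,
// ignore note-off, and cycle through the four layers.
const int buttonNote = 67;   // MIDI note number of the controller button
int currentLayer = 0;        // 0-3, like a [counter] in Max

void handleMidi(int note, int velocity) {
  // stripnote: a note-off (or zero velocity) is ignored.
  if (velocity == 0) return;
  // select 67: only the chosen button triggers a layer change.
  if (note != buttonNote) return;
  // counter: advance one layer, wrapping back to the first.
  currentLayer = (currentLayer + 1) % 4;
}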

 

Feedback and Reflection

I received a lot of feedback on site and had some personal reflections after the final presentation. Here are the detailed descriptions: Feedback and Reflection

 


Week11_Final_Presentation_Yuan_Mei

This week, before our final presentation on Wednesday, we had to make a significant decision about the project. We decided that a non-linear loop structure would be most advantageous for our installation. This decision was made because the pace of audience interaction with the level button cannot be predicted, and we don’t want audiences to rush through all the levels; such a result would not align with our project’s idea and goals.

Choosing a non-linear looping structure reduced the risk of users moving too quickly through each level and prevented the viewer from prematurely ending the immersive experience. Instead of ending at the final peaceful level, the project now cycles through the four levels, driven by the viewer’s interaction with the main controller button. This change ensures that audience engagement remains dynamic and allows for a more immersive and controlled experience.

In addition, we received valuable feedback from the audience during the final presentation and acted on it quickly to enhance our project’s clarity. One insightful suggestion was to emphasise the theme so that audiences catch the project’s idea. To address this, we promptly designed and displayed a poster at the entry point. This placement ensures that every visitor to the room is immediately introduced to our project’s theme, fostering better understanding and engagement right from the start.

Another valuable piece of feedback suggested relocating the instructions for interacting with the sensors to the top of the box rather than placing them to one side. This adjustment significantly enhanced the audience’s experience, making the instructions more readily accessible and easier to read while engaging with the sensors. By implementing this improvement, we ensured that participants could quickly understand how to interact with the sensors, facilitating a smoother and more engaging experience for everyone involved.

           

Additionally, we received suggestions for further enhancement for submission 2. One suggestion addressed the narrow frequency range, particularly the underuse of high frequencies across some levels. This feedback prompted us to make adjustments to ensure a better listening experience: by using sounds in different frequency ranges across the levels, the project will offer a richer and more dynamic auditory experience for our audience.

Below are some short videos that we recorded during the presentation.

Interaction:

Location setup:

 

 


Week11_Final Performance_Ruojing Chen

Yesterday was our official performance.
In the morning, we learned that our MIDI controller had been reserved by others, so we had to borrow a different one. We had already made a cover for the previous MIDI controller, so we urgently made a new cover for the replacement controller that morning.


Before the evening performance, we also met Joe, hoping to solve the sensitivity problem with the sensor-controlled video, and we resolved it before the performance.

Before the performance, Jules came to experience our project setup in advance and gave us some suggestions, especially about the patch. He affirmed our efforts and was very happy to see that we had applied what he taught us.
Jules’s advice was as follows:
1. Set up more speaker control to achieve a better sound experience.
2. Add more frequency ranges to the music or sound effects, such as high frequencies, to make it sound richer.

Throughout the performance, we also received suggestions from different teachers, and we kept improving what we could modify on the spot. For example, we refined and temporarily added to the on-site furnishings, the design of the posters and the use of the sensors.

As for the design of the project content itself, I asked several teachers and got the following suggestions:
1. The venue could be a little darker, highlighting the immersion in the terrifying and unknown (some teachers recommended the all-black classroom at ECA).
2. The pattern transformations in the video are very abstract, but they could also be a little more narrative; for example, a pattern that suggests falling quickly would indicate that the player is sinking into the ocean and highlight the fear.
3. If more channels came from above, it would heighten the immersion of being on the seabed.

Thank you very much to Leo for his serious and responsible accompaniment, to Jules and Joe for their technical support, to all the teachers, classmates and friends who came to watch, and to all the members of the DMSP_presence group.



Week11_Final_Presentation_Jingqi Chen

This week is final presentation week. Our Presence group’s slot was from 18:00 to 20:00 on Wednesday evening, in the Atrium of Alison House. The reason for choosing the evening is that we wanted a darker environment, which greatly increases the immersion of our installation and makes the light sensor values easier to control. However, the sudden arrival of daylight saving time caught us off guard. Although the brightness was higher than originally expected, the on-site effect was still relatively good.

Here are some videos of the final installation:

 

Sensor Sensitivity Issues

On the afternoon of the final presentation, before setup, I met Joe at the scheduled time to resolve the sensor sensitivity issue. Joe pointed out the problem in the sensor data-sending part of the original Max patch: I had split the information received from the sensors on the same Arduino Uno across two route objects (see Figure 1).

Figure 1: The Sensor Sending Data Part in the Original Max Patch.

This caused Max to receive only some of the sensor data allocated by the route objects, so it could not output the data from all sensors at the same time. After gathering all the data into a single route object, the data transfer ran normally (see Figure 2).

Figure 2: The Sensor Sending Data Part in the Modified Max Patch.

In Arduino, Joe suggested changing delay() to yield(), so that the code can keep running step by step without blocking the progress of other tasks (see Figure 3).

Figure 3: The Application of “yield()” in Arduino Project.
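To illustrate the idea behind this change (a sketch under assumptions, not a copy of our project code), a blocking delay() can be replaced with a millis()-based check, calling yield() while waiting so that other tasks are not held up.

// Sketch: send readings at a fixed interval without blocking the loop.
unsigned long lastSend = 0;
const unsigned long sendInterval = 50;   // milliseconds (illustrative value)

void setup() {
  Serial.begin(9600);
}

void loop() {
  if (millis() - lastSend >= sendInterval) {
    lastSend = millis();
    // read the sensors and Serial.print() their values here
  }
  yield();   // let background tasks run instead of blocking in delay()
}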

After these adjustments, the sensor sensitivity problem was solved. This greatly improved the integrity of our installation while keeping it fun.

 

Ending

In addition, we discussed the ending intensively before the final presentation. Since each user perceives the emotions differently, some may want to escape the environment after calming down, while others may still be immersed and want to feel it again. So we decided not to fix a specific ending but to let users choose at which level to end the experience. This means the four levels form a loop: after users press the button to reach the fourth, calm level, they can continue pressing the button to restart the first level, and so on. Users can press the button any number of times until they want to end at a particular level.

 

Feedback

We also received some very helpful feedback on site. The first person to arrive was Jules. He experienced the entire interactive installation on his own and gave very substantive advice (see Figure 1).

Figure 1: A photo Showing Jules Giving Feedback.

The first point is that most of our sound samples are concentrated in the same frequency band, mostly in the middle and low frequencies. As a result, the music and sound effects tend to “fight” each other, so that neither is well highlighted nor well integrated. The solution is to replace some music and sound effects with higher-frequency sounds, such as arpeggiated piano melodies or high-frequency sea monster sounds.

The second point is that it would be more immersive if we could add four speakers above to create an underwater perspective. Since our theme centres on thalassophobia, creating a deeper ocean atmosphere on site would make it more engaging.

The third point is that we should make a poster and stick it on the door outside, so that everyone has a general understanding of our group’s installation before entering. We therefore drew our own cute posters and put them on the door of the Atrium (see Figure 2 and Figure 3).

Figure 2: Poster making process.

Figure 3: Poster of our group’s installation.

The last point concerns the placement of the hardware. Jules suggested placing the two sensor boxes in a column on the table rather than side by side in the same row (see Figure 4). This way, when the user controls one light sensor, the other light sensor can be controlled at the same time, so the user can better feel the changes in sound and visuals simultaneously.

Figure 4: The Placement of Hardware Equipment.

During the formal presentation, a tutor made a very clever suggestion about the placement of our equipment. She suggested that we attach the sensor usage instructions directly to the sensor boxes, so that users do not have to glance at the iPad beside them and then come back to interact with the sensor, removing an awkward step. We therefore wrote small instruction notes for each sensor and stuck them on the sensor boxes for users’ convenience (see Figure 5).

Figure 5: Sensor Instruction Note.

Most users found our interactive installation interesting, though the visuals could be richer and the sensor settings clearer. Overall it is quite immersive. I am personally satisfied with the overall on-site setup (see Figure 6): it can be easily understood by users without losing the sense of atmosphere. What we need to do now is analyse the feedback from the presentation and solve the problems it revealed. I also think some optimizations could be made in the software, such as making the patch more concise and clear.

Figure 6: On-site Installation Setup.

 

Thanks very much to everyone for their hard work during this time. Special thanks to Jules, Leo and Joe for their strong support and technical help. Thanks also to all the teachers and classmates for attending. From the prototype in the first week to the actual implementation now, I am very moved by the step-by-step efforts of our Presence group. Thanks to my group members.

 


A philosophical idea on sound design and composition. (Morrison, C)

Both when forming the spectral harmonies and while chopping up the improvisation, there was something I realized about the sound worlds being created that I felt was important to mention. Both the ‘sound effects’ and the ‘music’ had a similar aesthetic effect. This is particularly true of the improvisation samples, which could easily be perceived as sound effects, and the spectral chords, half of which were based on a few of the sound effects.

This led me to the conclusion that sound design and composition in this context aren’t necessarily separate mediums, but exist in the same medium of sound. The main distinction between sound design and composition is how we arrive at the sounds. Sound designers have more expertise in creating effects via sampling, electronic manipulation, recording foley, etc., with a main focus on the use of digital technology, whilst as a composer (in this context) my main focus is on harmony. Arguably these ideas are echoed in spectral composition philosophy, where timbre and harmony are considered to be one and the same (Harvey, 2000)*.

*This is true here, where the sound design deals mostly with different kinds of sound and I deal mainly with different kinds of harmony, though timbre is considered a lot in my work (orchestration, instrumentation, extended techniques, etc.) and harmony is thought about to some extent in the sound design (i.e. frequency manipulation, white/pink noise, etc.).

Harvey, J. (2000) Spectralism. Contemporary Music Review, 19(3), pp. 11–14.

Re-sampling of the chaos improvisation. (Morrison_C)

There was another problem I had to deal with from the ‘chaotic section’: it was only a single track, not a series of sounds that could then be manipulated in an interactive context. So, for this section, I picked out a few points of interest. This involved separating out many moments, including:
-Micropolyphonic textures (7), a kind of texture where many complex lines within a small register create a single atmospheric wash (Drott, 2011).
-The high whistles (2), which may have been created via overtone singing. This style of singing, common in Tuva (Bergevin et al., 2020) and Mongolia (Wu, 2019), involves enhancing the volume of the overtones while singing, creating more than one pitch from a single voice (Sundberg et al., 2023).
-Distorted noise (4).
-A continuous echoing of avant-garde vocal bends (5).

These are just a few examples. Here are all the sounds, ordered from 1 to 8.



These sounds were aesthetically very similar to many of the sound effects that were made. This continued to spark a philosophical idea mentioned previously in the blog post on the construction of the spectral chords, and it will be expanded on in the next blog entry.



Reference list
Bergevin, C., Narayan, C., Williams, J., Mhatre, N., Steeves, J., Bernstein, J. and Story, B. (2020) Overtone focusing in biphonic Tuvan throat singing. eLife, 9.
Drott, E. (2011) Lines, Masses, Micropolyphony: Ligeti’s Kyrie and the ‘Crisis of the Figure’. Perspectives of New Music, 49(1), pp. 4–46.
Sundberg, J., Lindblom, B. and Hefele, A.H. (2023) Voice source, formant frequencies and vocal tract shape in overtone singing. A case study. Logopedics, Phoniatrics, Vocology, 48(2), pp. 75–87.
Wu, S. (2019) To share or not to share: contested heritage in Inner Mongolia, China — A case of overtone singing (khoomei). International Journal of Heritage Studies, 26(3), pp. 267–280.

Consonant sounds (and what to do with them). (Morrison_C)

When constructing the spectral chords previously referenced in submission 1 and in the previous blog post, I created two harmonies aesthetically distinct from the rest of the content. These harmonies were diatonic, and in some ways more ‘consonant’ than the rest of the chords. I felt that these harmonies created a peaceful and ambient harmonic wash.

   




I felt these stuck out too much within the ‘sea/ocean’ section. However, they did fit well as a basis for the consonant section. I then formed two ‘fused chords’, which combined different aspects of each chord, and joined them together to derive more diatonic harmonies.



These have been applied to multiple different timbres used throughout the installation, to give further coherence and connection.
