
Sound_The carrier of all sound – Max

Preliminary conception

Sound dominates this project. We emphasise the expressive force of sound to create an atmosphere, and compared with visuals, working with sound in Max makes me more confident. The following table shows my ideas for the sound effects and music.

We decided to express four emotional levels. The first thing that came to mind was a square object that could be used in the patch, either pictslider or nodes. So at first I put both of them in my test patch and experimented to decide which one was more suitable for our project. This was also one of the questions we asked Jules in the first presentation.

pictslider & nodes 


Deconstruction of sound part patch

As the project progressed, the concept gradually became clear and my patch became more refined.

Construction of the basic layer (patch link)

As shown in the figure, I set the four corners of the pictslider as points A, B, C and D, and the circle inside it can be controlled. The four playlists below correspond to points A, B, C and D respectively. The first level shows the sea-monster scene, so these sound effects are all sea monsters and underwater creatures. The other three layers of scenes are distributed evenly across these four corners in the same way.
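A minimal sketch (plain Python, not Max) of the corner-mixing idea described above: the XY position of the pictslider's circle weights the four corner playlists. The function name and the bilinear weighting are my illustration, not necessarily the patch's exact method.

```python
# Hypothetical sketch of the corner-gain logic behind a pictslider-style
# XY pad: four sound sources sit at corners A (0,0), B (1,0), C (0,1),
# D (1,1), and the slider position mixes them by bilinear weighting.

def corner_gains(x: float, y: float) -> dict:
    """Return a gain (0..1) for each corner; gains always sum to 1."""
    x = min(max(x, 0.0), 1.0)
    y = min(max(y, 0.0), 1.0)
    return {
        "A": (1 - x) * (1 - y),  # bottom-left
        "B": x * (1 - y),        # bottom-right
        "C": (1 - x) * y,        # top-left
        "D": x * y,              # top-right
    }

# At the centre, all four playlists are heard equally:
print(corner_gains(0.5, 0.5))  # each corner gets gain 0.25
```

Moving the circle toward one corner then isolates that corner's sounds, which is how each scene's effects can be distributed across the four points.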

Because the variables to be regulated by the sensor had not been determined at that time, I set a number of manually controllable elements, such as speed, pitch shift and time stretch, as dynamic quantities, so that the audio could change and later be controlled by the sensor. Detailed comments are marked in the picture.

In terms of artistic presentation, I made a lot of cuts in the sound effects. We produced many sound effects for each layer, but in an actual performance many listeners will not have the patience to hear so many similar sounds, and it becomes aesthetically tiring. In the end, each emotional layer of the project uses 12-18 evenly distributed sound effects, while the last layer uses only four, as a finishing touch to express the sense of calm.

Here is the sound effect part of the patch:

For the music part, I did the same thing. Corresponding to each level of emotion, the chords become more concise; each level has exactly eight chords, so two chords are assigned to each of the four corners.

Here is the music chorus part of the patch:

This is the patch link for our whole project.

Use of new objects

I met some new problems while improving the patch, which also taught me new things about Max. The problems encountered were as follows:

  1. Based on the audience's listening comfort, how can the four emotional layers cross-fade naturally when switching?
  2. Based on the need for immersion, how can the sound be output to multiple speakers so that it changes audio-visually?
  3. Based on the need for diversified regulation, how can the sound itself be made to vary more?

For these three questions, the following three parts of Max provide the corresponding answers.

autopattr & pattrstorage

For the first question, Leo gave me this solution during one of our meetings. I use the pattrstorage object as a preset store to raise and lower levels so that the transition between layers is smooth, and I connect a MIDI controller to control the switching. Leo referred to the instructional video at this link: https://youtu.be/5-4YZwbKB-k?si=1NeUJhZddplc0xT_
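To show the idea in miniature, here is a sketch (plain Python, not the Max internals) of what a pattrstorage-style preset recall with interpolation does: each preset is a set of parameter values, and the transition crossfades numerically between two stored sets. The parameter names here are my own illustrative assumptions.

```python
# Each preset stores one layer's parameter snapshot; recalling with an
# interpolation factor blends smoothly between two snapshots, which is
# what makes the layer switch sound like a crossfade instead of a jump.

presets = {
    1: {"gain": 0.0, "pitchshift": 1.0, "cutoff": 500.0},   # layer 1
    2: {"gain": 0.8, "pitchshift": 1.5, "cutoff": 5000.0},  # layer 2
}

def recall(a: int, b: int, t: float) -> dict:
    """Interpolate between preset a and preset b; t in [0, 1]."""
    return {k: presets[a][k] * (1 - t) + presets[b][k] * t
            for k in presets[a]}

# Halfway through the transition from layer 1 to layer 2:
print(recall(1, 2, 0.5))
```

Driving `t` from a MIDI controller fader is then enough to sweep every stored parameter from one layer to the next at once.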

The video shows the specific functions:

As shown in the figure, the file read in this operation is a code file with a .json suffix:

panner

For the second question, we asked Jules for a lot of help. I want to create the fear of sea monsters closing in from all directions, so I need sound-image (panning) processing. At the same time, points A, B, C and D need to correspond to different speakers in the speaker setup.
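As an illustration of mapping the four points to four speakers, here is a sketch of quad panning using a constant-power law, so overall loudness stays roughly even as the sound image moves. This is my own illustration, not Jules's exact panner.

```python
import math

# Map an XY position to amplitude gains for four speakers
# (front-left, front-right, rear-left, rear-right). Taking the square
# root of the bilinear weights keeps the summed power constant, so the
# sound image can circle the listener without audible loudness dips.

def quad_pan(x: float, y: float) -> dict:
    """x, y in [0, 1]; returns per-speaker amplitude gains."""
    x = min(max(x, 0.0), 1.0)
    y = min(max(y, 0.0), 1.0)
    return {
        "FL": math.sqrt((1 - x) * (1 - y)),
        "FR": math.sqrt(x * (1 - y)),
        "RL": math.sqrt((1 - x) * y),
        "RR": math.sqrt(x * y),
    }

g = quad_pan(0.0, 0.0)  # sound fully in the front-left speaker
```

With this kind of mapping, sweeping the position around the edges of the square produces the "surrounded from all directions" effect.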

filter setup

In the final presentation, Leo suggested that we needed more variation to make the project sound less monotonous. So we use cascade~ to filter an input signal through a series of biquad filters, and filtergraph~ to generate the filter coefficients.

As shown in the figure, the controlled cutoff range is 500-5000 Hz, and (30, 100) is the range given by the humidity sensor.
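The mapping described above can be sketched in the spirit of Max's [scale] object: humidity readings in (30, 100) drive the filter cutoff over 500-5000 Hz. I use an exponential mapping here because equal steps in cutoff frequency are usually perceived logarithmically; that curve choice is my assumption, not necessarily what the patch does.

```python
# Map a sensor reading from one range onto another. The exponential
# interpolation makes low-to-high cutoff sweeps sound even, since
# pitch/cutoff perception is roughly logarithmic.

def scale_exp(v, in_lo, in_hi, out_lo, out_hi):
    t = (v - in_lo) / (in_hi - in_lo)        # normalise to 0..1
    t = min(max(t, 0.0), 1.0)                # clamp out-of-range readings
    return out_lo * (out_hi / out_lo) ** t   # exponential interpolation

cutoff = scale_exp(65.0, 30.0, 100.0, 500.0, 5000.0)  # mid humidity
```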

Improvement suggestions

The sound part received valuable feedback from many teachers:

1. Set more speaker controls to seek a better sound experience.
2. Add frequency layers to the music or sound effects, such as high frequencies, to make it sound richer.
3. If more channels come from above, it will highlight the immersion of the characters on the seabed.

Personal reflection

Building the sound part of the Max patch took the longest of all my tasks, because new problems kept being discovered, new tasks kept appearing as the presentation requirements evolved, and we were constantly asking the teachers for help.

In the third rehearsal, I was still trying to work out why the signal sent by the panner could not be transmitted to all four speakers. Although the sound signal was successfully transmitted in the end, the extra speakers were not used to reinforce the immersion in the final presentation.

We each had our own responsibilities throughout the birth of the project, but I took part in every part of our group's work except the music. Because of the company of my team members and the support of the teachers, I was never very anxious. This embryonic project gave me the courage to keep exploring audio-visual interactive software.

Thanks again to Leo for his responsible guidance and support, to Jules and Joe for their technical support, to all the teachers, classmates and friends who came to watch, and to all the members of the DMSP_presence group.

Visual_Art Concept

Topic ideas and references

We are the Presence group. At the beginning, our group considered many topics, including emotional and psychological changes to express existence and feeling on the spiritual level, and natural themes such as seasonal and weather changes to express existence and bodily sensation on the physical level. The spiritual level is more subjective, while the physical level alone is too vague.

Therefore, we wanted to combine these two feelings, and we settled on the final theme: Thalassophobia, which combines the natural ocean with the fear the sea inspires.

I found works by several interaction designers who work on emotion and the natural environment as my visual references. These works include the visual presentation of particles and lines, as well as gesture-based interaction, and they formed my initial idea of the final presentation of our project. But I have to admit that my artistic ideas are limited by my technical abilities.

This video shows the particle imaging of whales by the digital media studio Bleeeeta:

The following pictures show The Folded Starry Sky, an interactive audio-visual performance created by the Chinese interaction designer Ma ShiHua.


The following picture shows gesture interaction for order in chaotic words.


Project visual communication

Draw_mode

Our work contains four different levels of emotion:

  1. Hearing the sound of sea monsters approaching
  2. The chaos and confusion caused by fear of the ocean
  3. The fear of escaping into the unknown after the chaos
  4. The peace and calm left to the audience's imagination

Line particles can express our fear of the ocean concisely, and I set up different pattern changes to reflect these four emotions. There are several draw modes in my patch. Here are some pictures of lines in different modes.

tri_grid(sea creature) & triangles_adjacency(chaotic)


triangles_strip_adjacency(unknown wave) & line_strip(calm peaceful)


Interactive method

Since we need gesture interaction, my visual effect part exposes some variable parameters to the sensor as artificially controllable parameters.

These videos show the visual effects produced by controlling my patch directly.

You can see in the video that I adjust the draw_mode to change the patterns, and one of the number objects to control the scaling strength of the lines. As shown in the figures:

Regulating line density:

The names of the different modes in draw_mode are turned into remote controls.

What I neglected is that controlling the pattern change directly in the patch is responsive, and the line changes are even more varied there. In the live performance, however, the pattern change was very rigid and did not give the audience a visual impact, which may be related to the sensitivity of the sensor. It seems that the signal people transmit to Max through the sensor as a medium gradually weakens.

This is a video material recorded on the spot:

Improvement suggestions

During the live demonstration of the project, I communicated with some teachers, and they gave me some valuable suggestions on visual performance.

“The pattern transformation of the video is very abstract, but it could be a little more narrative. For example, a pattern that suggests falling quickly would indicate that the player is falling into the ocean and highlight the fear.”

I understand why the teacher made this suggestion. Compared with the visual concepts I made in the early stage, the visual effects of the live performance did not meet my inner expectations.


Visual_Technical Concept

Deconstruction of visual part patch

First of all, I have to say that we are all from sound backgrounds, so visual effects are our weak link and there is no fancy design. But because I wanted to try the visual side, I took responsibility for the conception and ideas of the visual part of the project.

Because we have some experience with Max for Live, after discussing our group's original artistic concept I thought of using Jitter to realise it. This both plays to our role as sound designers and makes the sound visual, which suits our professional development well.

This is my primary idea: use the imported sound to trigger the visual system, and then control some parameters of the sound to make the visual patterns more diverse.

In short, after the audio signal is received, it is converted into a matrix signal that can be used for images. The matrix is attached to a mesh, so changes in the audio drive changes along the x, y and z axes of the mesh. At the video output end, the visual result can be seen through jit.window.
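The flow described above can be sketched outside Max in plain Python: a 1-D block of audio samples is folded into a 2-D grid, and each cell becomes the z (height) of a mesh vertex, so changes in the audio drive changes in the surface, much as a matrix feeding jit.gl.mesh would. The function and grid layout are my illustration, not the patch itself.

```python
# Fold a 1-D audio block into a rows x cols grid of (x, y, z) vertices.
# x and y span -1..1 across the grid; z comes from the audio samples,
# so a louder or busier block visibly deforms the surface.

def audio_to_mesh(samples, rows, cols):
    """Return a rows x cols grid of (x, y, z) mesh vertices."""
    mesh = []
    for r in range(rows):
        row = []
        for c in range(cols):
            z = samples[(r * cols + c) % len(samples)]  # wrap if short
            row.append((2 * c / (cols - 1) - 1,
                        2 * r / (rows - 1) - 1,
                        z))
        mesh.append(row)
    return mesh

mesh = audio_to_mesh([0.0, 0.5, -0.5, 1.0], rows=4, cols=4)
```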

The picture is an annotated patch diagram of the three-dimensional visualisation of sound.

The following is a link to the patch: https://www.dropbox.com/scl/fi/p091k40nmv145tcoim020/Three-dimensional-visualization-of-sound.maxpat?rlkey=3ni75uxbdbo1di3sht2l14xtz&dl=0

Use of the srcdim and dstdim objects

In the patch, you can see that I used the two objects srcdim and dstdim. The following videos explain the contrast in image changes caused by these two controlled parameters.

The difference between srcdim and dstdim is that the former controls the scaling of the whole matrix, while the latter can update the scaling of the upper or lower edge in real time. Because the audio signal is a one-dimensional matrix, once its X and Y axes are set it forms a ground-like waveform that shifts like mountains. I use this principle to realise the pattern changes.
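As a toy illustration of that distinction (plain Python standing in for jit.matrix), srcdim chooses which part of the source matrix is read, while dstdim chooses where in the destination it is written, which is what allows one edge to be rescaled in real time. The helper below is entirely hypothetical.

```python
# Copy a range of source rows onto a range of destination rows,
# stretching or squeezing by nearest-row sampling. Narrowing the source
# range mimics srcdim; narrowing the destination range mimics dstdim.

def copy_region(src, src_rows, dst, dst_rows):
    """Copy rows src_rows of src onto rows dst_rows of dst."""
    s_lo, s_hi = src_rows
    d_lo, d_hi = dst_rows
    for i, d in enumerate(range(d_lo, d_hi)):
        # map each destination row back to a source row
        s = s_lo + i * (s_hi - s_lo) // max(d_hi - d_lo, 1)
        dst[d] = list(src[s])
    return dst

src = [[r] * 4 for r in range(8)]      # 8x4 source matrix
dst = [[0] * 4 for _ in range(8)]      # 8x4 destination matrix
copy_region(src, (0, 4), dst, (0, 8))  # stretch top half over whole dst
```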

The following is a link to the patch comparing the two objects: https://www.dropbox.com/scl/fi/mbb4gf6yzz50a2a7t4d9f/dstdim-vs-srcdim.maxpat?rlkey=9t5rdkwz5nghne4sp8s7znp72&dl=0


Week11_Final Performance_Ruojing Chen

Yesterday was our official performance.
Yesterday morning, we learned that our MIDI controller had been reserved by someone else, so we had to borrow a new one. We had already made a cover for the previous controller, so we urgently made a new cover for the new one in the morning.


Before the evening performance, we also met Joe, hoping to solve the sensitivity problem of the sensor-controlled video, and resolved it before the performance.

Before the performance, Jules came to experience our project setup in advance and gave us some suggestions, especially about the patch. He affirmed our efforts and was very happy to see that we had applied what he taught us.
Jules's advice was as follows:
1. Set more speaker controls to seek a better sound experience.
2. Add frequency layers to the music or sound effects, such as high frequencies, to make it sound richer.

During the whole performance, we also received suggestions from different teachers, and we kept improving what we could modify as the performance went on. For example, we refined and temporarily added to the on-site furnishings, the design of the posters and the use of the sensors.

As for the design of the project content itself, I asked several teachers and got the following suggestions:
1. The venue could be a little darker, highlighting the terrifying, unknown immersion (some teachers recommended the all-black classroom in ECA).
2. The pattern transformation of the video is very abstract, but it could be a little more narrative. For example, a pattern that suggests falling quickly would indicate that the player is falling into the ocean and highlight the fear.
3. If more channels come from above, it will highlight the immersion of the characters on the seabed.

Thank you very much to Leo for his serious and responsible guidance, Jules and Joe for their technical support, all the teachers, classmates and friends who came to watch, and all the members of the DMSP_presence group.

Week10_Max crossfade/craft making/final rehearsal_RuoJing Chen

This Thursday, because the Max patch still had some minor problems, we made an appointment with Jules to solve the cross-fade problem between playlist layers.
At the same time, the meeting with Leo settled our idea of using the sensor to control changes in audio frequency.
However, because the sensor has not been very sensitive in controlling the patterns, we made an appointment with Joe; this problem will be solved on the afternoon of our performance.

After the DMSP class on Thursday, the teacher suggested we package our sensors so they look orderly and in line with the theme, so on Friday our group met to make a control box for the sensors out of cardboard and wrapping paper.

On Saturday, we borrowed the Atrium, the venue for the official performance, for our last formal rehearsal, to make sure our wiring was long enough and the speakers were properly placed.

Week9_FormalTest_Ruojing Chen

Together with my team members, I applied to the school to borrow four speakers, sensors, sound cards, projectors and white screens for testing.
The day before the first test, I had just finished the music part and imported the chorus into our patch. We dropped the nodes object and linked the music with the sound part. The X and Y axes are controlled by the sensors, and each level is accompanied by its own sound effects. The MIDI controller buttons trigger the switch between levels.

The following is the design of our different emotional levels in music and sound effects.

The following is a live rehearsal in the music store.

The following is a sound effect demonstration shot in the music store.

The following is a visual display shot in the music store.

The problems that need to be solved now are:
1. There is no crossfade when switching between levels; variable control is needed.
2. The control variables are monotonous; more controls, such as a filter, are needed.
3. The four levels are currently switched by four buttons instead of one, so the linear development of the scenes has not been realised.
4. There were problems with the speaker output when building the site. Only two levels were active in TotalMix, and three level meters appeared after changing dac~ 1 2 3 4 in Max. I then added an ezdac~ object; although the output reached four speakers, some speakers were missing certain sound effects.
5. metro needs to be set to ensure the entire audio plays back completely.

Week8_Technical Matters_Ruojing Chen

On Tuesday, we asked Joe to help us solve the problems in the code when Max connects to the sensor. We needed to adjust some code to ensure Max runs properly.

On Wednesday and Thursday, we had meetings with Leo to solve some Max problems. The main technical problems were as follows:

On Wednesday, my patch only covered the sound effect part. Our sound effects have four levels; I had classified them well but did not know how to transition from the first level to the second. I thought of using the preset object but ran into problems. Leo suggested using a buffer to store all the sound effects and change levels at any time, but the final solution was the pattrstorage object, which controls the appearance and changes of each level through stored level data. Another technical problem concerns the live presentation: because we use multiple speakers, we need Max to output different sound effects to multiple speakers, but we had not yet decided between a surround-sound or a multi-channel effect. Leo suggested I use the ICST ambisonics tools to output surround sound.

On Thursday we had class, so we asked Jules many questions and he showed us how to use the panner. I now have to think about how to integrate the panner into our patch.

Here are the three different parts of our project!

Week7_ProjectImprovement_Ruojing Chen

This week we encountered many technical problems, not only with Max but also with the sensor. For example, although the sensor is connected to the parameters Max needs to control, the data it produces are very limited and only jump between two numbers, so I emailed Joe to make an appointment for next week, hoping to get some help. My work this week also included building the sound effect and music parts in Max.
At this week's meeting, our tutor Leo also mentioned that we would set aside two days next week to concentrate on the technical problems we have encountered.

Week6_MaxJitter_soundeffectsPart_Ruojing Chen

After talking to our tutor Leo this week, we made the following things clear:
1. List all controllable variables in Max and select the variables that can be controlled by the sensor.
2. Connect the sensor to Max to see whether it can be controlled.
3. The connection between the sound effects and Max works (the pictslider object adjusts the XY-axis parameters). I need to build it, and again choose the controllable variables to connect to the sensor.


These are technical problems that were solved last week:
How to control colour changes
How to control pattern changes


These are the defined variables that need to be controlled by the sensors:
Colour control does not fit the theme and was not technically realised, so it was ruled out.
The visual part can adjust the pattern, including the draw mode and the size of the waveform changes.
I set the range values of the two to [0, 11] and [-120, 2].
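To illustrate those two ranges, here is a hypothetical sketch of how a continuous sensor reading could be quantised into a draw-mode index in [0, 11] and mapped linearly onto a scaling value in [-120, 2]. The mapping functions are my own assumptions, not the exact patch logic.

```python
# Quantise a normalised sensor reading into one of 12 draw modes, and
# map the same reading linearly onto the waveform-size range [-120, 2].

def to_draw_mode(v, v_lo, v_hi):
    """Map sensor value v in [v_lo, v_hi] to an integer mode 0..11."""
    t = (v - v_lo) / (v_hi - v_lo)
    t = min(max(t, 0.0), 1.0)
    return min(int(t * 12), 11)          # 12 modes, clamp the top edge

def to_scale(v, v_lo, v_hi):
    """Map the same reading linearly onto [-120, 2]."""
    t = min(max((v - v_lo) / (v_hi - v_lo), 0.0), 1.0)
    return -120.0 + t * (2.0 - (-120.0))

mode = to_draw_mode(0.5, 0.0, 1.0)       # mid-range reading -> mode 6
```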

I received the sound effect package made by my team members; the figures below show the implementation of the sound effect part I built.
Problems encountered:

The four corners of the pictslider object only support four sound effects, but we have many sound effects that need a larger-capacity carrier.


The nodes object can hold enough sound effects, but how to control the yellow controller ball is the key problem. If the sensor can be connected to the ball, the audience can control it in real time and feel how their actions trigger different sound effects.

