
Performance Part 1 – Technology

TouchDesigner

Background

In conceiving Part 1, we hoped to present the beginning of the Autumn Sonata story to the audience clearly, following the development of the storyline. The team believed a short experimental film could help us tell the story, so we shot one to showcase the girl's fear and anxiety towards piano practice under the excessive expectations of her pianist mother. After filming and editing the short film, we imported the video into TouchDesigner and searched online for tutorials to help us make garbled characters appear on the screen in response to changes in sound. In the final result, we successfully showed distorted images of the daughter in the video based on changes in sound, conveying the daughter's anxiety under her mother's excessive expectations.

Process

We found some video and text tutorials online; the main references are:
https://www.youtube.com/watch?v=IFegKFjtj80
https://www.youtube.com/watch?v=rvAB3Rzh7CI

The network has three parts: Sound Analysis, Video Changes, and Background Music.

1) Sound Analysis

We used the Audio Device In CHOP as the sound input source, which differs from the video tutorials we followed. We then analyzed the relevant parameters of the sound according to the tutorials.

2) Video Changes

We split the video processing into two parallel chains: one for the main video and one for the daughter's separate video in the mother-daughter frame. This was done so that we could later overlay the daughter's video onto the right side of the original video and change the daughter's part independently.

3) Background Music

We imported the background music file and added the Audio Device Out CHOP so that we could connect to the audio interface and give the audience better sound.

In addition to these three parts, we added operators to better control the changes in certain parts of the video. We used code to restrict the garbled effect to specific keyframe regions. We also wrote a reload shortcut for the video and audio, so that when we hit the "1" key on the keyboard, the sound and video restart playing simultaneously.
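The keyframe gating can be sketched in plain Python (the region list and names here are illustrative, not taken from our actual project file; inside TouchDesigner this logic lived in operator expressions):

```python
# Hypothetical frame regions where the garbled effect is allowed
# (start_frame, end_frame) pairs -- illustrative values only.
GLITCH_REGIONS = [(120, 240), (600, 780)]

def glitch_amount(frame, audio_level):
    """Return the glitch strength for this frame: the analyzed audio
    level inside a keyframe region, zero everywhere else."""
    for start, end in GLITCH_REGIONS:
        if start <= frame <= end:
            return audio_level
    return 0.0
```

In the actual network, a value like this drove the displacement of the garbled-character effect, so the image only broke apart where the story called for it.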

Performance Part 2 – Collecting Data from ‘Feelings’ to Compose

Composition Creation

In the second part of the performance, the music aims to connect with the feelings of the 'daughter', expressing her emotional journey from confusion to fear to struggle within her repressed inner world. When conceiving the sound, I thought about how to leave more of the sound creation to the performer.

In keeping with the design of the performance setting, we tied strings hung with bells of different sizes around the performer; when she touches them while dancing, they produce random bell sounds that become part of the sound source forming the music.

I intend to make part of the sound follow the dance movements of the 'daughter' by capturing data on the location of her hand movements in space, sonifying that data, and using a smooth, flowing timbre to express the changes in her state of mind.

In addition, a piece of background music is needed to set the mood for the whole performance, to underline the three internal states of the daughter and to provide a guide for the emotional development of the performance.

Composition process

Background Music

Composition project file in Reaper

A crunchy piano sketches out the melodies, working towards a sense of perplexity. A single note with a soprano timbre is layered over the piano, appearing every other bar to echo the flowing piano sound.

On the bass side, three percussion parts form the basic rhythm, with low harmonic overtones playing the role of the kick drum in a drum kit; together with the tuned drums, they set the overall disorienting, tense atmosphere. Powerful drumming joins as the piece moves into the second section, matching the daughter's physical movements as she tries to break through the invisible wall of pressure.

To give the overall music a more oppressive, hollow feel, ethereal synthesizer sounds play the harmonies.

Interactive sound

An M5Stick is attached to the performer's wrist, and Max receives its real-time data changes in terms of yaw, roll, pitch, etc.

The collected data is thresholded, and the scale object restricts its range of variation to a usable interval. When the data changes, the corresponding pitch and timbre are triggered and the rhythm of the sound changes at the same time.
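The thresholding-and-mapping step can be sketched in plain Python (in Max this was done with patcher objects; the ranges below are illustrative, not the exact values tuned to the M5Stick):

```python
def clip(x, lo, hi):
    """Threshold the raw sensor value to a usable window."""
    return max(lo, min(hi, x))

def scale(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly map [in_lo, in_hi] to [out_lo, out_hi],
    like Max's scale object."""
    t = (x - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

# e.g. map a pitch angle of -90..90 degrees to MIDI notes 48..72
note = scale(clip(45.0, -90.0, 90.0), -90.0, 90.0, 48.0, 72.0)  # -> 66.0
```

Clipping first keeps sensor jitter outside the chosen window from pushing the mapped value out of musical range.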

The performer can trigger two states of sound: a detuned piano with a sense of falling, and a sharp tearing sound.

Performance Part 2 – Technology

Processing

Background

In the second part of our performance, we aimed to portray the inner conflict and struggle of the daughter for the audience. Therefore, we wanted to capture the actor's movements and project them into a 3D particle space depicting the daughter's inner world. Eventually, we chose the Kinect V2 sensor for this purpose. As the Kinect is an RGB-D sensor, it can recognize the human body and automatically calculate the motion data of each joint in real time, capturing the person's outline.

After selecting the sensor, we first attempted to use TouchDesigner to achieve the motion-particle presentation. However, as some team members had more experience with Processing, we decided to use Processing to present the motion data captured by the Kinect, for better visual effects.

Process

The Processing code has three main parts: particles from the Kinect capture, particle clustering and dispersion influenced by sound changes, and the movement of a red wireframe.

Firstly, the setup() function initializes the libraries being called and sets up the screen, audio input, and particle array.

1) Kinect

We used the KinectPV2 and peasy libraries to access the Kinect camera and create an interactive camera controller. We accessed the Kinect data by defining "kinect = new KinectPV2(this)". After initializing the Kinect sensor, the code creates arrays and ArrayLists to store particle coordinates, image data, and particle size information. Using an if statement, Processing checks whether the Kinect detects any human bodies. If the actress is detected, it iterates through the pixels of the body-tracking image, looking for pixels that correspond to the user's body; for each pixel found, it creates a particle-like effect at the corresponding location and adds it to the ArrayList of particles. If no actress is detected, it randomly generates new particles. Particle movement, dissipation, and iteration are also set up at the same time.
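The body-pixel sampling can be sketched as follows (a Python illustration of the Processing logic; the step size, fallback count, and function names are assumptions, though KinectPV2's body-track image does mark non-body pixels as 255):

```python
import random

NO_BODY = 255  # KinectPV2 body-track value for "no user at this pixel"
STEP = 4       # sample every 4th pixel to keep the particle count manageable

def seed_particles(body_pixels, width, height):
    """Return (x, y) particle seeds for body pixels, or random
    seeds scattered over the frame if no body is detected."""
    particles = []
    for y in range(0, height, STEP):
        for x in range(0, width, STEP):
            if body_pixels[y * width + x] != NO_BODY:
                particles.append((x, y))
    if not particles:  # no actress detected: generate random particles
        particles = [(random.randrange(width), random.randrange(height))
                     for _ in range(200)]
    return particles
```

Sampling only every few pixels is what keeps the figure readable as a cloud of particles rather than a solid silhouette.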

2) Sound

We also used the processing.sound library to capture audio input from the computer and extract volume information. Using "float volume = loudness.analyze()*5", we could quickly adjust how sensitively sound affects particle dissipation, according to the impact of environmental noise during live performances.
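A minimal sketch of how the scaled volume might drive particle dispersion (Python for illustration; the jitter formula itself is an assumption — only the ×5 sensitivity factor comes from our code):

```python
import random

SENSITIVITY = 5.0  # the multiplier from "loudness.analyze()*5"

def disperse(particles, loudness):
    """Push each particle by a random offset proportional to the
    scaled volume; louder sound scatters the cloud further."""
    volume = loudness * SENSITIVITY
    return [(x + random.uniform(-volume, volume) * 10,
             y + random.uniform(-volume, volume) * 10)
            for (x, y) in particles]
```

With silence the particles hold their positions, so the figure re-forms between loud passages.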

3) Red Box

We used P3D to draw a rotating 3D cube in space, controlled by "rotateY(-sin(frameCount*speedR)); rotateX(-cos(frameCount*speedR));" to adjust its rotation speed. In the initial design we used a white wireframe, but later changed the stroke color to red. During live testing we found the wireframe was not clear enough, so we increased the strokeWeight() value to make the wireframe thicker and easier for the audience to see.

Max and Processing Connection

We intended to output the sound data generated by the performer in real time to Processing via Max, causing the visual particle effect to diffuse. The LAN is set up on the Arduino, and Max is connected on the same port.

Using Max as the sending device, the volume data is sent to the IP address of the machine running Processing, with the data set to vary between 0 and 1.

On the Processing side, we tested data reception using example code from the OSC library, based on relevant references we found online.

After setting our own IP address and receiving port, we tried sending and receiving simultaneously.

However, in the end, the Processing side never received any data from Max, although Max did receive the '123' test message sent from Processing. So we abandoned this plan and kept using environmental sound to affect the particles.
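One debugging aid for this kind of one-way failure is inspecting the raw bytes on the wire. A minimal, stdlib-only sketch of the OSC 1.0 float message layout ("/volume" is an illustrative address, not necessarily the one we used):

```python
import struct

def osc_float_message(address, value):
    """Encode a minimal OSC message carrying one float, per the OSC 1.0
    spec: null-terminated address padded to a multiple of 4 bytes,
    typetag ',f' padded the same way, then a big-endian 32-bit float."""
    def pad(b):
        # padding includes the null terminator (1-4 zero bytes)
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

packet = osc_float_message("/volume", 0.5)  # 16 bytes on the wire
```

Comparing bytes like these against what actually arrives on the receiving port can reveal whether the problem is the address pattern, the typetag, or the network route itself.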

Performance Part 2 – Visual Design

Regarding visual aspects, I believe that Part 2 holds the greatest significance within the entire performance. It serves as a bridge between the "daughter's" struggle with self-awareness in Part 2 and the self-acceptance in Part 3. To portray the character of the "daughter," we assigned a member of our group to play the role. We utilised particles to create portraits and employed an abstract technique to capture the essence of the "daughter's" image. This was done because the "daughter" not only exists within the context of the film but also represents numerous children who experience oppression from their parents. The depiction of the particles used in the portrait can be observed in Figure 1.

Fig.1 Processing visual video


The square shape in the background was intended to signify the invisible walls that a mother's expectations create, binding her children. Initially, we planned to use a black-and-white colour scheme for the visual elements. However, following feedback from Jules after observing our performance on March 27th, 2023, we ultimately decided to change the cube in the background to a bright red colour, creating a striking contrast with the black-and-white blocks.

Finally, we choreographed the movement of the actor portraying the “daughter,” who awakens in a world of her own inner self, surrounded by threads representing her mother’s expectations. She becomes panicked and attempts to break free, but instead is manipulated like a puppet, becoming entangled in the threads.

 

Performance Part 1 – Video Design

The main interaction design of the first part is the effect of live, real-time vocal input on the video image. The visual design is dominated by the video projected on the curtain. The video uses live filming to express the plot abstractly, refining elements such as the mother's reprimands and facial movements and integrating abstract editing to present the audience with a conceptual video of the tension and oppression in the mother-daughter relationship. As the first act of the performance, the video introduces the story and lays down its emotional tone, so that the audience can feel both the oppressive relationship between mother and daughter and the flowing, unrestrained changes in their own emotions.

The video focuses on the story of a daughter practicing piano who is constantly struggling under the oppressive scolding and gaze of her mother. The video script uses metaphor to express abstract emotions, with the "hand" as the central metaphorical element: a red thread tied around a finger as a clue and the beginning of the story; a large hand and a small hand pulling at each other; and struggling hands behind gauze. These images all express the mother's constant manipulation and restraint of her daughter. The close-ups of the "mouth" and "eyes" are direct expressions of the mother's attitude, and the close-up perspective highlights the intensity of her feelings.

In shooting the video, the props and the actors' costumes and makeup were kept plain to create a sense of story, and the images of "mother" and "daughter" were clearly distinguished. In editing, the original footage was desaturated from full colour to black and white to express the depression and suffocation of the mother-daughter relationship. The editing uses "flashback" shots as transitions to express the mother's control over her daughter as a subconscious, irregular control and bondage, and the overlapping of the two scenes also expresses the struggle between the daughter's psyche and her reality.

Draft Script

Video Screenshot

Final Reflections – Shruti

My role

I was one of the digital artists in the group and performed the first part. I helped finalize the concept and participated in designing 'our stage' among other things, but mainly focused on the visual performative aspect, moving from nature to the polluted world. I also played a small role in making sure we had projectors during our practice sessions and for the performance.

Performance Day

The Max patch I made, which I discussed in a previous blog post, had all the effects I wanted to use. The three folders contain the three different visual aids I needed: nature, pollution, and glitch effects. All of these worked well during rehearsals and practice sessions.

But twenty minutes before we had to perform, Vizzie (Max) crashed three times. I restarted it twice, but it kept crashing, and by this time the audience had already gathered and we were running a few minutes late.

I was very confused about why it was crashing now, when I had been working the same way for the past week. Then I realized: during the rehearsal before the final performance, I had added a new, rather large video file to one of the folders, and Vizzie did not like that. I quickly removed that video, reopened the software, and added the folders again. The performance finally began.

Our first show may not have been our best: lots of nerves and too many errors. I was so worried about moving between visuals that I missed out on the effects and glitches I wanted to add.

After my part, I was supposed to project garbage floating around on the ground, as a way of directing the audience to the next section. First the iPad failed to open, then the Pico projector did not work; both had functioned well during rehearsal.

We received some good constructive criticism and encouragement, and we went again.

As Jules said, the second time was much better! I was able to create more effects and make the visuals more chaotic, and thanks to Vibha, who added the sunrise at the end of her performance, we synced up and completed the loop. No garbage was projected onto the floor, but that was a lesson learned.

I like to call the above screengrabs ‘Shades of Shruti’, as I performed the visuals sitting behind screen 1. The shades of the visuals as well as my nervousness are all too clear.

 

Reflection

This entire experience of collaborating with these talented individuals to create this performance was a fun journey. I learned Vizzie (though mostly how it does not work), how to use a MIDI controller, and above all how to work with sound designers. I will miss our design discussions.

One of the challenges we did face was synchronizing audio and video. The sound designers needed visuals to create the sounds, and we as digital designers found it hard to start from scratch without a clear brief. I suppose that is a lesson too: we will not always have a clear indication of what is needed before we begin working. It is also important to stop ideating at some point and start creating, which we did later than we should have. Nonetheless, the performance had a narrative, a start, and an end, though it definitely has a lot of scope for improvement and additions.

I feel a future version of this project could run in a continuous loop, with the audience also moving in a circle to view it. Both the visuals and the sounds could be more immersive, making the audience feel as if they are part of the performance; they could even trigger certain outcomes.

 

Monster video AI innovation attempt

 

This is an experiment in 3D video. It uses 3D model seeds I made myself, together with footage shot in real scenes, to generate video through an AI algorithm. This ensures that both the video source and the 3D model source in the database are self-provided, reducing the copyright problems that AI can cause.

I speculate that this will be a trend in the future of AI: ensuring that the original database material is provided by the author.

AIW – Initial Brochure Production

I made an early version of the electronic brochure, summarizing the team's production process and the obstacles we encountered, based on team members' personal write-ups and pictures. I also added some missing content, such as environment-related research reports. This was only the pre-production version; after discussion, we found the content too tedious, and Vibha remade it.
