
Our stage design


Our team took field measurements of the Alsion house, such as the height of the central iron frame and the length, width and height of the venue, to determine the floor space and curtain size required for each performance.

Based on the scale of the site, I made a sketch. Considering the placement of audio equipment and the audience's sense of hearing and immersion, we finally divided the venue as shown in the following picture. The screens of Part 1 initially cover Parts 2 and 3 to create a sense of mystery: the audience first watches Part 1 and then follows the signs on the ground to move to the viewing positions for Parts 2 and 3. Our performance therefore unfolds in a progressive narrative mode.

 

Final Reflection-Xiaozhuang Gao

Contribution

In “Invisible String”, I (Xiaozhuang Gao) was responsible for the programming techniques and visual design of part 2, as well as taking on the role of coordinating the team. I needed to make creative decisions regarding the core concepts and forms of the performance, in order to drive the project forward and unify team members’ ideas. Therefore, in this reflection, I will introduce the work I did in the group and what I learned in this course.

Reflection- technology support

Firstly, in the early stages of the project, I was responsible for the programming of the Processing and Kinect parts. After Submission 1, since Part 1 of the performance already involved abstract particle graphics, we wanted to control the growth speed of the particles through sound interaction. Based on this idea, I began to write the particle code, and the initial particle generation was designed in Touch Designer. As I had prior experience with Touch Designer, I believed it was a great tool for sound and visual interaction, and that we could control particle changes through parameters such as Life Expectancy, Drag, and Life Variance.

Fig 1. First particle testing on TD

During the technical implementation, I found that the particles in Touch Designer were essentially composed of individual pixels and could not be resized to the expected size. Therefore, I switched to using Processing to achieve the particle effect.

As the project progressed, we decided to combine the Processing particle section with Kinect motion capture for Part 2 of the performance instead of Part 1. After discussing with my teammates, I decided that the figure captured by the Kinect V2 should be trapped in an invisible space, metaphorically representing the cage formed by a mother's expectations for her children. Therefore, when drawing the graphics, I used Processing's built-in P3D renderer, which adds a depth parameter. To set different origins and rotations for different objects, I used the paired matrix statements pushMatrix() and popMatrix(). With these commands, I was able to create 3D graphics while maintaining the rotation of the cube. I used the code instruction rotateY(-sin(frameCount * speedR)); rotateX(-cos(frameCount * speedR)); to control the rotation speed on the X and Y axes, thereby expressing the concept of the daughter being trapped in bondage and unable to escape. (Fig2. Coding)
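The rotation logic can be seen outside Processing as well. Below is a minimal Python sketch of the per-frame angle computation (the speed value 0.01 is an illustrative stand-in, not the sketch's actual speedR):

```python
import math

def rotation_angles(frame_count, speed_r=0.01):
    """Mimic rotateY(-sin(frameCount*speedR)) / rotateX(-cos(frameCount*speedR)).

    Returns the (x, y) rotation angles in radians for a given frame.
    Because sin/cos stay in [-1, 1], the cube rocks back and forth
    rather than spinning freely, which suits the "trapped" metaphor.
    """
    t = frame_count * speed_r
    angle_y = -math.sin(t)  # applied via rotateY() in the sketch
    angle_x = -math.cos(t)  # applied via rotateX() in the sketch
    return angle_x, angle_y
```

For example, at frame 0 the cube starts tilted on the X axis (angle -1 rad) with no Y rotation, and both angles oscillate smoothly from there.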

Fig2.Coding

Based on the concept of “confinement”, I came up with an idea: could we calculate the positions of the cube's vertices to prevent the particles from escaping as they diffuse? After testing, I found that because the cube rotates constantly, it is impractical to track the exact vertices, which means the particles cannot be completely confined within the cube. However, after consulting tutorials, I adopted a compromise: controlling the left/right and top/bottom boundaries of particle movement. Additionally, while testing the Kinect, I tried moving the particles in a 3D environment but found the effect unsatisfactory, so I did not enable the Z-axis depth value.
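The compromise described here can be sketched as a simple clamp-and-bounce rule. This Python sketch is illustrative only (names and the bounce behaviour are my assumptions, not the project's actual Processing code):

```python
def confine(x, y, vx, vy, left, right, top, bottom):
    """Keep a particle inside a 2D region by bouncing it off the walls.

    Instead of testing against the rotating cube's true vertices, clamp
    movement to fixed left/right and top/bottom boundaries, reversing the
    velocity component on contact so the particle appears to rebound.
    """
    if x < left:
        x, vx = left, -vx
    elif x > right:
        x, vx = right, -vx
    if y < top:
        y, vy = top, -vy
    elif y > bottom:
        y, vy = bottom, -vy
    return x, y, vx, vy
```

Calling this once per frame after the position update keeps every particle inside the visible box even though the cube's rotation is never computed.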

In fact, there are still many areas for improvement in the visual code presentation of Part 2. For example, on March 27th, 2023, during a tutorial, the instructor pointed out that the visual richness of Part 2’s particle display was somewhat disconnected from Part 1’s key imagery. Although I eventually changed the background cube to red, the problem still persisted. In my opinion, in future project development, the color of the particles could reflect the color of the clothing captured by the Kinect, rather than just being white.

At the same time, because of the interaction with sound, the speed of particle diffusion and disappearance was not well controlled, sometimes resulting in a too-scattered state. This part of the code still has room for improvement.

Reflection- Visual design

In terms of visuals, Shutong Liu and I were responsible for the visual design of the project. At the beginning of the project, our group's visuals were monotonous and abstract. Following feedback from our Submission 1 mentor, we were encouraged to enrich the visual projection. Drawing on my previous experience in filming and producing key images, I suggested that the group use key images to showcase our visuals and narrative techniques to present our performance. Subsequently, I drew the storyboards for the Part 1 video, and the video was filmed based on them. Meanwhile, our group discussed many versions of the performance venue layout, and I combined the members' opinions to design the venue (Fig3. Performance venue). I also designed the poster for our final performance (Fig4. poster), and our group worked together to build the line installation for Part 2.

Fig3.Performance venue

Fig4. poster

In my opinion, the biggest visual challenge was the planning of the performance venue and the construction of the installation. The venue planning took too much time, and we discarded many ideas and sketches along the way. The planning had to consider the specific performance format, the audience, and the performers' area, but with so many considerations in the early stage we could not pin down the format for each aspect. As a result, our group spent too much time on the visual planning of the venue, which delayed the normal progress of other work.

Furthermore, the construction of the installation was also a challenge. We spent a lot of time on a flawed method of fixing the wires, which resulted in all the wires getting tangled together the day before the performance. We had to start over with a different approach. If we had discovered the flaw in the wire-fixing method earlier, we would have had more time to create the visual installation. (Fig5. Installation processing)

Fig5. Installation processing

Reflection-Group coordination

Finally, I also served as the overall coordinator within the group. I took full responsibility for the video shooting (Fig6. recording), documenting the group's work on the lighting arrangement at the scene and other related matters. I was also the voice performer for the Part 1 performance (Fig7. voice performer) and participated in choreographing the dance movements for Part 3.

Fig6.recording

Fig7.voice performer

Conclusion

In general, this was my first time putting interactive performance into practice. Although our group consisted of only four people, we often felt that we were short-staffed. However, it was precisely because of this that our group’s work was not disjointed, and each member participated in every aspect of the work, rather than working separately. Of course, there are still many areas in our performance that need improvement, as pointed out by our mentor in the feedback given after our last formal performance. Specifically, the transitions between each stage of our performance need to be more closely linked in order to provide a better experience for the audience.

 

Project Source File

Part2 video

https://youtu.be/5MQRS2qN5ck

Processing
https://drive.google.com/drive/folders/1oyMpaftdqVVoR2Yg1Jd-NMoTqF9XqOA1?usp=sharing

Processing Test (Old Version)
https://drive.google.com/drive/folders/1b57YIC2tuZJ0PA9p-rgbjUhLvx3UEDa5?usp=sharing

Submission 2 - Final report & documentary

Our Final documentary: https://youtu.be/5wdCRBJSlR4

Part 1 Video: https://youtu.be/6fKzaTzt7N0

Part 2 Video: https://youtu.be/5MQRS2qN5ck

Part 3 Video: https://youtu.be/ltNtx1xkXow

Our project’s own source files

Processing
https://drive.google.com/drive/folders/1oyMpaftdqVVoR2Yg1Jd-NMoTqF9XqOA1?usp=sharing

Touch Designer
https://drive.google.com/drive/folders/1mTUTHKC1a6cvSnVqj6IKhykOgJr1NG63?usp=sharing

Processing Test (First Version)
https://drive.google.com/drive/folders/1b57YIC2tuZJ0PA9p-rgbjUhLvx3UEDa5?usp=sharing

Part 1 Mixing https://drive.google.com/file/d/1mFLVemaLbHSIs_EXReBWH9cNFqvpOlIB/view?usp=share_link

Part 2 Composing https://drive.google.com/file/d/1hJ__2rSRBj7yTwlH3oCR8tdIHqadInnu/view?usp=share_link

Part 2 Max Patchers

https://drive.google.com/file/d/1xCIy2S-RgqypvWlqYt3FvaF5v551XoO7/view?usp=share_link

Part 3 Mixing

https://drive.google.com/file/d/1UBbC0qdSQUK-sPXm9zSnHK_XTKBExe_W/view?usp=share_link

Background

The creation of Invisible String originated from the personal experience of one of the group members, Shutong. She had a habit of biting her nails, and whenever she felt nervous, she would start biting them. So she started to trace the origin of this habit, and when she saw the film Autumn Sonata, she remembered that this habit was a stressful reaction to her mother’s demands that made her feel nervous during her piano lessons.

As Shutong told us about this experience, we realised that such encounters did not happen to her alone: in different ways, we were each bound by our parents in one form or another. Many people grow up under their families' high expectations and excessive pressure. We therefore decided to use the film Autumn Sonata as a backdrop for our performance, creating an interactive performance piece, Invisible String, which uses the film as a basis for exploring the relationship between children and parents: expectations, bondage and the process of self-reconciliation.

Design goals

  • Our group set the following goals for our performance
    – To explore in depth the relationship between children and parents
    – A multi-sensory, multi-technical performance
  • Our group set the following goals for our group
    – Good teamwork
    – A clear division of labour, each playing to their own strengths
    – Planning the allocation of time

Refinements after Submission 1

After Submission 1, we received design feedback. From it, we distilled the useful points: consideration of venue constraints, the audience experience, and the need for richer visual content.

In our original concept, we designed many large-scale installations, which required a very large space to support the performance. But the space available to us was limited, so we modified our proposal and removed these large art installations.

The second change was to the visual content. In the original design, our visuals were relatively flat, more like a display of installation art than a performance. We therefore redesigned the visual expression of each part and folded the concept of the original large-scale installations into it. For example, the first part conveys the high pressure the daughter feels under her mother's verbal accusations. The original scheme of projecting abstract growing particles onto the ground from above was changed to live-action video that conveys the impact parents' verbal and behavioural suppression can have on their children. This serves as the beginning of the story and the prelude to the whole performance.

Looking at our original design from the audience's perspective, we realised the content might be too abstract. So we revisited the proposal, presenting the story of Autumn Sonata to the audience step by step as a storyline. This enables the audience to understand the story more clearly and feel the emotions our work wants to convey.

 

Research and Critical thinking

1. Performance and Installation

During the creative design phase of this project, we searched extensively online for reference projects. Ultimately, we selected some of them and developed our design based on them.

Image References about Performance

In these reference images, the performer and the video presentation seem to be two independent entities. However, visually appealing effects may help the performer to better convey the musical presentation to the audience, for example, by using video to help viewers understand the expression of the music. Therefore, in our performance, the presentation of the video content may be abstract, but the interaction and musical presentation can be coordinated with the image to complete the story narration.

In reference image 2, the performer's hand and the video image or hand shadow correspond to each other, bringing some stunning visual experiences to the audience. We drew inspiration from this short experimental film and realised that when many objects are repeatedly stacked in the image, it can give the audience a sense of oppression. This is similar to what we want to present, where the mother pressures her daughter. At the same time, through the shadows in this work, we came up with the idea of a simple interaction between the actor and the video material: the person's shadow in the video and the actual performer exist in two different spaces but are connected. Based on these ideas, we completed the ideation process for the third part.

Video References about Interaction

In addition to image references, we also looked for related interactive installation art videos as our references.

1. Tomás Saraceno 

Firstly, we looked at the works of Tomás Saraceno. In this work, Tomás used lines to form spider-web shapes, and when the audience touches the lines, they create sound.

We believe that lines can also represent the strings of a piano. In our performance, the mother and daughter are connected by an invisible bond, visualised as a red thread. For a piano to make a sound, its strings must vibrate, and the same goes for our lines. Therefore, in the second part of the work, we used black and white lines to represent the piano strings. Unlike Tomás, however, it was difficult for us to make the strings themselves produce sound, so we tied physical bells to the lines; when the audience interacts with the lines, the bells sound. At the same time, the positional relationship of the actors generates random sounds through Max as a supplement to this part's sound.

2. Particle References

We found some reference videos on motion capture and particle effects. The interaction between dance and particle effects can create great visual impact. However, if the particles dominate the visual presentation, the performer may lose their function in our performance, and the interactivity may be obscured, reducing the piece to a pre-rendered particle video played for the audience. In our design, the main visual colours are black and white to match the piano theme. To simplify the visual effects, we made the character's particles white and reduced their degree of dispersion. We also added sound control to the particles, but only let them dissipate when the sound reaches a fixed range.

3. Touch Designer

Most of the Touch Designer examples we found online imported pre-produced music files and analysed them to generate video changes. For example, one reference case used CHOPs and a Noise operator to simulate the feeling of blood spreading across paper by analysing the emotion and style of the music. However, we wanted the sound to drive video changes in real time, so we used the Audio Device In operator with a microphone, instead of an audio file, to achieve real-time sound analysis and video changes.

Sketch of the performance venue

 

Taking into account the audience’s walking path and the placement of equipment such as projectors, we created a layout diagram for the venue.

Overhead view of the performance venue

Technology (visual; sound) flow chart

 

Performance

Part 1

Performance Part1-Visual design

The main interaction design of the first part is the impact of live, real-time vocal input on the video image. The visual design is dominated by the video projected on the curtain. The video distils elements such as the mother's reprimands and facial movements from live-action footage and fuses them with abstract editing to present a conceptual video of the tense, oppressive relationship between mother and daughter. As the first act of the performance, the video introduces the story and sets its emotional tone, allowing the audience to feel both the oppressive mother-daughter relationship and the flowing, uninhibited changes in their own emotions as they watch.

The video focuses on the piano-practising daughter, who grows increasingly pained and struggling under her mother's oppressive scolding and gaze. In the scripted content, the visuals use metaphor to express abstract emotion, with the 'hand' as the central element: a red thread tied to a finger serves as a clue and the beginning of the story, followed by two large hands pulling at each other, struggling hands behind the gauze, and so on, all expressing the mother's constant manipulation and restraint of her daughter. The close-ups of the 'mouth' and 'eyes' are direct expressions of the mother's attitude, and the close framing heightens the intensity of the emotions.

In the filming, the props and the actors' make-up are kept relatively plain to create a sense of storytelling, and a clear distinction is made between the images of the 'mother' and the 'daughter'. In editing, the overall tone was reduced to black and white to convey the depressing, suffocating nature of the relationship. 'Flashback' footage is used as transitions to express the mother's subconscious, irregular control and bondage of her daughter, and the overlapping of the two scenes expresses the struggle between the daughter's psychology and reality.

 

Performance Part1-Touch Designer

In the conception of Part 1, we hoped to present the beginning of the story of Autumn Sonata to the audience clearly according to the development of the storyline. The team believed that a short experimental film could help us to tell the story, so we shot an experimental film to showcase the fear and anxiety of the girl towards piano practice under the excessive expectations of her pianist mother. After completing the filming and editing of the short film, we imported the video into Touch Designer and searched for corresponding tutorials on the website to help us achieve the effect of garbled characters appearing on the screen according to changes in sound. The final result was the successful implementation of showing distorted images of the daughter in the video based on changes in sound, which conveyed the daughter’s anxiety under the excessive expectations of her mother.

Process of learning Touch Designer

We found some video and text tutorials online. The main references are:
https://www.youtube.com/watch?v=IFegKFjtj80
https://www.youtube.com/watch?v=rvAB3Rzh7CI

Here are three parts: Sound Analysis, Video Changes and Background Music.

1) Sound Analysis

We used the Audio Device In controller as the source of the sound input, which is different from the video tutorials we referred to. Then we analyzed the relevant parameters of the sound according to the tutorials.

2) Video Changes

We divided the video editing into two identical parts: the main video changes and the daughter’s separate video in the mother-daughter frame. This was done so that we could later overlay the daughter’s video onto the right side of the original video to achieve the effect of the daughter’s part changes separately.

3) Background Music

We imported the background music file and added the Audio Device Out controller so that we could connect to the Audio Interface and provide the audience with better sound effects.

In addition to these three parts, we added controllers to achieve the changes in certain video parts better. We used code to set the controllers only to achieve the garbled effect in specific keyframe areas. We also set the reload shortcut code for the video and audio so that when we hit the “1” key on the keyboard, the sound and video will restart playing simultaneously.

 

Performance Part1-Sound Design

In terms of sound, the first part introduces the scene: the audio both fits the specific movements of the characters and conveys the depressingly low atmosphere of the images. The recorded sound samples have two main parts: the two phases of the voice (the multiple whispers and the single chant), and recordings of sounds with a sense of movement and rhythm made with the objects used for filming (balloons, cloth curtains, woollen balls).

After sorting through all the recorded sounds, we added filters, reverbs, delays, and other effects to bring the sounds more in tune with the narrative. To shape the actual piano-performance episode, we recorded clips of a group member playing the piano piece Pour Le Piano (Achille-Claude Debussy, 1901). We then wrote the musical parts, adding elements such as synth, violin, bass and piano to underpin the plot development of the images.

In the first stage of the performance, an actor will play the role of a mother on set, feeding in an angry, accusatory voice against her daughter in the picture in real time. The vocal will affect the picture and give it a distorted effect.

 

Part 2

 

Performance Part2-Visual Design

In terms of visual considerations, we felt that Part 2 was the most important part of the performance, bridging the oppression established in Part 1 and the daughter's reconciliation in Part 3 through the struggle of her self-consciousness. We chose a member of the group to play the 'daughter', and we used particles to form her portrait, an abstract means of encapsulating her image, because the 'daughter' in the film exists not only on screen but also in real life: she is a microcosm of the countless children oppressed by their parents.
(as shown in fig1.particles below)

fig1.particles

The square shape in the background was also designed to be a metaphor for the invisible walls of a mother’s bondage to her children, who are trapped within them.
Initially, we planned to use black and white for the visual design, but on 27/3/2023, following feedback from Jules, who had seen the group’s performance and suggested we add colour to the visuals, we ended up changing the cube in the background to a bright red colour to add visual impact to the image between the black and white blocks.
Finally, we choreographed the movement of the actor playing the daughter, who wakes up from the darkness in a world of her own inner self, covered with threads created by her mother's expectations of her, and begins to panic, trying to break free but being pulled and twisted like an entangled puppet.

Performance Part2-Processing

Background

In the second part of our performance, we aimed to portray the inner conflict and struggle of the daughter for the audience. Therefore, we wanted to capture the actor’s movements and project them into a 3D particle space to depict the daughter’s inner world. Eventually, we chose the Kinect V2 sensor for this purpose. As the Kinect is an RGBD sensor, it can recognize the human body and automatically calculate the motion data of each joint based on the person’s proportion in real-time, capturing the person’s outline.

After selecting the sensor, we attempted to use Touch Designer to achieve the motion particle presentation. However, as some team members had more experience using Processing, we decided to use Processing to present the motion data captured by the Kinect for better visual effects.

Process

In the Processing coding, there are mainly three parts: Particle from Kinect capture, particle clustering and dispersion influenced by sound changes, and movement of a red wireframe.

Firstly, the setup() function initializes the libraries being called, sets up the screen, audio input, and particle array.

1)Kinect

We used the KinectPV2 and peasy libraries to access the Kinect camera and create an interactive camera controller. We initialise the Kinect with "kinect = new KinectPV2(this)". After initialisation, the code creates arrays and ArrayLists to store particle coordinates, image data, and particle sizes. Using an if statement, Processing checks whether the Kinect detects any human bodies. If the actress is detected, it iterates through the pixels of the body-tracking image looking for pixels that belong to her body; for each pixel found, it creates a particle at the corresponding location and adds it to the ArrayList of particles. If no actress is detected, it randomly generates new particles. Particle movement, dissipation, and iteration are set up at the same time.
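The body-scan loop described above can be sketched as follows. This is a Python illustration of the logic only; the function names, the step size, and the assumption that 255 marks "no body" in the tracking image are mine, not the project's actual code:

```python
import random

BACKGROUND = 255  # assumed "no body" label in the body-tracking image

def particles_from_mask(mask, width, step=4):
    """Walk a flat body-tracking image and emit a particle position for
    every pixel that belongs to a tracked body.

    `mask` is a flat list of per-pixel labels; any value other than
    BACKGROUND counts as body. `step` thins the scan so we don't spawn
    one particle per pixel.
    """
    particles = []
    for i in range(0, len(mask), step):
        if mask[i] != BACKGROUND:
            x, y = i % width, i // width
            particles.append((x, y))
    return particles

def random_particles(n, width, height):
    """Fallback used when no body is detected: scatter particles randomly."""
    return [(random.uniform(0, width), random.uniform(0, height))
            for _ in range(n)]
```

The same two-branch structure (scan the mask if a body is present, otherwise scatter randomly) mirrors the if statement described in the paragraph above.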

2)Sound

We also used the processing.sound library to capture audio input from the computer and extract volume information. Using "float volume = loudness.analyze()*5", we can quickly adjust how sensitively sound affects particle dissipation, according to the environmental noise during live performances.
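The volume mapping amounts to a scale-then-threshold step. Here is a hedged Python sketch of that idea (the threshold value is illustrative, added to show how room noise can be kept from triggering dissipation; it is not taken from the project code):

```python
def dissipation_strength(raw_loudness, sensitivity=5.0, threshold=0.1):
    """Scale the analyser's raw loudness (as in `loudness.analyze() * 5`)
    and return 0 unless the scaled volume clears a noise threshold,
    so ambient room noise does not dissipate the particles.
    """
    volume = raw_loudness * sensitivity
    return volume if volume > threshold else 0.0
```

Raising `sensitivity` makes quiet sounds count for more; raising `threshold` ignores more of the room's background noise.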

3)Red Box

We used P3D to draw a rotating 3D cube in space, using "rotateY(-sin(frameCount*speedR)); rotateX(-cos(frameCount*speedR));" to adjust its rotation speed. In the initial design we used a white wireframe, but later changed the stroke colour to red. During live testing we found the wireframe was not clear enough, so we increased the strokeWeight() value to thicken the lines and make them easier for the audience to see.

Max and Processing Connection

We intended to output the sound data generated by the performer in real time to Processing via Max, causing the visual particle effect to diffuse. The LAN was set up in the Arduino IDE, and Max was connected on the same port.

Using Max as the sending device, the volume data is sent to the IP address of the machine running Processing, with the data scaled to vary between 0 and 1.

On the Processing side, we tested data reception using example code from the OSC library, based on references we found online.

After modifying our own IP address and receiving port, we carried out sending and receiving simultaneously.

However, in the end, Processing never received any data from Max; instead, Max received the test value '123' sent from Processing. So we abandoned this plan and continued to use environmental sound to affect the particles.
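Debugging one-way OSC failures like this usually starts with checking what is actually on the wire. Per the OSC 1.0 specification, a single-float message is the address string, the type-tag string ",f", and a big-endian 32-bit float, each padded to a 4-byte boundary. This Python sketch builds such a packet by hand (the address "/volume" is a hypothetical example, not the project's actual address):

```python
import struct

def osc_pad(s):
    """Null-terminate a string and pad it to a 4-byte boundary (OSC 1.0)."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_float_message(address, value):
    """Build a minimal OSC packet carrying one float, e.g. /volume 0.5:
    padded address, then the type-tag string ",f", then a big-endian
    32-bit float. Comparing this byte layout against a packet capture
    shows whether sender and receiver disagree on the format.
    """
    return osc_pad(address) + osc_pad(",f") + struct.pack(">f", value)
```

Comparing the expected bytes against what actually arrives (for example with a packet sniffer) distinguishes a network/port problem from a message-format mismatch.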

 

Performance Part2-sound design

Composition Creation

In the second part of the performance, the music aims to connect with the feelings of the ‘daughter’, to express her emotional process from confusion to fear to struggle when she is in a repressed inner world. When conceiving the sound, we thought more about how to leave more of the sound creation to the performer.

Following the design of the performance setting, we set up strings with bells of different sizes around the performer; when she touches them while dancing, they produce random bell sounds that become part of the sound source forming the music.

We intended to make part of the sound follow the dance movements of the 'daughter' by capturing data on the location of her hand movements in space, sonifying the data, and using a smooth, flowing timbre to express the changes in her state of mind.

In addition, a piece of background music is needed to set the mood for the whole performance, to underline the three internal states of the daughter and to provide a guide for the emotional development of the performance.

Composition project file in Reaper

Interactive sound

An M5Stick is attached to the performer's wrist, and Max receives its real-time yaw, roll, and pitch data.

The LAN set up in the Arduino IDE allows Max to receive the data messages sent by the M5Stick.

We threshold the collected data and use scale to map it into a usable interval. When the data changes, the corresponding pitch and timbre are triggered, and the rhythm of the sound changes at the same time.
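The threshold-and-scale step is a clamped linear mapping in the spirit of Max's scale object. A Python sketch of that mapping (the ranges below, e.g. mapping sensor data in [0, 1] to MIDI-like values in [0, 127], are placeholders; the patch's actual thresholds live in Max):

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Clamp the input to [in_lo, in_hi], then map it linearly onto
    [out_lo, out_hi]. Clamping is the "limiting the threshold" step:
    sensor spikes outside the expected range cannot push the output
    outside the usable interval.
    """
    value = max(in_lo, min(in_hi, value))
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)
```

Note that Max's own scale object does not clamp by default, so the clamp here stands in for the separate thresholding described above.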

The actor can trigger two states of sound: a detuned piano with a falling, dropping quality, and a sharp ripping sound.

Performance Part3-Visual design

The third part of the performance is the interaction between the performer and the video, through the collaboration of performer, projection, and footage. To ensure the performance works in a dark environment, the projection is cast onto gauze, with the performer positioned behind it, aligning with the projected image to complete the performance. The video's main content is shot in real life and post-processed into the form of a shadow. The shadow represents the daughter's inner character, while the performer represents the 'daughter' herself; the two push each other away and search for each other. As the final scene and conclusion, the projected image represents the daughter's struggle to find her inner self and to reconcile with herself.

In the design of the scripted content of the video, the shadow representing the daughter’s inner self goes through trials, fears and confusion as she explores her inner self and eventually finds a balance between herself. The red residue behind the shadow echoes the red line, the dissipation of the particles of the characters echoes the particles of the second act, and the appearance of the mouth and eyes in the image is the climax of the third act, where the daughter finally finds herself in the darkness of confusion and embraces “herself”.

Performance Part3-sound design

Sound Quality

In the third stage, the performer interacts with the 'shadow' in the video. Part of the sound needed to match the visual quality of the shadow, so it was created by combining recorded sounds of fabric and thread rubbing, and by equalising and layering the frequencies of several sounds to make effects that move the way a shadow does.

Because the shadow embodies a stressful side the daughter used to wallow in, whispers panned left and right, symbolising the droning accusations from the mother, accompany it. Added to these are depressing synth tones and fast-paced sound effects echoing the shadow's movements. The actual actor stands out through the chanted vocals, which portray her process of finding herself.

Music

There are three main stages in the development of the music; the first stage is the two figures pulling and searching for each other, the second stage is a return to the depressingly gloomy atmosphere of the first performance section, and the third stage is a melodic narrative of the two figures merging into one.

The base of the entire piece is set by washy synth sounds, which are joined by a morphing wind section with a sense of refrain as it moves into the second phase, and a choral effect piano sound is applied to externalise the merging process in the final melodic section.

Reflection

From left to right are Crystal Wu, Lerong Qi, Shutong Liu and Xiaozhuang Gao

What is good in the process?

Throughout the project, our team communicated well and divided up the work so that everyone could contribute their skills to the project to the best of their ability.

Lerong Qi was responsible for the sound design, Shutong Liu for the visual design, Crystal Wu for the technical support, and Xiaozhuang Gao for the visual design and the technical support for the code; he also acted as project coordinator and oversaw the overall concept of the final project. Once we had confirmed the division of labour within the team, every group discussion could reach an effective conclusion. Each person could present suggestions and ideas for different parts of the project, and the person in charge of each part made the final decision. Finally, the team member responsible for coordinating the whole project ensured and followed up on the visual, sound and technical outputs, so that they kept a consistent style, each part went smoothly, and the performance delivered a complete story and an effective output.

What is difficult in the process?

When we produced the installation for the second part of the performance, we faced unexpected difficulties. Our team hoped to build an installation of strings combined with bells, which would interact with the performer and make sounds to express the emotions of the "daughter" growing up under her mother's oppression. To make the set-up and dismantling of the scene easier to manage, we asked our instructor Andrew for advice. He suggested that we pre-wire the set-up on wooden sticks so that during the performance we would only need to fix the position of the sticks in the venue, allowing the installation to be completed quickly. However, after we took this approach and spent at least five days surveying the site, purchasing materials, and tying up all the wires, we found that the number of wires we used was far more than expected. Whenever we laid out the sticks and rope, we spent three to four hours untying knots, and with the addition of bells the knotting became even worse. On the test day, Wednesday, we brought the pre-strung lines to the venue but could not untie them. As a result, we had to modify the set-up plan the day before the show and tie the strings directly to the wooden sticks on site. This also meant that we had to spend one to two hours completing the construction of all the lines before the official performance, which put us under great pressure.

The second difficulty was that our team was short of manpower while our project was relatively large; for a team of four, the project was not easy or smooth. In addition to our own design work, we also needed to rent equipment, measure venues and handle other logistics, and the four of us often had to carry sound and filming equipment back and forth between Alison House and the Booket Centre.

 

What can we do better in the future?

In this project, we had many discussions on the concept and content of each part at the beginning, which took up a lot of our time; as a result, we started implementing our plans later, and the time to complete each part was compressed. Although the clear concept and design allowed us to achieve the ideal presentation, this is a part we could do better. By settling the concept early and starting the actual build sooner, we would have had more time to rehearse and troubleshoot problems such as tangled wires, which would bring a better presentation to the audience during the actual performance.

In general, though, we realised 90% of our vision for the whole project, which can be said to be the result of very successful teamwork. We would like to thank every team member for their participation and dedication.

 

Performance Part2-Technology

Processing

Background

In the second part of our performance, we aimed to portray the inner conflict and struggle of the daughter for the audience. Therefore, we wanted to capture the actor's movements and project them into a 3D particle space to depict the daughter's inner world. Eventually, we chose the Kinect V2 sensor for this purpose. As the Kinect is an RGBD sensor, it can recognise the human body, automatically calculate the motion data of each joint in real time based on the person's proportions, and capture the person's outline.

After selecting the sensor, we attempted to use Touch Designer to achieve the motion particle presentation. However, as some team members had more experience using Processing, we decided to use Processing to present the motion data captured by the Kinect for better visual effects.

Process

In the Processing code, there are three main parts: particles generated from the Kinect capture, particle clustering and dispersion influenced by sound changes, and the movement of a red wireframe.

Firstly, the setup() function initialises the libraries being used and sets up the screen, the audio input, and the particle array.

1)Kinect

We used the KinectPV2 and peasy libraries to access the Kinect camera and create an interactive camera controller. We accessed the Kinect data by defining "kinect = new KinectPV2(this)". After initialising the Kinect sensor, the code creates arrays and ArrayLists to store particle coordinates, image data, and particle size information. Using an if statement, Processing checks whether the Kinect detects any human bodies. If the actress is detected, it iterates through the pixels of the body-tracking image and looks for pixels that correspond to the user's body. For each pixel found, it creates a particle-like effect at the corresponding location and adds it to the ArrayList of particles. If no actress is detected, it randomly generates new particles. The particle movement, dissipation, and iteration are also set up at the same time.
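A minimal Processing sketch of this body-tracking loop looks roughly as follows. This is an illustrative reconstruction, not our exact performance code: it assumes the KinectPV2 library is installed, and it draws bare points instead of our full particle class. It requires the Processing IDE and a connected Kinect V2, so it is shown for reference only.

```java
import KinectPV2.*;
import java.util.ArrayList;

KinectPV2 kinect;
ArrayList<PVector> particles = new ArrayList<PVector>();

void setup() {
  size(1024, 768, P3D);
  kinect = new KinectPV2(this);
  kinect.enableBodyTrackImg(true);  // body-index image: user pixels vs background
  kinect.init();
}

void draw() {
  background(0);
  PImage body = kinect.getBodyTrackImage();
  body.loadPixels();
  boolean found = false;
  // sample every few pixels to keep the particle count manageable
  for (int y = 0; y < body.height; y += 4) {
    for (int x = 0; x < body.width; x += 4) {
      // in the body-track image, darker pixels belong to a tracked user
      if (brightness(body.pixels[y * body.width + x]) < 255) {
        found = true;
        particles.add(new PVector(x, y, random(-10, 10)));
      }
    }
  }
  if (!found) {
    // no actress detected: scatter random particles instead
    particles.add(new PVector(random(width), random(height), random(-10, 10)));
  }
  stroke(255);
  for (PVector p : particles) point(p.x, p.y, p.z);
  // cap the list so old particles "dissipate"
  while (particles.size() > 20000) particles.remove(0);
}
```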

2)Sound

We also used the processing.sound library to capture audio input from the computer and extract volume information. By using “float volume = loudness.analyze()*5”, we can quickly adjust the sensitivity of how sound affects particle dissipation according to the impact of environmental noise during live performances.
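The scaling behaves like a simple gain followed by a clamp. A plain-Java sketch of the idea is below; the factor of 5 matches the multiplier in our code, but the helper itself and its clamp to [0, 1] are illustrative, not part of the processing.sound API.

```java
public class VolumeScale {
    // Scale the raw loudness (roughly 0..1 from the analyser) by a
    // sensitivity factor, clamping to [0, 1] so a noisy venue cannot
    // overdrive the particle dissipation.
    static float scaledVolume(float raw, float sensitivity) {
        float v = raw * sensitivity;
        return Math.max(0f, Math.min(1f, v));
    }

    public static void main(String[] args) {
        System.out.println(scaledVolume(0.1f, 5f));  // quiet room
        System.out.println(scaledVolume(0.5f, 5f));  // loud venue, clamped
    }
}
```

Raising or lowering the sensitivity factor is then a one-number change when adapting to a new venue's noise floor.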

3)Red Box

We used P3D to draw a rotating 3D cube in space, controlled by "rotateY(-sin(frameCount*speedR)); rotateX(-cos(frameCount*speedR));" to adjust its rotation speed. In the initial design we used a white wireframe, but later changed the stroke colour to red. During live testing we found that the wireframe was not clear enough, so we increased the strokeWeight() value to make the wireframe thicker and easier for the audience to see.
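Put together, the rotating red wireframe can be sketched as a short Processing sketch. The rotation expressions, red stroke and thickened stroke weight come from our design; the cube size, window size and speedR value are illustrative placeholders. It runs in the Processing IDE rather than as standalone Java.

```java
float speedR = 0.01;  // rotation speed factor

void setup() {
  size(800, 600, P3D);  // P3D renderer is needed for the 3D cube
}

void draw() {
  background(0);
  translate(width / 2, height / 2, 0);
  // rotation driven by frameCount, as in the performance code
  rotateY(-sin(frameCount * speedR));
  rotateX(-cos(frameCount * speedR));
  noFill();
  stroke(255, 0, 0);  // changed from white to red after feedback
  strokeWeight(4);    // thickened so the audience can see the frame
  box(300);
}
```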

Max and Processing Connection

We intended to output the sound data generated by the performer in real time to Processing via Max, thus triggering a diffusion of the visual particle effect. Max and Processing were placed on the same LAN and configured to use the same port.

Using Max as the sending device, the volume data is sent to the IP address of the machine running Processing, with the data scaled to vary between 0 and 1.

In the Processing part, we tested data reception using the example code from the OSC library, based on relevant references we found online.

After modifying our own IP address and receiving port, we ran sending and receiving simultaneously.
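For reference, a minimal receiver of the kind we tested might look like the sketch below. It assumes the oscP5 library; the port number 12000 and the address pattern "/volume" are illustrative, not our actual settings, and it needs the Processing IDE to run.

```java
import oscP5.*;
import netP5.*;

OscP5 osc;
float volume = 0;  // latest value sent from Max, expected in 0..1

void setup() {
  size(400, 400);
  // listen on the same port that Max's [udpsend] targets
  osc = new OscP5(this, 12000);
}

// called by oscP5 whenever an OSC message arrives
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/volume") && msg.checkTypetag("f")) {
    volume = msg.get(0).floatValue();
  }
}

void draw() {
  background(0);
  // visualise the incoming level for debugging
  fill(255, 0, 0);
  rect(0, height, width, -volume * height);
}
```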

However, in the end, the Processing side did not receive any data from Max; instead, Max received the '123' test value sent from Processing. So we gave up on this plan and continued to use environmental sound to affect the particles.

Performance Part2-Visual design

Regarding visual aspects, I believe that Part 2 holds the greatest significance within the entire performance. It serves as a bridge between the “daughter’s” struggle with self-awareness in Part 2 and the self-acceptance in Part 3. To portray the character of the “daughter,” we assigned a member of our group to play the role. We utilised particles to create portraits and employed an abstract technique to capture the essence of the “daughter’s” image. This was done because the “daughter” not only exists within the context of the film but also represents numerous children who experience oppression from their parents. The depiction of the particles used in the portrait can be observed in Figure 1

Fig.1 Processing visual video

Our group believed that Part 2 held significant importance in terms of visual elements, acting as a link between the “daughter’s” struggle with self-awareness in Part 2 and her subsequent reconciliation in Part 3. To depict the character of the “daughter,” we assigned a member of our group to play the role and utilised particles to construct an abstract portrait representing the “daughter’s” image. We opted for this technique as the “daughter’s” role in the film serves as a representation of numerous children who face oppression from their parents in real life. The particle depiction of the portrait can be seen in Figure 2.

The square shape in the background was intended to signify the invisible walls that a mother’s expectations create, binding her children. Initially, we planned to use a black-and-white colour scheme for the visual elements. However, following feedback from Jules after observing our performance on March 27th, 2023, we ultimately decided to alter the cube in the background to a bright red colour, creating a striking contrast between the black and white blocks.

Finally, we choreographed the movement of the actor portraying the “daughter,” who awakens in a world of her own inner self, surrounded by threads representing her mother’s expectations. She becomes panicked and attempts to break free, but instead is manipulated like a puppet, becoming entangled in the threads.

 

Meeting 23/3/2023

Kinect and TD test

Our group worked on our show in three main directions. Firstly, the second part of our show involves motion capture, so we continued the combined Kinect and TD test. The 1st-generation Kinect we rented earlier could not perform motion capture due to system problems, so this week we bought a 2nd-generation Kinect and initially completed the motion capture.

Sound creation

 

Film editing

We made a preliminary edit of the video shot last week; this part will be used in the first part of our performance. The interactive form is a live recording using live sound, with the changes in the picture realised through TD.

Meeting 19/3/2023

 

The first part of our performance-interaction experiment film

Interactive form – sound controls the screen to achieve distortion and other effects
Software – Touch Designer

Brief – We use the red line as a metaphor for the relationship between the daughter and the mother in the film; it runs through the entire first part of the experimental images.

Storyboard

 

 

19/3/2023

We filmed on location; here are the images we took. We completed the video content for the first part of the show, and next we needed to test the interaction between TD and the video and sound.

 

 

Kinect test

17/3/2023

The second part of our performance uses motion capture, so we tested the Kinect; we will combine the Kinect and TD to achieve motion capture during the actor's performance. However, when we tested on the Windows system, we found that the Kinect output could not be displayed, so we still need to test further. If the Kinect proves unusable, we can only use an ordinary camera with lighting in the scene to achieve the motion capture.

 

Meeting-4

- Suggestions from the feedback about adding visual presentation and projections, as well as restrictions on site use, were considered
- The performance form changed from 3 sections to 1 section, and our design focuses on the interaction of sound and movement

Visual content
- Technical level – Processing and Touch Designer – abstract particle content
- Narrative content – experimental video

Possible element use
- broken mirror
- white curtain
- wire
- bell

We planned to change the final performance mode and reduced the originally planned 3 sessions to 1. We started to think about and make the initial visual effects this week, experimenting first with Processing.

Experimental video reference

