
Process – Group 2 Submission 1

Process Group 2

List of group members:

Jin Qin (s2309515): Installation + Arduino coding + time control
Rudan Zheng (s2309043): Installation + Arduino coding + finance
Jaela Wang (s2417731): Video and editing + Installation + Equipment
Yuan Tian (s2453498): Music generation + Installation + Props

Initial Concept Generation

We confirmed our project keywords: Sound and AI. After a few days of brainstorming, we came up with some interesting preliminary ideas as follows:

  1. Sounds of the seasons: capturing the sounds of the seasons and synthesizing them into a piece of music with AI
  2. The world of color blindness: if I were color blind, could I experience the colorful world through sound?
  3. Musical puzzles: exploring whether someone who doesn’t know music can make a piece of music through a collage (similar to a material puzzle, then synthesized with AI music)
  4. Musical visualization device: embodied music
  5. Vibrations of music (amplified) – water / vibrations of lines / line drawings / other media
  6. Music and time: rhythm and the different frequencies of time, exploring whether music can change one’s perception of time
  7. A world of astigmatism: what would the world look like if your eyes were blurred? (blur an image with frosted glass or water and use AI to analyse it)

Among these ideas, we preferred the element of water (the fifth idea), so we went on to discuss its details and how to realize it.

Our initial scheme was:

A drop of water evaporates over 30 minutes. During this process, the vibration of sound changes the refraction of light, displaying a wonderful exhibition of light and shadow. The process can be divided into three stages, similar to a person’s lifetime, and the music will likewise be separated into three sections to accompany the changes. At the same time, a projector will cast various environments and animations to create a more immersive experience for the audience.

In terms of sound design, the music is selected to represent the trajectory of different periods in life. The vibrations of the speaker’s diaphragm make contact with a wooden board, allowing a drop of water to resonate with the sound; this is perceived visually through the amplitude of the water drop’s rippling vibrations. Over time, the water droplets slowly and imperceptibly evaporate, creating an association between perception and imagination, engaging the senses, and bringing about an interactive experience of time and space.

We collected some reference pictures and tutorials, and then created a mood board.

Dissection of “Process”

However, after we presented our concept to Jules on Thursday of week 3, Jules pointed out that our discussion was limited to practical aspects, without dissecting the concept of “Process” or including AI. Therefore, we analysed the concept of “Process” in two dimensions: the first is the dissection of the project’s content and connotation, and the second is the dissection of the design process.

Dissection of the “Process” in our project content

We analyzed the “Process” based on our initial concept of water projection, using an analogy to explain the project’s objects and connotations.

The water (the project object) is equivalent to me (the analogy object).

The dripping speed of the waterdrop represents the speed of my growth.

The size of the water in the container represents the extent of my development.

The light expresses society.

The beam of light striking the water means the influence of culture on us.

The vibration of sounds or music represents the influence of our family/others’ evaluation of us.

The light and shadow of water projected on the wall represent my image in the eyes of others.

The flow of water from less to more to none in this device shows the period of my life from birth to its end.

Dissection of our design process

AI artists are playing an increasingly significant role in the art and design field. Meanwhile, hidden dangers and controversies are ensuing, such as unclear copyright, the question of whether AI artists can substitute for human artists, and the artistry of AI-generated works. Therefore, we would like to discuss some questions:

  1. What role does AI play from the artist’s design process to the final production?
  2. Can AI replace artists in design?
  3. If AI produces most of the content during the design process, can the final work still touch the audience?

Under such questions, we designed a working flow to test the competition between AI artists and human artists through two controlled experiments.

First stage – Human artist: take a series of photos and create an animation based on them. AI artist: feed the series of images to the AI, which outputs continuous pictures for the animation.
Second stage – Human artist: create music based on the animation created in the first stage. AI artist: generate music by feeding the animation video to an AI such as CLIP Interrogator.
Third stage – Exhibit the two final composition works and let the audience choose their favourite.
Final Preference

After analysing and planning around the two different interpretations, we prefer the dissection of the project content. From our perspective, AI is a tool that helps human artists broaden their minds; the personal initiative of the human artist is the reason for the origin and existence of art. We therefore focus on our project content, with AI working as a tool to compensate for our shortcomings. In addition, we expect our work to reflect on social phenomena with social meaning rather than merely discuss “meaning”.

Besides, combining installation creation with coding is more challenging for us and will need long-term testing, improvement, deletion, and modification. For us, the process of challenging ourselves and creating is meaningful, so we will keep recording the project’s creation process.

The specific plan

Fig. 1. Text Flow of Project Process

Fig. 2. Picture Flow of Project Process

The installation includes a distance sensor. Once the audience approaches the sensor, the speaker, the light, and the water-dropping system start running. The speed of the water dropping changes according to the different stages. Ripples appear, and the speaker’s vibration influences the water’s morphology and dynamics. Ray suggested that the laser paper and water would project notable water changes onto the wall. When the container of the installation is full, the water is poured away. The whole process is divided into three stages, in which the music and the water-dropping speed continuously change.
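The three-stage control described above can be sketched in code. This is an illustrative Python sketch of the timing logic only (the actual control would run on the Arduino); the stage boundaries and drop intervals are placeholder values, not our final design.

```python
# Illustrative sketch of the three-stage timing logic; the real control
# would run on the Arduino. Stage boundaries and drop intervals below
# are placeholder values, not our final design.

STAGES = [
    # (stage name, stage end time in seconds, seconds between drops)
    ("birth",  600, 5.0),   # stage 1: slow dripping
    ("growth", 1200, 1.0),  # stage 2: fastest dripping
    ("ending", 1800, 8.0),  # stage 3: dripping slows again
]

def drop_interval(elapsed_s):
    """Return (stage name, seconds between drops) for an elapsed time."""
    for name, end, interval in STAGES:
        if elapsed_s < end:
            return name, interval
    return "finished", None  # the 30 minutes are over: stop dripping

print(drop_interval(30))    # ('birth', 5.0)
print(drop_interval(900))   # ('growth', 1.0)
print(drop_interval(2000))  # ('finished', None)
```

The same lookup, run once per loop iteration on the microcontroller, would set the valve timing for the water-dropping system.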

Fig. 3. Graph of Relationship of Time and Speed of water-dropping

The speaker is placed flat under a table, with a container on top of the speaker’s diaphragm; water drops are placed in the container so that the drops create ripples through the vibrations of the music. We will take photos of the shadows and lights of the water and record some sounds. These photos and sounds will be learned by AI tools such as Mubert (text-to-music), Disco Diffusion, and CLIP Interrogator, and the AI will then generate some pieces of music. Before each stage starts, we invite an audience member to pick one piece of music from the three we provide to influence the ripples.

Materials we need

Water, Mirror, Camera, Spotlights, Laser Paper, Transparent Container, Straws, Distance Sensor, Arduino Board, Water-dropping Installation, Room without Windows or with Curtains, Speaker, Speaker Diaphragm

Time Schedule

Week 1 – lecture

Week 2 – brainstorming, group discussion

Week 3 – modification and improvement of the project according to suggestions of Jules and Philly

Week 4 – decision on detailed design, material preparation and space lease

Week 5 & 6 – project testing, sound test, AI learning, installation design

Week 7 – testing in room

Week 8 & 9 & 10 – video editing, sound editing, Arduino coding and testing

Week 11 & 12 – complete testing

Week 13 – modification

Week 14 – final composition, video editing

Reference List

【TouchDesigner Tutorials | Noise Creative Spin Ball / Leap Motion Gesture Interaction】

https://www.bilibili.com/video/BV1EG4y1g72Z/?share_source=copy_web&vd_source=a8711564270b15c0c52431a25b613a14

【Audio-visual Interactive – Aurora_】

https://www.bilibili.com/video/BV1d84y1k723/?share_source=copy_web&vd_source=a8711564270b15c0c52431a25b613a14

【Processing–Blowing interaction】

https://www.bilibili.com/video/BV1PE411k7bf/?share_source=copy_web&vd_source=a8711564270b15c0c52431a25b613a14

【Plant sensory visualisation/A sketchy interactive installation】

https://www.bilibili.com/video/BV1LL4y1T7oG/?share_source=copy_web&vd_source=a8711564270b15c0c52431a25b613a14

【Interactive artwork AS IF】

https://www.bilibili.com/video/BV1hP4y1c7PN/?share_source=copy_web&vd_source=c0cd555c871b3d6c3923522ce3b0a1a8

【Simple Motion Visualisation – simple motion visualisation can be done using Kinect】

https://www.bilibili.com/video/BV1Pd4y1o71B/?share_source=copy_web&vd_source=a8711564270b15c0c52431a25b613a14

【Touchdesigner Sound and picture interactive lines】

https://www.bilibili.com/video/BV1BV4y137kL/?share_source=copy_web&vd_source=a8711564270b15c0c52431a25b613a14

【 “Water” ambient light – Arduino design】

https://www.bilibili.com/video/BV18X4y1u7XP/?share_source=copy_web&vd_source=a8711564270b15c0c52431a25b613a14

 

 

Group 2_Design Concept

After the first discussion, we decided to adopt my fifth option as the way forward for our project:

Vibrations of music (amplified) – water/vibrations of lines/line drawings/other media.

Initial Project Concept

I find the sight of water dripping down very beautiful, and, referring to the Buddhist concept of “one flower, one world; one leaf, one Bodhi”, the appearance and disappearance of a drop of water can also be the birth and death of a world, a life. After some discussion, we decided on our initial project concept.

A drop of water evaporates over 30 minutes. During this process, the vibration of sound changes the refraction of light, displaying a wonderful exhibition of light and shadow. The process can be divided into three stages, similar to a person’s lifetime, and the music will likewise be separated into three sections to accompany the changes. At the same time, a projector will cast various environments and animations to create a more immersive experience for the audience.

In terms of sound design, the music is selected to represent the trajectory of different periods in life. The vibrations of the speaker’s diaphragm make contact with a wooden board, allowing a drop of water to resonate with the sound; this is perceived visually through the amplitude of the water drop’s rippling vibrations. Over time, the water droplets slowly and imperceptibly evaporate, creating an association between perception and imagination, engaging the senses, and bringing about an interactive experience of time and space.

Mood Board

Our project visuals, for the time being, refer to some of the project effects I found earlier:

Criticism

However, after we presented our concept to Jules on Thursday of week 3, Jules pointed out that our discussion was limited to practical aspects, without dissecting the concept of “Process” or including AI. Therefore, we analysed the concept of “Process” in two dimensions: the first is the dissection of the project’s content and connotation, and the second is the dissection of the design process.

We analyzed the “Process” based on our initial concept of water projection, using an analogy to explain the project’s objects and connotations.

The water (the project object) is equivalent to me (the analogy object). The dripping speed of the waterdrop represents the speed of my growth. The size of the water in the container represents the extent of my development. The light expresses society. The beam of light striking the water represents the influence of culture on us. The vibration of sounds or music represents the influence of our family and others’ evaluations of us. The light and shadow of the water projected on the wall represent my image in the eyes of others. The flow of water from less to more to none in this device shows the period of my life from birth to its end.

The Flow Chart

Based on the design concept, I used Xmind to create a flowchart of the project:

(This image was drawn by Rudan Zheng)

I then mapped out our project process, based on a flow chart, to better facilitate understanding of our project process:

(This image was drawn by Rudan Zheng)

The installation includes a distance sensor. Once the audience approaches the sensor, the speaker, the light, and the water-dropping system start running. The speed of the water dropping changes according to the different stages. Ripples appear, and the speaker’s vibration influences the water’s morphology and dynamics. Ray suggested that the laser paper and water would project notable water changes onto the wall. When the container of the installation is full, the water is poured away. The process is divided into three stages, in which the music and the water-dropping speed continuously change.

The speaker is placed flat under a table, with a container on top of the speaker’s diaphragm; water drops are placed in the container so that the drops create ripples through the vibrations of the music. We will take photos of the shadows and lights of the water and record some sounds. These photos and sounds will be learned by AI tools such as Mubert (text-to-music), Disco Diffusion, and CLIP Interrogator, and the AI will then generate some pieces of music. Before each stage starts, we invite an audience member to pick one piece of music from our three parts to influence the ripples.

References

– light art – light installations – reflections and shadows: Light art installation, light art, light installation (2023) Pinterest. Available at: https://www.pinterest.jp/pin/731483164496784888/ (Accessed: February 10, 2023).

Ellen Barratt – light work: Light art installation, projection installation, light installation (2023) Pinterest. Available at: https://www.pinterest.jp/pin/731483164496686680/ (Accessed: February 10, 2023).

Momento wonderglass (2023) Pinterest. Available at: https://www.pinterest.jp/pin/731483164496784817/ (Accessed: February 10, 2023).

Mrxccc (2022) TouchDesigner tutorial, bilibili. Available at: https://www.bilibili.com/video/BV1EG4y1g72Z/?share_source=copy_web&vd_source=a8711564270b15c0c52431a25b613a14 (Accessed: February 10, 2023).

Pin On Water (2023) Pinterest. Available at: https://www.pinterest.jp/pin/731483164496784814/ (Accessed: February 10, 2023).

‘under the barbers’ shop’ by Ellen Barratt – 2017, London – Interactive Light Installation: Light Art Installation, light installation, artistic installation (2023) Pinterest. Available at: https://www.pinterest.jp/pin/731483164496686762/ (Accessed: February 10, 2023).

Nbdbbbutme (2022) TouchDesigner sound and picture interactive lines, bilibili. Available at: https://www.bilibili.com/video/BV1BV4y137kL/?share_source=copy_web&vd_source=a8711564270b15c0c52431a25b613a14 (Accessed: February 10, 2023).

Oldzsir (2022) Simple motion visualisation, bilibili. Available at: https://www.bilibili.com/video/BV1Pd4y1o71B/?share_source=copy_web&vd_source=a8711564270b15c0c52431a25b613a14 (Accessed: February 10, 2023).

Qttting_F (2020) “Water” ambient light – Arduino, bilibili. Available at: https://www.bilibili.com/video/BV18X4y1u7XP/?share_source=copy_web&vd_source=a8711564270b15c0c52431a25b613a14 (Accessed: February 10, 2023).

TEACommunity (2022) Audio and video interactive – Aurora (Bileam licensed TouchDesigner official Chinese tutorial), bilibili. Available at: https://www.bilibili.com/video/BV1d84y1k723/?share_source=copy_web&vd_source=a8711564270b15c0c52431a25b613a14 (Accessed: February 10, 2023).

幽灵电力 (2022) Interactive artwork AS IF, bilibili. Available at: https://www.bilibili.com/video/BV1hP4y1c7PN/?share_source=copy_web&vd_source=c0cd555c871b3d6c3923522ce3b0a1a8 (Accessed: February 10, 2023).

草学人士 (2022) Plant sensory visualisation / A grass interactive installation, bilibili. Available at: https://www.bilibili.com/video/BV1LL4y1T7oG/?share_source=copy_web&vd_source=a8711564270b15c0c52431a25b613a14 (Accessed: February 10, 2023).

那我呢没起够 (2019) Processing – blowing interaction, bilibili. Available at: https://www.bilibili.com/video/BV1PE411k7bf/?share_source=copy_web&vd_source=a8711564270b15c0c52431a25b613a14 (Accessed: February 10, 2023).

Group 1 Submission 1 – Project proposal

Project Concept Outline

Team members: Dominik Morc, Weichen Li, Yingxin Wu, Boya Chen, Jiaojiao Liang

Formulating the project idea:

Our objective is to present a collection of art design works that resemble video loops, where dynamic image/motion pictures are played with background music and reactive sound effects. The work aims to showcase a series of illustrations of bio morphosis and hybridities that blend together nature and digital entities through interactive media that can be built with a screen-based exhibition. In other words, this is a digital exhibition that presents visual, audio, and interactive techniques.

Our main idea is to demonstrate creative art that imagines and depicts the evolution of plants under the impact of a changing climate, and thus of the whole ecological environment, to show the audience our perspective on environmental issues and to emphasize the importance of the relationship between humans, living things, and nature in today’s context, where interdisciplinary practices of technology and creative art are popular in environmentalist discourse.

Ideas, inspirations and case studies:

Our pilot study involves the following aspects:

1. Exploration of arts, projects and practices with themes related to environment and human–nature relationships

Some scholars and artists have taken the standpoint that AI art generators can hold back our creativity because, rather than supporting our imagination, AI is replacing it. In some circumstances, human creative expression comes from a spiritual perspective, and the act of creation can arise from the flow state. This inspired us: our project could focus on nature and spirits from the living world.

https://thedruidsgarden.com/2022/10/16/ai-generated-arts-and-creative-expression/

In the journal AI Magazine, a paper indicated that a super-intelligent AI may one day destroy humans.

https://greekreporter.com/2022/09/16/artificial-intelligence-annihilate-humankind/

This gave us the idea to explore stories with narratives of environmental upheaval, and it made us think about the prospect that, one day, more intelligent beings will change the world we live in. The depiction of nature and ecosystems in Hayao Miyazaki’s films particularly appealed to us: his stories combine narratives of environmentalism, nature, and human well-being with environmental degradation and ecosystem evolution.

https://hashchand.wordpress.com/2021/06/29/nausicaa-exploring-environmental-themes/

In Nausicaä of the Valley of the Wind (1984), surviving humans must live in coexistence with the Toxic Jungle in a post-apocalyptic world. A wonderful natural environment is illustrated in this story: the mutated flora and fauna of the Toxic Jungle that have taken over the world are, at the same time, decomposing the human-caused contamination of the earth.

Living Digital Forest https://www.designboom.com/art/teamlab-flower-forest-pace-gallery-beijing-10-05-2017/

A Forest Where Gods Live https://www.teamlab.art/e/mifuneyamarakuen/

Impermanent Flowers Floating in an Eternal Sea https://www.teamlab.art/e/farolsantander/

Examples of exhibitions – by teamLab

Vertigo plants an experiential urban forest at Centre Point – the installation mainly uses lighting bars, while the speakers play natural sounds.

A project combining art and technology – ecological world building https://www.serpentinegalleries.org/art-and-ideas/ecological-world-building-from-science-fiction-to-virtual-reality/

2. Case studies of AI-assisted design, creative and production practices,

including concepts like Generative Adversarial Networks (GANs, a class of machine learning frameworks that enable models to generate new examples on the basis of original datasets);

Neural style transfer (NST, the technique of manipulating images, and videos and creating new art by adopting appearances, and blending styles. See: http://vrart.wauwel.nl/)

Procedural content generation (PCG, the use of AI algorithms to generate game content such as levels and environments in a dynamic and unpredictable way);

Reinforcement learning (RL, a type of machine learning that allows AI to learn through practice, such as playing complex games); etc. One example is the adventure survival game No Man’s Sky, in which the player explores and survives on procedurally generated planets with AI-created atmospheres, flora, and fauna. What inspired me in this game is the diversity of flora and fauna built into the ecology of a wondrous earth, all of whose configurations are simulated by AI algorithms.

3. Study previous studies on generative art and computational creativity

It has been argued that there are problems in generative art and AI-assisted creative practice, such as “the choice and specification of the underlying systems used to generate the artistic work (evaluating fitness according to aesthetic criteria)” and “how to assign fitness to individuals in an evolutionary population based on aesthetic or creative criteria.” McCormack (2008) mentioned ‘strange ontologies’, whereby the artist forgoes conventional ontological mappings between simulation and reality in artistic applications. In other words, in this mode of art-making we are no longer attempting to model reality, but rather to use human creativity to establish new relationships and interactions between components. He also argued that the problem of aesthetic fitness evaluation is performed by the users of an interactive evolutionary system, who select objects on the basis of their subjective aesthetic preferences.

McCormack, J. (2008) “Evolutionary design in art,” Design by Evolution, pp. 97–99. Available at: https://doi.org/10.1007/978-3-540-74111-4_6.

 

The following elements can characterise the aesthetics of generative art:

  • Mathematical models & computational process (employing mathematical models to generate visual patterns and forms and transform visual elements). Case: Mitjanit’s (2017) work of blending together arts and mathematics. “Creating artworks with basic geometry and fractals…using randomness, physics, autonomous systems, data or interaction…to create autonomous systems that make the essence of nature’s beauty emerge…”
  • Randomness (unexpected results and variability in the output): For example, an animated snowfall would generally only play out in one way. When developed by a generative process, however, it might take on a distinct shape each time it is run (Ferraro, 2021).
  • Collaboration between artist and algorithm
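The randomness point above can be shown with a minimal sketch: the same generative program yields a distinct composition on each unseeded run, while a fixed seed reproduces one exactly. The canvas size and flake count here are arbitrary illustration values, not from any project code.

```python
# Minimal sketch of randomness in a generative process: an unseeded run
# gives a new snowfall layout each time, while a fixed seed reproduces
# one exactly. Canvas size and flake count are arbitrary.
import random

def snowfall(n_flakes, seed=None):
    """Generate (x, y) positions for n_flakes on a 100x100 canvas."""
    rng = random.Random(seed)
    return [(rng.uniform(0, 100), rng.uniform(0, 100))
            for _ in range(n_flakes)]

run_a = snowfall(50)            # a fresh composition every call
run_b = snowfall(50)            # almost surely different from run_a
fixed = snowfall(50, seed=42)   # reproducible when the seed is fixed
```

Fixing the seed is useful for debugging or exhibiting one chosen variation; leaving it unseeded gives Ferraro’s “distinct shape each time it is run”.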

In this video, it was mentioned that in the relationship between people and AI in the fields of art and creative works, AI is more likely to emerge as a collaborator than a competitor (see – https://youtu.be/cgYpMYMhzXI).

The use of AI assistance in design fiction creation shows cases of generating texts with “prompts” (prompt programming). The authors claimed that no direct solution is suggested for making AI perfect despite the incoherency of AI-generated texts. Research results show that the AI-assisted tool has little impact on fiction quality; quality is mostly contributed by the users’ experience during the process of creation, especially the divergent part. “If AI-generated texts are coherent and flawless, human writers might be directly quoting rather than mobilizing their own reflectiveness and creativity to build further upon the inspiration from the AI-generated texts (Wu, Yu & An, 2022).”

Wu, Y., Yu, Y. and An, P. (2022) Dancing with the unexpected and beyond: The use of AI assistance in Design Fiction Creation, arXiv.org. Computer Science. Available at: https://arxiv.org/abs/2210.00829 (Accessed: February 11, 2023).

4. Search for and learn a range of open AI systems and tools that would be helpful during the development of our project

DALL·E 2 – OpenAI (image)

ChatGPT (searching, text-based prompts)

Midjourney (stylised picture-making)

Prezi (can be used to build a web/interactive interface)

SuperCollider, Pure Data, Max/MSP (programming systems and platforms for audio synthesis and algorithmic composition)

X Degrees of Separation (https://artsexperiments.withgoogle.com/xdegrees) uses machine learning algorithms and the Google Arts & Culture database to find visually related works between any two artefacts or paintings. What I find interesting about this tool is that if we have a preference for a certain visual style of artwork, or want to draw on a certain type of painting, it can help us find a work of art that falls somewhere between two chosen visual styles.

More can be seen at: https://experiments.withgoogle.com/collection/arts-culture

Amaravati: the British Museum’s collaboration with Google Creative Lab – an augmented-environment project in a museum exhibition. In the introduction video, users use their phones like a remote control to perform mouse-over actions on the interactive exhibition.

This is an example of displaying multimedia work and enabling interaction if we are going to create something with installations, screens and projectors.

AI art that uses machine intelligence and algorithms to create visualizations: How This Guy Uses AI to Create Art | Obsessed | WIRED

https://www.youtube.com/watch?v=I-EIVlHvHRM

Initial plan & development process design:

  1. Designing the concept, background & storyboard of plants → subjecting the plants to be designed → investigating the geographical and biological information of plants as well as cultural background
  2. Collecting materials → capturing photos, footage and audio → sorting original materials
  3. Formulating visual styling → image algorithms/AI image-blending tools/style transfer/operating by hand → lining out the concept images by hand → rendering images with AI → iteration…
  4. Designing the landscape & environment → geographical/geological conditions → climate conditions → ecological conditions
  5. Designing the UI & UX → Creating a flow chart or tree diagram in the interface to demonstrate the plant’s appearance at each stage. Enable the viewer to control the playback of the exhibition.
  6. Exhibition planning → projector & speaker

Prototype interactive touchscreen/desktop/tablet/phone application

https://prezi.com/view/qvtX0GwjkgblZsC7aMt6/ + video to showcase

Equipment needed:

Budget use:

 

Prototype:

Jiaojiao

First, I collected information about bryophytes, learned about the different forms of bryophytes in nature, and chose moss flowering as the initial form. After that, I used ChatGPT to ask the AI what kind of appearance bryophytes would evolve in different environments after the ecological environment was destroyed. According to the background provided by the AI, mosses would evolve to withstand the extreme conditions of a post-apocalyptic world, and mutated moss plants could have dark, eerie appearances, with leaves that glow like neon lights.

For example, in the context of nuclear radiation, plants exposed to radiation may develop genetic mutations that cause changes in their appearance and growth patterns: they may be stunted or develop differently shaped leaves or other structures. Overall, the evolution of nuclear-contaminated bryophytes is likely to be a complex and ongoing process, as bryophytes continue to adapt to changing environments.

In the context of global heating, for example, some bryophytes may adapt to extreme heat by developing a thick cuticle, which reduces water loss and helps prevent damage to plant tissues. However, more research is needed to fully understand the evolutionary adaptations of bryophytes to extreme heat.

In the sketch below, I chose a representative bryophyte and hand-drawn the development state of the bryophyte in an extreme environment.

Secondly, I used C4D to make experiments on the appearance changes of different stages of moss plants.

 

Boya:

Design iterations and attempts to use artificial intelligence

I tried to use artificial intelligence to analyse the possible effects of environmental degradation on plants. I chose a cactus as the design object and set some basic directions for environmental deterioration, such as lack of water, heat, chemical pollution, and radiation. I used the AI to analyse how these environmental factors could affect the cactus, and based on this I generated more design keywords.

I then tried to put the AI-generated descriptive keywords into different AI mapping software to test whether the resulting images would meet the design requirements.

I tried to use Midjourney, Disco Diffusion, DALL-E and other AI-generated images respectively.

  • Midjourney

Direct image generation by description

  • Disco Diffusion

Images generated by running code.

  • DALL-E

I also tried to use Processing and Runway to generate an animation of the cactus mutation process.

  • Processing

I tried to write an interactive cactus model in Processing that allows the cactus to undergo different mutations through the user’s clicks.

Click to change the cactus into different forms.

 

Audio:

Yingxin: “I used AI to create a series of audio clips, did some machine learning on them, and made some post-production changes to fit our style. But we felt that using only AI would not be able to fully transmit our creativity and our ideas. So we will reflect our creativity in the final presentation, from the sound material to the finished product. First, the music style must fit with the picture. Second, I want the interactivity of the music to be reflected in the audience’s choices influencing the direction of the music. For example, we could provide some options in the video, such as choosing to cut a tree or not. If the audience chooses to cut the tree, the form of the plant will be different, perhaps worse, and the style of the music may be more gloomy and low. Conversely, the music may be livelier and more cheerful.

This is just a style demo.”

We will record lots of sounds and add effects.

This is a recording log:

https://docs.google.com/spreadsheets/d/1wbFCR_z72PRr57pVdQH0de06Fj1HYXHEKiSr81UX4EM/edit?usp=share_link

Audio Reactive:

backgrounds:

  1. Interpolation or morph animation

Exploring the latent space.

Through the “interpolation” algorithm, the transition between two pictures is completed by the program and AI, which could replace manual keyframing.
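As a rough sketch of the idea, assume a StyleGAN-style generator G(z) that maps a latent vector to an image; the in-between frames then come from linearly interpolated latent vectors rather than hand-made keyframes. The latent dimension (512) matches StyleGAN’s default, but everything here is illustrative, not our actual code.

```python
# Sketch of latent-space interpolation, assuming a generator G(z)
# (e.g. StyleGAN) that maps a 512-dimensional latent vector z to an
# image. In-between frames come from interpolated latents instead of
# manually keyed frames. All specifics are illustrative.
import numpy as np

def interpolate_latents(z_a, z_b, n_frames):
    """Linearly interpolate n_frames latent vectors from z_a to z_b."""
    ts = np.linspace(0.0, 1.0, n_frames)
    return [(1.0 - t) * z_a + t * z_b for t in ts]

rng = np.random.default_rng(0)
z_a = rng.standard_normal(512)  # latent behind the first picture
z_b = rng.standard_normal(512)  # latent behind the second picture
frames = interpolate_latents(z_a, z_b, 30)
# Each z in `frames` would then be rendered as image = G(z).
```

In practice, StyleGAN interpolations often use spherical interpolation or walk in the intermediate W space for smoother transitions, but the linear version shows the principle.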

Maybe we can try:

  1. Off-the-shelf neural networks provided by NVIDIA: StyleGAN has had three generations in the past two or three years, and new versions are released faster than we can learn them.
    https://github.com/NVlabs/stylegan
    https://github.com/NVlabs/stylegan2-ada-pytorch
    https://github.com/NVlabs/stylegan3
  2. A large number of pre-trained models: https://github.com/justinpinkney/awesome-pretrained-stylegan
  3. The AI development environment
  • Runwayml
  • Google Colab
  • Related cloud services

 

Process:

Add audio-visual interaction in the generation process

  1. Basic ideas

The original version was not real-time.

  • Read the audio file first, and extract the frequency, amplitude and other data of the sound using algorithms such as the FFT.
  • Feed that data as parameters into the interpolation-animation process, using it to add, subtract, multiply or divide values such as the latent vector and the truncation.
  • Combine the generated animation (modulated by the audio) with the audio file.
  • Google Colab can be used.
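The analysis step above can be sketched in Python. This is a minimal, assumption-laden outline: a naive DFT stands in for a real FFT (in practice `numpy.fft` would be used on actual audio), and `modulate_truncation` is a hypothetical example of nudging one generation parameter by loudness.

```python
import cmath
import math

def dft_magnitudes(samples):
    # Naive discrete Fourier transform; returns the magnitude of
    # each frequency bin up to the Nyquist bin.
    n = len(samples)
    return [abs(sum(samples[k] * cmath.exp(-2j * cmath.pi * b * k / n)
                    for k in range(n)))
            for b in range(n // 2)]

def modulate_truncation(base_trunc, samples, scale=0.5):
    # Louder audio pushes the truncation value further from its base.
    mags = dft_magnitudes(samples)
    amplitude = max(mags) / len(samples)  # rough normalised loudness
    return base_trunc + scale * amplitude

# A pure tone at bin 4: the DFT should peak there.
tone = [math.sin(2 * math.pi * 4 * k / 64) for k in range(64)]
mags = dft_magnitudes(tone)
```

Per frame of animation, one such analysis window would drive the latent/truncation values before rendering.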

At the beginning, I used p5.js to interact with Runway (an AI machine-learning tool) to generate an algorithm that automatically fills frames based on the volume of the music, and I finished that code.

Then I kept trying to move away from Runway, since we created the plants ourselves, but I soon realised that p5.js has to rely on ML for that, and I have not found an alternative. I tried Max a little and believe it can work. I originally wanted to use Google Colab tools, but I don’t know Python. I want to make an interactive video that offers the audience a choice within the video.

p5.js code: https://editor.p5js.org/ShinWu/sketches/EhbdxwUup

Max Project: https://drive.google.com/file/d/1lnRkFSIrbSu4f4HVRWL02StEU7DSXbgO/view?usp=share_link

 

[Real-time] generation + audio and video interaction

This needs a better computer configuration (mainly the graphics card).

Using OSC, audio data is sent from other software to the AI module to modulate the generation of the picture in real time. The AI side uses Python’s OSC library. (We haven’t tried this yet!)
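To make the plan concrete, this is a sketch of what one OSC message carrying an audio feature looks like on the wire. In practice a library such as python-osc would build and send these; here the packet is packed by hand with the standard library purely to illustrate the format (the `/volume` address is an illustrative assumption).

```python
import struct

def osc_pad(data):
    # OSC strings are NUL-terminated and padded to a multiple of 4 bytes.
    return data + b"\x00" * (4 - len(data) % 4)

def osc_float_message(address, value):
    msg = osc_pad(address.encode("ascii"))   # address pattern, e.g. /volume
    msg += osc_pad(b",f")                    # type-tag string: one float32
    msg += struct.pack(">f", value)          # big-endian float argument
    return msg

packet = osc_float_message("/volume", 0.5)
# This packet could then be sent over UDP to the generator process,
# where Python's OSC library would decode it and update the parameters.
```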

 

 – https://prezi.com/view/qvtX0GwjkgblZsC7aMt6/

 

Group 2_Personal Ideas

At the end of the first session, all the members of Process discussed and decided to work in two groups. I decided to form a group with Jin Qin, Yuan Tian and Jaela Wang.

Before the first group discussion, I proposed that we each come up with some options and then pick the most interesting one from our options during the group discussion and use that as the basis to start making our project.

I had discussed with Philly in class about the water, light and ripple design and I thought it was interesting and cool and the idea was approved by Philly. So I tried to use water, light and ripples as elements and came up with the following options:

  1. Sounds of the seasons: capturing the sounds of the seasons and synthesising them into a piece of music using artificial intelligence
  2. A colour-blind world: If I am colour-blind, can I experience a colourful world through sound
  3. Musical Puzzles: exploring whether people who don’t know music can make music through collage (similar to a material puzzle, then with an AI music synthesiser)
  4. Music visualisation installation: figurative music
  5. Vibrations of music (amplified) – water/vibrations of lines/line drawings/other media
  6. Music and Time: Rhythm and the different frequencies of time, exploring whether music can change one’s perception of time
  7. Astigmatic world: what would the world look like if I blurred your eyes (using AI as an example, blurring images with frosted glass or water and analysing with AI)

References

– light art – light installations – reflections and shadows: Light art installation, light art, light installation (2023) Pinterest. Available at: https://www.pinterest.jp/pin/731483164496784888/ (Accessed: February 10, 2023).

Ellen Barratt – light work: Light art installation, projection installation, light installation (2023) Pinterest. Available at: https://www.pinterest.jp/pin/731483164496686680/ (Accessed: February 10, 2023).

Momento wonderglass (2023) Pinterest. Available at: https://www.pinterest.jp/pin/731483164496784817/ (Accessed: February 10, 2023).

Pin On Water (2023) Pinterest. Available at: https://www.pinterest.jp/pin/731483164496784814/ (Accessed: February 10, 2023).

‘under the barbers’ shop’ by Ellen Barratt – 2017, London – Interactive Light Installation: Light Art Installation, light installation, artistic installation (2023) Pinterest. Available at: https://www.pinterest.jp/pin/731483164496686762/ (Accessed: February 10, 2023).
