
Group 1 – Review, further discussions of the production process utilising AI models

Technical difficulties encountered during development & Review

Limited computing power became a barrier to rendering our concept images and drawing the large number of frames needed to animate between the mutated plants.
There are several ways to run the program remotely, such as Google Colab. I followed some tutorials to set up Stable Diffusion on a remote machine, but the instance's storage threw an error when installing models from my Google Drive. I eventually got Stable Diffusion running on a rented machine with a powerful graphics card and used it to upscale the frame sequence of one completed version of the video. Upscaling 1,800 frames from 512 px to 1,024 px and adding extra detail to each frame takes a great deal of computing power, because the process redraws every frame to make it more intricate.

We have been enlightened by these practices in many ways.

Using reinforcement learning (RL) for Procedural Content Generation is a very recent proposition which is just beginning to be explored. The generation task is transformed into a Markov decision process (MDP), where a model is trained to iteratively select the action that would maximize expected future content quality.
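As a rough illustration of that framing (a sketch only, not the specific systems from the cited survey), the snippet below treats a tile map as the MDP state, a tile placement as the action, and a made-up "content quality" score as the reward; a real RL agent would learn the policy instead of acting randomly.

# Minimal sketch of level generation framed as a Markov decision process (MDP).
# Illustrative only: states are partially built tile maps, actions place a tile,
# and the reward is a stand-in quality score.
import random

WIDTH, HEIGHT = 8, 8
TILES = ["empty", "wall", "plant"]

def quality(grid):
    # Hypothetical quality signal: reward a moderate density of plants.
    plants = sum(row.count("plant") for row in grid)
    return -abs(plants - WIDTH * HEIGHT // 4)

def step(grid, pos, tile):
    x, y = pos
    grid[y][x] = tile
    return grid, quality(grid)

grid = [["empty"] * WIDTH for _ in range(HEIGHT)]
for y in range(HEIGHT):
    for x in range(WIDTH):
        action = random.choice(TILES)   # a trained policy would choose this instead
        grid, reward = step(grid, (x, y), action)
print("final quality:", quality(grid))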

Most style transfer methods and generative models for image, music and sound [6] can be applied to generate game content… Liapis et al. (2013) generated game maps based on terrain sketches, and Serpa and Rodrigues (2019) generated art sprites from sketches drawn by humans.

The txt2img/img2img/Hybrid Video modes I used in producing the morphosis all apply Perlin noise settings. Perlin noise has applications to many naturally occurring phenomena.
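To show why Perlin-style noise suits natural-looking variation, here is a minimal 1D gradient-noise sketch (not Deforum's actual noise implementation): nearby inputs give smoothly varying outputs, which is useful for gently jittering a per-frame parameter.

# Minimal 1D Perlin-style gradient noise, for illustration only.
import random

random.seed(42)
GRADIENTS = [random.uniform(-1, 1) for _ in range(256)]

def fade(t):
    return t * t * t * (t * (t * 6 - 15) + 10)   # Perlin's smoothstep curve

def perlin1d(x):
    x0 = int(x) & 255
    x1 = (x0 + 1) & 255
    d0 = x - int(x)
    d1 = d0 - 1.0
    v0 = GRADIENTS[x0] * d0
    v1 = GRADIENTS[x1] * d1
    return v0 + fade(d0) * (v1 - v0)

# e.g. gently jitter a per-frame parameter such as zoom or noise strength
for frame in range(10):
    print(frame, round(1.0 + 0.05 * perlin1d(frame * 0.1), 4))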

Engaging with a range of AI tools and applications enabled us to gather inspiration and key references for developing our concept. They also helped us find visual elements to develop the design when we had little knowledge of the basic science of plants. Even knowing nothing about the nature of plants, we could quickly generate hundreds of images (and variants) of cacti, vines and underwater plants.
We could simply use our imagination to blend together variants of plants that do not exist, drawing on the basic biological common sense the AI provides. For example, by adopting the basic attributes of an aquatic plant, fusing the characteristics of a coral colony with those of a tropical water plant, and adding some characteristics caused by environmental change, a new plant is created.

By using functions from generators including Perlin noise, neural style transfer (NST) and the feature pyramid transformer (FPT), we can quickly appropriate elements from different images and combine them. For example, even though I am not a botanist, I can imagine a giant tree-like plant and transform the organisation of its leaves, trunk and roots:
Change the leafy parts into twisting veins and vines like mycorrhizae.
Recreate the woody texture of the trunk into parts of another plant, making this plant into a mixture of multiple creations.
Then, place it in a body of contaminated land.

After subjective selection and ordering, the plants' images were matched to a certain context, and we created a process of variation between the plants' different stages.
AI models were engaged in the processes of generating ideas, forming concepts and drawing design prototypes, with different degrees of input and output. I used a variety of tools in the design process, and the selection of materials was a time-consuming and active process that required human involvement and modulation. I also had to control many parameters while generating the material to achieve the desired effect.

Prospect

In the future, if a set of interactive media were to allow audiences to feed their ideas to the AI and generate animations in real time through a reactive interface, a complete interactive system would need to be built, and such a generative video program would require a great deal of computing power.

Many problems can be encountered in txt2img and img2img generation, where the outcomes vary dramatically and lack proper correlation; this suggests that more detailed adjustments to the image model are needed.
When an audience member stands in front of an AI model and tells it, "I want a mutant plant in a particular environment", the model may not understand such a vague description. Generating and continuously animating our conceptual mutant plants in real time would probably require not only a computer powerful enough to draw hundreds of frames in seconds, but also a model trained accurately on plant morphology and able to originate new forms by itself.

Among the many contributions in online forums and communities around Stable Diffusion, LoRA (low-rank adaptation) models are small fine-tuned add-ons, many of which have been extensively trained for generating characters and human figures and have gone through many iterations across different categories. In the future, there may be models trained on a much wider variety of subjects and objects, with important applications in the design and art fields.
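As a hypothetical sketch of how such a model would be used (the checkpoint ID and LoRA path below are placeholders, not models we trained), the Hugging Face diffusers library can load a LoRA on top of a base Stable Diffusion checkpoint:

# Hypothetical sketch: applying a LoRA on top of a Stable Diffusion checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/plant-morphology-lora")   # hypothetical LoRA file
image = pipe("a mutated cactus, concept art", num_inference_steps=30).images[0]
image.save("lora_sample.png")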

 

References:
Adrian's soapbox (no date) Understanding Perlin Noise. Available at: https://adrianb.io/2014/08/09/perlinnoise.html (Accessed: April 27, 2023).
Liu, J. et al. (2020) "Deep Learning for Procedural Content Generation," Neural Computing and Applications, 33(1), pp. 19–37. Available at: https://doi.org/10.1007/s00521-020-05383-8.
Yan, X. (2023) "Multidimensional graphic art design method based on visual analysis technology in intelligent environment," Journal of Electronic Imaging, 32(06). Available at: https://doi.org/10.1117/1.jei.32.6.062507.

Group 1 – video production for plants’ morphing

While researching how to create morphological variation between our plants' concept images, I found the Deforum plugin, which might achieve the desired effect of bringing the mutated plants to life. It enables us to morph between text prompts and images using Stable Diffusion.

Deforum Plugin in the process of video generation

There are many parameters in Deforum.
I tried different approaches while making our plant-mutation footage. At the start, I only used prompts. As a result, simply using prompt words led to the video not drawing the plants correctly at points where they should have appeared. In both demos, my video has a camera-zoom setting, yet the plants sometimes disappear from the picture. These attempts demonstrated the limitations of using prompts alone to generate animation and showed how uncontrollable the outcome of Stable Diffusion's video generation can be.

More details in the process

Deforum’s settings

In Stable Diffusion, there is a CFG (classifier-free guidance) scale parameter which controls how closely the image generation follows the text prompt (sometimes it creates unwanted elements from the prompts). The same setting applies in Deforum's video-generation workflow. In addition, there are Init, Guided Images and Hybrid Video settings I needed to work out in order to morph between images and prompts.
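As a rough illustration of the CFG scale outside Deforum (a sketch using the diffusers library; the model ID and values are placeholders, not our exact settings), a higher guidance_scale makes the result follow the prompt more literally, while a lower one gives looser results:

# Sketch: how the CFG scale (guidance_scale) affects prompt adherence.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for cfg in (3.0, 7.5, 14.0):   # low = looser, high = very literal (and sometimes artefacts)
    image = pipe(
        "a mutated underwater plant, concept art",
        negative_prompt="blurry, text, watermark",
        guidance_scale=cfg,
        num_inference_steps=30,
    ).images[0]
    image.save(f"cfg_{cfg}.png")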
Under the 'Motion' tab, there are settings for camera movements. I applied these settings in many of my attempts to generate videos, but in the end I did not use them in our video.

an early version of the video generated from text prompts

Example of incorrect settings resulting in tiled images

In this sample, the output is slightly better when using both prompts and guided images as keyframes: the process managed to draw the plant right in the centre of the picture. However, there are still misrepresentations in the generated video. For example, the morphosis between the plant's multiple stages of transformation could be better represented; instead it switches from one picture to the next like a slideshow. Moreover, when I typed prompts such as "razor-sharp leaves" and "glowing branches", the image was drawn incorrectly; for example, artificial razor blades appear on the plant's leaves.

This parameter is specified in the form 0:(x). If the value x equals `1`, the camera stays still. The value x in the corresponding function affects the speed of the camera movement: when it is greater than 1 the camera zooms in, and when it is less than 1 the camera zooms out.

The Zoom setting here is `0: (1+2*sin(2*3.14*t/60))`. The effect in the output video is that the camera zooms in from frame 0 to frame 30 and zooms out from frame 30 to frame 60 (the camera movement speed becomes 0 at frames 30 and 60), and this camera movement repeats every 60 frames. The sample video below uses the same kind of function, but its movement is: zoom in, stop zooming, zoom in again and stop again.
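A quick sketch evaluating the same expression per frame (using math.pi rather than the 3.14 approximation) shows when the camera zooms in, zooms out or stays still:

# Evaluate the Deforum-style zoom schedule per frame (t = frame index).
import math

def zoom(t):
    return 1 + 2 * math.sin(2 * math.pi * t / 60)

for t in range(0, 61, 15):
    z = zoom(t)
    state = "in" if z > 1 else ("out" if z < 1 else "still")
    print(f"frame {t:2d}: zoom {z:+.2f} ({state})")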

Changing from one subject to another in the video

To create the intended effect of morphing between two things, I employed another type of function. Below is the prompt I noted in its setting.
{
"0": "(cat:`where(cos(6.28*t/60)>0, 1.8*cos(6.28*t/60), 0.001)`),
(dog:`where(cos(6.28*t/60)<0, -1.8*cos(6.28*t/60), 0.001)`) --neg
(cat:`where(cos(6.28*t/60)<0, 1.8*cos(6.28*t/60), 0.001)`),
(dog:`where(cos(6.28*t/60)>0, -1.8*cos(6.28*t/60), 0.001)`)"
}
The prompt function is intended to show a cat at the beginning of the video. As the video plays, the cat's weight falls and the dog's weight rises, so the cat morphs into a dog halfway through the cosine period and back into a cat at the end of the period; with the values shown, that cycle repeats every 60 frames.
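Re-computing the schedule in plain Python (where(cond, a, b) behaves like "a if cond else b" in Deforum's expressions) makes the crossover between the two prompt weights easy to check:

# Reproduce the cat/dog weight schedule to see where each prompt dominates.
import math

def where(cond, a, b):
    return a if cond else b

def cat_weight(t):
    c = math.cos(6.28 * t / 60)
    return where(c > 0, 1.8 * c, 0.001)

def dog_weight(t):
    c = math.cos(6.28 * t / 60)
    return where(c < 0, -1.8 * c, 0.001)

for t in (0, 15, 30, 45, 60):
    print(f"frame {t:2d}: cat {cat_weight(t):.3f}  dog {dog_weight(t):.3f}")
# cat dominates around frames 0 and 60, dog around frame 30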

example of prompt and guided images input in later video generation

The inserted keyframe images and prompt inputs should be spaced evenly, i.e. each keyframe equally far from its previous/next keyframe, e.g.:
{
"0": "prompt A, prompt B, prompt C",
"30": "prompt D, prompt E, prompt F",
"60": "prompt G, prompt H, prompt I"
}

References:

https://github.com/nateraw/stable-diffusion-videos

https://github.com/deforum-art/deforum-for-automatic1111-webui

Group 1 – producing samples & concept images

Generate samples using Midjourney AI

At the beginning, I did not have a good image model for drawing plants. Midjourney runs its AI model through a tool that is very friendly to beginners: I generate images just by inputting a set of prompts, including the phases of certain plants, certain ecological settings, certain backgrounds, etc. I get a set of 4 images as a sample and can generate another set of variants if the images are not satisfactory.

I used this tool to get some visualisations and inspiration for our initial concept of the plants, relying on Midjourney's powerful model trained on enormous amounts of data. Most importantly, a well-developed model is essential to the outcome of the generation, because the model's capability to precisely understand what has been input determines how well the generated images match the intention.

I used Midjourney at the beginning because it provides a relatively consistent style and tone across sets of results generated from the same sets of prompts. Midjourney gave me a set of nice concept images in the early stages. However, due to the limited licence, we cannot directly use Midjourney's images in our finished presentation.

Draw concept images using Stable Diffusion Web UI

Later, I set up Stable Diffusion and explored how to run it locally to create concept images.

On the other hand, to gain visual references for concept-art illustrations and designs, I appropriated elements for generating the appearance of plants from a variety of image websites such as Pinterest.

In conjunction with some sources on plant studies, plant development and ecological evolution, I drew up a series of prompts as additional input beyond the ChatGPT phrases, to achieve the ideal output from the text-to-image process. Since many ideas of the plants' 'bio morphosis' are non-specific and sometimes fantastical within an environmentalist narrative, the process is largely driven by imagination; even human concept artists would have a hard time representing these plants and their environments, because the subjects we are creating do not exist in the real world. I therefore modified every source prompt by specifying settings and contextual information, to make the prompt easier for the image AI tools to absorb.

At the very beginning, the sample images generated by the default SD model were rather unsatisfactory, so I tried to adjust the prompts to improve the images' quality. Simply inputting a few prompts into txt2img to create a plant does not work.

Take "mutated plants in an underwater environment full of garbage" as an example. In the beginning, I got photo-like pictures in which the plants and elements were chaotically thrown together rather than a conceptual design drawing, and the shapes of the plants were not intricate enough.

The composition of a plant itself is rather complex: a plant has stalks, leaves, a root system, buds, etc. There is no quick way to generate all of these parts well at the same time. Therefore, I used long strings of prompts to generate a single plant in every picture. The number of concept illustrations generated was huge, and only a few quality materials could be selected and taken into the following process.

After exploring, I found that if I want the generated picture to look more like a conceptual design, I need to add more specific styling descriptions and mention 'by a certain illustrator/artist' at some point. In the prompt column, prompt phrases placed earlier carry more weight in the generation than phrases placed later, so placing a prompt before the others lets it make a greater impact on the result. This also applies to negative prompts.
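As an illustrative example only (not our exact prompts), a prompt might be ordered with style terms first, emphasised details next using the web UI's (term:weight) attention syntax, and context last:

# Illustrative prompt layout: earlier terms weigh more; (term:1.3) raises emphasis.
prompt = (
    "concept art of a mutated desert cactus, by a sci-fi illustrator, "   # style first
    "(intricate twisting vines:1.3), glowing buds, "                      # emphasised details
    "wasteland background, muted palette"                                 # context last
)
negative_prompt = "(photorealistic:1.2), text, watermark, blurry"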

I put one of Jiaojiao’s sketches into the Stable Diffusion’s img2img to convert it to new sketches. Different prompts have been input in different steps during the process.

 

 

 

When creating the cactus illustrations, I used Photoshop to remove the background from multiple images, took one or several pieces from different plants and patchworked them into a new painting. The plant samples stitched together from different parts and tissues were then sent to img2img and redrawn to generate more variants.

In addition, I generated a series of images depicting wasteland and post-apocalyptic landscapes to be used as backgrounds for the mutating cactus. I then removed the background from the cactus concept images, placed the cactus in the middle of each scene, and redrew them. In the last round of redrawing, I turned down the CFG scale and refined the prompt so that the basic composition and tone of the image would stay consistent.
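A hedged sketch of this final redraw pass with diffusers' img2img pipeline (file names, model ID and values are placeholders, not our exact settings): a low strength and a turned-down guidance_scale keep the pasted-together composition while still adding detail.

# Sketch: img2img redraw that stays close to a composited image.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("cactus_on_wasteland.png").convert("RGB")   # hypothetical composite
result = pipe(
    prompt="mutated cactus in a post-apocalyptic wasteland, concept art",
    image=init,
    strength=0.4,        # low strength -> stays close to the composite
    guidance_scale=5.0,  # turned-down CFG, as described above
).images[0]
result.save("cactus_redraw.png")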

sample of keyframes for video production

 

Used tools:

https://docs.midjourney.com/docs/midjourney-discord

https://github.com/deforum-art/deforum-for-automatic1111-webui

https://huggingface.co/lambdalabs/image-mixer

https://huggingface.co/spaces/fffiloni/CLIP-Interrogator-2

 

Group 1 – prompt from design ideas and generative content

Design ideas from generative content and environmentally relevant propositions:

There are science videos and documentaries on how plants mutate in different environments, from which we can see that a plant's traits can be inherited from its family and mutate in various ways in different ecological environments. Plants and the environments in which they grow and reproduce also affect each other.
Biology in Focus Ch 26 The Colonization of Land by Plants


How did plants Evolve?

For example, some science videos mention a flower that is pink when grown in acidic soil; when grown in neutral or alkaline soil, its petals are purple. Other materials show daisies that, under the influence of nuclear wastewater, grew misshapen and split open.

mutated daisies near Fukushima nuclear power plant

I utilised ChatGPT to gain some inspiration for our project's context by asking it about "possible characteristics of mutant plants", "how plants would evolve with the impact of environmental changes", "ideas of plants' morphosis and changes in a post-apocalyptic world", etc. I then collected ideas and inspiration from the descriptions in its responses.

Examples from ChatGPT’s answers:

Context/imaginary landscapes I used as prompts → ideas provided by ChatGPT:

Mutant plants in an underwater habitat:
  • 'Vines grew to massive lengths'
  • 'leaves were covered in tiny, hair-like structures'
  • 'Leaves turned to face the sun like solar panels'
  • 'Grew wild and untamed, roots delving deep into the rubble'

On wasteland with extreme climate:
  • 'Leaves grew thicker to retain more water'
  • 'leaves become broader, thicker, and more robust, with a strange iridescence'
  • 'rough, scaly, or slimy texture'
  • 'spines'

On lands under the contamination of nuclear radiation:
  • 'stems were able to twist and turn in the direction of the light'
  • 'Grew tall and twisted, leaves shimmering with an otherworldly energy'
  • 'grew with giant, fleshy bulbs'

 

Questions and answers:

Q: Imagine how nuclear contamination affects aquatic plant habitats.
A: Possible negative effects nuclear contamination may have on aquatic plant habitats: radiation exposure; soil and sediment contamination; disruption of food chains; toxic chemical exposure; changes in water chemistry; reduction in biodiversity; changes in plant community structure…

Q: Imagine the appearance/physical characteristics of underwater plants in a contaminated environment.
A: Possible characteristics include: dark or unusual colouring (novel pigments); abnormal growth patterns (altered morphology); unusual size; abnormal texture; hybridization; enhanced photosynthesis; enhanced reproductive strategies…

In addition, inspired by these sources, including the ideas drawn from ChatGPT and plant studies, I listed some phrases to describe the mutant plants. Together with the descriptions of environmental change, used both for pairing background images and as input variables, I categorised the conditions and environmental variations into the following areas (a small sketch of how such categories can feed prompts follows the list). Adjustments were made based on the concepts I mentioned in the plant-evolving flowchart.

Negative environment variables:
1. Waste, e.g. emissions of rubbish and industrial garbage – the plant grows thinner, synapse-like structures arise, and mycorrhizal-like proliferations grow out
2. Contamination – darkening of the ends of the plant's buds and leaves, causing the structure to deform
3. Nuclear radiation – deformity, decay of the peduncles and stems of the plant, mutation

Positive environment variables:
Plants grow abundantly; they grow faster, taller and stronger, and their leaves turn green and thrive…

A constant value:
During the plant's own animation, it grows steadily bigger and taller.
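As a small illustrative sketch (the phrasing below is made up for illustration, not our production prompts), these categories can be turned into prompt fragments appended to a base plant description:

# Illustrative mapping from categorised environment variables to prompt fragments.
ENVIRONMENT_PROMPTS = {
    "waste": "growing thinner, synapse-like offshoots, mycorrhizal proliferation, industrial garbage",
    "contamination": "darkened buds and leaf tips, deformed structure",
    "nuclear radiation": "decaying peduncles and stems, twisted mutation",
    "positive": "abundant growth, taller and stronger, thriving green leaves",
}

def build_prompt(base="a mutated aquatic plant, concept art", condition="waste"):
    return f"{base}, {ENVIRONMENT_PROMPTS[condition]}"

print(build_prompt(condition="nuclear radiation"))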

Group 1 Submission 1 – Project proposal

Project Concept Outline

Team members: Dominik Morc, Weichen Li, Yingxin Wu, Boya Chen, Jiaojiao Liang

Formulating the project idea:

Our objective is to present a collection of art and design works resembling video loops, in which dynamic images and motion pictures play with background music and reactive sound effects. The work aims to showcase a series of illustrations of bio-morphosis and hybridity that blend nature and digital entities, presented through interactive media built as a screen-based exhibition. In other words, this is a digital exhibition combining visual, audio and interactive techniques.

Our main idea is to create art that imagines and depicts the evolution of plants under the impact of a changing climate and, with it, the whole ecological environment. We want to show the audience our perspective on environmental issues and emphasise the importance of the relationship between humans, living things and nature, within today's context of interdisciplinary practice between technology and creative art that is prominent in environmentalist discourse.

Ideas, inspirations and case studies:

Our pilot study involves the following aspects:

1. Exploration of arts, projects and practices with themes related to environment and human–nature relationships

Some scholars and artists have taken the standpoint that we need to be aware that AI art generators can hold back our creativity, because rather than supporting our imagination, AI is replacing it. In some circumstances, human creative expression comes from a spiritual perspective, and the act of creation can arise from a flow state. This inspired us to focus our project on nature and the spirits of the living world.

https://thedruidsgarden.com/2022/10/16/ai-generated-arts-and-creative-expression/

In the journal AI Magazine, a paper indicated that a super-intelligent AI may one day destroy humans.

https://greekreporter.com/2022/09/16/artificial-intelligence-annihilate-humankind/

This gives us an idea to explore stories with a narrative of environmental upheaval, and makes us think about the prospect that one day more intelligent beings will change the world we live in. The depiction of nature and ecosystems in Hayao Miyazaki's films particularly appealed to us. His stories combine narratives of environmentalism, nature and human well-being with environmental degradation and ecosystem evolution.

https://hashchand.wordpress.com/2021/06/29/nausicaa-exploring-environmental-themes/

In Nausicaä of the Valley of the Wind (1984), surviving humans must coexist with the Toxic Jungle in a post-apocalyptic world. A wonderful natural environment is illustrated in this story: the mutated flora and fauna of the Toxic Jungle that live in and have taken over the world are, at the same time, decomposing the human-caused contamination of the earth.

Living Digital Forest https://www.designboom.com/art/teamlab-flower-forest-pace-gallery-beijing-10-05-2017/

A Forest Where Gods Live https://www.teamlab.art/e/mifuneyamarakuen/

Impermanent Flowers Floating in an Eternal Sea https://www.teamlab.art/e/farolsantander/

Examples of exhibitions – by teamLab

Vertigo plants an experiential urban forest at Centre Point – the installation mainly consists of lighting bars, with speakers playing natural sounds.

A project combining art and technology – ecological world building https://www.serpentinegalleries.org/art-and-ideas/ecological-world-building-from-science-fiction-to-virtual-reality/

2. Case studies of AI-assisted design, creative and production practices,

including concepts like Generative Adversarial Networks (GANs, a class of machine learning frameworks that enable models to generate new examples on the basis of original datasets);

Neural style transfer (NST, the technique of manipulating images and videos and creating new art by adopting appearances and blending styles. See: http://vrart.wauwel.nl/)

Procedural content generation (PCG, the use of AI algorithms to generate game content such as levels and environments in a dynamic and unpredictable way);

Reinforcement learning (RL, a type of machine learning that allows an AI to learn through practice, such as by playing complex games), etc. One example is the adventure survival game No Man's Sky, in which the player explores and survives on procedurally generated planets with generated atmospheres, flora and fauna. What inspired me in this game is the diversity of flora and fauna built into the ecology of each wondrous planet, all of whose configurations are produced by algorithms.

3. Study previous studies on generative art and computational creativity

It has been argued that there are problems in generative art and AI-assisted creative practice, such as "the choice and specification of the underlying systems used to generate the artistic work (evaluating fitness according to aesthetic criteria)" and "how to assign fitness to individuals in an evolutionary population based on aesthetic or creative criteria." McCormack (2008) mentioned 'strange ontologies', whereby the artist forgoes conventional ontological mappings between simulation and reality in artistic applications. In other words, we are no longer attempting to model reality in this mode of art-making, but rather to use human creativity to establish new relationships and interactions between components. He also argued that the problem of aesthetic fitness evaluation is performed by users in an interactive evolutionary system, who select objects on the basis of their subjective aesthetic preferences.

McCormack, J. (2008) “Evolutionary design in art,” Design by Evolution, pp. 97–99. Available at: https://doi.org/10.1007/978-3-540-74111-4_6.

 

The following elements can characterise the aesthetics of generative art:

  • Mathematical models & computational process (employing mathematical models to generate visual patterns and forms and to transform visual elements). Case: Mitjanit's (2017) work blending together art and mathematics: "Creating artworks with basic geometry and fractals…using randomness, physics, autonomous systems, data or interaction…to create autonomous systems that make the essence of nature's beauty emerge…"
  • Randomness (unexpected results and variability in the output): For example, an animated snowfall would generally only play out in one way. When developed by a generative process, however, it might take on a distinct shape each time it is run (Ferraro, 2021).
  • Collaboration between artist and algorithm

In this video, it was mentioned that in the relationship between people and AI in the fields of art and creative works, AI is more likely to emerge as a collaborator than a competitor (see – https://youtu.be/cgYpMYMhzXI).

The use of AI assistance in design-fiction creation shows cases of generating texts with "prompts" (prompt programming). The authors argue that there is no direct solution for making the AI perfect, despite the incoherency of AI-generated texts. Their results show that the AI-assisted tool has little impact on fiction quality; quality is mostly contributed by the users' experience in the creative process, especially the divergent part. "If AI-generated texts are coherent and flawless, human writers might be directly quoting rather than mobilizing their own reflectiveness and creativity to build further upon the inspiration from the AI-generated texts (Wu, Yu & An, 2022)."

Wu, Y., Yu, Y. and An, P. (2022) Dancing with the unexpected and beyond: The use of AI assistance in Design Fiction Creation, arXiv.org. Computer Science. Available at: https://arxiv.org/abs/2210.00829 (Accessed: February 11, 2023).

4. Search for and learn a range of open AI systems and tools that would be helpful during the development of our project

DALL·E 2 – OpenAI (image)

ChatGPT (searching, text-based prompts)

Midjourney (stylised picture-making)

Prezi (can be used to build a web/interactive interface)

SuperCollider, Pure Data, Max/MSP (programming systems and platforms for audio synthesis and algorithmic composition)

X Degrees of Separation (https://artsexperiments.withgoogle.com/xdegrees) uses machine-learning algorithms and the Google Arts & Culture database to find visually related works between any two artefacts or paintings. What I find interesting about this tool is that if we have a particular preference for a certain visual style of artwork, or want to draw on a certain type of painting, it can help us find works of art that fall somewhere between two chosen visual styles.

More can be seen at: https://experiments.withgoogle.com/collection/arts-culture

Amaravati: the British Museum's collaboration with Google's Creative Lab – an augmented-environment project in a museum exhibition. In the introduction video, users use their phones like a remote control to trigger mouse-over actions in the interactive exhibition.

This is an example of displaying multimedia work and enabling interaction if we are going to create something with installations, screens and projectors.

AI art that uses machine intelligence and algorithms to create visualizations: How This Guy Uses AI to Create Art | Obsessed | WIRED

https://www.youtube.com/watch?v=I-EIVlHvHRM

Initial plan & development process design:

  1. Designing the concept, background & storyboard of plants → subjecting the plants to be designed → investigating the geographical and biological information of plants as well as cultural background
  2. Collecting materials → capturing photos, footage and audio → sorting original materials
  3. Formulating visual styling → image algorithms/AI image blending tools/style transfer/operate by hand → lining out the concept images by hand → rendering images with AI → iteration…
  4. Designing the landscape & environment → geographical/geological condition → climate condition → ecological condition
  5. Designing the UI & UX → Creating a flow chart or tree diagram in the interface to demonstrate the plant’s appearance at each stage. Enable the viewer to control the playback of the exhibition.
  6. Exhibition planning → projector & speaker

Prototype interactive touchscreen/desktop/tablet/phone application

https://prezi.com/view/qvtX0GwjkgblZsC7aMt6/ + video to showcase

Equipment needed:

Budget use:

 

Prototype:

Jiaojiao

First, I collected information about bryophytes, learned about their different forms in nature, and chose a flowering moss as the initial form. After that, I used ChatGPT to ask the AI what appearance bryophytes might evolve in different environments after the ecological environment was destroyed. According to the background provided by the AI, mosses evolved to withstand the extreme conditions of a post-apocalyptic world, and mutated moss plants can have dark, eerie appearances, with leaves that glow like neon lights.

For example, in the context of nuclear radiation, plants exposed to radiation may develop genetic mutations that cause changes in their appearance: their growth patterns change, and they may be stunted or develop differently shaped leaves or other structures. Overall, the evolution of nuclear-contaminated bryophytes is likely to be a complex and ongoing process, as bryophytes continue to adapt to changing environments.

In the context of global heat, for example, some bryophytes may have adapted to extreme heat by developing a thick cuticle, which reduces water loss and helps prevent damage to plant tissues. However, more research is needed to fully understand the evolutionary adaptations of bryophytes to extreme heat.

In the sketch below, I chose a representative bryophyte and hand-drew its developmental states in an extreme environment.

Secondly, I used C4D to experiment with the appearance changes of moss plants at different stages.

 

Boya:

Design iterations and attempts to use artificial intelligence

I tried to use artificial intelligence to analyse the possible effects of environmental degradation on plants. I chose a cactus as the design object and set some basic directions for environmental deterioration such as lack of water, heat, chemical pollution, radiation etc. I used the AI to try and analyse how these environmental factors could affect the cactus and based on this I generated more design keywords.

I then tried to put the AI-generated descriptive keywords into different AI image-generation tools to test whether the resulting images would meet the design requirements.

I tried to use Midjourney, Disco Diffusion, DALL-E and other AI-generated images respectively.

  • Midjourney

Direct image generation by description

  • Disco Diffusion

Images generated by running code.

  • DALL-E

I also tried to use Processing and Runway to generate an animation of the cactus mutation process.

  • Processing

I have tried to write an interactive cactus model in Processing that allows the cactus to undergo different mutations through the user's clicks.

Click to change the cactus into different forms.

 

Audio:

Yingxin: "I used AI to create a series of audio clips, did some machine learning on them and made some post-production changes to fit our style. But we felt that using only AI would not fully transmit our creativity and our ideas, so we will reflect our creativity in the final presentation, from the sound material to the finished product. The music style must fit with the picture. Secondly, I want the interactivity of the music to be reflected in how the audience's choices influence the direction of the music. For example, if we provide some options in the video – choose to cut the tree or not to cut the tree – and the audience chooses to cut the tree, the form of the plant will be different; it may be bad, and then the style of the music may be more gloomy or low. Conversely, the music may be livelier and more cheerful.

This is just a style demo.”

We will record lots of sounds and add effects.

This is a recording log:

https://docs.google.com/spreadsheets/d/1wbFCR_z72PRr57pVdQH0de06Fj1HYXHEKiSr81UX4EM/edit?usp=share_link

Audio Reactive:

backgrounds:

  1. Interpolation or morph animation

Explore in the Latent space.

Through an "interpolation" algorithm, the transition between two pictures can be completed by the program and the AI, which could replace manual keyframing.
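A minimal numpy sketch of the idea, assuming StyleGAN-style latent vectors: spherical interpolation (slerp) between a start and an end latent produces the in-between latents that the generator would render as the transition frames.

# Sketch: spherical interpolation between two latent vectors.
import numpy as np

def slerp(z0, z1, t):
    z0n, z1n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * z0 + t * z1
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z_start, z_end = rng.standard_normal(512), rng.standard_normal(512)  # e.g. StyleGAN z vectors
frames = [slerp(z_start, z_end, t) for t in np.linspace(0, 1, 60)]   # 60 in-between latents
# each latent would then be passed to the generator to render one video frame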

maybe we can try:

  1. NVIDIA provides off-the-shelf neural networks: StyleGAN has gone through three generations in the past two or three years, with new releases arriving faster than we can learn them.
    https://github.com/NVlabs/stylegan
    https://github.com/NVlabs/stylegan2-ada-pytorch
    https://github.com/NVlabs/stylegan3
  2. There are a large number of pre-trained models: https://github.com/justinpinkney/awesome-pretrained-stylegan
  3. The AI development environment
  • Runwayml
  • Google Colab
  • Related cloud services

 

Process:

Add audio-visual interaction in the generation process

  1. Basic ideas

The original version was not real-time.

  • Read the audio file first, and extract the frequency, amplitude and other data from the sound using algorithms such as the FFT.
  • Add the above data as parameters to the interpolation-animation process, using them to add to, subtract from, multiply or divide values such as the latent vector and truncation (a sketch follows this list).
  • Synthesize the generated animation (modulated by the audio) with the audio file.
  • Google Colab can be used.
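A sketch of that offline pipeline under simple assumptions (a mono WAV file, per-video-frame FFT amplitude): louder frames advance the latent interpolation faster, quieter frames nearly pause it. The file name and scaling values are placeholders.

# Sketch: extract per-frame FFT amplitude and use it to drive the morph speed.
import numpy as np
from scipy.io import wavfile

FPS = 30
rate, samples = wavfile.read("soundtrack.wav")      # hypothetical file
if samples.ndim > 1:
    samples = samples.mean(axis=1)                  # mix down to mono
hop = rate // FPS                                   # audio samples per video frame

amplitudes = []
for i in range(0, len(samples) - hop, hop):
    spectrum = np.abs(np.fft.rfft(samples[i:i + hop]))
    amplitudes.append(spectrum.mean())
amplitudes = np.array(amplitudes)
amplitudes = amplitudes / (amplitudes.max() + 1e-9)  # normalise to 0..1

# louder frames advance the morph faster; quiet frames nearly pause it
steps = 0.002 + 0.01 * amplitudes
positions = np.clip(np.cumsum(steps), 0, 1)          # interpolation position per frame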

At the beginning, I used p5.js to interact with Runway (an AI machine-learning tool) to build an algorithm that automatically fills frames based on the volume of the music, and I finished the coding.

Then I kept trying to move away from Runway, because we created the plants ourselves, and I soon realised that p5.js had to rely on its machine-learning models for that. I have not found an alternative yet; I tried Max and believe it can work. I wanted to use Google Colab tools at first, but I don't know Python. I want to make an interactive video that provides choices within the video.

P5js coding:https://editor.p5js.org/ShinWu/sketches/EhbdxwUup

Max Project: https://drive.google.com/file/d/1lnRkFSIrbSu4f4HVRWL02StEU7DSXbgO/view?usp=share_link

 

[Real-time] generation + audio and video interaction

Need better computer configuration (mainly graphics card).

Using OSC, audio data is sent from other software to the AI module to modulate the generation of the picture in real time. The AI side uses Python's OSC library. (Haven't tried this yet!)
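An untested sketch of the receiving side with the python-osc library (the address and port are arbitrary choices, not a fixed convention): another program such as Max or p5.js would send an amplitude value, and the generation loop would read the latest value each frame.

# Sketch: receive audio amplitude over OSC for real-time modulation.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

latest = {"amplitude": 0.0}

def on_amplitude(address, value):
    latest["amplitude"] = float(value)   # read by the image-generation loop

dispatcher = Dispatcher()
dispatcher.map("/audio/amplitude", on_amplitude)   # address chosen by the sender

# Blocking server for simplicity; in practice it would run in its own thread
# alongside the generation loop.
server = BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher)
print("listening for OSC on port 9000 ...")
server.serve_forever()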

 

 – https://prezi.com/view/qvtX0GwjkgblZsC7aMt6/

 
