
Group 1 – Submission 2

Preface

Once upon a time, Edinburgh was a bustling city filled with people and buildings. But after a catastrophic event, the city was abandoned, and nature began to reclaim its land. Years went by, and Edinburgh transformed into a lush, green jungle. The streets were overgrown with vines and tall trees, and the buildings were slowly crumbling away. But amidst the chaos of nature’s takeover, a new form of life emerged. Plants, once relegated to the parks and gardens of Edinburgh, now thrived in every corner of the city. Giant flowers bloomed in the ruins of buildings, and trees reached towards the sky, creating a lush canopy above.

Animals, too, had returned to the city. Deer and foxes roamed the streets, and birds nested in the trees. The sound of their calls filled the air, a symphony of life that had been missing from Edinburgh for so long. But those who ventured into the city had to be careful, for the plants had grown wild and unpredictable. Some had evolved to defend themselves, and their thorns could prove deadly. Those brave enough to venture into the heart of Edinburgh, however, were rewarded with stunning sights of nature’s beauty. And so, the city of Edinburgh became a place of mystery and wonder, a testament to the power of nature and the resilience of life. The ruins of the past served as a reminder of what once was, while the thriving jungle was a symbol of hope for the future (ChatGPT, 2023).

Introduction

In our Digital Media Studio project, we embarked on a captivating journey to investigate the realm of potential plant mutations and their prospective transformations in the future. Our team collaborated with a range of AI tools, such as ChatGPT, Midjourney, and DALL-E, to generate a series of enthralling sequences that offer unique insights into the possible evolutionary pathways of various plant species.

At the heart of this project lies the core theme of “Process,” which emphasizes the significance of human-AI collaboration in the creative journey. By leveraging the power of these cutting-edge tools, our team was able to explore and expand upon the intricate possibilities that lie within the plant kingdom. This submission highlights the various steps undertaken to bring these extraordinary visions to life, showcasing the seamless integration of human intuition and artificial intelligence.

The project is showcased using a diverse range of media, reflecting our commitment to innovation and providing an immersive experience for our audience. By incorporating projectors, tablets, Arduino-based systems, and Looking Glass panels, we have harnessed the potential of these emerging technologies while emphasizing interactivity, fostering an engaging and memorable experience for our visitors. The combination of hands-on learning opportunities and interactive components encourages attendees to actively participate in the exploration of plant mutations and their potential future developments. This approach not only deepens their understanding of the subject but also sparks curiosity and fosters a greater appreciation for the complexities and beauty of the natural world.

Our process

Our project started by initiating an in-depth engagement with the advanced language model ChatGPT, to explore and better understand the potential ways in which plant species might evolve and undergo genetic mutations in a diverse range of environmental conditions. We sought to gain insights into the complex adaptive mechanisms that plants may employ to ensure their survival and prosperity in the face of ever-changing ecosystems and environmental pressures.

Following the creation of a compelling narrative with the help of ChatGPT, our team delved into the exciting process of visualizing and presenting the plant mutations in the most engaging and thought-provoking manner. We aimed to capture the essence of the story and the unique characteristics of the mutated plants through various visual concepts and media.

After carefully considering the numerous artistic styles and media options, we narrowed down our selection to several distinct methods of presentation, each chosen to best encapsulate the plant mutations and the narrative that accompanies them:

    • Projector
    • Tablets
    • Looking Glass
    • TV
Software and Platforms:
    • TouchDesigner
    • Max/MSP
    • Unity

Final materials

AI Sequences and interactivity setup

Interactive Interface in OSC Pilot
TouchDesigner setup

 

Further Reflection

Building on the relationship between people and the environment outlined with the help of ChatGPT, other members of our team explored the theme in depth from another perspective: using a flower as a medium to reflect on the relationship between people and the environment.

Background

Design by Yingxin

Plant selection

Design by Yingxin

 

Output direction

Through a series of interactive exhibits and activities, this strand of the project aims to demonstrate the ways in which humans affect plants and to promote a greater sense of responsibility for plant care and conservation.

Research process

Sound collection stage: (Chen Boya/Wu Yingxin/Liang Jiaojiao/Li Weichen)

In the early design stage, our team rented sound-recording equipment, went out to Dean Village and other locations around Edinburgh to collect sound material, and used this material in the sound design.

Device:
AAA Rechargeable Batteries
Zoom - H6 
Zoom - H6 Accessory Pack
3.5mm Male to 1/4" Female Jack Adapter - 22698
Beyerdynamic - DT 770 PRO 250 OHMS - 6912
Beyerdynamic - DT 770 PRO 250 OHMS - 34224

While out exploring, we observed how different plants grow in the natural environment and used the recording equipment to capture the sounds of different plants and settings: wind and water, birdsong, branches rubbing together, as well as vehicles, buildings, and more. Animate or inanimate, everything makes its own sound, and sound became an important sense through which we perceived the world.

In this process, our team members not only gained new knowledge, such as field-recording technique, but also, through collecting natural sound, came to see nature afresh and to keep reflecting on the connection between humans and nature.

For further details, please see the sound engineer’s personal report.

 

Interactive Exploration Stage: (Chen Boya/Liang Jiaojiao/Wu Yingxin)

Device: 
Looking Glass
Leap Motion Controller
Performance Laptop
Photos of our studies at Ucreate (Subjects: Boya, Jiaojiao, Photographer: Yingxin)
Our first model imported into the Looking Glass; it is very cute (Photographer: Yingxin)
Discussing Arduino and the frame model with uCreate staff (Participants: Boya, Jiaojiao, Yingxin)
Learning to use Arduino (Photographer: Yingxin)
Learning to set up the Leap Motion (Participants: Boya, Jiaojiao, Yingxin)
A successful Leap Motion binding experiment (Participants: Boya, Jiaojiao, Yingxin)
Flower model making (Jiaojiao)

 

Materials and production: (Chen Boya/Wu Yingxin/Liang Jiaojiao/Li Weichen/Dominik)

Purchasing materials (Boya, Jiaojiao)
Making grass (Creative production: Boya, Jiaojiao; drying: Weichen; photographer: Yingxin)
Making the wooden frame (Sketch: Jiaojiao; cutting: Weichen, Dominik)

 

Visualization (Yingxin Wu)

Max/MSP: I had previously taken a course on Max that gave me a basic understanding of its functions, and after hearing my professor’s lecture on Max/MSP in this course, Max was my first choice at the start of the project.

The visualization went through countless iterations; this report mainly covers the parts that were modified after the presentation.

Then I realized that it is difficult to model things purely programmatically in a way that is visual enough, especially once the teacher suggested that we could use a shared element, such as a flower, to connect our logic and make our exhibition clearer.

So how to create a flower?

I referred to the following video:

To be honest, I didn’t know anything about TouchDesigner before.

The first step in generating a flower is, of course, generating a petal. The shape of a petal is not complicated and is symmetrical, so it can be regarded as a geometric patch. There are many ways to generate such patch geometry in TouchDesigner; here, a spline is used to generate the patch outline, and a poly algorithm is then used to fill in the patch.

This attempt at procedural modeling in TouchDesigner succeeded. Components such as the Copy SOP, CHOP to SOP, Point SOP, and Group SOP are commonly used for procedural modeling; there are, of course, other components not involved in this case.
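To make the petal-copying idea concrete, here is a minimal TouchDesigner Python sketch (run from a Text DAT) that builds a toy flower in the same spirit. The operator classes (circleSOP, transformSOP, copySOP) are TouchDesigner’s built-in Python OP types, but the specific parameter names and values are illustrative assumptions rather than the project’s exact network.

```python
# Minimal TouchDesigner Python sketch: build a toy procedural flower
# by copying one petal around the centre. Parameter names and values
# are illustrative assumptions, not the project's exact network.

base = parent()  # the component this script lives in

# A narrow polygonal ellipse stands in for the spline-drawn petal outline.
petal = base.create(circleSOP, 'petal')
petal.par.type = 'polygon'   # polygonal outline, as in the poly step above
petal.par.radx = 0.1         # narrow in X ...
petal.par.rady = 0.35        # ... long in Y: a rough petal shape

# Push the petal away from the flower's centre.
offset = base.create(transformSOP, 'offset')
offset.inputConnectors[0].connect(petal)
offset.par.ty = 0.4

# Copy SOP: repeat the petal with a rotation per copy to form the flower.
flower = base.create(copySOP, 'flower')
flower.inputConnectors[0].connect(offset)
flower.par.ncy = 8           # eight petals ...
flower.par.rz = 45           # ... 45 degrees apart (8 x 45 = 360)
```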

Returning to the flower itself: the growth of a plant and the opening of a flower are extremely delicate and wonderful processes, and the geometric structure of plants is equally fascinating.

In terms of visualization, I used a more usable component adapted from another creator’s work, with authorization.

https://blogs.ed.ac.uk/dmsp-process23/2023/04/26/interactive/

Other Sounds (Yingxin Wu)

Devices:
Microphone Stand (Low) 
Schoeps - MK4 + CMC1L
H6

Group 1 – Sound Effects

Music(Yingxin Wu)

The music was also a very bold attempt this time: it was made not with a traditional DAW but with Max/MSP, which I had never used before. At the beginning I wanted a single Max patch to control all the processes, but as our ideas multiplied, one patch could no longer satisfy us.

Speaking of the music, the switch of software (from a DAW to Max) brought many difficulties at first. For example, shaping the timbres took a lot of time, and when the parts were combined the result became more abstract because of the non-linear workflow.

Pre for Sound (Yingxin Wu)

Devices:
Microphone Stand (Low)
Schoeps - MK4 + CMC1L
USB 2.0 - A to B 
XLR Male to XLR Female * 8
Genelec - 1031A - Active Monitor * 2
Genelec - 8030A - Speaker * 2
K&M - Mic Stand 
Mackie - ProFX8

Group 1 – Music and Pre for Sound

Ideation: sketch

Sketch (Jiaojiao)
Steps to visit the exhibition

1. Step on the lawn and observe the plant growth process
When people walk into the venue and put their feet on the lawn, the video on the device in front of them will be triggered and played simultaneously.

  • Meaning: Think about the connection between human activities and plants

2. Try to wave your hands

Use gestures to control different materials and states of flowers. Open hands: flowering; clapping hands: changing flower material; tilting hands: watering.

  • Meaning: Different human activities have different impacts on the environment.

3. Feel the changes of Electronic Flower through drum changes
When people walk into the venue, their eyes are immediately drawn to this beautiful electronic flower visualization. The flower dissipates and re-forms along with the sound or music. The changes include: coil changes, blur changes, rotation-speed changes, zooming, and colour changes.

  • Meaning: This part represents the influence of sound: different sounds bring about different changes in the image. The effect of sound on plant growth has long been an interesting subject, and for years scientists have explored how to use sound to make plants grow better or produce more. It also lets visitors enjoy the beautiful electronic flowers while building an understanding of sound that moves from the abstract to the concrete.

Improve

After the exhibition, we made improvements to the previous design, including refining the visitor’s route through the exhibition, enriching the interactive functions, and improving the overall coherence.

  • STEP 1: Sort out the logic, remake the plant-model video, and optimize the experience flow.
  • STEP 2: Add more ways of interacting with the flower to enrich the user experience.
  • STEP 3: Rebuild the visuals in new software: the model was changed from a globe to a flower, parameter settings were added, and colours, lights, and materials were newly created or modified.
1. Vision (sketch/video/Unity)

Visual design plays a vital role in exhibition design, as it not only catches the viewer’s eye but also conveys information and emotion. Our design concept is to create an environment full of life and vitality by using C4D flower animation and Unity animation to guide the audience to start their thinking and exploration in the exhibition space.

We chose to use flowers as the theme of visual elements. By using C4D flower animation, we can present three different types, colours and shapes of flowers, so that the audience can feel the richness and diversity in nature. We can also express the flow and change of life through the dynamic movement and change of flowers.

In the exhibition space, we will also use Unity animations to create an environment that echoes the flower theme. We placed the flowers into the Looking Glass, where visitors can explore how different gestures correspond to different changes in the flower. These design elements are strongly interactive, capturing the attention and interest of the audience.

Group 1 – Plant animation production 1.0 (C4D)

Group 1 – Plant animation production 2.0 (C4D)

Group 1 – Plant animation production 3.0 (C4D)

Group 1 – Rose model (Unity)

 

2. Interaction (Arduino / Looking Glass / Leap Motion) (Boya Chen)

In the choice and design of interaction methods, we wanted to create interaction methods that gave real-time feedback and were relevant to the theme of the design, rather than just simple audio and video control.

  • For the first part of the exhibition, we wanted to combine the sounds previously captured outdoors with the animations we generated to create a playful interactive installation. As our theme relates to plants and the environment, we wanted to simulate the natural environment during the user experience, with the user’s actions driving the animation and its sound effects. Ultimately we used TouchDesigner to connect the interaction pipeline: an Arduino pressure sensor feeds TouchDesigner and controls the rhythm of video and audio playback. In the final exhibition we placed the pressure sensor underneath the lawn model, so the interaction becomes the rhythm of the user stepping on the lawn, controlling the natural variation of video and sound. In this way we wanted users to feel the impact of their actions on the environment.

Steps

1. Connect the Arduino board and pressure sensor: Connect the pressure sensor to the Arduino board, and then connect the Arduino board to the computer.

 

2. Write an Arduino program: Use the Arduino IDE to write a program that reads data from the pressure sensor and transmits it to the computer over serial.

 

3. Add a Serial DAT: In TouchDesigner, create a Serial DAT, which receives the data sent by the Arduino board.

4. Configure the Serial DAT: In TouchDesigner, configure the Serial DAT to communicate with the Arduino board; the serial port and baud rate parameters need to match the Arduino sketch.

5. Create a trigger: Use CHOPs in TouchDesigner to build a trigger that receives the data from the Serial DAT and converts it into a signal that can control the speed of video playback.

6. Control the speed of video playback: Use a Movie File In TOP in TouchDesigner to import the video file, and use the trigger to control its playback speed based on the data received from the pressure sensor.
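As a concrete illustration of steps 3–6, here is a minimal sketch of the TouchDesigner side, written as a Serial DAT callbacks script in Python. It assumes the Arduino sketch prints one analogRead value (0–1023) per line; the operator name 'moviefilein1' and the mapping range are placeholders for illustration, not the project’s exact patch.

```python
# TouchDesigner Serial DAT callbacks DAT: onReceive() fires for each
# line arriving from the Arduino over the configured serial port.

def onReceive(dat, rowIndex, message, byteData):
    # Expect one pressure reading per line from the Arduino, e.g. "512".
    try:
        raw = int(message.strip())
    except ValueError:
        return  # ignore malformed lines

    # Normalise the 10-bit ADC value (0-1023) to the 0.0-1.0 range.
    pressure = max(0.0, min(1.0, raw / 1023.0))

    # Map pressure to playback speed: an untouched lawn pauses the video,
    # a firm step plays it at up to double speed. 'moviefilein1' is a
    # placeholder name for the Movie File In TOP.
    op('moviefilein1').par.speed = pressure * 2.0
    return
```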

  • In the second part of the exhibition, to let the user better experience the process of plant change, we placed the first-stage 3D model into a Looking Glass holographic display and used Leap Motion gesture recognition to control the changes of the model. In addition to simple functions such as changing the perspective of the model, we designed gestures and signs to help the user interact with this part; for example, a watering gesture makes the flower start to grow, and a clapping gesture changes the flower’s external shape. The Leap Motion and the Looking Glass not only make the visuals more intuitive, but also create a more natural and flexible way of interacting with the design content.

Steps

  1. Connect the Leap Motion and the Looking Glass to the computer.
  2. Install the Leap Motion SDK: Download and install the Leap Motion SDK, which provides useful libraries and sample code.
  3. Install the Looking Glass Unity SDK: Download and install the Looking Glass Unity SDK, which includes the necessary tools and assets for developing Looking Glass applications in Unity.
  4. Set up the scene in Unity: Create a new scene in Unity and add a Looking Glass plane to it. Import the 3D model into the scene and place it in front of the Looking Glass plane.
  5. Write the Leap Motion control code: Use the Leap Motion SDK to write code that reads the Leap Motion hand-gesture data and converts it into control signals for the 3D model in Unity.
  6. Map the hand gestures to model controls: Map the Leap Motion hand gestures to control the 3D model’s rotation, translation, and scaling in Unity.
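The exhibition implemented this gesture mapping inside Unity with the SDKs above; purely as a language-consistent sketch of the same gesture logic, the snippet below polls frames with the legacy Leap Motion Python SDK (v2) and classifies the three gestures described earlier. The thresholds, and the assumption that the SDK’s Leap.py module is on the import path, are illustrative only.

```python
import time

import Leap  # legacy Leap Motion SDK v2 Python bindings (Leap.py on the path)

def classify(frame):
    """Map a Leap frame to one of the exhibition's three gestures."""
    hands = list(frame.hands)
    if not hands:
        return None
    hand = hands[0]

    # Open hand (low grab strength) -> flowering; a tilted palm -> watering.
    if hand.grab_strength < 0.2:
        if abs(hand.palm_normal.x) > 0.7:   # palm turned sideways
            return 'watering'
        return 'flowering'

    # Two hands close together and moving fast -> clap (change material).
    if len(hands) == 2:
        left, right = frame.hands.leftmost, frame.hands.rightmost
        gap = left.palm_position.distance_to(right.palm_position)  # mm
        speed = left.palm_velocity.magnitude + right.palm_velocity.magnitude
        if gap < 60 and speed > 600:
            return 'clap'
    return None

controller = Leap.Controller()
while True:
    gesture = classify(controller.frame())
    if gesture:
        print(gesture)  # in the installation this event drives the Unity model
    time.sleep(0.05)
```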

 

  • The following are references for the interactive section.

 

Collection of user experience feedback

1. “This is the first time I have participated in a plant-themed exhibition, and it has given me a deeper understanding of plants. I had never thought there could be so many connections between people and plants; now I am beginning to realize the close relationship between each plant and humans. I also found the interactive parts of the exhibition very interesting. It was a great experience!”

2. “I find this exhibition very inspiring. It shows us the importance of plants in our lives and makes us reflect on the relationship between man and nature. I liked the interactive experiences, such as using the Looking Glass and using different gestures to produce different effects on the flowers. It’s really interesting. I’m glad I participated in this exhibition; it gave me a deeper understanding of the relationship between nature and human beings.”

3. “I think the theme of this exhibition is very good. However, I still encountered some problems during the experience. For example, when I stepped on the lawn, the interactive device was not triggered in time and the response was a bit slow, so I think the experience flow could be further optimized.”

4. “The visual and sound effects of this exhibition are very good. I felt that I was really in a plant world, and the various sounds made me feel I was in a real environment. It gave me a better understanding of the relationship between plants and nature, and also made me feel that I need to pay more attention to our natural environment. Overall, it was a very memorable experience.”

 

 

Conclusion

This innovative exploration of potential plant mutations and future transformations serves as a testament to the power of interdisciplinary collaboration and the limitless potential of human-AI partnerships. By harnessing the capabilities of advanced AI tools and working in synergy, we have successfully created an engaging and thought-provoking exhibition that invites audiences to ponder the fascinating intricacies of the natural world.

 

References

https://github.com/ultraleap/UnityPlugin/releases/tag/com.ultraleap.tracking/6.6.0

https://docs.lookingglassfactory.com/developer-tools/unity

https://docs.lookingglassfactory.com/developer-tools/unity/prefabs#holoplay-capture

https://wiki.seeedstudio.com/Grove-Touch_Sensor/

https://developer.leapmotion.com/unity

https://www.nts.org.uk/stories/the-thistle-scotlands-national-flower

https://1.share.photo.xuite.net/ngcallra/1148daa/16615727/894216827_m.jpg

https://www.bilibili.com/video/BV1he4y1B79Z

https://cowtransfer.com/s/725c23cf2d7f47


https://www.picturethisai.com/image-handle/website_cmsname/image/1080/154238468984143882.jpeg?x-oss-process=image/format,webp/resize,s_422&v=1.0

https://www.bilibili.com/video/BV1rr4y1Q714

https://www.haohua.com/upload/image/2019-08/04/27b56_41fd.jpg


Group 1 – Preparation & Presentation

Preparation

In the days leading up to our highly anticipated presentation, our team faced a distinctive challenge: the creation of a tablet stand in the form of a tree. This particular design held significant importance, as it represented the underlying principles and themes of our project.

Working meticulously, we ensured that the tree-shaped tablet stand was not only structurally stable but also a visual reflection of our project’s core values.

Upon completion of the tablet stand, our focus shifted to the essential stage of preparing the presentation venue. The day prior to the event, we convened in room CC02 to conduct a comprehensive assessment of the equipment and facilities. Our objective was to create an optimal setting that would facilitate a professional, academically rigorous presentation and engage the audience.

During this process, we tested the various aspects of the room setup, including the audio system, visual displays, and interactive elements. We devoted considerable attention to ensuring that every detail was adjusted as needed, aiming to provide a seamless experience for our audience.

Presentation

One of the key components of our exhibition was a series of video sequences, which demonstrated the potential evolution of various plant species in the future, utilizing advanced artificial intelligence algorithms. These fascinating visuals offered a unique glimpse into how plants might adapt and grow over time, considering current environmental and biological factors.

To make the experience more engaging and immersive for our audience, we created a user-friendly control panel with a range of settings that allowed attendees to personalize the visual display of the plants’ evolutionary journey. This interactive feature encouraged viewers to dive deeper into the subject matter, sparking their curiosity and fostering a better understanding of the complexities involved.

The control panel included several adjustable settings such as speed, hue, rotation, brightness, gamma, and contrast. These options enabled users to tweak the video sequences to their liking, resulting in a customized viewing experience. For example, adjusting the speed allowed attendees to control the rate at which the plants evolved on screen, while changes to the hue, brightness, and contrast offered various visual interpretations of the plants’ evolutionary paths.

Feedback and possible improvements

After presenting our project and receiving feedback, we’ve taken the time to reflect on the key areas where we can improve. As we move forward, we’d like to address these points to create a more cohesive, engaging, and immersive experience for our audience:

  • Create a more cohesive experience by connecting the different components and presenting them as a single experience. This can be achieved through better signposting, creating a narrative, or showing more intention in the presentation.
  • Focus on conveying a clear message about your chosen theme and make sure that all components contribute to this message.
  • Enhance the AI video and Max/MSP musical portion by introducing interactivity between the two, creating a more immersive audio experience.
  • Improve the Max/MSP musical patch by adding more variation, timbral and dynamic range, and possibly interactivity.
  • Enhance the Leap Motion/Looking Glass piece by connecting it to the Max patch and creating a more interactive sonic component.
  • Tie all components into the same sonic landscape to create a unified experience.
  • Improve user experience by refining or signposting interactive gestures and ensuring the piece can speak for itself without requiring explanation.
  • Solve issues with the iPads and TV screen components.

Group 1 – Interactivity Tablet Tests

In the ever-evolving digital world, interactivity plays a pivotal role in captivating audiences and crafting unforgettable experiences. As tablets continue to gain traction as a popular interactive medium, understanding how to effectively test and implement these elements is paramount.

Resolume Arena

Resolume Arena is a formidable VJ software that empowers users to create live visuals, projections, and interactive experiences. Its comprehensive features make it the perfect choice for developing engaging content tailored to any audience. By incorporating Resolume Arena with a tablet, we can offer our audience the opportunity to interact with the visuals in real-time, providing an immersive experience that can be adapted to any event or installation.

OSC/Pilot

OSC/Pilot is an adaptable control software that enables users to craft custom interfaces for managing various applications, including Resolume Arena. When used in conjunction with a tablet, OSC/Pilot becomes an outstanding tool for facilitating public interaction with our visuals. However, we encountered a challenge: OSC/Pilot requires a license to save, load and modify workspaces directly on a tablet.

To work around the free version’s restriction on editing OSC/Pilot workspaces on a tablet, we found a clever solution: TeamViewer. By installing TeamViewer on both our laptop and tablet, we can remotely access and edit the workspaces from the tablet without the need for an OSC/Pilot license.

Here’s how we made it work:

  1. Install TeamViewer on both our laptop and tablet.
  2. Establish a connection between the devices by entering the TeamViewer ID and password.
  3. Connect Resolume Arena OSC controls with OSC/Pilot
  4. Once connected, remotely access our laptop’s screen from the tablet, allowing us to edit OSC/Pilot workspaces as if we were working directly on the laptop.

Edit: Since Resolume Arena shows a watermark when importing our own video, we decided to use TouchDesigner along with OSC/Pilot for the controls.

TouchDesigner

TouchDesigner is a versatile visual programming environment that enables artists, designers, and developers to create real-time interactive multimedia projects. With a node-based interface, it allows users to quickly build complex visuals by connecting various components, making it perfect for live performances, installations, and interactive experiences.

This workaround has been instrumental in giving us the flexibility to create an engaging interactive experience.
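To give a flavour of how the OSC/Pilot faders end up driving the visuals, here is a minimal sketch of the receiving side in TouchDesigner: an OSC In CHOP listens on the port OSC/Pilot sends to, and a CHOP Execute DAT attached to it copies incoming channels onto video parameters. The channel and operator names are assumptions for illustration, not our exact patch.

```python
# CHOP Execute DAT attached to an OSC In CHOP named 'oscin1'. OSC/Pilot
# sends each fader as an OSC address (e.g. /speed, /hue), which the
# OSC In CHOP exposes as channels named 'speed', 'hue', and so on.

def onValueChange(channel, sampleIndex, val, prev):
    movie = op('moviefilein1')   # placeholder Movie File In TOP
    hsv = op('hsvadjust1')       # placeholder HSV Adjust TOP

    if channel.name == 'speed':
        movie.par.speed = val            # 0-1 fader -> playback speed
    elif channel.name == 'hue':
        hsv.par.hueoffset = val * 360    # 0-1 fader -> degrees of hue shift
    return
```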

 

Test setup & videos:

Group 1 – Art Projects utilizing Projectors

Art projects that incorporate projectors as their medium often belong to the realms of installation, video, or projection mapping. These forms of art are marked by their ability to transform physical spaces and the intriguing interplay between light and surfaces.

  • Tony Oursler’s Video Projections – Oursler, an American multimedia artist, is celebrated for his inventive use of projectors in his artwork. He projects distorted, surreal, and uncanny images of human faces and body parts onto a variety of objects and surfaces, such as dolls, fabrics, and walls. Oursler’s work delves into themes of identity, psychology, and the influence of technology on human consciousness. By generating unsettling, dream-like environments that merge the physical and digital worlds, his installations provoke viewers to question their perceptions of reality.

  • Krzysztof Wodiczko’s Public Projections – Polish artist Krzysztof Wodiczko is renowned for his politically charged public projections on buildings and monuments. By projecting images and messages onto historical landmarks, Wodiczko confronts social issues like homelessness, war, and human rights. His projections encourage viewers to re-evaluate the meaning and significance of public spaces, while fostering dialogue around critical social and political issues.
Krzysztof Wodiczko on his 1988 Hirshhorn Museum projection - YouTube
https://wamu.org/story/18/02/13/startling-public-art-display-coming-hirshhorn-mean/
  • Pipilotti Rist’s Video Installations – Swiss artist Pipilotti Rist is known for her vibrant, immersive video installations that blend projections, sound, and sculpture. Rist’s work frequently explores themes of gender, sexuality, and the human body through dreamy, abstract visuals. By transforming galleries into multi-sensory environments, her installations challenge traditional notions of space and invite viewers to experience art in novel and unexpected ways.

  • Rafael Lozano-Hemmer’s Interactive Projections – Lozano-Hemmer, a Mexican-Canadian artist, creates large-scale, interactive projections that involve audience participation. Incorporating elements of surveillance, biometrics, and robotics, his work examines the relationship between technology, power, and human connection. Lozano-Hemmer’s installations often require viewers to engage with the projections using their bodies or voices, which blurs the distinction between spectator and performer.

  • TeamLab’s Digital Art Installations – teamLab, a Japanese art collective, specializes in crafting immersive, large-scale digital installations that integrate projections, sensors, and computer-generated imagery. Their work often showcases breathtaking visuals of natural phenomena, such as waterfalls, forests, and flowers. These immersive environments respond to viewers’ movements in real-time, creating a unique, interactive experience. TeamLab’s installations push the boundaries between art, technology, and nature.
teamLab Borderless Odaiba (Closed) Official Site: MORI Building DIGITAL ART MUSEUM
https://www.teamlab.art/e/borderless_odaiba/

 

Group 1 – Interactivity – Exploring Different Methods to Enhance Audience Engagement

In recent years, adding interactivity to projected content has become increasingly popular across many fields, revolutionizing the way audiences engage with the subject and opening new possibilities for creativity and innovation.

The primary goal is to create a more engaging and memorable experience for the audience by allowing them to actively participate in the content being presented. This can be achieved through a variety of methods, ranging from touch-sensitive projection surfaces and motion tracking to mobile device interaction and augmented reality.

General interactivity

Touch-sensitive projection surface – Use an interactive whiteboard or a touch-sensitive screen overlay to turn the projection surface into a giant touchscreen. This allows users to interact with the content directly, such as clicking buttons, manipulating objects, or drawing on the surface.

Motion tracking – Integrate motion tracking technology like the Microsoft Kinect or Leap Motion to track user movement and gestures. This allows users to interact with the content without touching the projection surface, for example, swiping through slides or controlling a character with body movements.

Mobile device interaction – Create a companion app or web-based interface that audience members can use on their smartphones or tablets to interact with the projected content. This can include voting, answering questions, or controlling aspects of the video.

Augmented reality – Use AR technology to overlay digital content onto the physical environment, enhancing the user experience. Users can interact with the content through their mobile devices, creating a more immersive experience.

Voice recognition – Incorporate voice recognition software to enable users to interact with the content using voice commands. This can be done using platforms like Amazon Alexa, Google Assistant, or Apple’s Siri.

Physical controllers – Provide users with physical controllers, such as a gamepad, joystick, or custom hardware, to interact with the content. This can add a tactile element to the experience and allow for more precise control.

Social media integration – Incorporate social media platforms like Twitter, Facebook, or Instagram to enable users to participate in real-time discussions or share their thoughts about the content.

Gamification – Introduce game elements like challenges, quizzes, or leaderboards to encourage user engagement and interaction.

Real-time data visualization – Display real-time data, such as audience opinions, poll results, or sensor data, to create a dynamic and engaging experience that changes based on audience input.

Collaborative tools – Provide tools that allow users to work together, such as shared drawing boards or document editors, to encourage collaboration and interaction between audience members.

Video Interactivity

Interactive video software – Several interactive video platforms, such as HapYak, Wirewax, or Kaltura, allow you to create clickable hotspots and branching narratives, or to embed quizzes and polls within the video. Users can then interact with the video content itself by clicking or tapping on the screen.

Custom web-based video player – Develop a custom HTML5 video player that allows users to manipulate parameters of the video, such as speed, color, or filters. This can be done using JavaScript libraries like Video.js or Plyr, which provide APIs to control various aspects of the video playback.

Real-time video processing – Implement real-time video processing techniques using tools like Max/MSP, OpenCV, WebGL, or Three.js to apply effects, filters, or transformations to the video based on user input. This creates a more dynamic and interactive experience, as users can see the changes they make to the video in real time.

Video control interface – Develop a separate user interface (UI) that allows users to control different aspects of the video, such as volume, playback speed, or scene selection. This can be done through a web-based interface or a companion mobile app that communicates with the video player.

Interactive overlays – Add interactive overlays on top of the video, such as buttons, sliders, or dials, that allow users to control the video parameters. This can be achieved using web technologies like HTML, CSS, and JavaScript, or by using specialized tools like Adobe After Effects or Vuforia for augmented reality experiences.

Data-driven video content – Create a data-driven video experience where the video content changes based on user input or external data sources. This can be done using tools like D3.js for data visualization or custom software development to manipulate video content in real time.

Tablet interactivity

Remote control interface – Develop an app or web-based interface for the tablet that functions as a remote control for the video. Users can control playback, volume, scene selection, or other parameters of the video directly from the tablet.

Data input and visualization – Allow users to input data or make selections on the tablet, which then update the video being projected in real time. This can include adjusting parameters like color, filters, or playback speed.

Augmented reality (AR) interaction – Develop an AR app that uses the tablet’s camera to recognize the projected video and overlay digital content on the tablet screen. Users can interact with this digital content, which can then affect the projected video.

Multi-device synchronization – If multiple users have access to tablets, you can create a synchronized, interactive experience where users can collaborate, participate in quizzes, or control different aspects of the video simultaneously.

Motion or gesture control – Use the tablet’s built-in sensors, like the accelerometer and gyroscope, to enable users to control the video through motion or gestures. For example, tilting the tablet could change the playback speed, or shaking it could trigger a specific event in the video.

Social media integration – Incorporate social media platforms within the tablet app or interface, allowing users to participate in real-time discussions, share their thoughts, or submit feedback about the video.

Specific applications

TouchOSC – a modular OSC (Open Sound Control) and MIDI control surface app for iOS and Android devices. You can use it to send control messages to your video player or custom software, which then needs to be programmed to respond to those messages. This can help you control the video parameters or trigger specific actions using the tablet. https://hexler.net/products/touchosc

Vuforia – an augmented reality platform that allows you to create AR experiences for iOS, Android, and Unity. You can use it to develop an AR app that recognizes your projected video and overlays digital content or interactive elements. Users can then interact with these elements using the tablet. https://developer.vuforia.com/

Resolume Arena – a live video mixing software that supports OSC and MIDI control. You can use a tablet app like TouchOSC or Lemur to send control messages to Resolume Arena, allowing you to manipulate video parameters, effects, and layers in real-time. https://resolume.com/
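Because Resolume listens for OSC on a UDP port (7000 by default in its preferences), a few lines of Python with the python-osc package can stand in for a control surface during testing. The address paths below follow the general pattern of Arena’s OSC map but are assumptions; check them against the addresses shown in Resolume’s own OSC monitor.

```python
from pythonosc.udp_client import SimpleUDPClient

# Resolume's default OSC input port is 7000 (see Preferences > OSC).
client = SimpleUDPClient("127.0.0.1", 7000)

# Trigger clip 1 on layer 1 (address pattern assumed from Arena's OSC map).
client.send_message("/composition/layers/1/clips/1/connect", 1)

# Fade layer 1's opacity to 50%.
client.send_message("/composition/layers/1/video/opacity", 0.5)
```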

Watchout – a multi-display production and presentation system. You can use it to control and manipulate video content on multiple screens or projectors. Watchout supports external control through protocols like TCP/IP, DMX, and MIDI, which means you can use a tablet to control the video parameters remotely. https://www.dataton.com/products/watchout

QLab – a live show control software for macOS that supports video, audio, and lighting control. You can use an iOS app like “QLab Remote” to control QLab from your iPad or iPhone. This allows you to trigger video cues and adjust parameters remotely using the tablet. https://figure53.com/qlab/

Isadora – a media server and visual programming environment for macOS and Windows. It supports OSC and MIDI control, allowing you to use a tablet app to send control messages and manipulate video parameters in real-time. https://troikatronix.com/


Group 1 – Generative art, computational creativity and specific AI tools

Generative art and computational creativity

Generative art and computational creativity are fascinating fields that explore the intersection of art and technology. They involve using computer algorithms to create art that is dynamic, unpredictable, and interactive. When it comes to creating art that blends nature and digital entities, there are many techniques and tools used in generative art and computational creativity that can be useful. Some of these include:

  • Mathematical models and computational processes – These involve using mathematical models to generate visual patterns and forms and to transform visual elements. For example, the artist Mitjanit, whose work blends art and mathematics, uses randomness, physics, autonomous systems, data, and interaction to build systems that let the essence of nature’s beauty emerge.
  • Randomness – This involves introducing unexpected results and variability into the output. For example, an animated snowfall would normally play out in only one way; when driven by a generative process, however, it might take on a distinct shape each time it is run (see the sketch after this list).
  • Collaboration between artist and algorithm – This involves working closely with computer algorithms to create art that is dynamic and interactive. In this relationship, the artist provides direction and feedback to the algorithm, while the algorithm generates new ideas and possibilities.
  • Interactive evolutionary systems – These involve using AI algorithms to generate artwork while allowing users to select objects on the basis of their subjective aesthetic preferences. This lets users actively participate in the creative process and contribute to the evolution of the artwork.
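As a toy illustration of the randomness point above, this short Python sketch generates a different “snowfall” on every run unless a seed is fixed; seeding is what makes a generative piece reproducible.

```python
import random

def snowfall(width=40, height=10, flakes=30, seed=None):
    """Scatter snowflakes over a text grid; each run differs unless seeded."""
    rng = random.Random(seed)
    grid = [[' '] * width for _ in range(height)]
    for _ in range(flakes):
        x, y = rng.randrange(width), rng.randrange(height)
        grid[y][x] = '*'
    return '\n'.join(''.join(row) for row in grid)

print(snowfall())         # a new arrangement every run
print(snowfall(seed=42))  # reproducible: the same arrangement every time
```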

While generative art and computational creativity offer many benefits, there are also challenges and limitations to be aware of. Chief among them is how to evaluate the fitness of an artistic work, and how to assign fitness to individuals in an evolutionary population, according to aesthetic or creative criteria. These challenges underscore the importance of considering the role of human creativity in generative art and computational creativity.

Generative art and computational creativity are exciting fields that offer many opportunities for creating art that blends nature and digital entities. By incorporating mathematical models, randomness, and interactive evolutionary systems, artists can generate unique and compelling works of art that challenge traditional notions of art-making. However, it is important to be mindful of the challenges and limitations of these approaches and to consider the role of human creativity in the creative process.

AI systems and tools

When it comes to creating art that blends nature and digital entities, there are many open AI systems and tools that can be helpful during the development process. These tools allow artists to incorporate AI into their work, resulting in unique and compelling pieces of art. Some of the most useful open AI systems and tools include:

  • DALL·E 2 (OpenAI) – This tool uses machine intelligence and algorithms to create visualizations. It can generate high-quality images from textual descriptions, making it an excellent tool for artists who want to explore new visual concepts.
  • ChatGPT – This text-based prompt generator can be used to generate a variety of creative writing ideas. It can also be used to generate text-based artwork, such as poetry or prose.
  • Midjourney – This tool uses AI to create stylized pictures. It can be used to create unique and eye-catching visuals that combine digital and natural elements.
  • Prezi – This tool can be used to create web-based interactive interfaces. It is an excellent option for artists who want to create interactive exhibitions or installations.
  • SuperCollider, Pure Data, Max/MSP – These programming systems and platforms are useful for audio synthesis and algorithmic composition. They can be used to create unique and compelling soundscapes that complement visual elements.
  • X Degrees of Separation – This tool uses machine learning algorithms and Google Arts & Culture’s database to find visually related works between any two artifacts or paintings. It can help artists find inspiration and explore new visual concepts.

One of the most significant benefits of using open AI systems and tools is the ability to incorporate AI into the creative process. By using these tools, artists can generate new ideas, automate repetitive tasks, and explore new visual and auditory concepts. However, it is essential to keep in mind the limitations and potential biases of AI when using these tools. Additionally, it is important to consider the ethical implications of using AI in art and design, such as issues related to data privacy and algorithmic bias.

Group 1 – AI and the Creative Process

AI and the Creative Process: Exploring the Intersection of Technology and Art

As artificial intelligence (AI) becomes more advanced and accessible, artists and creatives are beginning to explore its potential as a tool for generating art and enhancing the creative process. From generative adversarial networks (GANs) to procedural content generation (PCG), AI offers a range of techniques that can be used to create unique and complex works of art.

Case Studies of AI-Assisted Design and Production Practices

One of the key benefits of using AI in the creative process is its ability to generate new ideas and designs. GANs, for example, enable models to generate new examples based on original datasets, allowing artists to explore new forms and patterns in their work. Neural style transfer (NST) is another technique that can be used to manipulate images and videos, creating new art by adopting appearances and blending styles.

Procedural content generation (PCG) is another area where AI is being used to generate game content such as levels and environments in a dynamic and unpredictable way. Games like “No Man’s Sky” have used PCG algorithms to generate diverse flora and fauna on procedurally generated planets, creating a rich and immersive gaming experience.
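As a toy illustration of the PCG idea, the Python sketch below derives a small planet’s flora deterministically from a seed, in the same spirit as procedurally generated game worlds: every seed yields a different world, yet the same seed always yields the same one. The attribute lists are invented for illustration.

```python
import random

def generate_planet(seed):
    """Derive a planet's flora deterministically from a seed."""
    rng = random.Random(seed)   # same seed -> same planet, every time
    leaf = rng.choice(['broad', 'needle', 'frond', 'spiral'])
    colour = rng.choice(['emerald', 'violet', 'amber', 'teal'])
    height_m = round(rng.uniform(0.2, 30.0), 1)
    return f"{colour} {leaf}-leaved flora, up to {height_m} m tall"

# Each planet (seed) is unique but perfectly reproducible:
for seed in range(3):
    print(f"planet {seed}: {generate_planet(seed)}")
```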

Reinforcement learning (RL) is a type of machine learning that allows AI to learn and practice complex tasks, such as playing games. This technique can be used to create interactive artworks that respond to the viewer’s actions or to generate music and soundscapes that evolve over time.

In addition to these techniques, there are many other examples of AI-assisted design and production practices. For example, Google’s DeepDream algorithm can be used to generate trippy, surreal images, while the AIVA AI can generate original music compositions.

Drawbacks of AI-Assisted Design and Production Practices

While AI offers exciting opportunities for creatives, it also presents several challenges and limitations. One issue is the need to evaluate fitness according to aesthetic or creative criteria when generating art through AI; this can be difficult, as subjective aesthetic preferences vary widely among viewers. Another issue is the potential for AI to replace human creativity rather than enhance it. Some scholars have argued that AI may hold back human creativity by replacing it, rather than supporting it (Druid’s Garden, 2022).

Conclusion

As AI continues to advance, it offers increasingly powerful and versatile tools for artists and creatives. From GANs to PCG, these techniques can be used to generate new ideas and designs, and to create unique and complex works of art. However, integrating AI into the creative process also presents challenges and limitations, including the need to evaluate fitness according to subjective aesthetic preferences and the potential for AI to replace human creativity. By understanding these benefits and limitations, artists can make informed decisions about when and how to incorporate AI into their creative practices.

Druid’s Garden. (2022, October 16). AI Generated Art and Creative Expression. Retrieved February 18, 2023, from https://thedruidsgarden.com/2022/10/16/ai-generated-arts-and-creative-expression/

Group 1 – Exploring the Intersection of Art and the Environment

Exploring the Intersection of Art and the Environment: Insights and Inspirations from Scholars and Artists

As the world grapples with the complex and urgent issue of environmental degradation, artists and scholars have increasingly turned to the intersection of art and the environment to convey the gravity of the situation. Through works of art that showcase the impact of human actions on the natural world, these creators hope to inspire change and foster a deeper appreciation for the environment.

Scholars’ Viewpoints
The relationship between humans and the environment is a multifaceted and complex one, and scholars have emphasized the role that art can play in conveying the nuances of this relationship. By exploring themes related to the environment, artists can create works that highlight the complexity of environmental problems and communicate their urgency to the public. According to Rebecca Anweiler, an environmental studies scholar, “art can provide a medium for understanding the complexity of environmental problems and can help in communicating the importance of environmental issues to the public” (Anweiler, 2014).

Artists’ Viewpoints
Artists have used a variety of mediums and techniques to explore environmental themes in their works, from photography and sculpture to soundscapes and multimedia installations. Through these works, artists hope to create an emotional connection with the viewer and inspire them to take action to protect the environment. For example, Olafur Eliasson’s installation “Ice Watch” featured melting icebergs in public spaces, serving as a powerful reminder of the urgent need to address climate change.

Art of Change and Olafur Eliasson's Ice Watch: “Art has great potential for changing the world” - Artichoke

Artworks with Environmental Themes:
The intersection of art and the environment has led to a wealth of powerful and thought-provoking works that explore environmental themes in a variety of ways. For example, Maya Lin’s “What is Missing?” is a multimedia installation that showcases the impact of species extinction, habitat loss, and climate change. The installation includes videos, sculptures, and soundscapes that aim to create a sense of urgency about the need to protect the environment.

What is Missing? – a memorial by Maya Lin – Remembrance Day For Lost Species

Another example of artwork with environmental themes is the “Forest Symphony,” a musical composition that uses sounds from the rainforest to create an immersive and emotionally resonant listening experience. The composition aims to raise awareness about deforestation and the importance of preserving natural habitats.

Through the intersection of art and the environment, scholars and artists have found a powerful way to engage with the complex and urgent issue of environmental degradation. Through works that showcase the impact of human actions on the natural world, these creators hope to inspire change and foster a deeper appreciation for the environment. From multimedia installations to musical compositions, the artworks highlighted in this blog post demonstrate the rich diversity of approaches that can be used to explore environmental themes in art.

References:
Anweiler, R. (2014). The Role of Art in Environmental Education. The Journal of Environmental Education, 45(2), 85-101. doi: 10.1080/00958964.2013.803352

Shaw, N. (2019) Art of change and Olafur Eliasson’s ice watch: “Art has great potential for changing the world”, Artichoke. Available at: https://www.artichoke.uk.com/art-change-olafur-eliassons-ice-watch-art-great-potential-changing-world/ (Accessed: March 1, 2023).

What is missing? – a memorial by Maya Lin (2018) Remembrance Day For Lost Species. Available at: https://www.lostspeciesday.org/?p=681 (Accessed: March 3, 2023).

Forest Symphony allows humans to hear photosynthesis (video) – INHABITAT (2014). Available at: https://inhabitat.com/forest-symphony-allows-humans-to-hear-photosynthesis-video/ (Accessed: March 1 2023).

Group 1 – Equipment testing

Equipment testing

In preparation for our final presentation on plant hybridization, we conducted a test run of a projector to ensure that our video presentation would be displayed with optimal quality. Our test video showcased a compilation of plants changing and was designed to show how our final designs would be conveyed to the audience.

We connected the projector to a laptop using an HDMI cable and tested the display on a flat surface. During the testing process, we evaluated the projector’s ability to display colors and details of the plants with clarity and vibrancy. The results of the test were impressive, as the projector was able to render high-quality images that effectively conveyed the beauty and intricacy of each plant in the video.

In addition to evaluating the quality of the images, we also experimented with different placement options for the projector to determine the best angle and distance for optimal video quality. This allowed us to adjust the projector to display the video in a larger size, enhancing the viewing experience for our audience.

Our test run of the projector proved to be a success, as the device was able to display our video sequences with optimal quality.

Our equipment for final presentation:

  • NEC HD Projector ME382U
  • Microlab Speakers
  • Project Space C02 (ECA)