
Week 10 Research on AI, Voice and Memory

Voice is an important part of memory, especially because it carries the speaker's emotions, and it helps people recall memories of their loved ones. When experimenting with TTS models, we found that, at their current level of development, AI-generated audio can already capture the emotion contained in a voice very well. This is also why we decided to use AI-generated voice. Below are some research topics about AI, voice, and memory.

The relationship between human voice and memory is profound, serving as a crucial element in both preserving personal histories and evoking emotional responses. Studies and projects, such as those explored by the Oral History Society and in digital archives like Voice Gems, illustrate the emotional and historical significance of recorded voices. They underline how voices not only act as a personal echo from the past but also as an emotional trigger that can bring memories vividly back to life.

Scientific research supports this emotional connection. Studies on recognition memory for voices highlight that the human voice can act as a powerful memory cue, influencing how memories are encoded and recalled. These findings are crucial for understanding how auditory elements of memories affect our recall and emotional responses. This interplay between voice and memory is not only significant for personal reminiscence but also plays a vital role in creative expressions, where voice recordings are used to create impactful art and preserve cultural heritage.

For further insights:

Week 8 Find inspiration with AI music tools

AI Music Tool: Mubert | AI Music Generation

Idea
I’m working on a system for dynamically playing MIDI clips to provide a constantly evolving musical backdrop throughout our project. To build up a diverse library of MIDI clips, I’ve used Mubert’s Text To Music tool, which lets me generate numerous loop clips that may be thematically relevant to the project. From this pool, I carefully choose the clips below that align with the atmosphere and tone of the existing parts of our project.
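As a sketch of how this dynamic playback could work, the snippet below cycles at random through a folder of MIDI loops and streams each one to a MIDI output using the mido library. It assumes the generated loops have been converted to .mid files in a local clips/ folder; the folder name and the default output port are placeholders, not part of the actual setup.

```python
# Minimal sketch: cycle through a folder of MIDI loop clips at random,
# sending each one to a MIDI output so the backdrop keeps evolving.
# Assumes the loops have been converted to .mid files in "clips/" (placeholder).
import random
from pathlib import Path

import mido

clips = list(Path("clips").glob("*.mid"))        # library of loop clips
port = mido.open_output()                        # default MIDI output port

while clips:
    clip = mido.MidiFile(str(random.choice(clips)))   # pick the next loop at random
    for msg in clip.play():                           # play() yields messages in real time
        port.send(msg)
```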
Generated Clips

Week 2 Idea generation and theme identification

#Memory

Idea 1: Memories trapped in time

Process here refers to the gradual forgetting that happens as time passes.

Background: Alzheimer’s disease (AD) is a neurodegenerative disease common in the elderly, yet most people do not understand it deeply; many simply call it “dementia” and keep their distance. The memory of an AD patient is as elusive as dust, so we treat each memory as a speck of dust. Music visualisation is used to express the plight of Alzheimer’s patients at each stage of the disease, hoping to create empathy for the condition.

Input: Interview several elderly people to learn their stories and the objects that are important to them. Using AI generation, unreal objects and scenes slowly emerge from the real memories, hinting at how an Alzheimer’s patient’s memory changes. Users can watch the memory change through an interactive slider.

Output: An old object corresponds to a memory and its story, which slowly disappears under the influence of the music.

Technology: TouchDesigner point cloud effect
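As a rough sketch of the slider-driven fading, the snippet below blends an (N, 3) point cloud toward random noise as a value t moves from 0 to 1, so the scanned object dissolves like dust. It is a NumPy-only illustration under the assumption that the memory object is available as a point cloud; the same maths could be driven by a slider inside TouchDesigner, and the cube of points here merely stands in for a scanned object.

```python
# Sketch of the "memory fading" blend: a slider value in [0, 1] moves each
# point of the scanned object toward random noise, so the object dissolves
# like dust as the slider advances.
import numpy as np

def fade_memory(points: np.ndarray, t: float, spread: float = 2.0, seed: int = 0) -> np.ndarray:
    """Blend an (N, 3) point cloud toward scattered noise.
    t=0 keeps the original object, t=1 returns fully dispersed 'dust'."""
    rng = np.random.default_rng(seed)               # fixed seed: the dust stays stable frame to frame
    noise = rng.uniform(-spread, spread, points.shape)
    return (1.0 - t) * points + t * noise

# Example: a rough cube of points standing in for a scanned memory object, half-forgotten.
cloud = np.random.default_rng(1).uniform(-1, 1, (5000, 3))
half_faded = fade_memory(cloud, t=0.5)
```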

Idea 2: Travel memory

Process here refers to the traces people leave behind when they travel to different places. Although those traces often go unnoticed, an attentive AI uses the traveller’s mood and the scenery of the destination to generate a unique travel song as a surprise during the trip.

Background: When travelling, everyone wants to leave behind some special memories and traces; some record them with images, some mark them with feelings. If each person’s footprints could be recorded together with the mood and scenery of the moment, a song or a rhythm belonging to that trip could be generated. As the number of trips grows, the user’s library accumulates more and more music and rhythms, each representing a different travel memory.

Input: Travel destination pictures; real-time mood (optional)

Output: A rhythm (for a single destination) or a piece of music (for a trip with many destinations), generated from keywords such as the destination and the mood.
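A possible sketch of this keyword-to-music mapping: the mood picks a tempo and scale, the destination seeds the note pattern, and the result is written out as a short MIDI rhythm with mido. The mood table and scales below are illustrative assumptions, not part of the original idea.

```python
# Sketch: mood -> tempo/scale, destination -> deterministic seed, output -> short MIDI rhythm.
import hashlib
import random

from mido import Message, MetaMessage, MidiFile, MidiTrack, bpm2tempo

MOODS = {
    "calm":    {"bpm": 72,  "scale": [60, 62, 64, 67, 69]},   # C major pentatonic (placeholder)
    "excited": {"bpm": 128, "scale": [57, 60, 62, 64, 67]},   # A minor pentatonic (placeholder)
}

def travel_rhythm(destination: str, mood: str, bars: int = 4) -> MidiFile:
    cfg = MOODS[mood]
    seed = int(hashlib.md5(destination.encode()).hexdigest(), 16)   # same place -> same motif
    rng = random.Random(seed)

    mid = MidiFile()
    track = MidiTrack()
    mid.tracks.append(track)
    track.append(MetaMessage("set_tempo", tempo=bpm2tempo(cfg["bpm"])))

    ticks = mid.ticks_per_beat // 2                                  # eighth notes
    for _ in range(bars * 8):
        note = rng.choice(cfg["scale"])
        track.append(Message("note_on", note=note, velocity=80, time=0))
        track.append(Message("note_off", note=note, velocity=0, time=ticks))
    return mid

travel_rhythm("Edinburgh", "calm").save("edinburgh_calm.mid")
```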

#Nature

Idea 3: Cyberbonsai

Process here refers to growth and activity.
Under the premise of the plant-dominated Cyberplanet hypothesis, every Cyberbonsai is self-aware and plays a different role in keeping the Cyberplanet running.

Background [Cyberplanet]: Many plants will be endangered in the future. Assuming a plant-dominated cyberworld exists, each plant has its own division of labour (determined by sound) and colour (determined by mood).

Input: Collect prototypes of endangered plants, model them, and finally use AI to simulate their growth process.

Output: The AI generates different kinds of cyberplants, which develop their “self-awareness” by following the music.
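One plausible way to simulate the growth process is an L-system, a standard model of branching plant growth. The rewrite rules in the sketch below are placeholders; in this idea, the plant’s audio-derived role could swap the rules, while its mood sets the colour when the string is rendered.

```python
# Sketch of the growth simulation using an L-system: repeatedly rewrite a
# string of symbols, then later render it with turtle-graphics-style rules
# (F = draw a segment, +/- = turn, [ ] = push/pop a branch).
RULES = {"F": "FF", "X": "F-[[X]+X]+F[+FX]-X"}   # classic bracketed L-system (placeholder rules)

def grow(axiom: str = "X", generations: int = 3) -> str:
    """Rewrite the axiom repeatedly; each generation is one growth step."""
    s = axiom
    for _ in range(generations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

print(len(grow(generations=4)))   # the instruction string grows rapidly with each step
```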


Idea 4: Data garden

Process here is how people constantly browse data and believe they are getting closer to nature, when they are really just trapped in a data garden.

Background [Nature-deficit disorder]: Nature-deficit disorder is a phenomenon described by the American writer Richard Louv: the complete separation of modern urban children from nature. Some have explained it as “a desire for nature, or an ignorance of it, caused by a lack of time outdoors, especially in the countryside.” In real life, the group affected by “nature-deficit disorder” has expanded from children to adults.

Input: Users browse or search image data for plants

Output: Different plants composed of data.
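As an illustrative sketch of the data-to-plant mapping, the snippet below reduces a log of the plant images a user has browsed or searched to a few visual parameters, so the garden grows out of the user’s own data trail. The field names and scaling constants are assumptions made for the example.

```python
# Sketch: a browsing log becomes the parameters of one "data plant".
from collections import Counter

def data_plant(browsing_log: list[str]) -> dict:
    counts = Counter(browsing_log)                  # how often each species was viewed
    total = sum(counts.values())
    return {
        "species": counts.most_common(1)[0][0],     # dominant species shapes the plant
        "branches": min(3 + len(counts), 12),       # variety of searches -> more branches
        "height": min(total / 10, 5.0),             # sheer volume of browsing -> taller
        "saturation": round(len(counts) / max(total, 1), 2),  # repetitive browsing fades colour
    }

log = ["fern", "fern", "orchid", "fern", "moss"]
print(data_plant(log))
```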


Week2_Echoes-of-Ocean_YixuanYang

Project Introduction

We research and photograph the historical morphological evolution of existing marine life (e.g. jellyfish) and collect data on climate change and marine pollution. An AI model then learns from this material and speculates on the future ecosystems of the UK under global warming and marine pollution, creating new forms of future creatures together with echoes of the creatures’ distress signals. The result is an interactive holographic installation of AI-generated creatures. By modelling environmental factors and potential adaptations of species, we can gain insight into the possible ecological consequences of global warming and the emergence of new life forms, providing a unique perspective on how ecosystems may evolve if these issues continue to worsen.

Process Idea

Feed the AI historical data on UK marine life (sampled every 50 years), such as seawater temperatures, pollution levels, pH changes and image changes, and let it estimate future changes in biomorphology.
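A minimal sketch of this trend step, assuming a simple per-variable extrapolation serves as the baseline for the speculation: each historical variable is fitted against the year with NumPy and evaluated at a future date, and the extrapolated values then condition the speculated creature morphology. The numbers below are placeholder values for illustration, not real measurements.

```python
# Sketch: fit each historical ocean variable against the year and extrapolate
# it forward, producing the conditions used to prompt the future-creature model.
import numpy as np

years = np.array([1850, 1900, 1950, 2000, 2050])        # one sample every 50 years
history = {                                             # placeholder values, not real data
    "sea_temp_c": np.array([9.8, 10.0, 10.3, 10.9, 11.6]),
    "ph":         np.array([8.20, 8.18, 8.15, 8.10, 8.05]),
    "pollution":  np.array([0.1, 0.3, 0.9, 2.4, 3.8]),  # arbitrary pollution index
}

def extrapolate(target_year: int) -> dict:
    """Linear fit per variable, evaluated at the target year."""
    return {
        name: float(np.polyval(np.polyfit(years, values, deg=1), target_year))
        for name, values in history.items()
    }

print(extrapolate(2150))
```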

Detect differences in the sounds organisms make under different ocean conditions and let the AI mimic and predict them.

Finally, we collect the morphology and sound of the future creatures generated by the AI, rework the images and sounds in software such as TouchDesigner, and design an interactive holographic installation with Arduino or Kinect, so that users can observe how the future creatures react to the ocean environments of different years through touching, waving, talking and other interactions.
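One possible interaction path, sketched below with pyserial: an Arduino streams a touch or distance reading over serial, and the value selects which simulated ocean year the hologram shows. The port name, value range and year list are assumptions for the example; in practice the selected year would be handed to TouchDesigner rather than printed.

```python
# Sketch: map an Arduino sensor reading (0-1023 over serial) to a simulated ocean year.
import serial  # pyserial

YEARS = [2050, 2100, 2150, 2200]

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as ser:   # port name is a placeholder
    while True:
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            continue
        reading = int(line)                                    # assumes the Arduino prints 0-1023
        year = YEARS[min(reading * len(YEARS) // 1024, len(YEARS) - 1)]
        print(f"showing speculative ocean of {year}")          # would drive the hologram instead
```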

 

Week2_Idea_HanZou

Explanation of Process

To show the process of world development using AI-processed images.

Project Introduction

The high technology represented by artificial intelligence is often set in opposition to natural elements such as plants and ecosystems. However, human beings are part of the ecological environment, and while developing science and technology we must pay attention to our relationship with nature.
The project aims to present a fantasy world through an interactive installation that combines futuristic visuals and sounds, awakening people’s connection with nature.

Possible Approach

– AI-generated videos as the foundational visual component of the installation.
– AI-generated music as a cue for the musical style within the installation, used as a foundation to build a generative music system via Max (https://mubert.com/render); see the sketch after this list.
– Interaction design based on TouchDesigner and/or Max, driving changes in the visual and audio effects.
– Using Arduino or Kinect to achieve further interactions.
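As a sketch of the generative layer’s core logic (written in Python only for illustration; the same idea could be rebuilt with Max objects once the approach is settled), a first-order Markov chain learns which notes follow which in a short motif taken from the AI-generated cue and then improvises endlessly in that style. The motif here is a placeholder.

```python
# Sketch: first-order Markov chain over notes, seeded from a short motif.
import random
from collections import defaultdict

cue_motif = [60, 62, 64, 62, 67, 64, 62, 60]        # placeholder motif from the AI-generated cue

transitions = defaultdict(list)                      # which notes tend to follow which
for a, b in zip(cue_motif, cue_motif[1:]):
    transitions[a].append(b)

def generate(length: int = 16) -> list[int]:
    note = cue_motif[0]
    out = [note]
    for _ in range(length - 1):
        note = random.choice(transitions.get(note, cue_motif))   # fall back to the motif
        out.append(note)
    return out

print(generate())    # an endlessly varying line in the style of the cue
```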

 

Week2_Wake-up-flower-installation_JiayiSun

Wake up flower installation.

Project introduction: Withered and dying flowers from daily life are awakened through an interactive device by a few simple actions, for example adding a drop of water or touching them gently. When the flowers are awakened, they make a sound (such as a melody or a sentence).

Project inspiration: How many bouquets of flowers have you bought, and did you end up throwing them away? People and flowers are actually very similar: flowers bloom and wither, and so do people. Everything in the world moves from new to old, from young to aged. I want to express that everyone should better accept the withering of flowers, and with it the ageing of people.

Installation design: A withered flower model (it can be made by hand, with some spots or rough cracks added, as shown in the picture below). After the flower is woken up, lights can be added and the flower’s movement changed so that the model appears reborn. It is also equipped with speakers to produce sound. The final piece could be displayed outdoors to better integrate with nature, as shown below.
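A small sketch of the wake-up logic, with the sensor, light and speaker calls left as hypothetical placeholders: when the moisture or touch reading crosses a threshold, the lights fade in and the melody is triggered; when the reading falls away again, the flower drifts back to sleep.

```python
# Sketch of the wake-up state machine. read_sensor(), set_light() and
# play_melody() stand in for whatever the real installation uses
# (Arduino pins, LEDs, a speaker) and are hypothetical placeholders.
import time

WAKE_THRESHOLD = 0.3      # a drop of water or a gentle touch pushes the reading past this

def read_sensor() -> float:
    """Placeholder: return the moisture/touch reading, 0.0 (untouched) to 1.0."""
    return 0.0

def set_light(level: float) -> None:
    """Placeholder: drive the LEDs, 0.0 dark to 1.0 fully lit."""

def play_melody() -> None:
    """Placeholder: trigger the flower's melody or spoken sentence on the speaker."""

awake = False
while True:
    value = read_sensor()
    if not awake and value > WAKE_THRESHOLD:
        awake = True
        for step in range(11):               # slow fade-in as the flower is "reborn"
            set_light(step / 10)
            time.sleep(0.2)
        play_melody()
    elif awake and value < WAKE_THRESHOLD / 2:
        awake = False                        # the flower goes back to sleep
        set_light(0.0)
    time.sleep(0.1)
```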

 

