
Mid-term Project Report

Group members:
Xinxuan Zheng (s2506313)
Han Zou (s2505422)
Yixuan Yang (s2503597)
Jiayi Sun (s2506655)


I. Initial Concept Generation

Brainstorming

When brainstorming, we initially branched out from two keywords, nature and memory, generating some rough themes.
Nature theme: Jellyfish Crisis / Awakening Flower Device / Cyber Garden
Memory theme: Alzheimer’s Disease / Cyber World / Travelling Memories

Next, each team member independently researched two potential project topics in greater depth: Alzheimer’s disease and the jellyfish crisis. After reviewing the references individually, the team voted during the first group discussion meeting to develop the Alzheimer’s disease-centred idea further.

Meeting Minutes

Concept Expansion

In the course of our research, we watched a number of Chinese and Western documentaries and films on Alzheimer’s disease, including A Marriage to Remember [1], Please Remember Me [2], and the high-profile film The Father [3], and found that this is a global phenomenon that deserves to be brought to the forefront. We were all deeply moved, which solidified our commitment to the idea.

A Poster of THE FATHER

Since Alzheimer’s disease is a progressive neurological disorder, its symptoms evolve over time and include memory loss, language deficits, and declining physical function. Among these, we agreed that memory loss is the most brutal and helpless part of the disease. We therefore focused on memory changes over time in designing an immersive experience.

Research

During the initial research, we mainly used desk research to understand the basic characteristics and pathological features of Alzheimer’s disease. After discussion, we chose to develop the project around the different stages of memory change in Alzheimer’s patients. We also critically analysed relevant art cases and the technical feasibility of the project.

Alzheimer’s Disease
  1. Fundamental characteristics and data of Alzheimer’s disease
     • Causes and principles
  2. The progression of the illness
     • The process of memory deterioration

Related Works Analysis

Case 1: ROBOTIC VOICE ACTIVATED WORD KICKING MACHINE [4]

Advantages: literal visualisation of sound, and interaction generated through the loudspeaker installation, which enriches the interactive experience.
Weaknesses: the metaphors of some of the installations are too subtle.
Takeaway: the connection between text and music can be borrowed, and the effect of text scattering and stacking can be used in the prologue of this project.

Case 2: Remember me [5]

Advantages: uses jigsaw puzzles to express the memory disorientation of Alzheimer’s patients, with a visually striking series presentation.
Weaknesses: confined to the graphic realm of the poster, with no interaction.
Takeaway: similar objects can be borrowed as a metaphor for memory change and applied to the memory-disorientation part of this project.

Case 3: NMIXX DASH MV [6]

Advantages: the backwards-flowing, accumulating text gives the audience a strong visual experience and is very creative.
Weaknesses: the form is too neat and cannot express confusion and chaos well.
Takeaway: we can borrow the text-accumulation effect while changing how the text appears, so that it flies in from all directions.

Possible Technologies

  • TouchDesigner: visual effects involving image and model processing
  • Runway: AI-generated video
  • After Effects (AE): video editing
  • Max: real-time interactive audio and video processing
  • Speechify: AI voice generation
  • Ableton Live: sound effects and music production

II. Project Brief

Definition of Process

In the context of this project, “process” refers to the way memories slowly become blurred and confused until, as time passes, they dissipate entirely.

Purpose: 

The purpose of this project is to use an immersive experience, combining art and technology, to show the public the challenges faced by people with Alzheimer’s disease, thereby raising social awareness and understanding of this group.

Project Name

The Long Farewell

Project Content

This immersive installation guides viewers through an experiential journey depicting the stages of memory decline in Alzheimer’s patients. Divided into four stages — Prologue, Blur, Fade, and Vanish — the installation sequentially leads participants through each phase using one integrated device.

Stage 1: Prologue

In the initial stage, visitors physically walk into the centre of a black screen. On the sound side, voices come from all directions; as time passes, the speech grows faster and faster, with different human voices layered on top of one another. On the visual side, visualized text floats into the projected image and piles higher and higher. This stage creates a tense atmosphere in which participants have no time to react, letting visitors experience first-hand the overload of memory information in the brain of an Alzheimer’s patient.

Stage 2: Blur

The blur stage involves confusion in recognizing faces and everyday objects. For face recognition, the project will select portrait pictures of individuals from four different ethnicities and age groups and use AI to generate more similar faces; the visuals will then be processed in TouchDesigner. For object recognition, the project will choose close-up images of family scenes from various countries, applying visual effects such as blurring to objects within the scenes, again in TouchDesigner. Because Alzheimer’s disease is prevalent worldwide, this stage aims to deliver a striking visual impact to participants.

Stage 3: Fade

In the Fade stage, participants are prompted to take a photograph of themselves in front of a camera, maintaining the visual impact of the previous phase’s abundance of facial images. Afterward, participants speak their own names. Depending on the volume of their voice, their image on the screen changes in various ways, such as blurring. When participants stop speaking, the faces on the screen gradually fade away.

Here we simulate the perspective of the people around an Alzheimer’s patient calling out the patient’s name, prompting the patient to recall their own appearance. The fading face on the screen mirrors the patient’s cognitive response to external stimuli. This interactive experience is implemented with Max/MSP/Jitter or Vizzie.

Stage 4: Vanish

In the Vanish stage, visitors immerse themselves in a third-person perspective on the final stage faced by Alzheimer’s patients as they confront the dissolution of self-awareness. The screen presents a 3D-scanned digital model dissipating like particles blown away by the wind, synchronized with music and sound. Participants activate this effect by pressing a button. Finally, the screen fades to black, accompanied by a line of text reflecting on life: “Perhaps memories will fade away with time, but love is eternal.”
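As a rough sketch of how the button trigger might work (an assumption on our part, not a finalized design): the Arduino sends a short message over USB serial when the button is pressed, and a Python script listens for it and fires the dissipation cue. The port name and message format below are placeholders.

```python
# Minimal sketch: listen for an Arduino button press over USB serial
# and fire the Vanish cue. Port name and message format are assumed;
# they must match the actual Arduino sketch.
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # hypothetical port; e.g. "COM3" on Windows
BAUD = 9600

def trigger_vanish():
    # Placeholder: in practice this would cue TouchDesigner (e.g. via
    # OSC) to start the particle dissipation and the music.
    print("Button pressed: start Vanish sequence")

with serial.Serial(PORT, BAUD, timeout=1) as link:
    while True:
        line = link.readline().decode(errors="ignore").strip()
        if line == "PRESS":  # assumed message sent by the Arduino
            trigger_vanish()
```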

Significance: 

  1. Improve public awareness of Alzheimer’s disease: through an immersive experience, visitors can feel more intuitively the difficulties experienced by Alzheimer’s patients and increase their knowledge and understanding of the disease.
  2. Counter society’s prejudice against and stigmatization of Alzheimer’s disease: use art and technology to show the lives of Alzheimer’s patients in a warmer, more humane way, reduce discrimination and rejection, and promote social understanding of, love for, and support toward this group.
  3. Arouse society’s attention to and respect for the elderly: by paying attention to the lives of Alzheimer’s patients and their families, we can draw public attention to, and respect for, the elderly and encourage society to build a friendlier, more inclusive environment for them.
  4. Explore the application of art and technology to social problems: this project combines art and technology, using AI animation, TouchDesigner, Max, and other technical means to provide new ways of presenting social problems, and explores innovative applications of art and technology in communicating social issues.

III. Prototype

Prologue (Jiayi)

Initial Idea

This part provides the background for the entire project; its purpose is to introduce the protagonist to the user: an elderly woman suffering from Alzheimer’s disease. Tormented by the disease every day, her life often becomes chaotic; she cannot recognize her relatives, forgets important things in life, and so on.

Practices

Initially, we planned to show this part in the form of AI-generated animation. Here are a few attempts I made.

Problems and Improvements

However, AI-generated animation has some drawbacks:
In terms of emotional expression, AI cannot fully simulate real human emotions.
At the technical level, although AI technology is developing rapidly, motion smoothness remains a problem.

After the group discussion, we set out to create a new way of telling the story of everyday life with Alzheimer’s disease.
We plan to make a video in which conversational sentences from the daily life of an Alzheimer’s patient float in from all directions of the screen, for example: “Mom, have you eaten?”, “I am your daughter!”, “Why did you soil the bed again?”. As each piece of text flies in, it is accompanied by a corresponding audio clip. Finally, the text piles up little by little at the bottom of the screen, as shown in the figure below:

Screenshot from the DASH MV

In this part, we plan to use projections on the wall. The experiencer will stand in the middle, surrounded by words from all directions, while various sounds linger in their ears, allowing them to experience the helplessness and confusion of Alzheimer’s patients.
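To test the motion idea before building the projection, here is a minimal mock-up in Python with pygame (our own sketch, not the final projection software). It covers only the falling-and-stacking part; flying in from all edges would be a straightforward extension. The phrase list is taken from the examples above.

```python
# Minimal pygame mock-up of the Prologue effect: phrases fall in and
# pile up at the bottom of the screen.
import random
import pygame

PHRASES = ["Mom, have you eaten?", "I am your daughter!",
           "Why did you soil the bed again?"]
WIDTH, HEIGHT = 800, 600

pygame.init()
screen = pygame.display.set_mode((WIDTH, HEIGHT))
font = pygame.font.SysFont(None, 28)
clock = pygame.time.Clock()

falling, landed = [], []
pile_height = 0  # how high the stacked text has grown

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    if random.random() < 0.05:  # occasionally launch a new phrase
        surf = font.render(random.choice(PHRASES), True, (230, 230, 230))
        falling.append([surf, random.randint(0, WIDTH - surf.get_width()),
                        -surf.get_height()])
    for item in falling[:]:
        item[2] += 4  # fall speed in pixels per frame
        floor = HEIGHT - pile_height - item[0].get_height()
        if item[2] >= floor:  # landed: freeze it and raise the pile
            item[2] = floor
            falling.remove(item)
            landed.append(item)
            pile_height += 8  # crude stacking model
    screen.fill((0, 0, 0))
    for surf, x, y in falling + landed:
        screen.blit(surf, (x, y))
    pygame.display.flip()
    clock.tick(60)
pygame.quit()
```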

AI Generation

Blur (Xinxuan / Yixuan)


Xinxuan

Initial Idea:
This part shows the character’s memories of people becoming confused, to the point where she cannot tell who the person in front of her is. We propose to collect real portrait images from open-source websites [7], train a model on them with AI, and then continuously generate similar faces to simulate the confused memories in the character’s mind.

Practices
In the beginning stage, I started by collecting 30 portraits of people of different ages and giving them a uniform white background.

Next, I trained a model on this database with Runway’s AI and generated similar faces. I then used cut-out video software to create a fusion effect, blurring real people into the converging faces generated by the AI. Finally, the confusion worsens, growing from difficulty recognizing one person to confusion among many people.
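As a standalone approximation of the compositing step (the Runway face generation itself is separate, and the file names below are placeholders), here is a short Python/OpenCV sketch that cross-fades one face into another while the blur increases:

```python
# Sketch of the face-fusion idea: cross-fade between two portraits
# while blurring, approximating memory confusion. Runway handles the
# face generation; this only mocks up the blending step.
import cv2
import numpy as np

face_a = cv2.imread("face_real.jpg")       # placeholder path
face_b = cv2.imread("face_generated.jpg")  # placeholder path
face_b = cv2.resize(face_b, (face_a.shape[1], face_a.shape[0]))

frames = []
for t in np.linspace(0.0, 1.0, 60):        # 60-frame transition
    mix = cv2.addWeighted(face_a, 1.0 - t, face_b, t, 0)
    k = 1 + 2 * int(t * 10)                # odd blur kernel grows with t
    frames.append(cv2.GaussianBlur(mix, (k, k), 0))

for f in frames:                           # quick preview
    cv2.imshow("fusion", f)
    if cv2.waitKey(33) == 27:              # Esc to quit
        break
cv2.destroyAllWindows()
```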

Problems and Improvements

  1. Take a broader, more global view of the disease, e.g. include more ethnicities.
  2. Make the image effects more abstract.

Yixuan

Initial Idea

In this section I selected close-up images of family scenes from different countries and processed the objects in the scenes with visual effects such as blurring in TouchDesigner, with the aim of simulating what patients experience when searching for objects they can no longer clearly remember.

Practices

In the beginning stage, I focused on the patient’s gradually blurring memory of everyday life, especially everyday objects. I used TouchDesigner to create fluid and mosaic effects on old everyday objects to simulate the process of memories fading. I chose four representative objects associated with the elderly: a teapot, an old telephone, a loudspeaker, and a pair of old glasses.
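As a quick standalone approximation of the mosaic effect (the real effect is built in TouchDesigner; this is only a test of the idea, and the image path is a placeholder): downsample the object image and scale it back up, with the block size controlling how degraded the memory looks.

```python
# Standalone approximation of the TouchDesigner mosaic effect:
# downscale then upscale with nearest-neighbour interpolation, so a
# larger block size reads as a more degraded memory.
import cv2

def mosaic(img, block=16):
    h, w = img.shape[:2]
    small = cv2.resize(img, (max(1, w // block), max(1, h // block)),
                       interpolation=cv2.INTER_LINEAR)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

teapot = cv2.imread("teapot.jpg")          # placeholder path
for block in (4, 8, 16, 32):               # memory degrading in steps
    cv2.imshow(f"mosaic {block}", mosaic(teapot, block))
cv2.waitKey(0)
cv2.destroyAllWindows()
```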

Problems and Improvements

It was suggested that we look at the disease from a broader, more global perspective, for example by including footage of Eastern families and objects, and that I should not focus on a single object in isolation, since it would lose its connection with the condition and our subject matter. After the group discussion, we set out to demonstrate audio-visual interaction between the audience and the camera by modelling a physical scene with props.

Firstly, I built a small home scene in the Rhinoceros modelling software and added a little colour to the chair, book, hat, and plant respectively, to make it easier for the patient to find the items.

Next, I switched the camera to a single object and used TouchDesigner to generate dot, line, blur, and distortion motion effects to help the experiencer feel the emotional ups and downs of searching for an object.

Interaction 1: Give me the hat! Where’s the hat?

 

Interaction 2: Could you pass me the yellow book, please?

 

Interaction 3: Have you seen the white desktop bins?

 

Interaction 4: Pick me a yellow petal, please. Thank you!

Finally, I edited the four video files into one in Premiere Pro and added screen motion effects (warping, distortion, cross-cutting, etc.) to create the feeling of scrambling to find items and increase the immersion of the experience.

Fade (Han)

Initial Idea

Compared to other sections, this part was designed in the fourth week after considering Jules’s feedback on the project, primarily based on the following two suggestions:

Complex interaction methods like Leap Motion are unnecessary; we can opt for a standard webcam and use Max’s Vizzie for the interaction design.
Incorporating bystander perspectives is essential: Alzheimer’s disease significantly impacts the people surrounding the patient, and this aspect is indispensable.

Therefore, we plan to place visitors in the perspective of the patient’s relatives and friends calling out to the patient, while the images presented on the screen simulate how the patient’s memory of their own face changes on hearing them speak.

Technically, we first capture the participant’s photo with a camera. Voices are then picked up via the microphone, and the sound data is sent to a Max patcher, where it drives the visual processing. The visual change depends on the voice level: when visitors call out loudly, the image becomes relatively clear; as the sound diminishes, the image blurs; and if the calling voice dies away, the screen gradually fades to blank.
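As a rough stand-in for the Max patcher logic (the actual build uses Max/Vizzie; the thresholds and gains below are guesses to be tuned, and the photo path is a placeholder for the webcam capture), a Python sketch that maps microphone RMS level to blur and fade on the captured photo:

```python
# Stand-in for the Max patcher logic: mic RMS level controls blur and
# fade on a captured photo. Loud voice -> clear image; silence -> the
# image blurs and fades away. Thresholds are rough guesses.
import cv2
import numpy as np
import sounddevice as sd

photo = cv2.imread("visitor.jpg")          # placeholder: webcam capture
level = 0.0                                # smoothed RMS level

def audio_cb(indata, frames, time, status):
    global level
    rms = float(np.sqrt(np.mean(indata ** 2)))
    level = 0.9 * level + 0.1 * rms        # simple smoothing

with sd.InputStream(channels=1, callback=audio_cb):
    while True:
        loud = min(1.0, level * 20)        # normalize; gain is a guess
        k = 1 + 2 * int((1 - loud) * 20)   # quieter -> bigger blur kernel
        frame = cv2.GaussianBlur(photo, (k, k), 0)
        frame = (frame * loud).astype(np.uint8)  # fade toward black
        cv2.imshow("fade", frame)
        if cv2.waitKey(30) == 27:          # Esc to quit
            break
cv2.destroyAllWindows()
```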

Practices

Screen recording of testing the video effects in Max. The audio sample in this test demo was generated by Speechify [9].

Problems and Improvements

At present, we have only experimented with the feasibility of image processing in Max. However, the detailed adjustment of picture processing effects still needs to be completed, and mapping the modulation parameters of visual effects to sound data will require extensive adjustments.

Vanish (Xinxuan)

Initial Idea

This section is, at heart, a dialogue between the character, her sense of self, and her life; it touches on existence itself. We want to convey the message that perhaps memories will fade away with time, but love is eternal. Therefore, we scan a scene of an old man sitting on a chair and import the model into TouchDesigner for particle effects. We will then design a hand-waving gesture interaction using Leap Motion or Kinect: when a visitor walks in front of the screen and waves, the particles on screen dissipate with the gesture and a closing sentence appears.
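Conceptually, the dissipation amounts to points from the scan drifting along a wind vector while their opacity decays. A minimal numpy sketch of that model (the real version lives in TouchDesigner; random points stand in for the scanned vertices):

```python
# Conceptual model of the Vanish particle effect, outside TouchDesigner:
# points drift along a wind vector while their alpha decays.
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(-1, 1, size=(5000, 3))   # stand-in for scan vertices
alpha = np.ones(len(points))                  # per-particle opacity
wind = np.array([0.02, 0.005, 0.0])           # wind displacement per frame

for frame in range(120):                      # ~4 s at 30 fps
    jitter = rng.normal(0, 0.002, size=points.shape)
    points += wind + jitter                   # drift plus turbulence
    alpha *= 0.97                             # exponential fade-out
    visible = int((alpha > 0.05).sum())
    if frame % 30 == 0:
        print(f"frame {frame}: {visible} particles still visible")
```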

Practices

First, I tried a mobile app called Scaniverse to scan the scene, but found that the model accuracy was limited. Then, due to limited time, I switched to using pictures to attempt the more difficult particle-dissipation effect in TouchDesigner.

Problems and Improvements

  1. Remove unnecessary hand interactions.
  2. The last part could be made interactive for the visitor, e.g. allowing the visitor to experience the feeling of dissipation from a first-person point of view.

Audio (Han)

Music

During the initial meeting discussion, the team proposed a concept: the installation would be divided into four areas, and as visitors navigated between them, they would wear headphones to experience corresponding sound feedback along with the visual effects. However, recognizing the continuity of the forgetting theme, we abandoned this idea and decided to merge the four parts into one whole.

Therefore, there will be a total of four pieces of music in this project. The four pieces will be based on the same motif and share a similar style. They will be distinguished by different instrumentation, and the mood will gradually build to match the narrative of the worsening disease.

Here is a demo illustrating the desired music style, primarily piano sounds. Further refinement will involve adding more instruments and matching the music more closely to the visual style of each part.

Vocal

To evoke participants’ empathy, we have decided to incorporate more human-voice material to improve the storytelling. The treatment of the voice varies by section. In the initial Prologue section, voices are played dry, with only simple adjustments such as volume and panning. In the second and third sections, distortion effects are added, while in the final stage the voices are processed to the point of being completely unrecognizable.
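As an offline experiment with this staged degradation (final processing will be done in Ableton Live and Max; a sine tone stands in for a vocal take so the sketch is self-contained):

```python
# Offline experiment with the staged vocal degradation. A sine tone
# stands in for a voice sample; the real versions will be produced in
# Ableton Live / Max.
import numpy as np

sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
voice = 0.5 * np.sin(2 * np.pi * 220 * t)        # stand-in for a vocal take

dry = voice                                      # stage 1: untouched
distorted = np.tanh(voice * 8) / np.tanh(8)      # stages 2-3: waveshaping
crushed = np.round(voice * 4) / 4                # stage 4: heavy bit-crush
crushed = crushed[::8].repeat(8)[:len(voice)]    # plus sample-rate decimation

for name, sig in [("dry", dry), ("distorted", distorted), ("crushed", crushed)]:
    print(f"{name}: peak={np.abs(sig).max():.2f}, rms={np.sqrt((sig**2).mean()):.3f}")
```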

The audio below presents some of our attempts at applying effects to vocal samples.

Regarding the source of the voice material, we originally planned to find actors whose voices fit the vocal characteristics of Alzheimer’s patients and record the material in a studio. However, due to time constraints and the difficulty of finding suitable people, we decided to use an AI voice model instead to produce the raw vocal material.

Here is an AI-generated voice sample. [9]

Other Sounds

In addition to the music and vocals as the main body, some special sound effects will be added to the generated videos to match the visuals; these appear mainly in the second and fourth stages.

The project will also pick up some live sound. Details are given in the description of the third stage, Fade.

IV. Other Preparations

Materials

  • Projector
  • Speakers
  • Arduino button

Project Mood Board

[8]

Timetable

Week 1: Lecture
Week 2: Brainstorming, Group Discussions
Week 3: In-depth Research
Week 4: Revision and Improvement of Project based on Jules and Philly’s Suggestions
Weeks 5-6: Concurrent Practice and Sound Design for Various Parts of the Project
Week 7: Indoor Testing
Weeks 8-10: Completion of Testing
Weeks 11-12: Modifications
Week 13: Final Exhibition Preparation, including Venue and Equipment Rental
Week 14: Video Shooting and Editing

V. Appendix

[1] A Marriage to Remember | Alzheimer’s Disease Documentary | Op-Docs | The New York Times

[2] Alzheimer’s documentary in China: Please remember me

[3] The Father

[4] ROBOTIC VOICE ACTIVATED WORD KICKING MACHINE

[5] Remember me

[6] DASH

[7] Open-source character materials: portrait picture sources

[8] Image sources:

https://www.pinterest.co.uk/pin/91901648638671628/
https://www.pinterest.co.uk/pin/1618549862356870/
https://www.pinterest.co.uk/pin/1109785533161637939/
https://www.pinterest.co.uk/pin/73465037663450281/
https://www.pinterest.co.uk/pin/391179917643670352/
https://www.pinterest.co.uk/pin/823806956845791472/

[9] Speechify: AI text-to-speech tool

Group Meeting_Project Planning and Collaboration

Meeting Minutes: 05-02-2024

Attendees: 

  • Han Zou
  • Yixuan Yang
  • Xinxuan Zheng
  • Jiayi Sun

Note-Taker: Han Zou

I. Decision Making

Before the meeting, each team member independently researched two potential project topics: Alzheimer’s Disease and Ocean Life (Jellyfish). After reviewing the reference materials individually, the team made a collective decision through a voting process. The consensus was to pursue the idea centred around Alzheimer’s Disease.

Shared Document

To facilitate later discussions, we recorded the web links of the materials gathered during the research in the shared document linked below. Additionally, we provided brief annotations highlighting key points for each material.

Within the document, Xinxuan took the initiative to craft summaries and outline her initial thoughts on the project’s specific implementation. These summaries and thoughts served as focal points for our discussion during the meeting, providing a foundation for further exploration of the chosen project direction.

II. Outline of the Project:

In the meeting, we discussed some relatively detailed practical methods based on the content of the shared document. We decided to present a final installation divided into three main parts, and we wrote the background story of the project as follows.

A woman, once vibrant and full of life, was diagnosed with Alzheimer’s disease at the age of 60. In the initial stage of Alzheimer’s, her grasp on the world slowly started slipping away. Details faded, and the once-familiar surroundings became a puzzle of forgotten images. She struggled to recall where she was, and the clarity of her understanding began to blur. As the disease progressed into its second stage, the people she held closest to her heart became strangers. Even the faces of her dearest family members eluded recognition, and her memories intertwined into a complex and bewildering tapestry. In the third and most challenging stage, her memory changed like wind, unpredictable and fleeting. Moments that once formed the fabric of her identity seemed on the verge of vanishing at any given time.

Part 1: Vague

Concept explanation:

In portraying the early stage of blurred memory in Alzheimer’s patients, the visual part will use a mosaic or fluid effect. The mosaic of scattered fragments reflects the initial stages of cognitive deterioration.

Implementation method:

First, use ChatGPT to refine the protagonist’s character settings from the overview and generate some stories (text). Based on the story, use DALL-E, Midjourney, etc. to generate images that match the story’s background. Then use TouchDesigner to create fluid effects on the text or images and export video files. Finally, add sound design to complete this part and connect everything to the screen for playback.

Feasibility:

There are TouchDesigner tutorial resources available on the Internet to help us achieve fluid or mosaic effects. Moreover, because the main purpose of this part is to tell the story, the text or images are ultimately presented linearly on screen in 2D and the hardware requirements are modest, so feasibility is relatively high.

Part 2: Chaotic

Concept explanation:

Through machine learning, different faces are blended together, and AI is used to generate videos that morph gradually between faces, aiming to portray the disorientation and confusion associated with memory impairment.

Implementation method:

This part of the plan includes an interaction: a person stands in front of the camera and moves left and right to control the position of the video’s time node. The video is projected on the screen, and the faces displayed gradually change. Face material is generated through AI and ML, and AI is then used to generate slow morphing between different faces (about 3-4 pictures are planned). Max or TouchDesigner recognizes the movement through the camera and controls the exact frame displayed in the video, as well as the sound effects.
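A feasibility sketch of this mapping in Python/OpenCV (in the installation, Max or TouchDesigner would do this; the video path below is a placeholder): detect the visitor’s face with a Haar cascade and map its horizontal position to a frame index in the morphing video.

```python
# Feasibility sketch for the Part 2 interaction: the visitor's
# horizontal position in front of the webcam scrubs through the
# face-morphing video.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)
video = cv2.VideoCapture("face_morph.mp4")       # placeholder path
total = int(video.get(cv2.CAP_PROP_FRAME_COUNT))

while True:
    ok, cam_frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(cam_frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces):
        x, y, w, h = faces[0]
        pos = (x + w / 2) / cam_frame.shape[1]   # 0..1 across the camera
        video.set(cv2.CAP_PROP_POS_FRAMES, int(pos * (total - 1)))
        ok, out = video.read()
        if ok:
            cv2.imshow("morph", out)
    if cv2.waitKey(30) == 27:                    # Esc to quit
        break
cam.release(); video.release(); cv2.destroyAllWindows()
```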

Feasibility:

The ML/AI tools for generating faces are available, and one of our team members has production experience in this area. As for equipment, the key device in this part is the camera, which can be an ordinary built-in computer camera. Besides, Vizzie objects can be used in Max for image processing, and TouchDesigner may offer similar functionality.

Part 3: Vanish

Concept explanation:

Through TouchDesigner’s particle effects and audience interaction via waving, a 3D-scanned digital model is depicted dissipating like particles blown away by the wind. This simulation aims to mirror the gradual disappearance of memories in patients’ brains.

Implementation method:

Begin by selecting a model and utilizing a mobile phone (or better equipment if available) to make a full-body 3D scan. Import the resulting 3D model into TouchDesigner to craft particle visual effects. For interaction, incorporate Kinect to recognize audience waving movements, thus controlling the playback of visual effects in TouchDesigner along with pre-made sound effects.
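A simplified sketch of the wave-detection logic (the real build would use the Kinect SDK; here a synthetic stream of hand x-coordinates stands in for Kinect joint data): a wave is registered when the hand’s horizontal direction reverses several times within a short window.

```python
# Simplified wave detection: count direction reversals of the hand's
# x-coordinate within a sliding window. A synthetic oscillating signal
# stands in for Kinect hand-joint data.
from collections import deque
import math

def is_waving(samples, min_reversals=3, min_travel=0.05):
    """Count horizontal direction reversals big enough to rule out jitter."""
    xs = list(samples)
    reversals, last_dir = 0, 0
    for a, b in zip(xs, xs[1:]):
        step = b - a
        if abs(step) < min_travel:
            continue
        direction = 1 if step > 0 else -1
        if last_dir and direction != last_dir:
            reversals += 1
        last_dir = direction
    return reversals >= min_reversals

window = deque(maxlen=30)                         # ~1 s of samples at 30 fps
for frame in range(90):
    hand_x = 0.5 + 0.2 * math.sin(frame * 0.5)    # synthetic waving hand
    window.append(hand_x)
    if len(window) == window.maxlen and is_waving(window):
        print(f"frame {frame}: wave detected -> advance particle playback")
```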

Feasibility:

One of our team members has previously experimented with particle effects on 2D models in TouchDesigner, as well as with 3D scanning on a mobile phone. In terms of hardware, a Kinect can be borrowed from the Music Store.

Note: the presentation of this section is not yet fully confirmed. The current plan is projection or a screen, but if equipment conditions permit, we hope to achieve a holographic-projection effect.

III. Sound / Music

During the initial meeting discussion, the team proposed a concept: as visitors navigate different areas of the installation, they wear headphones to experience corresponding sound feedback along with the visual effects. However, recognizing the continuity of the forgetting theme, our sound designer objected that individual headphone use might fragment the experience. To address this, she suggested a rough alternative: music plays in the background as a bed, with the display room equipped with multiple speakers carrying it, while three small speakers beneath each part’s screen serve as audio objects, playing feedback sounds or interactive effects when visitors reach specific areas. This holistic approach aims to maintain a cohesive experience throughout the evolving narrative.

The Sound Designer has created a demo to illustrate the desired music style, primarily featuring piano sounds. During the meeting, team members suggested expanding on this concept by incorporating different instruments for each part, adding diverse sound elements. Further refinement is planned in subsequent discussions.

In addition, the vocal-chop effect (demo shown below) planned for the audio during last week’s discussion has been temporarily shelved due to the difficulty of obtaining the required audio material: a recording of an Alzheimer’s patient telling a story.

IV. Roles and Responsibilities

In this meeting, we clarified the division of roles and responsibilities.

  • Yixuan Yang: Part 1 and Part 3 visuals and installation
  • Xinxuan Zheng: Part 2 visuals
  • Jiayi Sun: Part 1 and Part 3 visuals and installation
  • Han Zou: sound design
