First, I ordered the videos and photos into an alternating sequence of Chihuahua, Irish Wolfhound, Chihuahua, Irish Wolfhound… plus a separate sequence for just the deaf dog and another for the blind dog.
After many trials, I can say that I mastered the art of creating a deuteranopia effect in Photoshop. Sadly, we decided to add some videos to the mix, which meant switching to Premiere Pro for editing, as Photoshop does not support videos.
The difference between the two is that Photoshop has a filter that helps with converting a photograph to deuteranopia, while Premiere Pro has none, so I had to create my own.
This meant adjusting the Lumetri colours myself until I could create a yellow and blue scale image; luckily, I found a way to achieve it.
Once I had the colour down, I played with the brightness and darkness of every photo and video to make them as similar as possible. Once that was done, I moved on to the blind-dog files: I started by blurring the image, then added a dark vignette in the centre of the eye to simulate cataracts, and finally darkened the image altogether.
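For anyone who wants to reproduce that treatment outside of Photoshop, the same three steps can be roughed out in a few lines of code. This is only a sketch of the idea in Python with Pillow and NumPy, not my actual editing steps, and the file names are placeholders:

```python
# Sketch of the blind-dog treatment: blur, central vignette, overall darkening.
# Requires Pillow and NumPy; "blind_dog_frame.jpg" is a placeholder file name.
import numpy as np
from PIL import Image, ImageFilter, ImageEnhance

img = Image.open("blind_dog_frame.jpg").convert("RGB")

# 1. Blur the whole frame to mimic unfocused vision.
img = img.filter(ImageFilter.GaussianBlur(radius=8))

# 2. Build a vignette that is darkest in the CENTRE of the frame (the
#    opposite of a photographic vignette) to suggest a cataract clouding
#    the middle of the eye.
w, h = img.size
yy, xx = np.mgrid[0:h, 0:w]
dist = np.sqrt((xx - w / 2) ** 2 + (yy - h / 2) ** 2)
dist /= dist.max()                 # 0 at the centre, 1 at the corners
mask = 0.45 + 0.55 * dist          # 45% brightness in the middle, full at the edges
arr = np.asarray(img).astype(np.float32) * mask[..., None]

# 3. Darken the image altogether.
out = Image.fromarray(arr.clip(0, 255).astype(np.uint8))
out = ImageEnhance.Brightness(out).enhance(0.75)
out.save("blind_dog_frame_cataract.jpg")
```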
In the following images, you can see the change that the images underwent:
After testing the videos with the phone and the VR headset, I realised we needed to make some changes, as the experience was dizzying and nausea-inducing. To fix this, we took a screenshot of the VR headset's example experience and used it to scale our own images.
After adjusting the images, the videos were ready to go.
P.S. What I want to talk about specifically is how the sound of the plane was made. It started as pure Brownian noise from Audacity, then was processed with ReaPitch in Reaper: the first 15 seconds pitched up and the latter 15 seconds pitched down via an automation envelope, with the volume automated the same way. The other plug-ins used are listed below for reference.
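For reference, here is a rough approximation of that fly-over in Python with NumPy and SciPy. The real sound was made in Audacity and Reaper, so treat this purely as a sketch of the idea; the +4-semitone peak and the file name are placeholder choices of mine:

```python
# Sketch of the plane fly-over: Brownian noise with a pitch bend that rises
# for 15 s and falls for 15 s, plus a matching volume envelope.
import numpy as np
from scipy.io import wavfile

sr = 44100
dur = 30.0
n = int(sr * dur)

# Brownian (red) noise: integrate white noise, then normalise.
noise = np.cumsum(np.random.randn(n))
noise -= noise.mean()
noise /= np.abs(noise).max()

# Approximate the ReaPitch automation with a time-varying playback rate:
# up over the first 15 s, back down over the last 15 s (peak +4 semitones).
t = np.linspace(0, dur, n)
semitones = np.where(t < 15, t / 15 * 4, (30 - t) / 15 * 4)
rate = 2.0 ** (semitones / 12)

# Warp the read position according to the instantaneous playback rate.
read_pos = np.cumsum(rate) / sr
read_pos = read_pos / read_pos[-1] * (dur - 1 / sr)
bent = np.interp(read_pos * sr, np.arange(n), noise)

# Volume envelope: swell in, fade out, like a plane passing overhead.
gain = np.where(t < 15, t / 15, (30 - t) / 15)
wavfile.write("plane_flyover_sketch.wav", sr, (bent * gain * 0.8).astype(np.float32))
```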
Plug-ins for a plane flying overhead
Acknowledgement:
The dog bark and camera shutter sounds come from open sound resource websites, which are listed below:
The sound of the street people talking comes from my friend Yu.
The sound of the forest comes from my friend Shi.
The sound of the bird's wings comes from my friend Xue.
All of the sounds used here are included with permission.
Carly:
In my case, I didn't source elements outside of the videos and photos we took. What I did do, however, was research how to create a deuteranopia effect in Premiere Pro: I already knew how to do it in Photoshop, where there is a filter, but in Premiere Pro it was a whole different deal.
The places where I obtained useful information are the following:
How To Use The Hue Saturation Curves In Premiere Pro CC 2023: https://youtu.be/9i47z_3JmMQ?si=Yah5CRqqrFrRiCTj
Design for color blind users: https://blog.eppz.eu/design-color-blind-users/
Simulate Color-Blindness in Final Rendered Project: https://community.adobe.com/t5/premiere-pro-discussions/simulate-color-blindness-in-final-rendered-project/m-p/12136770
How to Simulate Color Blindness in a Video?: https://www.reddit.com/r/premiere/comments/oqhv4r/how_to_simulate_color_blindness_in_a_video/?rdt=48169
The Lumetri colour controls ended up as this result:
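For comparison, the look I tuned by hand in Lumetri can also be approximated with a single colour matrix. The values below are one commonly circulated linear approximation of deuteranopia (not our actual Premiere settings), and the file names are placeholders:

```python
# Apply a deuteranopia approximation matrix to an image.
import numpy as np
from PIL import Image

# Rows give output R, G, B as mixes of input R, G, B; these coefficients
# come from widely shared colour-blindness simulation tables.
DEUTERANOPIA = np.array([
    [0.625, 0.375, 0.0],
    [0.70,  0.30,  0.0],
    [0.0,   0.30,  0.70],
])

img = np.asarray(Image.open("frame.jpg").convert("RGB")).astype(np.float32)
sim = img @ DEUTERANOPIA.T   # multiply every pixel's RGB by the matrix
Image.fromarray(sim.clip(0, 255).astype(np.uint8)).save("frame_deuteranopia.jpg")
```

A truly accurate simulation would convert to a linear colour space first; the matrix above operates directly on gamma-encoded RGB, which is part of why hand-tuning in Lumetri was still worthwhile.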
Experiencers head to the spots near the brink of the hill — not quite falling off, but close enough for a great view of the city and Arthur's Seat. They plop down on the blanket, put on some cardboard VR headsets, and dive into the world as seen and heard by an Irish Wolfhound and a Chihuahua — one tall, one tiny, both dramatic. Next, they move a bit closer to the hill's edge — still safe — and can choose to sit down again or stay standing, depending on how brave or cold they're feeling. Then, it's headset time again, this time to experience the world like a slightly confused Labrador: vision a bit blurry, hearing a bit off (or on), but vibes? We think they match.
⭐️Blanket (For experiencers to sit/ kneel down to fully be engaged in the dogs’ perspectives)
⭐️Dog chew toy (For immersive decoration)
⭐️Dog treat ball (we chose the blue one because blue and yellow are the colours that dogs see most clearly)
⭐️Laminated cue boards
3. Presentation Purpose
In this experience, people will see the world through the eyes (and ears) of three very different breeds of dogs. Ever wondered how a tiny Chihuahua, a giant Irish Wolfhound, or a slightly confused Labrador makes sense of things? Well, now’s your chance.
4. How Are Human Beings Connected to the Dogs?
5. Lovely Moment—Warm Group Photo
Zixuan:
Today, we officially held our immersive exhibition on Calton Hill. The event went smoothly overall, and we received many valuable pieces of feedback that gave us a lot to reflect on.
This exhibition was divided into two main sections:
The first section focused on the auditory differences between large and small dogs. We set this part up on the grass, where we laid out a picnic blanket so participants could choose to sit or lie down—bringing their physical perspective closer to that of a dog. To enhance immersion, we also placed real props—the same balls and dog toys shown in the video—on the lawn. This allowed participants to see the objects from both a dog’s perspective (in VR) and their own, deepening the contrast between the two and reinforcing the immersive effect.
The second section took place near the edge of the hill, where participants had a clear view of Arthur’s Seat and the surrounding landscape. The scenery was stunning, and we found that the natural beauty of the environment made the contrast with the “sensory impairment” content (blind/deaf dogs) even more striking and poignant.
To maintain immersion, we scheduled the presentation at nearly the same time of day as when we shot our footage. This ensured that the lighting and sun angle matched, providing a consistent and believable visual experience.
One fun and unexpected moment during the exhibition was when a dog that happened to be playing on the hill ran off with one of our toy props—a treat ball from the setup. It seemed to really enjoy it, and the moment brought a smile to everyone’s face, making the experience feel even more alive.
🎧 Feedback & Reflections
After the participants finished the experience, we had short conversations with many of them to gather feedback. Based on their responses, I reflected on a few aspects that could be improved:
VR Headset and Accessibility: Some participants had vision issues, such as nearsightedness or astigmatism, which made it uncomfortable for them to use the cardboard VR headset. Those with astigmatism saw double images, while nearsighted users reported dizziness. This made me realize that we hadn't fully considered the diversity of user needs during the design phase, and future iterations should take this into account.
Video Transition Breaks Immersion: In the second part of the experience, participants had to manually switch videos between two segments. This broke the sense of immersion for a few people. I'm considering changing it to automatically play both videos in sequence, which would create a smoother, uninterrupted experience.
Headphone Isolation Issue: Our headphones weren't effective in blocking out ambient noise. Some of the more delicate audio details we designed were lost in the background noise. In the future, we may need to use more closed or noise-cancelling headphones to preserve the full quality of the soundscape.
Insightful Feedback from Professor Jules: Professor Jules offered some thoughtful suggestions after experiencing the work. He recommended experimenting with bone conduction audio to simulate a more realistic, first-person auditory experience. He also noted that the sense of depth in the hearing experience, especially between large and small dogs, could be more pronounced. This was a detail I hadn't fully considered before, and I plan to improve this in the next version of the sound mix.
Carly:
On the 2nd of April, we presented our project to the world at the top of Calton Hill. We met two hours before the presentation so we would have enough time to get up there, reserve the specific spot, and set everything up. When 16:00 arrived, I went to the top of the stairs to meet Jules and Andrew. Jules was the first to experience our presentation, while Andrew checked the other group's; when they finished, they swapped, and Andrew experienced ours.
We got some great feedback from our professors, classmates, colleagues and friends. Some said they felt like a dog, some would have loved more movement, some said that it was really interesting that the sun placement was the same as the one we were showing them…
Before our final presentation, we assembled the VR headset in advance, and I drafted part of a short written introduction for our project. The aim was to help the audience quickly understand the core concept and structure of our work before engaging with the immersive experience.
In the introduction, we first explained the main theme of the project: an immersive auditory experience from a dog’s perspective. We then guided the audience through the two main sections of the piece:
The first part explores the auditory differences between large and small dogs, comparing how they perceive spatial sound and frequency sensitivity within the same environment, highlighting how body size can affect hearing.
The second part focuses on the auditory perception differences between blind and deaf dogs, using specially designed soundscapes to simulate how sensory loss might lead to compensation or disorientation in their hearing experience.
After I finished the first draft of the text, Carly edited and refined the wording, selected appropriate images, and arranged the content into a visually coherent layout. Finally, we printed and laminated the introduction as a physical handout, which was displayed next to our setup during the presentation for visitors to read before engaging with the work.
Carly:
I started from Zixuan's draft and curated the rest of the text. With the approval of both Zixuan and Ruiqi, I selected the images and created the presentation in Canva. I printed it at my accommodation, and we decided to laminate it to make it more durable and weather-resistant; it helped that laminating is available for free at the library.
After laminating, we went to Calton Hill with the VR Headset, headphones, and dog toys to test if it worked. Glad to say it did.
In the “large dog vs small dog” section of the video, since the camera perspective switches back and forth between the two dogs, I applied a similar approach in the sound design. Specifically, I alternated between the two pre-recorded and processed environmental ambience tracks, matching the shift in perspective. This allows the audience to clearly perceive the difference in spatial hearing between the two dogs.
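As a concrete illustration of that alternation, here is a minimal sketch in Python with NumPy and SciPy. The real mix was done in the DAW; the file names, cut times, crossfade length, and mono assumption are all placeholders:

```python
# Alternate between two processed ambience beds at the perspective cuts,
# with a short crossfade at each switch. Assumes two mono WAV files of
# the same length and sample rate (hypothetical file names).
import numpy as np
from scipy.io import wavfile

sr, wolfhound = wavfile.read("ambience_wolfhound.wav")
_,  chihuahua = wavfile.read("ambience_chihuahua.wav")
wolfhound = wolfhound.astype(np.float32)
chihuahua = chihuahua.astype(np.float32)

cuts = [0.0, 6.0, 12.0, 18.0, 24.0]   # perspective switches, in seconds
fade = int(0.25 * sr)                 # 250 ms crossfade at each switch

# Build a 0/1 mask that is 1 on the Wolfhound segments, then smooth the
# hard edges into short linear ramps.
mask = np.zeros(len(wolfhound), dtype=np.float32)
for i, start in enumerate(cuts):
    if i % 2 == 0:                    # even segments belong to the Wolfhound
        end = cuts[i + 1] if i + 1 < len(cuts) else len(wolfhound) / sr
        mask[int(start * sr):int(end * sr)] = 1.0
mask = np.convolve(mask, np.ones(fade, dtype=np.float32) / fade, mode="same")

mix = wolfhound * mask + chihuahua * (1.0 - mask)
wavfile.write("ambience_alternating.wav", sr, mix.astype(np.float32))
```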
Beyond the environmental ambience, my main focus during the sound design process was adjusting all sound elements except for the dogs’ own vocalisations, especially the EQ and tonal treatment of sound effects and human speech.
From a dog’s perspective, language is not fully comprehensible—what they pick up on are mainly tones, short commands, and key phrases. So, I used AI to generate a segment of human dialogue. I preserved the parts that sounded like clear commands or recognisable short phrases, while processing the rest to obscure the words. The result is a voice that maintains intonation and emotional tone, but becomes unintelligible, simulating how a dog might hear someone speaking without understanding the language.
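I did that processing with plug-ins rather than code, but the underlying principle can be sketched very simply: low-pass filtering speech at a few hundred hertz keeps the pitch contour and rhythm of the voice while stripping the consonant detail that carries the words. A minimal Python/SciPy version, with a hypothetical file name:

```python
# "Muffle" speech so intonation survives but the words become unintelligible.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

sr, speech = wavfile.read("ai_dialogue.wav")   # hypothetical mono file
speech = speech.astype(np.float32)

# A steep low-pass around 400 Hz preserves the melody and emotional tone
# of the voice but removes most of the intelligibility.
sos = butter(8, 400, btype="low", fs=sr, output="sos")
muffled = sosfiltfilt(sos, speech)

wavfile.write("ai_dialogue_muffled.wav", sr, muffled.astype(np.float32))
```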
Additionally, I made a clear distinction between the owner’s voice and the voices of other people in the environment. In a dog’s world, the owner’s voice holds unique emotional weight and should sound different from everyone else.
For the owner’s voice, I used a combination of Phat FX and Step FX. This blend created a sound that is partially unintelligible yet emotionally expressive, preserving the rhythm and tone without full clarity. It contrasts with the later segments where commands are delivered unprocessed, helping to distinguish the emotional impact of meaningful phrases.
For ambient crowd voices and general human chatter, I applied only Phat FX. This gives the sound a more distorted, less emotionally direct quality, where the language becomes vague and the tone more abstract, creating a sonic contrast to the owner’s voice.
Finally, I adjusted the EQ of all non-dog-originated sounds (environment, effects, and speech) based on the dog's size and presumed hearing characteristics (a rough code sketch follows the list below):
For larger dogs, I boosted low frequencies and reduced highs, creating a broader, fuller sense of hearing.
For smaller dogs, like Chihuahuas, I enhanced the high frequencies and cut some lows, narrowing the sound field to make it sharper and more focused.
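The actual EQ was done in the DAW, but here is a rough sketch of that large-dog/small-dog tilt in Python with SciPy, using a crude two-band split as a stand-in for proper shelving filters. The split frequency, gain values, and file names are placeholder assumptions:

```python
# Tilt the spectrum of an ambience bed toward lows (large dog) or highs
# (small dog) by splitting at ~1 kHz and re-gaining each band.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

def tilt(audio, sr, low_gain_db, high_gain_db, split_hz=1000):
    """Crude two-band 'shelf': split at split_hz, apply a gain to each band."""
    lo = sosfiltfilt(butter(4, split_hz, "low", fs=sr, output="sos"), audio)
    hi = audio - lo                   # zero-phase filter, so this is the remainder
    return lo * 10 ** (low_gain_db / 20) + hi * 10 ** (high_gain_db / 20)

sr, amb = wavfile.read("ambience.wav")  # hypothetical mono file
amb = amb.astype(np.float32)

wolfhound = tilt(amb, sr, low_gain_db=+4, high_gain_db=-6)  # broader, fuller
chihuahua = tilt(amb, sr, low_gain_db=-4, high_gain_db=+5)  # sharper, narrower

wavfile.write("ambience_wolfhound_eq.wav", sr, wolfhound.astype(np.float32))
wavfile.write("ambience_chihuahua_eq.wav", sr, chihuahua.astype(np.float32))
```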
Through all of these audio decisions, my goal was to ensure that the audience not only sees the world through each dog’s eyes but also hears the world as each dog might—highlighting how size, focus, and emotional connection shape the canine listening experience.