First, I ordered the videos and photos into an alternating sequence (Chihuahua, Irish Wolfhound, Chihuahua, Irish Wolfhound…), plus a separate sequence for the deaf dog and another for the blind dog.
After many trials, I could say that I mastered the art of creating a deuteranopia effect in Photoshop. Sadly, we decided to add some videos to the mix, which meant switching to Premiere Pro for editing, as Photoshop does not support videos.
The difference is that Photoshop has a filter that helps with converting a photograph to a deuteranopia view, whereas Premiere Pro has none, so I had to create my own.
This meant adjusting the Lumetri colours myself until I could create a yellow and blue scale image; luckily, I found a way to achieve it.
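For anyone who wants to reproduce the effect outside Lumetri, the standard programmatic approach is to multiply each pixel by a published colour-blindness simulation matrix. Below is a minimal NumPy sketch using the Machado et al. (2009) deuteranopia matrix; note it is applied directly to gamma-encoded RGB as an approximation (strictly, the matrix should be applied in linear light):

```python
import numpy as np

# Machado et al. (2009) deuteranopia simulation matrix (severity 1.0).
DEUTERANOPIA = np.array([
    [ 0.367322, 0.860646, -0.227968],
    [ 0.280085, 0.672501,  0.047413],
    [-0.011820, 0.042940,  0.968881],
])

def simulate_deuteranopia(rgb):
    """Apply the simulation matrix to an (H, W, 3) float image in [0, 1]."""
    out = rgb @ DEUTERANOPIA.T
    return np.clip(out, 0.0, 1.0)
```

With this, a pure red pixel comes out as a muddy yellow-brown, which matches the yellow-and-blue palette described above.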
Once I got the colour down, I played with the brightness and darkness of every photo and video to make them as similar as possible. With that done, I moved on to the blind dog files: I blurred the image, added a dark vignette at the centre of the eye to simulate cataracts, and then darkened the image overall.
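That blur, central-vignette, and darken chain can also be sketched programmatically. This is an illustrative reconstruction rather than the actual Premiere settings; the blur sigma, vignette strength, and dimming factor are all assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_cataract(rgb, blur_sigma=6.0, vignette_strength=0.5, dim=0.7):
    """Rough cataract look for an (H, W, 3) float image in [0, 1]."""
    # 1) Blur each channel to mimic the hazy lens.
    out = np.stack([gaussian_filter(rgb[..., c], blur_sigma)
                    for c in range(3)], axis=-1)
    # 2) Dark vignette centred on the frame: the inverse of a normal
    #    vignette, approximating a central opacity in the eye.
    h, w = rgb.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    mask = 1.0 - vignette_strength * np.clip(1.0 - r, 0.0, 1.0)
    out *= mask[..., None]
    # 3) Overall darkening.
    return np.clip(out * dim, 0.0, 1.0)
```

The centre of the frame ends up noticeably darker than the edges, the opposite of a conventional vignette.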
In the following images, you can see the change that the images underwent:
After testing the videos with the phone and the VR headset, I realised we needed to make some changes, as the experience was dizzying and nausea-inducing. To fix this, we took a screenshot of the VR headset's built-in example experience and used it as a reference to scale our own images.
After adjusting the images, the videos were ready to go.
P.S. I want to talk specifically about how the plane sound was made. It started as pure Brownian noise generated in Audacity, which was then processed with ReaPitch in Reaper (first 15 seconds pitched up, latter 15 seconds pitched down, via an automation envelope); the volume was automated the same way. The other plug-ins used are listed below for reference.
Plug-ins for a plane flying overhead
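For reference, the Brownian-noise-plus-automation recipe can be roughed out in NumPy. This is only a sketch of the idea, not the actual Audacity/Reaper chain: the rate and level ranges are illustrative, and the variable-speed resampling is a crude stand-in for ReaPitch:

```python
import numpy as np

SR = 44100
N = SR * 30                      # 30 s: 15 s rising, 15 s falling

# Brownian (red) noise: a running sum of white noise, then normalised.
rng = np.random.default_rng(0)
brown = np.cumsum(rng.standard_normal(N))
brown /= np.max(np.abs(brown))

# Volume automation: ramp up over the first half, down over the second.
env = np.concatenate([np.linspace(0.2, 1.0, N // 2),
                      np.linspace(1.0, 0.2, N - N // 2)])

# Crude pitch automation via variable-speed resampling: the playback
# rate rises then falls, so the perceived pitch does too.
rate = np.concatenate([np.linspace(0.8, 1.2, N // 2),
                       np.linspace(1.2, 0.8, N - N // 2)])
positions = np.cumsum(rate)
positions = positions[positions < N - 1]
plane = np.interp(positions, np.arange(N), brown * env)
```

Unlike ReaPitch, varispeed changes duration along with pitch, but for a flyover that rise-and-fall character is what sells the effect.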
Acknowledgement:
The dog bark and camera shutter sounds come from open sound resource websites, which are listed below:
The sound of the street people talking comes from my friend Yu
The sound of the forest comes from my friend Shi
The sound of the bird’s wings comes from my friend Xue
All sounds are used here with permission.
Carly:
In my case, I didn’t source elements outside of the videos and photos we took. What I did do, however, was research how to create a deuteranopia effect in Premiere Pro, as I already knew how to do it in Photoshop, where there is a filter; in Premiere Pro it was a whole different deal.
The places where I obtained useful information are the following:
How To Use The Hue Saturation Curves In Premiere Pro CC 2023: https://youtu.be/9i47z_3JmMQ?si=Yah5CRqqrFrRiCTj
Design for color blind users: https://blog.eppz.eu/design-color-blind-users/
Simulate Color-Blindness in Final Rendered Project: https://community.adobe.com/t5/premiere-pro-discussions/simulate-color-blindness-in-final-rendered-project/m-p/12136770
How to Simulate Color Blindness in a Video?: https://www.reddit.com/r/premiere/comments/oqhv4r/how_to_simulate_color_blindness_in_a_video/?rdt=48169
The Lumetri colour controls ended up as this result:
Experiencers head to the spots near the brink of the hill — not quite falling off, but close enough for a great view of the city and Arthur’s Seat. They plop down on the blanket, put on some cardboard VR headsets, and dive into the world as seen and heard by an Irish Wolfhound and a Chihuahua — one tall, one tiny, both dramatic. Next, they move a bit closer to the hill’s edge — still safe — and can choose to sit down again or stay standing, depending on how brave or cold they’re feeling. Then, it’s headset time again, this time to experience the world like a slightly confused Labrador: vision a bit blurry, hearing a bit off (or on), but the vibes? We think they match.
⭐️Blanket (For experiencers to sit/ kneel down to fully be engaged in the dogs’ perspectives)
⭐️Dog chew toy (For immersive decoration)
⭐️Dog treat ball (Why we chose the blue one: blue and yellow are the colours that dogs see most clearly.)
⭐️Laminated cue boards
3. Presentation Purpose
In this experience, people will see the world through the eyes (and ears) of three very different breeds of dogs. Ever wondered how a tiny Chihuahua, a giant Irish Wolfhound, or a slightly confused Labrador makes sense of things? Well, now’s your chance.
4. How Are Human Beings Connected to the Dogs?
5. Lovely Moment—Warm Group Photo
Zixuan:
Today, we officially held our immersive exhibition on Calton Hill. The event went smoothly overall, and we received many valuable pieces of feedback that gave us a lot to reflect on.
This exhibition was divided into two main sections:
The first section focused on the auditory differences between large and small dogs. We set this part up on the grass, where we laid out a picnic blanket so participants could choose to sit or lie down—bringing their physical perspective closer to that of a dog. To enhance immersion, we also placed real props—the same balls and dog toys shown in the video—on the lawn. This allowed participants to see the objects from both a dog’s perspective (in VR) and their own, deepening the contrast between the two and reinforcing the immersive effect.
The second section took place near the edge of the hill, where participants had a clear view of Arthur’s Seat and the surrounding landscape. The scenery was stunning, and we found that the natural beauty of the environment made the contrast with the “sensory impairment” content (blind/deaf dogs) even more striking and poignant.
To maintain immersion, we scheduled the presentation at nearly the same time of day as when we shot our footage. This ensured that the lighting and sun angle matched, providing a consistent and believable visual experience.
One fun and unexpected moment during the exhibition was when a dog that happened to be playing on the hill ran off with one of our toy props—a treat ball from the setup. It seemed to really enjoy it, and the moment brought a smile to everyone’s face, making the experience feel even more alive.
🎧 Feedback & Reflections
After the participants finished the experience, we had short conversations with many of them to gather feedback. Based on their responses, I reflected on a few aspects that could be improved:
VR Headset and Accessibility
Some participants had vision issues, such as nearsightedness or astigmatism, which made it uncomfortable for them to use the cardboard VR headset. Those with astigmatism saw double images, while nearsighted users reported dizziness. This made me realize that we hadn’t fully considered the diversity of user needs during the design phase, and future iterations should take this into account.
Video Transition Breaks Immersion
In the second part of the experience, participants had to manually switch videos between two segments. This broke the sense of immersion for a few people. I’m considering changing it to automatically play both videos in sequence, which would create a smoother, uninterrupted experience.
Headphone Isolation Issue
Our headphones weren’t effective in blocking out ambient noise. Some of the more delicate audio details we designed were lost in the background noise. In the future, we may need to use more closed or noise-cancelling headphones to preserve the full quality of the soundscape.
Insightful Feedback from Professor Jules
Professor Jules offered some thoughtful suggestions after experiencing the work. He recommended experimenting with bone conduction audio to simulate a more realistic, first-person auditory experience. He also noted that the sense of depth in the hearing experience—especially between large and small dogs—could be more pronounced. This was a detail I hadn’t fully considered before, and I plan to improve this in the next version of the sound mix.
Carly:
On the 2nd of April, we presented our project to the world at the top of Calton Hill. We met 2 hours before the presentation so we would have enough time to get up there, reserve our specific spot, and set everything up. When 16.00 arrived, I went to the top of the stairs to meet Jules and Andrew. Jules was the first to experience our presentation, while Andrew checked the other group’s. When they finished, they swapped, and Andrew experienced our presentation.
We got some great feedback from our professors, classmates, colleagues and friends. Some said they felt like a dog, some would have loved more movement, some said that it was really interesting that the sun placement was the same as the one we were showing them…
Before our final presentation, we assembled the VR headset in advance, and I drafted part of a short written introduction for our project. The aim was to help the audience quickly understand the core concept and structure of our work before engaging with the immersive experience.
In the introduction, we first explained the main theme of the project: an immersive auditory experience from a dog’s perspective. We then guided the audience through the two main sections of the piece:
The first part explores the auditory differences between large and small dogs, comparing how they perceive spatial sound and frequency sensitivity within the same environment, highlighting how body size can affect hearing.
The second part focuses on the auditory perception differences between blind and deaf dogs, using specially designed soundscapes to simulate how sensory loss might lead to compensation or disorientation in their hearing experience.
After I finished the first draft of the text, Carly edited and refined the wording, selected appropriate images, and arranged the content into a visually coherent layout. Finally, we printed and laminated the introduction as a physical handout, which was displayed next to our setup during the presentation for visitors to read before engaging with the work.
Carly:
I started from Zixuan’s draft and curated the rest of the text. With the approval of both Zixuan and Ruiqi, I selected the images and created the presentation in Canva. I printed it at my accommodation, and we decided to laminate it to make it more durable and weather-resistant; it helped that laminating is available for free at the library.
After laminating, we went to Calton Hill with the VR Headset, headphones, and dog toys to test if it worked. Glad to say it did.
In the “large dog vs small dog” section of the video, since the camera perspective switches back and forth between the two dogs, I applied a similar approach in the sound design. Specifically, I alternated between the two pre-recorded and processed environmental ambience tracks, matching the shift in perspective. This allows the audience to clearly perceive the difference in spatial hearing between the two dogs.
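The alternation could be sketched as follows. The equal-power crossfade, cut times, and fade length here are my own illustrative assumptions; the real edit was done in the DAW:

```python
import numpy as np

def alternate_ambiences(track_a, track_b, cut_points, sr, fade=0.05):
    """Switch between two ambience tracks at the listed cut times (seconds),
    with a short equal-power crossfade at each perspective switch."""
    n = min(len(track_a), len(track_b))
    out = track_a[:n].copy()
    use_b = False
    for t_cut in cut_points:
        i = int(t_cut * sr)
        use_b = not use_b
        src = track_b if use_b else track_a
        # Equal-power crossfade into the newly active track.
        f = min(int(fade * sr), n - i)
        theta = np.linspace(0, np.pi / 2, f)
        out[i:i + f] = out[i:i + f] * np.cos(theta) + src[i:i + f] * np.sin(theta)
        out[i + f:] = src[i + f:n]
    return out
```

Each cut toggles which dog's processed ambience the audience hears, mirroring the camera perspective change.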
Beyond the environmental ambience, my main focus during the sound design process was adjusting all sound elements except for the dogs’ own vocalisations, especially the EQ and tonal treatment of sound effects and human speech.
From a dog’s perspective, language is not fully comprehensible—what they pick up on are mainly tones, short commands, and key phrases. So, I used AI to generate a segment of human dialogue. I preserved the parts that sounded like clear commands or recognisable short phrases, while processing the rest to obscure the words. The result is a voice that maintains intonation and emotional tone, but becomes unintelligible, simulating how a dog might hear someone speaking without understanding the language.
Additionally, I made a clear distinction between the owner’s voice and the voices of other people in the environment. In a dog’s world, the owner’s voice holds unique emotional weight and should sound different from everyone else.
For the owner’s voice, I used a combination of Phat FX and Step FX. This blend created a sound that is partially unintelligible yet emotionally expressive, preserving the rhythm and tone without full clarity. It contrasts with the later segments where commands are delivered unprocessed, helping to distinguish the emotional impact of meaningful phrases.
For ambient crowd voices and general human chatter, I applied only Phat FX. This gives the sound a more distorted, less emotionally direct quality, where the language becomes vague and the tone more abstract, creating a sonic contrast to the owner’s voice.
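Since Phat FX and Step FX are DAW plug-ins, here is a rough Python stand-in for the general idea: a steep low-pass keeps the pitch contour (intonation and emotion) while removing the consonant detail that makes words intelligible. The 400 Hz cutoff is an illustrative assumption, not the actual plug-in setting:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def muffle_speech(x, sr, cutoff=400.0):
    """Low-pass speech so the melody of the voice survives but words do not.

    Energy below roughly 500 Hz carries intonation and emotional tone,
    while the detail that makes words intelligible sits higher up.
    """
    sos = butter(4, cutoff, btype="low", fs=sr, output="sos")
    return sosfiltfilt(sos, x)  # zero-phase filtering, no timing smear
```

A heavier version of this (for the crowd) versus a lighter one (for the owner) would give the same contrast described above.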
Finally, I adjusted the EQ of all non-dog-originated sounds (environment, effects, and speech) based on the dog’s size and presumed hearing characteristics:
For larger dogs, I boosted low frequencies and reduced highs, creating a broader, fuller sense of hearing.
For smaller dogs, like Chihuahuas, I enhanced the high frequencies and cut some lows, narrowing the sound field to make it sharper and more focused.
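Those two EQ moves can be sketched as a simple crossover-and-remix. The crossover frequency and band gains below are illustrative assumptions, not the actual mix settings:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def tilt_for_dog(x, sr, size="large"):
    """Tilt the spectrum toward lows (large dog) or highs (small dog)."""
    # Split the signal at an assumed 800 Hz crossover.
    sos_lo = butter(2, 800.0, btype="low", fs=sr, output="sos")
    lows = sosfilt(sos_lo, x)
    highs = x - lows
    if size == "large":
        return 1.4 * lows + 0.7 * highs   # fuller, darker hearing
    return 0.7 * lows + 1.4 * highs       # sharper, brighter hearing
```

For the Wolfhound the low band is lifted and the highs pulled back; for the Chihuahua the weighting is reversed.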
Through all of these audio decisions, my goal was to ensure that the audience not only sees the world through each dog’s eyes but also hears the world as each dog might—highlighting how size, focus, and emotional connection shape the canine listening experience.
Today, we recorded environmental sound using the Sennheiser AMBEO VR microphone. We captured three separate recordings at the same location, with the only difference being the recording height, to simulate how dogs of different sizes perceive their surroundings.
Since the Chihuahua is very small, we couldn’t find a mic stand low enough to match its ear level. So we rested the microphone directly on the mic stand at a low angle to approximate its actual height.
For the Labrador and the Irish Wolfhound, we recorded at approximately 60 cm and 120 cm from the ground, respectively, to match their standing ear positions.
After recording, I processed the environmental sound recordings for the large dog and small dog perspectives, making adjustments based on their body size and likely auditory characteristics.
Larger dogs (such as the Irish Wolfhound) have larger body sizes and ear membrane areas, which make them generally more sensitive to low frequencies and less responsive to high frequencies. So, in post-processing, I boosted the low frequencies and slightly reduced the highs while also widening the stereo image to create a broader, fuller auditory space.
Smaller dogs (like the Chihuahua) are typically more sensitive to high frequencies but less responsive to lows. Therefore, I enhanced the high frequencies, reduced some of the lows, and narrowed the overall sound field to create a more focused, sharper listening perspective.
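The stereo widening and narrowing mentioned above is typically done with mid/side processing; here is a minimal sketch (the specific width values used in the actual mix are not shown here):

```python
import numpy as np

def set_stereo_width(stereo, width):
    """Mid/side width control for an (N, 2) stereo array.

    width > 1 widens the image (large dog), width < 1 narrows it
    (small dog); width == 0 collapses the signal to mono.
    """
    mid = 0.5 * (stereo[:, 0] + stereo[:, 1])
    side = 0.5 * (stereo[:, 0] - stereo[:, 1]) * width
    return np.stack([mid + side, mid - side], axis=-1)
```

With the recordings described here, the Wolfhound perspective would use a width above 1 and the Chihuahua a width below 1.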
With these adjustments, we aim to authentically simulate how dogs of different sizes hear the world, enhancing the immersive quality of the experience and reinforcing the concept of “listening from a dog’s perspective.”
Ruiqi:
That was what I did on the atmos for the Labrador. Dogs typically hear frequencies from 40 Hz to 45,000 Hz, far exceeding human hearing (20 Hz–20,000 Hz). They are most sensitive to higher frequencies (2 kHz–45 kHz), which are critical for detecting sounds like prey movements, high-pitched whistles, and commands. I think amplifying around 4 kHz can make sounds like human footsteps and verbal commands more perceptible.
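One way to realise that 4 kHz lift is a standard peaking EQ (an RBJ-cookbook biquad); the gain and Q below are illustrative assumptions, not the values used in the mix:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0=4000.0, gain_db=6.0, q=1.0):
    """Boost a band around f0 with an RBJ-cookbook peaking biquad."""
    amp = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * amp, -2 * np.cos(w0), 1 - alpha * amp])
    a = np.array([1 + alpha / amp, -2 * np.cos(w0), 1 - alpha / amp])
    return lfilter(b / a[0], a / a[0], x)
```

A 6 dB boost roughly doubles the amplitude of content at 4 kHz while leaving frequencies far from the peak almost untouched.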
Carly:
When recording the ambience, I made sure to bring a measuring tape so we could set the correct heights. At that point, the plan was still to use the photos taken with the camera rather than the phone, which is the other reason we had the measuring tape: to frame the shots from the correct height too.
Today we tested how our images and video content perform in a VR environment. Ruiqi and I went to the library and picked up a set of free VR headsets, then began a full round of testing. It was our first time viewing the project content inside VR, and while we ran into a few issues, we also gained some very helpful insights.
We started by testing the video Carly had created. Right away, we noticed a major issue: there was a thick black border surrounding the video in VR, which seriously disrupted the sense of immersion. It felt like we were watching the content through a “window” instead of being inside the scene.
To solve this, we tried enlarging the image to remove the black edges. While this did fill the screen, it introduced a new problem: the content became blurry and hard to focus on, and there was noticeable ghosting and double vision. It made the experience uncomfortable to watch.
So, I decided to open the VR headset’s built-in testing app to study what properly formatted images for VR should look like. As expected, there were clear standards for image proportions and layout. I sent one of the reference images to Carly, and together we adjusted our content based on that template. It worked—the focus issue was completely resolved, and the visuals looked much more natural and immersive.
We also tried adding some explanatory text about our project during the black screen sections, but in VR it was impossible to view the full text properly, so we eventually decided to abandon that idea.
In the final stage, I added a small emotional touch to the video: every time a dog hears a positive word from its owner, I subtly increased the brightness of the screen to represent the dog’s happiness and excitement. This gentle lighting shift adds emotional depth without distracting from the experience.
After final testing, everything ran smoothly, and the VR playback now works perfectly. It feels like a huge step forward, and we’re excited to let others try it—to finally experience what the world might look and sound like from a dog’s point of view!
Aha! We used the Play-Doh to make three dog heads in different sizes.
Just look at how much Play-Doh’s been consumed (And that’s just a start)
We’ve recorded what dogs may notice or do in a binaural format, including human footsteps, dog footsteps, a toy ball rolling (from various perspectives), the toy ball being squeezed, dog sniffing (actually Ruiqi…), collar shaking, and so on.
Zixuan:
Today, Ruiqi and I went to the studio to record sounds using our dog head microphone setup. Since it’s just before our presentation, it was really difficult to book a recording space—but luckily, we managed to find an available slot and got in!
Our main recording equipment was a pair of AKG C414 XLS microphones, chosen for their excellent sensitivity and clarity, perfect for capturing the subtle environmental sounds we need for this simulation project. To make the recordings feel as close as possible to a dog’s hearing experience, we tried to replicate the physical characteristics of a dog’s head and ear position as accurately as possible.
One challenge we faced was with the Chihuahua model. It’s such a small dog with a very low shoulder height, and we couldn’t find a regular mic stand that worked at that level. In the end, we placed the Chihuahua model on a flat trolley, which turned out to be the perfect height, around 15 cm, very close to a real Chihuahua’s ear position.
Another issue was that the dog heads couldn’t be mounted directly onto a mic stand. So, we borrowed a speaker stand tray from the music store and used it to support the dog head models. This worked really well, keeping everything stable and secure during the recording.
We recorded at different heights according to the breeds:
– Chihuahua: about 15 cm
– Labrador: about 60 cm
– Irish Wolfhound: about 120 cm
These heights correspond roughly to each dog’s natural ear position when standing, helping us better simulate spatial hearing differences between breeds.
One problem with the studio environment was the flooring. Our video scenes are set on grass, but the studio had a carpeted floor. To recreate the sound of footsteps on grass, we improvised: we layered a sheet of hard plastic underneath a sheet of soft plastic, then placed both under the carpet. The result was surprisingly convincing—when stepped on, the layered surface produced a sound quite similar to walking on grass.
As for sound content, we followed the details from our storyboard and recorded specific elements, including:
– Human footsteps (to simulate off-screen presence)
– Dog footsteps (running, turning, stepping on grass)
– Dog tag jingling sounds
– Toy ball rolling sounds
– Toy ball being squeezed or bitten
Originally, we had also hoped to record background crowd noise and bird sounds to enrich the ambient layers. However, since the dog head models are fragile and not very portable, we decided to skip outdoor recording for now.
All in all, the recording session was really productive. Despite some limitations in space and materials, we managed to recreate the environment and capture the sounds we needed. Once the recordings are sorted, we’ll move on to editing and mixing. I can’t wait to hear how the world sounds from inside a dog’s head!