
Visualizing Industrial Space Through 3D Scanning and Real-Time Point Cloud Interaction

A Visual Documentation of Spatial Reconstruction Using Leica BLK G1, TouchDesigner, and Audio-Driven Interaction

Ren Yuheng | s2593996

1. Scanning the Paper Factory

1.1 Preparing and Learning Leica G1

Before fieldwork, I spent time learning how to operate the Leica BLK G1 scanner, especially in conjunction with the Field 360 iPad app. This included planning scans, understanding bundle optimisation, using Quick-Link features, and managing live positioning feedback on-site.

1.2 Scanning Strategy Design

The scanning site—a decommissioned paper factory—comprised two interconnected rectangular volumes with varied interior heights and several hidden corners.
To minimise blind spots and ensure comprehensive spatial data capture, I designed a closed-loop scanning path focused along the longer edge of the building.
We decided on 13 scan positions, arranged in a clockwise circuit, each overlapping with at least two neighbours to ensure successful registration.
Choosing scan positions involved balancing structural visibility, minimal obstructions, and alignment feasibility.

1.3 On-Site Execution

On location, I carried out the scanning process with both precision and flexibility.
For each station, I carefully placed the scanner at an optimal height, avoiding glass surfaces, light reflections, and sources of metal interference.
After each scan, the Field 360 app allowed me to visually confirm if links were successful.
If the overlap was too low or bundle errors occurred, I manually removed the failed scan and rescanned the segment.
This hands-on control significantly reduced errors in post-registration.

 

2. Point Cloud Processing and Data Export

2.1 Import and Registration

Following data capture, I transferred all .blk files into Cyclone Register 360, Leica’s dedicated point cloud processing software.
There, each scan was imported as a node in a global project and automatically aligned using visual linking.
Although the software provides semi-automated tools, I manually refined several connections, especially in corners where reflections or low overlap caused slight drift.
I also ensured that each scan had at least three strong links to neighbouring positions, meeting Leica’s recommended bundle criteria.

2.2 Quality Review with Leica Report

The Leica system generates a detailed registration report.
For our factory scans, the bundle had 13 setups and 12 links, with an overall strength of 84% and overlap of 32%.
Though a few links showed slightly high errors (0.04–0.06m), the final result was within acceptable bounds.

2.3 Cleaning and Exporting

Point cloud data, while rich in detail, often includes unwanted geometry: moving objects, noisy surfaces, or repeated elements.
I used Cyclone’s edit tools to segment the environment into logical areas—floors, ceilings, support beams—and removed unnecessary background data.
This cleaning process improved clarity, reduced processing load, and prepared the model for visualisation.
Finally, I exported the processed cloud in .XYZ format, which preserved each point’s XYZ coordinates and colour values (RGB) for later use.
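For later reference, here is a minimal Python sketch of how such an export can be read back downstream; the file name is a placeholder and the X Y Z R G B column order is an assumption that depends on the chosen Cyclone export settings.

```python
import numpy as np

# Read a whitespace-delimited .xyz export: one point per line.
# Column order assumed here: X Y Z R G B (check the Cyclone export dialog).
data = np.loadtxt("factory_cleaned.xyz")   # placeholder file name

positions = data[:, 0:3]                   # metres, in the project coordinate system
colours = data[:, 3:6] / 255.0             # normalise 8-bit RGB to 0-1 for TouchDesigner

print(f"{len(positions)} points, bounds {positions.min(axis=0)} to {positions.max(axis=0)}")
```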

3. Visualisation in TouchDesigner

3.1 Building a Modular Visual System

Given the density of the point cloud, I created two main patches in TouchDesigner:

  • One to manage spatial position data (XYZ)

  • One to control colour and brightness attributes (RGB)

This modular approach allowed for better real-time control and visual efficiency.

3.2 Connecting with Audio Input

Although I did not build the audio system itself (which was handled by teammates in MAX/MSP), I created a responsive bridge between their output and my visuals.
Using the Audio Device In CHOP (audiodevin) in TouchDesigner, I received real-time frequency and amplitude data from a shared channel.
I then mapped frequency bands (bass, mids, treble) to visual parameters:

  • Bass influenced brightness intensity

  • Mids controlled Z-axis jitter

  • Highs added subtle hue shifts

This allowed the point cloud to react to sound in atmospheric ways, flickering gently to ambient tones or pulsing rhythmically to sharper audio triggers.
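A minimal sketch of how such a mapping can live in a CHOP Execute DAT in TouchDesigner is shown below; the operator names ('bands', 'viz_params') and the scaling constants are illustrative assumptions, not the exact values in my patch.

```python
# CHOP Execute DAT attached to a three-channel CHOP named 'bands'
# (bass / mid / high), e.g. the output of an Audio Spectrum + Analyze chain.
# Values are written into 'viz_params', a Constant CHOP that the point-cloud
# network references for brightness, Z jitter, and hue offset.

def onValueChange(channel, sampleIndex, val, prev):
    params = op('viz_params')
    if channel.name == 'bass':
        params.par.value0 = 0.2 + val * 2.0     # brightness intensity
    elif channel.name == 'mid':
        params.par.value1 = val * 0.05          # Z-axis jitter amplitude
    elif channel.name == 'high':
        params.par.value2 = val * 0.1           # subtle hue shift
    return
```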

4. Final Installation & Collaboration

In the final group exhibition, my TouchDesigner system was fully integrated with Jiaming’s projection mapping and ambient lighting setup.
Together, we built an immersive installation that projected the reconstructed factory onto a dark-walled room using high-resolution projection.
The point cloud floated in mid-air, responding to sound through subtle shifts, flickers, and spatial modulation.
Visitors were invited to walk around or sit inside the space, listening to the soundscape while watching the virtual factory respond in real time.

5. Reflection

This project offered me not only a technical learning experience but also a new way to reimagine industrial space through sensory transformation.
I began the journey thinking I would merely replicate a building’s geometry—but I ended up building a spatial interface that interacts with sound and presence.

Some key takeaways:

  • Scanning requires not only precision but creative planning—balancing coverage with speed

  • Visualisation is not about showing data, but communicating spatial feeling

  • Modular thinking in tools like TouchDesigner allows for flexibility across different outputs (installation, screen, or AR)

From Leica hardware to software pipelines, from raw data to immersive media, this process taught me how space can be sensed, shaped, and shared—not only as structure, but as experience.

Written by Ren Yuheng (s2593996)

Submission 2 Resources

Video Link:

https://media.ed.ac.uk/media/DMSP+Vedio/1_nekq0tsr

 

Project Link:

Sound design:

https://uoe-my.sharepoint.com/:f:/g/personal/s2700229_ed_ac_uk/EldIrpZrYT5OhhMMhrSMqXYBBk7K_Zl9ONaM2GPjBbAV2A?e=0nR23N

 

Sound Recording:

https://uoe-my.sharepoint.com/:f:/g/personal/s2747746_ed_ac_uk/EkblH1PcgedFhN0L2hOpGKoBMp0i05aWRks0VtjByzeztQ?e=6PsIj9

 

Visual Design:

https://uoe-my.sharepoint.com/:f:/g/personal/s2593996_ed_ac_uk/Erl5bMLMhixKqrG_2J6h3xoBGi7YToT258sg1UuAlsLAVQ?e=13uQVB

 

https://uoe-my.sharepoint.com/:f:/g/personal/s2658695_ed_ac_uk/Ev2EchL5zURFp9MvBrttJ1oBhUOZJbzwYlGM75gBizTlWA?e=XC1dUo

The Interaction Between Sound and Vision in the Scene – Creating Visuals of Different Eras Through Software Technology

Jiaming Li & Siming Shen

When we first arrived at the paper mill, we were immediately struck by the vastness of the space. Seeing so many pieces of equipment arranged throughout the site, all covered in layers of dust, left a powerful impression on us. It made us start thinking: could we use TouchDesigner to recreate and convey that initial feeling of awe we experienced upon entering the site?

Now Factory

Thus, at the beginning of learning and using TouchDesigner, we clearly defined our goal—to fully integrate visuals and sound through this software in order to express the thoughts and emotions we wanted to convey. To ensure the efficiency of our work, we conducted a field investigation of the paper mill and clarified the direction we wanted to pursue. By combining the imagery captured with a 3D scanner, we ultimately decided to structure the project around three themes: the past, the present, and the future.

Subject: After establishing the three themes of past, present, and future, we began divergent thinking and open-ended creation based on the current state of the paper mill. The present-day mill is a place that has gradually been overlooked as times have moved on. For the past, we wanted to convey the mill's former prosperity through the visual content; the keyword of this theme is prosperity, and its interaction is carried out through relatively bright visual effects.

Interaction with the audio from Max/MSP


Screenshot of the complete project network

In the “Now” section of the project, audio data is used as a driving force to animate a geometric array in real time. This is achieved through the use of several key operators in TouchDesigner, including line, copy, replace, and chopto (CHOP to SOP). These tools work together to translate sound frequencies into visual distortions, allowing the geometry to react dynamically to the characteristics of the audio. The audio input is brought into the system via the audiofilein node, from which a specific channel is selected for analysis. Spectral data is then extracted using the audiospectrum node, and resampled to ensure a smooth and coherent visual output that flows naturally with the rhythm of the sound.
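For readers unfamiliar with that operator chain, the standalone Python sketch below reproduces the core idea outside TouchDesigner: take one frame of audio, extract its magnitude spectrum, and resample it to the number of copies in the geometric array so that each copy receives one displacement value. The file name and copy count are placeholders.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample

rate, audio = wavfile.read("factory_ambience.wav")    # placeholder file
if audio.ndim > 1:
    audio = audio[:, 0]                                # select a single channel

frame = audio[:2048].astype(np.float64)               # one analysis frame
spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
peak = spectrum.max()
if peak > 0:
    spectrum /= peak                                   # normalise to 0-1

num_copies = 120                                       # copies along the line geometry
displacements = resample(spectrum, num_copies)         # smooth profile, one value per copy
```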

Now

This section was developed through a creative collaboration between Li and TuTu. The background of the scene is particularly striking—it consists of a 3D scan of an abandoned paper mill, which adds a strong sense of place and texture. This scanned environment is rendered using the geo2 node along with a constant material, resulting in a visually immersive setting that enhances the overall depth and spatial atmosphere of the piece.

 

Future: Transitioning into the next section, the “Future” segment presents a different yet complementary visual approach, designed entirely by Shen. In this portion of the project, TouchDesigner is used to visualize audio input through a dynamically transforming 3D wireframe sphere. Real-time audio data is continually analyzed and mapped onto a sphere modulated by procedural noise, resulting in organic, flowing deformations that echo the intensity and tempo of the sound.

To deepen the visual complexity and enhance motion continuity, a feedback loop is introduced, which creates delicate motion trails that follow the sphere’s surface dynamics. The final effect is a pulsating and reactive visual form that synchronizes with sound in real time, blending spatial geometry, kinetic movement, and rhythmic flow into a seamless, immersive audiovisual experience. This section not only showcases Shen’s technical skills in real-time processing, but also reflects a strong artistic sensitivity to the relationship between audio and visual expression.
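A small sketch of how this can be wired with a CHOP Execute DAT is given below, assuming an Analyze CHOP named 'level' that outputs overall amplitude, a Noise SOP 'noise1' deforming the sphere, and a Level TOP 'fbLevel' inside the feedback loop; the names and scaling constants are assumptions rather than Shen's exact parameters.

```python
# CHOP Execute DAT sketch: one audio level channel drives both the sphere's
# noise deformation and the length of the feedback trails.

def onValueChange(channel, sampleIndex, val, prev):
    op('noise1').par.amp = 0.1 + val * 1.5                    # louder audio, stronger deformation
    op('fbLevel').par.opacity = min(0.95, 0.85 + val * 0.1)   # louder audio, longer trails
    return
```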

Future

Designing Interactive Sound in Max/MSP

For this project, I wanted to create a sound environment that feels alive — something that responds to the presence and actions of people around it. Instead of just playing back fixed audio, I built a system in Max/MSP that allows sound to shift, react, and evolve based on how the audience interacts with it.

The idea was to make the installation sensitive — to motion, to voice, to touch. I used a combination of tools: a webcam to detect movement, microphones to pick up sound, and an Xbox controller for direct user input. All of these signals get translated into audio changes in real time, either directly in Max or by sending data to Logic Pro for further processing via plugins.

In this blog, I’ll break down how each part of the Max patch works — from motion-controlled volume to microphone-triggered delay effects — and how everything ties together into a responsive, performative sound system.

Motion-Triggered Volume Control with Max/MSP

One of the interactive elements in my sound design setup uses the laptop’s built-in camera to detect motion and map it to volume changes in real-time.

Here’s how it works:
I use the Vizzie GRABBR module to grab the webcam feed, then convert the image into grayscale with jit.rgb2luma. After that, a series of jit.matrix, cv.jit.ravg, and jit.op objects help me calculate the amount of difference between frames — basically, how much motion is happening in the frame.

If there’s a significant amount of movement (like someone walking past or waving), the system treats it as “someone is actively engaging with the installation.” This triggers a volume increase, adding presence and intensity to the sound.

On the left side of the patch, I use jit.3m to extract brightness values and feed them through scale and line objects into a ctlout, which eventually controls the volume either in Logic (via MIDI mapping) or directly in Max.

This approach helps create a responsive environment: when nobody is around, the sound remains quiet or minimal. When someone steps in front of the piece, the sound blooms and becomes more immersive — like the installation is aware of being watched.
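Outside Max, the same frame-differencing idea can be sketched in a few lines of Python with OpenCV; the motion range and smoothing factor below are illustrative assumptions, not the values in the Vizzie/cv.jit patch.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)          # built-in webcam
prev_gray = None
volume = 0.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        motion = np.mean(cv2.absdiff(gray, prev_gray)) / 255.0   # 0-1 amount of motion
        target = np.interp(motion, [0.0, 0.05], [20, 127])       # quiet floor up to full volume
        volume += (target - volume) * 0.1                        # smooth, like a line object
        print(f"motion {motion:.3f} -> volume {volume:.0f}")
    prev_gray = gray
    if cv2.waitKey(1) == 27:                                     # Esc to quit
        break

cap.release()
```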

Microphone-Based Interaction: Controlling Delay with Voice

Another layer of interaction I built into this system is based on live audio input. I used a microphone to track volume (amplitude), and mapped that directly to the wet/dry mix of a delay effect.

The idea is simple: the louder the audience is — whether they clap, speak, or make noise — the more delay they’ll hear in the sound output. This turns the delay effect into something responsive and expressive, encouraging people to interact with their voices.

Technically, I used the peakamp~ object in Max to monitor real-time input levels. The signal is processed through a few math and scaling operations to smooth it out and make sure it’s in a good range (0–60 in my case). This final value is sent via ctlout as a MIDI CC message to Logic Pro, where I mapped it to control the mix knob of a delay plugin using Logic’s Learn mode.

So now, the echo reacts to the room. Quiet? The sound stays dry and clean. But when the space gets loud, delay kicks in and the texture thickens.
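As a rough equivalent outside Max, the sketch below measures the peak input level, smooths it, scales it into the 0–60 range described above, and sends it out as a MIDI CC; the virtual port name and CC number are assumptions, since in the actual setup Logic's Learn mode simply binds whatever ctlout sends.

```python
import numpy as np
import sounddevice as sd
import mido

port = mido.open_output('IAC Driver Bus 1')       # virtual MIDI port name, assumed
smoothed = 0.0

def callback(indata, frames, time, status):
    global smoothed
    peak = float(np.max(np.abs(indata)))           # like peakamp~
    smoothed += (peak - smoothed) * 0.2            # simple one-pole smoothing
    cc_value = int(np.interp(smoothed, [0.0, 0.5], [0, 60]))
    port.send(mido.Message('control_change', control=20, value=cc_value))  # CC 20 assumed

with sd.InputStream(channels=1, blocksize=1024, callback=callback):
    sd.sleep(60_000)                               # listen for one minute
```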

Real-Time FX Control with Xbox Controller

To make the sound feel more tactile and performative, I mapped parts of the Xbox controller to control effect parameters in real time. This gives me a more physical way to interact with the audio — like an expressive instrument.

Specifically:

  • Left Shoulder (lt) controls the reverb mix (Valhalla in Logic).

  • Right Shoulder (rt) controls the PitchMonster dry/wet mix.

  • Left joystick X-axis (lx) is used to pan the sound left and right.

These values are received in Max as controller input, scaled to the 0–127 MIDI CC range using scale, and smoothed with line before being sent to Logic via ctlout. In Logic, I used MIDI Learn to bind each MIDI CC to the corresponding plugin parameter.

The result is a fluid, responsive FX control system. I can use the controller like a mixer: turning up reverb for space, adjusting pitch effects on the fly, and moving sounds around in the stereo field.
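For illustration only, here is a hedged Python sketch of the same controller-to-CC idea using pygame and mido; the axis indices, CC numbers, and port name are assumptions that would need adjusting to match the actual controller driver and the Logic mappings.

```python
import pygame
import mido

pygame.init()
pygame.joystick.init()
pad = pygame.joystick.Joystick(0)
pad.init()
port = mido.open_output('IAC Driver Bus 1')          # virtual MIDI port, assumed

# axis index -> CC number: trigger -> reverb mix, trigger -> pitch mix, stick X -> pan
MAPPING = {4: 21, 5: 22, 0: 23}                      # indices and CCs are assumptions

while True:
    pygame.event.pump()
    for axis, cc in MAPPING.items():
        raw = pad.get_axis(axis)                     # -1.0 .. 1.0
        value = int((raw + 1.0) * 0.5 * 127)         # scale to the 0-127 MIDI CC range
        port.send(mido.Message('control_change', control=cc, value=value))
    pygame.time.wait(30)                             # roughly 33 updates per second
```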

Layered Soundtracks for Time-Based Narrative

To support the conceptual framing of past, present, and future, I created three long ambient soundtracks that loop in the background — one for each time layer. These tracks serve as the atmospheric foundation of the piece.

In Max, I used three sfplay~ objects loaded with .wav files representing the past, present, and future soundscapes. I mapped the triggers to the Xbox controller as follows:

  • Left Trigger (lt): plays the past track

  • Right Trigger (rt): plays the future track

  • Misc (Menu/Back) Button: plays the present track

  • Start Button: stops all three tracks

Each of these is routed to the same stereo output and can be layered, looped, or faded in and out depending on how the controller is played. This gives the audience performative agency over the timeline — they can “travel” between sonic timelines with the press of a button.

Randomized Future Sound Triggers

To enrich the futuristic sonic palette, I created a group of short machine-like or electronic glitch sounds, categorized as “future sounds”. These are not looped beds, but rather individual stingers triggered in real time.

Each directional button on the Xbox controller (up, down, left, right) is assigned to trigger a random sample from a specific bank of these sounds. I used the random object in Max to vary the output on every press, creating unpredictability and diversity.

  • Up (↑): triggers a random glitch burst

  • Down (↓): triggers another bank of futuristic mechanical sounds

  • Left (←) and Right (→): access different subsets of robotic or industrial textures

The sfplay~ objects are preloaded with .wav files, and each trigger dynamically selects and plays one at a time. This system gives the audience a sense of tactile interaction with the “future,” as each movement sparks a unique technological voice.
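The trigger logic itself is simple; below is a sketch of the equivalent behaviour in Python, mirroring the random to sfplay~ chain in the Max patch (the sample file names are placeholders).

```python
import random

BANKS = {
    'up':    ['glitch_01.wav', 'glitch_02.wav', 'glitch_03.wav'],
    'down':  ['machine_01.wav', 'machine_02.wav'],
    'left':  ['robot_01.wav', 'robot_02.wav'],
    'right': ['industrial_01.wav', 'industrial_02.wav'],
}

def on_button_press(direction: str) -> str:
    """Pick one file at random from the bank owned by this direction button."""
    return random.choice(BANKS[direction])

print(on_button_press('up'))   # e.g. glitch_02.wav
```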

Past Sound Triggers

To contrast the futuristic glitches, I also created a bank of past-related sound effects that reference the tactile and physical nature of the paper factory environment.

Four Xbox controller buttons — A, B, X, and Y — are each mapped to a different sound category from the past:

  • A: triggers random mechanical elevator sounds

  • B: triggers various recorded footsteps

  • X: triggers short fragments of worker chatter

  • Y: triggers paper-related actions like cutting or rolling

Each button press randomly selects one sound from a preloaded bank using the random and sfplay~ system in Max. This randomness gives life to the piece, allowing for non-repetitive and expressive interaction, as each visitor might generate a slightly different combination of past memories.

This system works in parallel with the future sound triggers, offering a way to jump across time layers in the soundscape.

Real-Time Effect Control with Max and Logic Pro

To make the sound experience more responsive and interactive, I connected Max/MSP with Logic Pro using virtual MIDI routing. This lets Max send real-time control data straight into Logic, so audience actions can actually change the sound.

Here’s what I’ve mapped:

  • ValhallaVintageVerb: Mix – This reverb mix is controlled by the Left Shoulder button on the Xbox controller. The more it’s pressed, the wetter and spacier the reverb becomes.

  • H-Delay Mix – This delay’s wet/dry ratio is controlled by how loud the environment is, using a microphone input. When the audience makes louder sounds, the delay becomes more pronounced.

  • Track Volume – This is controlled by the amount of motion in front of the camera. I use a computer webcam and optical flow to detect how much people are moving — more movement means higher volume.

  • Pan (Left-Right balance) – Controlled by the left joystick’s X-axis on the Xbox controller. Pushing left or right shifts the sound accordingly in the stereo field.

All these parameters are connected using Logic’s Learn Mode, which allows me to link MIDI data from Max (via the “from Max 1” port) directly to plugin and mixer parameters. That way, everything responds live to the audience — whether it’s someone waving, talking, or using the controller.

Written by Tianhua Yang (s2700229) & Jingxian Li (s2706245)

Constructing a Soundscape: Reconstructing the Sounds of Paper Mills Across Eras Through Audio Technology

——Exploring the Process of Sound Restoration Through Field Recording, Sound Effects Editing, and Audio Synthesis

Dimple He & Jieqiong Zhang

1. Recording Blog

In our recording work, precision and efficiency are goals we constantly pursue. To ensure the perfect presentation of every scene and sonic detail, we adopted various professional recording devices combined with unique technical methods to establish a refined recording setup.

First, through a multi-point layout strategy, we planned each recording position in advance based on the spatial structure of the paper mill, ensuring comprehensive and multi-layered sound capture. This layout allowed us to create a wide-ranging and rich sound map, providing abundant material for later sound synthesis.

To adapt to changes in the on-site environment and real-time needs, we used dynamic on-site adjustments. During recording, we monitored and adjusted microphone positions and parameters in real time to respond to ambient noise and special sound effect requirements. This approach not only stabilized recording quality but also enhanced the expression of on-site sound effects.

In terms of equipment, we carefully selected high-performance devices to meet the demands of different scenarios. The Zoom H6 multi-track recorder, one of our go-to tools, is widely used for film sync sound and environmental sampling. Its multi-track recording capability allows simultaneous capture of multiple sound sources, ensuring clarity and detail, and played a major role in our recording work at paper mills and libraries.

To capture 360° ambient sounds at the paper mill, we used the F8 spatial recording device. This recorder captures the full panoramic soundscape, helping us reconstruct the overall atmosphere and providing valuable material for surround mixing in post-production. This gave our recordings stronger spatiality and immersion.

For specific detailed sounds, we relied on the Sennheiser MKH416. This highly directional and interference-resistant microphone is especially suited for capturing specific sounds in noisy environments, such as worker activity, machine startups, or chimes. It ensures these subtle sounds are recorded clearly and authentically.

Finally, we used the C-Ducer Contact Microphone to record sounds that are difficult to capture with conventional mics. For example, the alarm sound of fire detectors or the electrical noise of printers—vibrations caused by mechanical operation—can be picked up by this contact mic, offering richer sound information and ensuring no detail is overlooked.

Through these refined technical approaches and the integration of professional equipment, we were able to comprehensively and accurately capture and reconstruct various environmental sounds, laying a solid foundation for subsequent sound creation. Every step reflects our pursuit of technical excellence and extreme attention to sound effects, ensuring the final work delivers a realistic and vivid sonic experience.

1.1 Four Recording Devices

Key Recording Techniques:

Multi-point layout: Plan recording positions in advance based on the paper mill’s spatial structure to form a sound map with wide coverage and rich layers.

Dynamic on-site adjustment: Monitor recording in real-time, adjusting mic positions and parameters according to ambient noise and specific sound effect needs.

1.1.1 Zoom H6 Multi-Track Recorder

The Zoom H6 Multi-Track Recorder is used for multi-track synchronous recording, suitable for film sync sound, environmental sampling, and more. Nearly all the sounds we recorded at paper mills and libraries involved the H6.

1.1.2 F8 Spatial Recording Device

The F8 Spatial Recording Device was used to capture the 360° ambient sound of the paper mill, recording the full soundscape to support surround mixing in post-production.

1.1.3 Sennheiser MKH416

The 416 directional microphone was used to precisely capture detailed sound effects (worker voices, machine startups, chimes, etc.). Its high directivity and resistance to interference made it suitable for specific sound collection in noisy environments.

1.1.4 C-Ducer Contact Microphone

The contact mic was used to capture vibrations and mechanical movement sounds (e.g., fire alarms, printer electrical noise), which are difficult to record with conventional mics.

1.2 Three Recording Locations

1.2.1 Library

In our library recordings, we focused on reconstructing the paper mill’s sonic landscape through systematic collection of multi-dimensional environmental and object sounds. By simulating the handling of paper, electrical sounds, water flow, and low-frequency mechanical noises, we aimed to not only recreate sonic textures but also explore how sound serves as a medium of memory and spatial representation. This process reflects creative strategies for sampling sound in constrained spaces and aligns with sound art aesthetics—“building scenes with sound” and “creating environments through objects”—offering expressive sound material for post-production.

 

 

1.2.2 ECA

In the ECA campus recordings, we recontextualized everyday objects to simulate the sonic textures of paper processing and metallic machinery in old industrial environments. Metallic resonance from touching electric poles, the rhythmic sounds of paper cutters and staplers, the friction of wrapping paper and magazine stacks—these all evoke the tactile memory of past factory spaces. This method of “foley” in non-industrial settings demonstrates the mimetic strategies and symbolic coding in sound design. It also represents a kind of sound archaeology and artistic reconstruction of “lost industrial contexts,” transforming small contemporary sonic actions into reenactments of historical soundscapes.

 

1.2.3 Paper Mill

During on-site recording at the paper mill, we focused on the physical interactions between space and structure, reconstructing industrial memories through sound. By capturing natural ambient sounds outside the plant, we built an open, time-sensitive outer boundary for the overall soundscape. Inside the plant, the collisions and frictions—like sliding rails, iron gates, rolling shutters, carts, and aluminum strips—created a sense of mechanical texture and age. Footsteps echoing through factory corridors offered auditory cues for simulating the rhythm of daily worker activity. These sounds go beyond physical documentation to form an auditory archive, allowing present-day listeners to “hear” a vanishing industrial memory field through spatial acoustic detail.

2. Sound Library

2.1 Audio Editing

To prepare for randomized audio processing in Max/MSP and Logic Pro, and to provide trigger signals for random visual generation in TouchDesigner, we initially selected and trimmed the recorded audio. Based on the three main scenes and their respective environments and impulse responses (IR) defined in the project script, we categorized the organized audio as foundational resources for further creation.

2.2 Sound Effects Design

Our goal in sound design was to present each detail flawlessly using precise techniques and creative design, creating atmospheres tailored to specific environments. By artistically processing the recorded audio, we produced various sound effect fragments across categories, scenes, and objects. This process involved not only precise editing but also layering creativity and emotion, ensuring each effect possessed expressive power in its intended context.

We performed full audio processing on the recordings. EQ adjustments allowed us to selectively boost or reduce frequency components, ensuring each sound remained clear and balanced in the mix. Low frequencies emphasized machine operations or factory rumble, while high frequencies highlighted mechanical details. Compression controlled dynamic range, avoiding distortion and ensuring every detail was clear. Reverb and delay enhanced spatial and temporal depth, enriching the interplay between ambient and detail sounds.
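All of this was done with plugins inside the DAW, but purely as an illustration the same kind of chain can be sketched offline in Python with the pedalboard library; the parameter values and file names below are placeholders, not our actual settings.

```python
from pedalboard import Pedalboard, HighpassFilter, Compressor, Delay, Reverb
from pedalboard.io import AudioFile

# Illustrative chain: gentle low cut, compression, delay, and reverb.
board = Pedalboard([
    HighpassFilter(cutoff_frequency_hz=40),      # tame rumble below the machine tones
    Compressor(threshold_db=-18, ratio=3),       # control dynamic range
    Delay(delay_seconds=0.25, mix=0.15),         # add temporal depth
    Reverb(room_size=0.6, wet_level=0.2),        # suggest the scale of the factory hall
])

with AudioFile("machine_startup.wav") as f:      # placeholder recording
    audio = f.read(f.frames)
    rate = f.samplerate

processed = board(audio, rate)

with AudioFile("machine_startup_processed.wav", "w", rate, processed.shape[0]) as f:
    f.write(processed)
```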

To further enrich sound effects creatively, we used acceleration and pitch-shifting techniques. These added rhythm and emotional tone—e.g., acceleration simulating rapid machinery, pitch-shifting conveying deformation or emotion.

We also used tools like the Serum wavetable synthesizer to generate some sound elements. With its advanced synthesis capabilities, Serum enabled precise creation of scene-specific sounds. We used it to simulate machine startups, metallic friction, and air vibration—particularly effective for large industrial environments like paper mills.

By adjusting Serum’s details, we generated time-specific industrial sounds. For example, modern equipment sounds were high-pitched and metallic, while older equipment produced deep, booming rumbles—adding a temporal dimension that linked sound evolution with historical changes in the factory.

Serum’s multidimensional modulation offered extensive sound transformation options. By tweaking oscillators, we simulated various paper mill sections: the pulper’s low hum, the high-frequency friction of the paper line, even steam and compressed air hiss—each presented through Serum’s modulation tools and merged with original audio for richer texture.

For background ambiance, we used Serum to synthesize low and mid-frequency environmental noise, simulating factory operational background. This avoided awkward silences and added fullness, enriching the overall soundscape without relying solely on field recordings.

This creative combination of synthesizers and field recordings broke through traditional design limitations, expanding possibilities in complex industrial sound design. The flexible, artistic approach captured unique and intricate sound elements, maintaining both technical accuracy and artistic freedom.

As our sound material grew, we began creative layering, the core step in our process. We organically merged ambient and detailed sounds, enhancing expression. Through multi-track mixing and layering, each segment retained presence and responded dynamically to scene needs. This process made our sound design more refined and compelling.

Finally, we integrated the sound effects with visual tools like TouchDesigner, ensuring smooth, natural interaction between sound and visuals. Sound wasn’t merely atmospheric—it functioned as a core element complementing the visuals. We fine-tuned every scene’s soundscape for emotional and expressive unity.

Conclusion

This series of sound design and processing efforts helped us construct a multi-dimensional sound world, giving each scene depth and dynamism. Through careful creativity and technical methods, we not only recreated the reality of paper mills but also delivered a deeply immersive audiovisual experience for the audience.

Attached is our produced sound library:

https://www.dropbox.com/scl/fo/m2gh2j3mftyjro77jamkz/AFxAjVl7-BHb-4ljF3-JY7w?rlkey=ctj1rn6buuhqodnqdqdtufsg7&st=fochaibl&dl=0
