
Quick trip to Glasgow Gallery of Modern Art

I was in Glasgow for the day and had time to kill so decided to check out the local Gallery of Modern Art. I was pleasantly surprised to find that there was an ongoing digital exhibition that looked interesting.

It was called SLOW DANS and was a cycle of three 10-screen videos – KOHL, FELT TIP, and THE TEACHERS. Artist Elizabeth Price showcases three works that present a “fictional past, parallel present, and imagined future, interweaving compact narratives that explore social and sexual histories and our changing relationship with the material and the digital.” (Price, 2023).

Upon entering the space itself, it was pitch-black, save for the light created by the screens. Unfortunately, as we had just come in from the bright room before, our eyes had not adjusted at all and we could not see where we were stepping. Vaguely aware of the layout thanks to the museum staff and their small flashlight, a friend and I clung to each other as we attempted to find a seat in the dark without stepping on anyone. Once our eyes adjusted, it was fine, but before then… perhaps the museum staff should have walked people to their seats…

We were met with an interesting mix of visuals and what I would describe as precise, carefully timed bits of sound. There was an underlying soundtrack of background music, but on top of it the clacking of keys corresponded to typing appearing on the screen, and laser-like sound effects accompanied rays of light dancing across the space.

Small video I took: https://www.youtube.com/shorts/zkdAmPS0iZs

Molly Munro (2023). Images of the KOHL exhibition. [Photographs].

Molly Munro

Price, E. (2023). Elizabeth Price: SLOW DANS. [online] Glasgow Life. Available at: https://www.glasgowlife.org.uk/event/1/elizabeth-price-slow-dans.

An exploration of Kinect, TouchDesigner and Max data transfer

Recently, I was trying to find ways to use the Kinect as a sensor to trigger interactive sounds in Max. It would be interesting to let the audience trigger a sound when performing an action, and I heard from my DDM groupmates that they were also trying to use the Kinect to trigger changes in the visuals; it would be even better if those visual changes were accompanied by a sound trigger. I looked up a lot of information and videos, but I found that most examples of connecting Kinect data directly to Max are very old, and many of them use a piece of software called SYNAPSE for Kinect (link to the SYNAPSE webpage below), which doesn’t support Kinect for Windows. The more recent examples had other problems for me, such as not supporting Max 8 or macOS, making it difficult to proceed.

https://www.tumblr.com/synapsekinect/6307790318/synapse-for-kinect

At this point I changed my approach: I had heard that the Kinect connects to TouchDesigner smoothly, so I could try using TouchDesigner as a relay station to transfer data from the Kinect to Max and achieve interactive sound triggering; this way, the relationship between the visuals and the sound also becomes closer. After some research, I found a related tutorial (link below) that makes transferring data from TouchDesigner to Max very easy.

At the same time, I was also curious whether I could reverse the direction of the data transfer, from Max to TouchDesigner, since the visual part also needs the data from the ultrasonic sensor and we would not have to use two sensors if we could send the data from Max to TouchDesigner. Meanwhile, my teammates had found a way to control the video speed with the amplitude of the audio wave, and my suggestion was to combine it with the interactive sound of going up and down the stairs, using that as the signal to control the visuals; this would create a strong link between the sound and the visuals.

In order to achieve that, I learned the OSC protocol, and after some research and practice it worked very smoothly.
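For readers unfamiliar with OSC, the sketch below shows the kind of message exchange involved, using the python-osc library; the port (7400) and address (/steps/level) are placeholders I made up for illustration, not the values from our patch. In the actual setup, Max typically sends with [udpsend] and TouchDesigner listens with an OSC In CHOP (and vice versa for the other direction).

```python
# Illustrative only: send and receive one OSC float, the same kind of message
# Max's [udpsend]/[udpreceive] or TouchDesigner's OSC CHOPs exchange.
# Requires the python-osc package (pip install python-osc).
import threading
import time

from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

PORT = 7400                 # placeholder port; must match on both ends
ADDRESS = "/steps/level"    # placeholder OSC address

def on_level(address, value):
    # In our setup this value would end up driving the visuals in TouchDesigner.
    print(f"received {address}: {value:.3f}")

dispatcher = Dispatcher()
dispatcher.map(ADDRESS, on_level)
server = BlockingOSCUDPServer(("127.0.0.1", PORT), dispatcher)
threading.Thread(target=server.serve_forever, daemon=True).start()

client = SimpleUDPClient("127.0.0.1", PORT)
client.send_message(ADDRESS, 0.42)   # one float, as a Max snapshot~ would send

time.sleep(0.2)                      # give the handler a moment to fire
server.shutdown()
```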

So far, we can achieve at least two things:
Kinect → TouchDesigner → Max: sound triggered by the audience’s movements. But since our soundscape was already rich enough, we did not end up triggering sound this way.

Max → TouchDesigner: We had already used Max and an ultrasonic sensor to trigger the interactive sound precisely whenever the audience goes up or down the steps, so we can use these triggered sound effects (using a snapshot to get the level between ±1) to control the speed of the video in TouchDesigner.
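As a rough illustration of the TouchDesigner side of this mapping, the sketch below is a CHOP Execute DAT callback that takes the incoming level (±1) and turns it into a playback speed. The operator names ('oscin1', 'moviefilein1') are placeholders rather than the names in our actual network, and the scaling constants are arbitrary.

```python
# Sketch of a TouchDesigner CHOP Execute DAT attached to an OSC In CHOP
# (here called 'oscin1') carrying the snapshot level sent from Max.
# 'moviefilein1' stands in for the Movie File In TOP driving the scene; its
# Play Mode is assumed to be Sequential so the Speed parameter has an effect.

def onValueChange(channel, sampleIndex, val, prev):
    # val arrives in the range -1..1; use its magnitude so the video never runs
    # backwards, and keep a small base speed so it never fully stops.
    level = abs(val)
    op('moviefilein1').par.speed = 0.25 + level * 2.0
    return
```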

Chenyu Li, 3.29

Test records and logs:

3.28 Found the method and tested it on one computer (TD and Max on the same IP address).

3.29 Communicated with groupmates working on TouchDesigner; confirmed feasibility and prepared experiments.

3.31 Tested with the TouchDesigner groupmates on different computers; after it succeeded, communicated with David, integrated it into the whole Max patch, and tested again.

4.3 Scaled the output values with the TouchDesigner groupmates for a better visual effect.

If you’re reading this in order, please proceed to the next post: ‘Sonification #1 – Data Collection, Treatment and Processing’.

Graphic Identity

 

When creating promotional materials for the exhibition, our goal was to convey the essence of the project in a way that was engaging and accessible for potential visitors. We opted to use the rawest form of visual data available to us: the images generated in CloudCompare when we first started working with the point clouds.

Our goal was to visually convey the key ideas of Echoes and Absences to potential visitors, using imagery that was both captivating and informative. To achieve this, the team chose to create posters showcasing some of the images generated during the process. In addition to the posters, we also designed postcards for visitors to take home with them. These postcards feature some of the images generated during the development of the project, as well as additional information about the project and a QR code to the blog that shares our work. The postcards are intended to serve as a reminder of the exhibition, as well as a way for visitors to further learn about the project after their visit. 

Overall, the team’s approach to designing promotional materials was to focus on the visual representation of the project’s core concepts, using captivating imagery to engage potential visitors and encourage them to learn more about the exhibition. The materials created were intended to serve as effective tools for promoting the exhibition and generating interest among the public.

Final Poster Design

We printed both posters in A1, and once we saw the printed results we decided to use the black-and-white one for the exhibition, as it looked higher quality in print. The blue one we used as a supporting poster at the entrance of the room, and we decided it should also be used for the digital posters, as it looks great digitally.

Final Postcard Designs

The postcards have a selection of images on one side and, on the back, our manifesto, the team members’ names, and a QR code to the home page of our blog. On the exhibition day, the postcards received great feedback from the audience, as they allowed people to take a bit of the exhibition home with them and gave them the opportunity to learn about the project in more depth.

If you’re reading this in order, please proceed to the next post: ‘Sound Installation Setup in Exhibition’.

Daniela M

More than just an exhibition

 

We have decided to create something more than just an immersive digital experience. We want to showcase our whole design process and share it with the public: the entire workflow, the ups and downs we had, and the exciting things we realised while doing this project.

We decided to turn our project into an exhibition that includes the process we went through while working. We can show each team member’s own representation of our dataset, a way to show a bit of the essence of each of us as individuals before we merged our ideas together.

As far as what we are looking to exhibit, so far we have the following list:

  • Immersive digital experience (main event)
  • Lidar through our minds (shows a version of the point cloud from the eyes of each team member)
  • Fieldwork records (showing our process while gathering data in the field: video, pictures…)
  • 3D visualization of the steps (3D-printed pieces of the stairs, one or two variations of them)
  • Sound that was developed from our material but not used for the immersive experience.

 

Daniela M

The New Steps 3D Model

Once we saw the 3D model of the stairs in Blender, with our predetermined shape at each point making it look like something out of a pixel-world dream, we asked ourselves how it would look if we created a physical model of this digitised version of a physical place. A bit weird for sure, but what better way to find out than to try it ourselves?

Why do we want to do this?

The main reason behind 3D printing our scans is to show the representation and distortion of a place after it goes through such a process.

The process runs from collecting the point cloud at the real physical place, to working the data in CloudCompare to subsample what we captured, to importing it into Blender and giving it a shape and a physical digital form again, and finally to finding a way to express this transformation by printing it and showing it to the public.

Of course, we have our limitations with 3D printing, as with any technology, but I find it extremely interesting how aesthetically pleasing the 3D models turned out to be, as they are a physical representation of the process we put the data through. One way to further show this is to print with a white filament (or spray-paint the model) and place it on a black background, a representation of how we worked with our data digitally: white data on a black background, just like we worked most of the time in CloudCompare and TouchDesigner.

Having chosen a cube as the form for each point, I used Blender to delete most of the points that were floating in space and not really part of the main staircase model. After delimiting the data that was going to be part of the model, I created a simple structure to hold it together, “the bones of the earth the stairs sit on.” Once the model was finished in Blender, I exported an .stl file so I could import it into PrusaSlicer, slice the model, and apply the setup needed for 3D printing.
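As a side note, this export step can also be scripted. The sketch below shows roughly what it looks like in Blender’s Python console, assuming Blender 3.x with the default STL add-on enabled; the object name and file path are placeholders, and in practice the same result comes from Blender’s normal STL export.

```python
# Rough sketch of scripting the STL export in Blender 3.x (STL add-on enabled).
# Object and file names are placeholders.
import bpy

# Select only the staircase object before exporting.
stairs = bpy.data.objects["stairs_pointcloud"]   # placeholder object name
bpy.ops.object.select_all(action='DESELECT')
stairs.select_set(True)
bpy.context.view_layer.objects.active = stairs

# Export the selection as an .stl ready for PrusaSlicer.
bpy.ops.export_mesh.stl(
    filepath="/tmp/stairs_model.stl",
    use_selection=True,        # export only the selected object
    use_mesh_modifiers=True,   # apply modifiers (e.g. Geometry Nodes) on export
)
```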

PrusaSlicer software

We purposely chose to leave some holes in the structure, as they are an essential part that shows where our scanner was placed. We expected this version of the 3D model to come out with many imperfections, but for us that is part of its beauty.

We will try to create a second 3D model using one of the Ultimaker 3 printers, which allows printing with two different filaments: a water-soluble filament will be used for the parts that support the structure while printing and will then be dissolved in water, resulting in a more detailed 3D model. For this print we are using the Ultimaker Cura software to prepare the model.

The printing will take place on Thursday the 29th in the uCreate maker studios.

After getting all the files ready, I went to uCreate before printing to ask whether my files were viable. There, the head of the 3D printing department looked at my model and told me it was very likely to fail due to the complex geometry; another issue was that I was printing the model in one piece, which wasted a lot of material.

After their feedback, I decided to change the model, creating a mesh and subdividing it to create a softer surface. I then split the model into two pieces, which allowed me to print them horizontally and save material.

I printed the first part of the stairs without a problem, and it came out looking really interesting: you could also see all the supporting filament, which gave it more character, so we decided to keep it. Printing the second part was more difficult. I sent the files the first time, and while it did print, one corner of the stairs was deformed. So I decided to print it again, only this time I stayed to check on the print, and after around an hour it failed again. Desperate to understand why I was having trouble, I asked for help and learned that the machine was not calibrated correctly; once I learned how to calibrate it, I started printing again. This print was supposed to be finished around midnight, one day before the exhibition. When I went to pick it up, I found a Post-it note from someone at uCreate telling me that the print had moved and failed. I was not going to give up, so I calibrated the machine once more and sent the print again; it was supposed to take around 8 hours. I had another submission the next day that I had to work on, so I stayed up all night working on that, and when 8am came I went to check on the 3D-printed model, and there it was, perfectly done.

Once back in Alison House, I simply joined both pieces together using superglue, and we exhibited the final model, as well as the previous failed attempts (and the Post-it note), on a stand with a spotlight. During the exhibition, the audience took great interest in the 3D model, as it was the physical representation of what they were seeing on the main screens.

If you’re reading this in order, please proceed to the next post: ‘Behind the Scenes’.

Daniela M

TouchDesigner Point Cloud Camera Animation#1

For Scene 1, we want to design a first-person roaming animation of climbing the stairs, to simulate the experience of climbing them as a pedestrian. The first half of the staircase is very winding, and we hope this camera animation gives users an immersive experience.

Developing the Animation

In the video uploaded to YouTube by The Interactive & Immersive HQ (2022), it was mentioned that the Animation CHOP can be used in TouchDesigner to create camera animations. Since the camera can move and rotate along the X, Y, and Z axes, we need to add six channels in the animation editing window to control the camera’s rotation and position on each axis.

Before starting to edit the animation, you need to set the length of the animation, i.e., the number of frames, in both the Channel Editor and the “Range” parameter of the Animation CHOP.

Next, add keyframes to the corresponding curve positions in each of the 6 channels and adjust the curvature of the curves to make the camera motion smooth. In practical operations, to make the animation more natural and easier to adjust, it is recommended to add keyframes to the same position in all 6 channels when creating a keyframe at a specific time. This method can greatly improve work efficiency, especially for longer animations.
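For anyone scripting this rather than clicking through the UI, the sketch below shows one way the six animation channels could be wired to the camera with a CHOP Execute DAT. The operator and channel names ('animation1', 'cam1', 'tx' … 'rz') are placeholders rather than the exact names in our project, and the same result can be achieved simply by exporting the channels to the Camera COMP’s parameters or referencing them with expressions such as op('animation1')['tx'].

```python
# Sketch of a TouchDesigner CHOP Execute DAT attached to the Animation CHOP
# ('animation1'): whenever a channel changes, copy the six values onto the
# Camera COMP ('cam1'). Names are placeholders.

def onValueChange(channel, sampleIndex, val, prev):
    cam = op('cam1')
    anim = op('animation1')
    # Position channels
    cam.par.tx = anim['tx'].eval()
    cam.par.ty = anim['ty'].eval()
    cam.par.tz = anim['tz'].eval()
    # Rotation channels
    cam.par.rx = anim['rx'].eval()
    cam.par.ry = anim['ry'].eval()
    cam.par.rz = anim['rz'].eval()
    return
```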

If you’re reading this in order, please proceed to the next post: ‘TouchDesigner Point Cloud Camera Animation#2’.

Reference list

The Interactive & Immersive HQ (2022). Generative Camera Paths in TouchDesigner – Tutorial. [online] www.youtube.com. Available at: https://www.youtube.com/watch?v=5EyN_3vIqys&t=14s [Accessed 26 Apr. 2023].

Yuxuan Guo

What we should book for the surround sound system

Plan A 

Mixer

DiGiCo SD11 (1)

Send email to Roderick to book it.

Interface

DiGiCo UB MADI Interface (1)

Send email to Roderick to book it.

(Don’t forget to download the driver.)

Speakers

Genelec 7060 – Active Subwoofer (1)

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=6927

Genelec 8030A or Genelec 8040 (7)

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=1727

or

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=6918

Speaker Stands

Genelec Poles for Speaker Stand (7)

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=6116

Audio Cables

XLR Male to XLR Female (16)

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=929

Extension Cables

Extension Cable – 7m, 4 Socket (Ext 16) (9)

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=6875

15m Cable Reel (3)

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=5887

 

Plan B

Mixer

MIDAS – Venice F (1)

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=1180

Interface

RME – FireFace UCX (1) and RME – FireFace UC (1)

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=2635

and

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=1147

(Don’t forget to download the driver. If we use an analogue mixer, we would face the problem of needing two audio interfaces to get eight channels of signal into the mixer. I have checked all the audio interfaces available for loan at the music store, and none of them has eight output channels. This also leads to the idea that if we want to achieve 7.1 surround sound we may need two computers to support it.)

Speakers

Genelec 7060 – Active Subwoofer (1)

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=6927

Genelec 8030A or Genelec 8040 (7)

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=1727

or

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=6918

Speaker Stands

Genelec Poles for Speaker Stand (7)

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=6116

Audio Cables

8 way TRS Jack to XLR Male – Loom (3)

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=4532

XLR Male to XLR Female (16)

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=929

Extension Cables

Extension Cable – 7m, 4 Socket (Ext 16) (9)

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=6875

15m Cable Reel (3)

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=5887

 

 

Sonification #5 – 7.1 Panning and Surround Sound Layout

The last two posts described the signal flow of these sonification projects: first the sonified granular re-synthesis, and later the sonified heavy loop-processing performed in parallel. Both posts mentioned the repetition of these systems across the eight channels of a 7.1 surround sound layout, the topic we shall now cover. The term “7.1 surround sound” not only describes the number of discrete output channels needed for this set-up but also implies spatial relationships between these eight channels and the nature of their respective audio signals.

To successfully create a surround soundscape environment, it is essential to break down the relationship between the listener and the soundscape. The idea of a soundscape positions the listener within it, submerged in all directions, which leads to the notion that the positional nature of a soundscape is not exclusively defined by the multitude of sources but also by the listener. Even though the sense of direction of an individual sonic event comes from a single surrounding position, the overall composition of a soundscape depends far more on the listener’s location and orientation. The sonic place exists without the listener, yet to some extent the listener defines how it is positionally perceived. On this note, previous posts described how the recording technique for this soundscape (two A-B pairs facing opposite directions) defined a set of fixed positions, all with the same orientation. For each moment of the “stepped” recording, four positionally related audio files were produced together (Front-Left, Front-Right, Rear-Left and Rear-Right). The distribution across the eight channels of the 7.1 layout was as follows:

    • Front-Left –> Recording: Front-Left
    • Front-Right –> Recording: Front-Right
    • Centre –> Recording: Front-Left + Front-Right (MONO)
    • Side-Left –> Recording: Front-Left
    • Side-Right –> Recording: Front-Right
    • Rear-Left –> Recording: Rear-Left
    • Rear-Right –> Recording: Rear-Right
    • Subwoofer –> Recording: Front-Left + Front-Right (MONO)

Although the audio source layout was conceived as described above, such a solution would still feel unnatural due to the discreteness of each output channel; it would draw too harsh a distinction between the different audio content. For example, the side channels would exclusively reproduce the front-left/right recordings, with no in-between blend towards the rear ones. For a better solution, some 7-channel panning had to be designed to mix gain values for each signal channel into correctly panned ones. The design found a suitable answer in node panels, which can quickly provide a mix of values according to the relative position of a pointer, a solution found through research on the Cycling ’74 forums (Woodbury, 2016). One of these panning systems was created for each of the 7.1 outputs, using the same node layout with its respective pointer position. Each panning unit takes an audio-file channel as input, like the ones listed above, and splits it into seven tracks of the same signal with the correct gain values for each speaker position, providing an accurate, smoother panned mix.
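To make the idea concrete, here is a small illustrative sketch of this kind of distance-based panning, written in Python rather than Max; it is not the nodes patch from the forum thread, just the general principle: each output channel gets a gain that falls off with the distance between a virtual pointer and that speaker’s position, normalised for roughly constant power. The speaker coordinates and falloff curve are arbitrary placeholders.

```python
# Illustrative sketch of 7-channel, distance-based panning (not the Max patch).
import math

# Placeholder speaker layout around the listener (subwoofer handled separately).
SPEAKERS = {
    "front_left":  (-0.5,  1.0), "front_right": (0.5,  1.0), "centre": (0.0, 1.0),
    "side_left":   (-1.0,  0.0), "side_right":  (1.0,  0.0),
    "rear_left":   (-0.7, -1.0), "rear_right":  (0.7, -1.0),
}

def pan_gains(pointer, spread=1.0):
    """Return a per-speaker gain dict for a virtual pointer position (x, y)."""
    raw = {}
    for name, pos in SPEAKERS.items():
        d = math.dist(pointer, pos)
        raw[name] = 1.0 / (1.0 + (d / spread) ** 2)      # smooth distance falloff
    norm = math.sqrt(sum(g * g for g in raw.values()))   # constant-power scaling
    return {name: g / norm for name, g in raw.items()}

# Example: pan the Front-Left recording slightly towards the left side.
print(pan_gains((-0.8, 0.6)))
```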

Figure 1 – Audio file partition followed by the 7-channel panning system

Once tuned to the room and the speaker set-up, this layout provided a surround sound environment emphasising a wide sonic front while keeping surrounding panoramic qualities with two dedicated rear channels. The output of these discrete channels was achieved using a [dac~ 1 2 3 4 5 6 7 8] object, followed by dedicated volume faders.

With this fifth blog post, there is not much more on sonification to document. However, the Max/MSP project designed for this purpose later integrates the “interactive audio” work and interfaces with the visual components in the TouchDesigner domain. More posts will follow on these developments.

If you’re reading this in order, please proceed to the next post: ‘Sonification #6 – Project Files and Soundscape Demo’.

David Ivo Galego,

s2272270

 

References:

Woodbury, L. (2016). Re: 7-channel Panning. [Forum post] Cycling ’74 Forums, 13 April 2016. [Accessed 23 Mar. 2023].

 

From Raw Data to Unity

The scanner-to-Unity pipeline is, in theory, quite straightforward. However, in practice there are many individual elements and constraints that need to be taken into account.

The biggest worry when it comes to point cloud data is the number of points and the size of the data files. In an ideal world, we would have the computational power to handle complex clouds with millions of points that can be rendered at a moment’s notice. Unfortunately, despite the capabilities of today’s laptops and MacBooks, this is not the quickest process.

Starting at the beginning

As we know, when scanning the steps the data is sent to the Cyclone app on the iPad provided by uCreate, which allows us to preview and pre-align multiple scans. It also allows us to export the data as a bundle to an .e57 file. However, as discovered by Yuhan Guo, the Cyclone app contained upwards of 56GB of data saved from previous scans, which is only erasable if the app is reinstalled. As uCreate did not solve this issue before we obtained the scanner for the first round of final scans, we had to split the scanning day in two to limit the amount of data to what would fit. This also meant that exporting the files was almost impossible, as the app needs some free space to execute the export. Therefore, for the set of 10 morning scans, the workflow went: export scan 1 > delete scan 1 > export scan 2 > delete scan 2 > etc…

With over 400,000,000 points of data to export, this was time consuming, but successful. As the complete bundle could not be exported, it also took extra time to manually re-align the scans in the Cyclone REGISTER 360 software on the uCreate laptop. It turned out that this laptop was also full, with 1GB of total available storage, preventing us from copying the scans onto the computer.

Took a trip into New College for warmth and WiFi!

We solved this by connecting my external hard drive as the source of storage for the software.

Thankfully, the next time we returned and picked up the scanner, we explicitly told uCreate to DELETE and REINSTALL the iPad app (not just delete the projects from the app, which they kept insisting we do, and which we had already done; that did nothing to fix it). Suddenly there was so much space that we could redo a couple of the scans in addition to finishing the second half of the staircase in the afternoon portion of our timeline.

Total points over the two days: 1.224 Billion!

Alignment

The .e57 files, which had to be exported individually outside their bundle, were imported and manually aligned in Cyclone REGISTER 360. The scan could then be exported as a bundle and brought into CloudCompare. The afternoon and morning scans could then be aligned in that software to complete the scan of the staircase.

CloudCompare to Blender: Giving it geometry

If I were to import the point cloud directly into Unity at this point, it would have no physical geometry, and therefore no texture, and would be invisible to the camera.

The in-between step to resolve this is to export the scans (after subsampling to 100,000 points!) to .ply format and bring them into Blender, where I work with Geometry Nodes and then assign a texture.
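We did the subsampling itself in CloudCompare’s interface, but for anyone who prefers to script it, the sketch below shows the same idea using the Open3D Python library; the file names and target point count are placeholders.

```python
# Sketch of subsampling a point cloud towards ~100,000 points with Open3D.
# We actually did this in CloudCompare; file names are placeholders.
import open3d as o3d

cloud = o3d.io.read_point_cloud("staircase_full.ply")   # merged scan export
print(f"loaded {len(cloud.points)} points")

# Keep every k-th point so roughly 100,000 remain.
k = max(1, len(cloud.points) // 100_000)
small = cloud.uniform_down_sample(every_k_points=k)
print(f"kept {len(small.points)} points")

o3d.io.write_point_cloud("staircase_100k.ply", small)   # ready for Blender
```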

With lighting placed in the same place as the street lamps.

Blender to Unity

There are two options to go from Blender to Unity here: export as FBX, or just drag and drop the Blender file into Unity. I found that the Blender file was better accepted by Unity, with faster importing and more options available for edits, such as textures and lights.

**NOTE: This is all done in Unity 2021. We experimented with Unity 2019, as that version is compatible with Arduino; however, it DOES NOT LIKE the point clouds, so we stuck with 2021.

If you’re reading this in order, please proceed to the next post: ‘An exploration of Kinect, TouchDesigner and Max data transfer’.

Molly Munro

Integrated Sound Triggering System #3 – Interactive Trigger Sound Design and Production

 

Overview:

The user’s decision-making interactions in the device can be broadly divided into four states:

Going up the steps from the preset position;
Returning from an upper position to the preset position;
Descending from the preset position to the next level down;
Returning to the preset position from a lower position.

In the final implementation of the device, we represent the user’s position during the above process by setting distance thresholds. On this basis, we can divide the results at the data extremes into a realistic conceptual sound group (ResultA) and a surreal conceptual sound group (ResultB).
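As a rough illustration of how such thresholds map onto the four states listed above, here is a small sketch in Python; in the actual installation this logic lives in the Max patch reading the ultrasonic sensor, and the threshold values below are invented placeholders.

```python
# Sketch: deriving the four decision states from an ultrasonic distance reading.
# Thresholds are placeholders; the real logic runs in Max, not Python.

PRESET_MIN, PRESET_MAX = 80, 120   # cm band treated as the "preset" position

def zone(distance_cm):
    if distance_cm < PRESET_MIN:
        return "upper"      # audience has moved up the steps
    if distance_cm > PRESET_MAX:
        return "lower"      # audience has moved down the steps
    return "preset"

def transition(prev_zone, new_zone):
    """Map a zone change to one of the four interaction states."""
    moves = {
        ("preset", "upper"): "going up from preset",
        ("upper", "preset"): "returning down to preset",
        ("preset", "lower"): "descending from preset",
        ("lower", "preset"): "returning up to preset",
    }
    return moves.get((prev_zone, new_zone))   # None means no state change

# Example: a reading stream crossing from the preset band into the upper zone.
print(transition(zone(100), zone(60)))   # -> "going up from preset"
```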

ResultA: Built from single-layer sound material prototypes, this conceptual sound set is based on the sounds of all possible events in the decision-making process in the installation, from which the auditory connection between the person and the real event is evoked and a resonant similarity between the person and the installation is established. In terms of production, the single-layer sounds are edited within a reasonable, realistic range of time-domain parameter values (IR coefficients), at a basic stage of sound distortion.

ResultB: An extreme, surreal composite sound layer based on ResultA; it acts as the contrasting sound group to ResultA. The sound material itself is identical to ResultA, but the aural effect is perceptually distinct from the former: the sound processing no longer keeps parameters within a reasonable range, and the result is presented as a completely conceptualised, non-real-world sound. Its purpose is to construct an immersive narrative within the concept of this installation that is completely different from the previous one, yet essentially part of the same sound evolution.

The design concept of the installation aims to explore the different states and possibilities of people and systems, so in producing this sound library I focused on “people”, selecting sound material from the diversity of human interactive behaviour: different age groups (primary school students, university students, workers, etc.), different actors (roadside beggars, tourists, etc.), and different amounts of human voice (two-person conversations, groups, etc.). In addition to the speech described above, the human-centred sound material also includes non-semantic sounds, as well as the sounds of actions made by pedestrians recorded in the field, such as the sound of wheels as someone drags a suitcase, or the sound of a chain when carrying a bicycle. These are the main focal points of the sound library.

Classification of sounds:

Based on the above concepts, the objects sampled for this sound library can be broadly divided into the following categories:

1. Footsteps

2. Human

Speech: conversational and non-verbal sound symbols such as breathing, laughing, coughing, etc.

Action: all body sounds except footsteps

Phantom: variants of human and animal sounds

3. Animals (dogs, birds…)

4. Objects (bicycles, trolleys, bells…)

 

Link to the trigger sound libraries:

https://drive.google.com/file/d/11IggVWbuaUf60oav9JVdlxHemTmLRgqy/view?usp=sharing

Sound production process and reflection:

During the sound editing process, I produced multiple versions of each individual sample, adjusting them to varying degrees in several dimensions such as the time and frequency domains, moving between low and high frequencies, and layering sounds from single to composite. These different versions represent the intermediate stages of the user’s decision from ResultA to ResultB: a combination of multiple sound states (multiple small processes) with transitions triggered according to the data range.

During the pre-production field recording, I purposefully recorded many human-based sound samples using Sennheiser 416 microphones, including, but not limited to, human voices, crowd noise and footsteps, as well as people unintentionally sneezing or coughing, bicycle chains, bells on backpacks and busy cell phones. These sound samples provided a lot of creative inspiration for the post-production sound.

Throughout the post-production process, my goal was to use the same realistic sound samples to create multiple, contrasting versions of each sound: for example, a bent, grainy techno effect next to an ethereal echo effect, or a sharp, transient, prominent tone next to a thick, layered one. The main DAWs I use are Logic Pro and Reaper, and the main production techniques are as follows:

Time stretching and pitch shifting:

Changing the pitch and playback speed of audio can produce unusual and surreal sounds. Logic Pro has built-in time stretching and pitch shifting tools.

Distortion and Saturation:

Adding distortion or saturation to audio can produce a raspy, aggressive sound. I try to experiment with different types of distortion and saturation plug-ins to find the right amount of texture for each sound. (Overdrive/ Distortion/ Amp Designer)

Layering

Combine multiple layers of sounds with different textures and timbres to create a complex, transcendent atmosphere. It includes layered synthesized sounds, recordings of natural environments, and processed samples.

Reverb and Delay:

Use reverb and delay plug-ins to create a sense of space and depth in the sound. I try to make extreme settings, such as long decay times or high feedback levels, to produce weird, dreamlike atmospheres (Space Designer/ ChromaVerb/ Delay Designer).

Frequency shifting:

Frequency shifting can produce metallic, dissonant sounds and can be used to add an element of unreality to your audio (Frequency Shifter in Logic Pro).

Resample:

Merge audio to a new track and further process it using any combination of the above techniques. This can lead to some unique and unexpected results (EXS24 in Logic Pro).

Automate Effects:

Automate effect parameters such as filter cutoff frequencies or reverb mix levels to create dynamic, ever-changing soundscapes.

Experimental processing:

Use a spectrum processing plug-in like FabFilter Pro-Q3 to manipulate individual frequencies of the audio, removing, retaining or adding the parts of the frequencies I want to achieve some special spooky sounding effects.

Grain Synthesis:

Grain synthesis is a technique that involves manipulating small fragments of sound to create unique textures, such as Logic Pro’s Mangle.

Reverse:

Reversing sounds can create interesting textures and timbres that are not possible in traditional ways. Reversed vocals often give a grotesque, ghostly feel. A reversed sound can also be layered with its original, non-reversed counterpart to add depth and complexity to the mix; the combination of forward and reversed sounds can create a sense of movement and space. For example, reversing a reverb tail can create a unique sucking sound that draws the listener in, while reversing a delay can create a disorienting bouncing effect.

Flanger:

Creates a unique, wide, spacious sound by combining the original audio signal with a slightly delayed and modulated version of itself. It creates a comb filter effect that produces a series of notches in the spectrum that move over time, resulting in the characteristic hiss associated with flanging. It creates a lot of space and dimensionality for me in my mixes. But too much flanger can make a mix sound unnatural, so I experiment with different settings to find the most appropriate amount of flanger for the target sound.

 

Bitcrusher:

This technique is mainly used for bit reduction and downsampling, but can also be used to add digital distortion and saturation to my sound. I control saturation mainly by adjusting the “drive” parameter.
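All of the processing above was done with Logic Pro and Reaper plug-ins, but for readers who prefer to work offline in code, the sketch below shows rough equivalents of three of the techniques (time stretching, pitch shifting and reversing) using the librosa and soundfile Python libraries; the file names are placeholders.

```python
# Offline equivalents of a few of the techniques above, as a sketch.
# File names are placeholders; requires librosa and soundfile.
import librosa
import soundfile as sf

y, sr = librosa.load("footsteps_raw.wav", sr=None, mono=True)

stretched = librosa.effects.time_stretch(y, rate=0.5)        # half speed
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=-7)  # down a fifth
reversed_ = y[::-1].copy()                                   # reversed sample

sf.write("footsteps_stretched.wav", stretched, sr)
sf.write("footsteps_shifted.wav", shifted, sr)
sf.write("footsteps_reversed.wav", reversed_, sr)
```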

Trigger sound design ideas and production process: https://blogs.ed.ac.uk/dmsp-place23/2023/03/23/interactive-trigger-sound/

Max and Arduino overall architecture testing process:

If you’re reading this in order, please proceed to the next post: ‘Graphic Identity’.

 
