
Field recording: initial plan

Preparation 1: List of field recording contents

Things that need to be done on the day of recording:
  1. Ambient sound of the space throughout the whole day
  2. Conversations and the sounds of pedestrians' movements
  3. IR data of the space (if we add sound effects later, we can apply the space's reverb to them)
  4. Vibrations on the steps (to be combined with the ambient sound to enrich the audience's listening experience)
  5. The dialogue and the sounds of movement in our narrative
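Item 3 hints at how the IR recordings would be used later: convolving a dry sound effect with the space's impulse response places that effect "inside" the recorded space. A minimal sketch of that idea, using synthetic stand-in signals (the signals, lengths, and sample rate are placeholders, not project recordings):

```python
import numpy as np

# Minimal convolution-reverb sketch: the dry signal and IR below are
# synthetic stand-ins, not the project's actual recordings.
sr = 48000                                     # sample rate in Hz (assumed)
rng = np.random.default_rng(0)
dry = rng.standard_normal(sr)                  # 1 s stand-in dry effect
decay = np.exp(-np.linspace(0.0, 8.0, sr // 2))
ir = decay * rng.standard_normal(sr // 2)      # 0.5 s stand-in impulse response

wet = np.convolve(dry, ir)                     # wet = dry convolved with the IR
wet = wet / np.max(np.abs(wet))                # normalise to avoid clipping

print(len(wet))                                # len(dry) + len(ir) - 1
```

In practice, a DAW convolution-reverb plug-in does the same operation with the recorded IR loaded as its kernel.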

Preparation 2: Selecting the equipment

Equipment selection gives priority to flat frequency response, portability and availability.

Equipment list

  1. Sennheiser MKH 416
A short shotgun microphone with a supercardioid polar pattern.
Use a shotgun microphone to:
Record the sound effects of pedestrians' movements without disturbing them;
Record the dialogue and the movement in the narrative acted by our teammates.
  2. Sanken CUB-01 (MC31) & contact mic from the AC workshop
Sanken CUB-01 (MC31): a boundary microphone for location recording.
Use the contact mic to:
Record the vibrations on the railings.
Use the boundary mic to:
Get a cleaner, flatter, and more natural sound for post-processing. Mounted slightly higher, it could also be a good way to record pedestrians' conversations.
  3. Two pairs of condenser mics
Schoeps MK6 (cardioid pattern)/MK4 small-diaphragm condenser microphones.
Use two pairs of condenser mics to:
Record rich and detailed ambient sounds binaurally;
Record IR data of the space.
  4. Sennheiser AMBEO VR Mic
This mic would face difficulties coping with the envisioned set-up, so it is no longer in use.
Sennheiser – AMBEO VR Mic – Ambisonic Microphone
http://www.sennheiser-sites.com/responsive-manuals/AMBEO_VR_MIC/EN/index.html#page/AMBEO%20VR%20Mic/VR_MIC_01_Produktinformationen_EN.2.1.html#ww1012215
Use this ambisonic mic to:
Record rich and detailed ambient sounds via ambisonics;
Record IR data of the space.
  5. Field recorders
Zoom F8 (field recorder with 8 microphone inputs) + Zoom H5
  6. Others
PC x 1
Mic stands x 2
Cables x 8
Stereo bar x 1
AA batteries x n
Double-sided tape

Preparation 3: On the day of scanning

  1. Observe where microphones can be placed on the News Steps and draw a microphone layout before recording
  2. Observe the behaviour and dialogue of passers-by, adding content to the dialogue and behaviour in the narrative
  3. Make a test recording with the H5 to simulate the official recording and check for any unforeseen circumstances

Preparation 4: Planning for the recording day

  1. MKH 416 + H5, walking around to capture sounds flexibly – 2 people
  2. Other mics: record 5–10 min per hour; a guard is needed near each mic (pair) – 4 people
  3. Controlling the DAW – 1 to 2 people
  4. Recording information (the recording form will be ready before recording) – 1 to 2 people
  5. Archiving and preliminary editing within one week of recording
Staffing arrangements do not have to be fixed, but handovers must be done carefully.
Chenyu Li

Sound Meeting #1

On 23 February, the first Sound department meeting took place. All parts of the sound team attended the meeting and took part in its content. The session was structured around each sound task, as mentioned in the latest sound post. Each team member had the opportunity to catch up and show individual progress on their coordinated task. A collective effort also allowed for planning the future steps of each job.

This post briefly overviews the key topics, discussions, and decisions for each underlined sound task during this meeting. The meeting took place in the following order:

      1. Soundscape Capture: Proposal and Planning;
      2. Place Sonification: Data-Reading method demonstration and future Creative approaches;
      3. Sound Installation: Proposal Review and further planning;
      4. Interactive sound: Resources overview.

The meeting recording can be found here: DMSP Sound meeting #1 (2).vtt

Soundscape Capture

Soundscape capture was the first topic of this meeting, as it was agreed to be the project's current priority, with its recording stage envisioned for the upcoming days. The respective coordinating member, Chenyu Li, presented this segment.

The presentation started off by laying out the possibilities for field-recording methods along with the resources each would require. A wide range of solutions came up:

      • Shotgun mic to record conversations by people on-site;
      • Contact microphones to capture steps and rail handling;
      • Matched-pair condensers for stereo recording;
      • Ambisonic recording.

field recording initial plan

After some analysis, it was agreed that the ambisonic solution would face difficulties coping with the envisioned set-up. Contact microphones would not be practical for capturing footsteps on stone surfaces, nor would the railing add significant sonic value to the final product. As for the shotgun mic solution, although of great interest, it brings up matters of ethics and privacy and was therefore set to be considered on-site. The matched-pair solution was agreed to be the main focus of the recording plans. Adding a second pair facing the opposite direction was further thought out to create a rear stereo image for the envisioned surround sound system. The project therefore plans to use pairs of Schoeps MK6 (cardioid pattern)/MK4 small-diaphragm condenser microphones with a Zoom F8 recorder.

 

Place Sonification

Place Sonification was presented by the coordinating member David Galego, who demonstrated current developments, followed by a discussion on future creative approaches for this part of the project.

XYZ+RGBA data reading patcher

The demonstrated methods showed how to export readable point cloud data for sonification and integrate it as parameters in a functioning Max/MSP patch. The demonstration covered the following developments:

      1. Exporting XYZ+RGBA data from CloudCompare;
      2. Sorting XYZ+RGBA data in Excel;
      3. Exporting XYZ+RGBA data from Excel to a readable .txt file;
      4. Integrating the data into the data-reading Max/MSP patch;
      5. Demonstrating the patch's functionalities and variable attribution.
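Steps 2–3 above can also be sketched outside Excel. The snippet below sorts a few hypothetical XYZ+RGBA rows and formats them as "index, values;" lines, the plain-text layout that Max/MSP's [coll] object can import; the column order and the choice to sort by height are illustrative assumptions, not the project's actual export:

```python
# Sketch of steps 2-3 done in Python instead of Excel: sort hypothetical
# XYZ+RGBA point rows, then format them as "index, values;" lines, which
# Max/MSP's [coll] object can read from a .txt file. The column layout and
# the sort key (height z) are assumptions for illustration.
points = [
    # x,    y,    z,    r,   g,   b,   a
    (0.50, 1.20, 0.10, 200, 180, 160, 255),
    (0.10, 0.40, 2.00,  90,  90, 100, 255),
    (1.70, 0.00, 0.30,  30,  60,  20, 255),
]

points.sort(key=lambda p: p[2])  # ascending height: bottom of the steps first

lines = [f"{i}, " + " ".join(str(v) for v in p) + ";"
         for i, p in enumerate(points)]

with open("points.txt", "w") as f:   # readable .txt for the Max patch
    f.write("\n".join(lines) + "\n")

print(lines[0])  # first (lowest) point as a [coll] entry
```

Sorting before export means the patch can simply step through indices 0, 1, 2, … to traverse the point cloud in a meaningful spatial order.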

The Max/MSP demonstration patcher, along with the .txt file, can be downloaded here: https://github.com/s2272270/PlaceSonification.git

After this demonstration, a discussion on the creative approach to the patcher's further development took place. This discussion reflected on two possible methods to sonify the data:

    • Processing/granulating the previously recorded soundscape;
    • Using parameters to generate sound through the means of MSP;

Although these approaches are not mutually exclusive, the priority given to one or the other was the aspect under discussion. The discussion ended with the coordinating member leaning towards processing methods, whereas the rest of the collective leaned towards properly generative approaches. As such, it was agreed that the coordinator would bring this discussion to the next general meeting, with the Digital Design Media collective taking part in the reasoning, since it was understood that the visual aspect and project concept play deciding factors in how it should sound.

 

Sound Installation

Sound Installation was presented by the coordinating member Yuanguang Zhu (YG). This segment reviewed aspects mentioned in the previous blog post, “YG – Preparation for Surround Sound Production”, posted on the 13th of February, 2023. After a careful review of aspects such as the envisioned system’s wiring design and set-up infrastructure, a series of factors were altered and agreed upon:

      • The collective understood that the speaker set-up should not be based on truss fixed points, as the respective truss mount kit will likely not be an available resource.
      • The collective identified a wiring incongruence between the proposed interface "RME Fireface UC/X" and the Genelec 8030A, since the interface provides its analogue outputs as 1/4″ TRS, whereas the Genelec speakers take XLR inputs.
      • The suggested interface, "RME Fireface UC/X", does not provide a suitable digital connection, since it provides ADAT and coaxial ports instead of USB or CAT network protocols.
      • The collective understood the proposed field-recording solutions to be out of the scope of this task.

The collective therefore suggested that this task look further along the following lines:

      • To plan a speaker set-up that is ground-stand based;
      • To look into interface options that provide at least 8 XLR DA outputs;
      • To look into an interface solution that provides a reliable digital connection such as USB, Dante, or standard CAT5e;
      • To look into a component that may allow tuning of the system;
      • To update the proposal in the post "YG – Preparation for Surround Sound Production".

Having described the required interface characteristics and the system's need for tuning options, the collective suggested looking into the possibility of using the university's DiGiCo SD11 mixing desk.

 

SD11

Interactive Sound

The Interactive Sound segment was presented by the coordinating member Xiaoqing Xu.

This segment analysed the available resources for an Arduino system that could read the project's envisioned user interaction (step-up/step-down) and turn it into a sonic response. While looking into different hardware possibilities, the ideal digital support agreed on was again Max/MSP. After careful analysis and some discussion, the following solutions were understood as the simplest yet most effective:

    • A Max/MSP patch that takes input from two live contact microphones (one on each step) and reacts once a certain dB threshold is surpassed.
    • An Arduino system that uses a sensor (distance sensor or light sensor) and feeds live data into a Max/MSP patch that can then be interpreted and sonified.
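The first option's core logic is simple enough to sketch outside Max/MSP: measure the RMS level of each incoming block of contact-mic samples in dBFS, and fire a trigger when it crosses a threshold. The threshold value and block size below are assumptions for illustration; in the project this logic would live in a Max/MSP patch:

```python
import math

# Sketch of the dB-threshold trigger described above. The threshold and
# block size are illustrative assumptions; the real detector would be a
# Max/MSP patch listening to live contact-mic input.
THRESHOLD_DB = -30.0  # trigger level in dBFS (assumed)

def block_dbfs(samples):
    """RMS level of a sample block, in dB relative to full scale (1.0)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12))  # floor avoids log10(0)

def step_detected(samples, threshold_db=THRESHOLD_DB):
    """True when the block's level exceeds the trigger threshold."""
    return block_dbfs(samples) > threshold_db

quiet = [0.001] * 512   # background noise, about -60 dBFS
loud = [0.5] * 512      # a footstep hit, about -6 dBFS
print(step_detected(quiet), step_detected(loud))  # False True
```

A real detector would also add a hold-off time after each trigger so one footstep does not fire repeatedly across consecutive blocks.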

The collective considered the Arduino sensor option not only the most likely to produce accurate results but also the best documented, with various resources on how to implement such solutions, like the diagram presented below for using a light sensor.

Arduino + light sensor diagram
More sound meetings will occur in the upcoming weeks, so we will keep you posted.

 

Export e57 file from LiDAR

Before using the iPad to export, ensure it has enough storage space. The average export file is usually between 5 GB and 10 GB; if there is not enough space, the export will fail. We found that deleting created jobs in Cyclone does not free up storage space on the iPad, so the best option is to ask the uCreate staff to reinstall the software on the iPad and log back into the account.

export e57 file

Select the linked file in the right-hand menu bar of the job, and click the button in the bottom-right corner of the file to select the e57 export. After the export is complete, we can transfer the file to our computer via AirDrop for editing.

import data from lidar

Open Cyclone and connect it to the LiDAR scanner.

Create a new job.

Click on the button at the bottom left of the jobs page.

Click on scanner data and select the data to be imported.

Wait for the data transfer to complete.

If you’re reading this in order, please proceed to the next post: ‘From Raw Data to Unity’.

 

yuxuan,

17 Feb 2023

Meetings

 

Gantt chart

In order to document the project journey and keep track of team meetings, practical sessions, and meetings with tutor Asad Khan, the two team members assigned to meeting notes and documentation (Molly and Daniela) use the collaborative software "Notion", which they are familiar with, for note keeping.

January 26th, 2023 – First (non-official) Meeting.

This was the first time all group members met one another. Taking place during Thursday's DMSP lecture, the members traded names, their backgrounds, and why they chose the topic of Places. Following a general group discussion about the topic, a Google Form was created and sent out to determine when everyone was free during the week, in order to establish a standing weekly meeting.

January 27th, 2023 – First Contact with Asad.

 

First meeting with everybody present, including Asad. We discussed introductions, why we chose this topic, what we envision for Places, and how we conceive possible portrayals of this theme, as Asad was not present in the first meeting. ChatGPT was explored with questions such as 'is a non-place the same as a liminal place?'
We are to keep posting in the Teams chat for a constant record of events, ideas, stream of consciousness, etc. The focus at this stage is to collect the data from scans.

January 30th, 2023 – Workshop with Asad.

The first workshop was run by Asad. The team explored the data processing software CloudCompare and how it manages point cloud data. Some points learned:

  • Merging two LiDAR scans is possible by defining the start of the point cloud and the end state.
  • We would need to import into CloudCompare and subsample before exporting to Unity, to reduce the number of points and avoid crashing.
  • You could convert the scans into sound.
  • Use a medium setting on the scanner – will take 5 minutes.
  • Define the places where you want to scan before you go to the site – scout these places.
  • We can make 3D objects and then convert them into point clouds and place them into a scan.
  • Microscan something in detail with a handheld scanner and make it into a hologram?
January 31st, 2023 – Team meeting.

For this meeting, we met up in the Common Room in Alison House to discuss, following our last meeting with Asad, where we wanted the project to go.

One thing we all agreed upon was to develop the project with a focus on an important place in Edinburgh, so we decided to work in Miro and add different places we could scan for the project. Some of the places that came up were: the Royal Mile, Mary King's Close, the Innocent Railway Tunnel, the Royal Botanic Garden, Armchair Books, the Banshee Labyrinth, and the News Steps.

One of the topics that came up was how we could incorporate the physical aspect of the exhibition. We discussed creating a 3D-printed, scaled-down model of the places we scan, as well as a holographic effect from micro-LiDAR scanning.

The next meeting will be in ECA, as it will be our first time with the LiDAR scanner and we want to learn to use it and start scanning our environment as an exploration of the technology.

February 2nd, 2023 – First Scans.

In the ECA exhibition hall, the team started to take their first LiDAR scans. We discovered that the linking between scan positions is best and most accurate when the scanner is in line of sight of the last scan. It does technically work when moved up a level; however, more manual alignment is then required.

Read the blog post about ECA.

The day before, David had tested out the scanner in his room at home. This is where the mirror phenomenon was discovered: the scanner takes the reflection as if it were a doorway. Read David's exploration blog post.

February 6th, 2023 – LiDAR Training at uCreate.

We had the induction training in the uCreate Studio, where we learned the safety protocols for working in the workshop. We also had an introduction to the different machines available for us to use, such as 3D printers, laser cutters, CNC machines, and thermoforming machines.

Afterward, we went to the LiDAR workshop, where they showed us the correct way to use the LiDAR scanner, as well as the procedure we need to follow to transfer the data from the iPad to the computer, and the software we need to use to work with the data.

February 7th, 2023 – Individual project proposals and final decision.

Each member presented their own idea of how they envisioned the project taking shape. Some members were unable to attend at the same time, so those who were free for most of the day met with them first to hear their ideas and present them to the rest of the group later. Each idea was discussed, its pros and cons analysed, and eventually we came to a decision we all agreed on. The biggest factor that needed to be explored before being 100% certain was the location: the News Steps.

February 7th, 2023 – Scans of the News Steps.

LiDAR scans of the News Steps with Daniela, Molly, and David. We experimented at night with the flash enabled on the scanner, and also tested how the scanner would align two different scans on different levels of the stairs. The scans came out really well and gave us an idea of how we could keep developing the project. We were able to link both scans, even though they were at different heights on the stairs.

LiDAR Scanning The News Steps

February 11th, 2023 – Team meeting with Asad.

This team meeting allowed the group to touch base with Asad prior to the first submission to verify that the project idea is realistic, achievable, and interesting.

February 13th, 2023 – Team meeting for Submission 1.

The team got together to figure out the final details of the submission. We had a good record of our overall process but had to create a nice workflow for our blog. We worked on finishing up blog posts with information on our previous meetings, and our research development, and we assigned the roles of each team member for the next submission.

February 23rd, 2023 – Sound meeting

The first Sound department meeting took place; all parts of the sound team attended the meeting and took part in its content. The session was structured around each sound task, as mentioned in the latest sound post. Each team member had the opportunity to catch up and show individual progress on their coordinated task. A collective effort also allowed for planning the future steps of each job.

The meeting played out in the following order:

  1. Soundscape Capture with Chenyu Li – Proposal and Planning;
  2. Place Sonification with David Ivo Galego – Data-Reading method demonstration and future Creative approaches;
  3. Sound Installation with Yuanguang Zhu – Proposal Review and further planning;
  4. Interactive sound with Xiaoqing Xu – Resources overview.

DMSP Sound meeting #1 (2).vtt

March 3rd, 2023 – Team meeting with Asad

In this meeting, we decided to book a short-throw projector to test how it would look. We also purchased a shower curtain to try projecting onto it, but once we tried, we realised there was not enough brightness. This helped us understand what kind of projectors we would need for our exhibition, and we noted that we needed to find projection screens that fit the space we are going to be in.

March 9th, 2023 – Team meeting with Jules

In this meeting, we had a talk with Jules about our concept, but most importantly it was a more technical talk about how many projections we are planning to use and what kind of sound equipment would be needed. Jules recommended we have a test day so we can make sure that what we choose is correct and working properly.

March 10th, 2023 – Team meeting with Asad.

For this meeting, we met online with Asad and had a really interesting talk and explanation on how we can use ChatGPT in our work, as a collaborator and helper in developing our projects.

March 26th, 2023 – Team meeting with Asad.

During this meeting, we met in the Atrium to test the sound and projections in the room where the exhibition was going to take place. One important thing we discovered was that there is a switch in the Atrium to close the blinds on the ceiling.

April 3rd, 2023 – Midnight Test

On this day, Molly and I went to the Atrium at night to test the projections and see how they looked with no light outside. We also moved things around and worked with the objects already in the Atrium to create an optimal setup for the exhibition. We created a video explaining where we planned to put the different elements of the exhibition, so we could use it as a guide on set-up day. We also discovered several big wooden boxes in a corner of the room, which we decided to use as stands and props in the space.

 

If you’re reading this in order, please proceed to the next post: ‘Design Methodology’.

Molly and Daniela

LiDAR Scanning The News Steps

In order to be certain that the decided-upon location of the News Steps was feasible for our project, it was important to test how the scanner responded to the environment. There was also the worry of space on the landings: whether there was enough room for the scanner and for the public to pass, for example.

It was late evening when group members Daniela, Molly, and David went to the steps. An advantage of it being dark at the time of scanning was the opportunity to use the flash embedded in the scanner and to explore how many points could be produced and at what level of detail.

Overall, it was successful. The scans aligned despite the height differences of the landings. There is plenty of room for the scanner; we just have to spread out if there are more than two group members at the site. With the flash, there is also enough light to capture a good number of points in the dark.

Video of the scans:

https://uoe-my.sharepoint.com/:v:/g/personal/s2272270_ed_ac_uk/Eb6CPpOa45lAjBPjfTL7tXUBCG_JMZqhRUzJJnKDbjK8HQ?e=Juljjq

         

If you’re reading this in order, please proceed to the next post: ‘The Day(s) of the Lidar Scans’.

Molly Munro

