Meetings

 

Gantt chart

To document the project journey and to keep track of team meetings, practical sessions, and meetings with our tutor Asad Khan, the two team members assigned to meeting notes and documentation (Molly and Daniela) are familiar with the collaborative software “Notion” and are using it for note-keeping.

January 26th, 2023 – First (non-official) Meeting.

This is the first time all group members met one another. Taking place during Thursday’s DMSP lecture, the members traded names and backgrounds and explained why they chose the topic of Places. Following a general group discussion about the topic, a Google Form was created and sent out to determine when everyone was free during the week, in order to establish a standing weekly meeting.

January 27th, 2023 – First Contact with Asad.

 

Our first meeting with Asad and the full team. Since he was not present at the first meeting, we covered introductions, why we chose this topic, what we envision for Places, and how we conceive possible portrayals of the theme. We also explored ChatGPT with questions such as ‘Is a non-place the same as a liminal place?’
We are to keep posting in the Teams chat for a constant record of events, ideas, stream of consciousness, etc. The focus at this stage is to collect the scan data.

January 30th, 2023 – Workshop with Asad.

The first workshop was run by Asad. The team explored the data processing software CloudCompare and how it manages point cloud data. Some points learned:

  • Merging two LiDAR scans is possible, by defining the start of the point cloud and its end state.
  • We would need to import the scans into CloudCompare and subsample them before exporting to Unity, to reduce the number of points and avoid crashing (see the sketch after this list).
  • The scans could be converted into sound.
  • Use a medium setting on the scanner – a scan will take about 5 minutes.
  • Define the places you want to scan before you go to the site – scout these places.
  • We can make 3D objects, convert them into point clouds, and place them into a scan.
  • Micro-scan something in detail with a handheld scanner and make it into a hologram?
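
As a rough illustration of that subsampling step (CloudCompare does this through its GUI; this is a minimal Python sketch using the open3d library instead, with the file names and voxel size as placeholder assumptions):

import open3d as o3d

# Load a point cloud exported from the scanner (placeholder file name).
pcd = o3d.io.read_point_cloud("news_steps_scan.ply")
print(f"Original points: {len(pcd.points)}")

# Voxel downsampling keeps one representative point per 5 cm cube,
# cutting the point count to something Unity can load without crashing.
small = pcd.voxel_down_sample(voxel_size=0.05)
print(f"Subsampled points: {len(small.points)}")

o3d.io.write_point_cloud("news_steps_scan_small.ply", small)
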
January 31st, 2023 – Team meeting.

For this meeting, we met in the common room in Alison House; after our last session with Asad, we wanted to discuss how to take the project forward.

One thing we all agreed upon was to develop the project around an important place in Edinburgh, so we decided to work in Miro and add different places we could scan for the project. Some of the places that came up were: the Royal Mile, Mary King’s Close, the Innocent Railway Tunnel, the Royal Botanic Gardens, Armchair Books, the Banshee Labyrinth, and the News Steps.

Another topic that came up was how we could incorporate a physical aspect into the exhibition. We discussed 3D-printing scaled-down models of the places we scan, and also creating a hologram effect from micro-LiDAR scanning.

The next meeting will be at ECA, as it will be our first time with the LiDAR scanner: we want to learn to use it and start scanning our environment as an exploration of the technology.

February 2nd, 2023 – First Scans.

In the ECA exhibition hall, the team took their first LiDAR scans. We discovered that scan positions link best and most accurately when each new position is within line of sight of the last scan. Linking does technically work when the scanner is moved up a level; however, more manual alignment is then required.

Read the blog post about ECA.

The day before, David had tested out the scanner in his room at home. This is where the mirror phenomenon was discovered: the scanner takes the reflection as if it were a doorway. Read David’s exploration blog post.

February 6th, 2023 – LiDAR Training at uCreate.

We had the induction training in uCreate Studio, where we learned the safety protocols for working in the workshop. We also had an introduction to the different machines available to us, such as 3D printers, laser cutters, CNC machines, and thermoforming machines.

Afterward, we went to the LiDAR workshop, where they showed us the correct way to use the LiDAR scanner, as well as the procedure we need to follow to transfer the data from the iPad to the computer, and the software we need to use to work with the data.

February 7th, 2023 – Individual project proposals and final decision.

Each member presented their own idea of how they envisioned the project taking shape. Some members were unable to attend at the same time, so those who were free for most of the day met with them first to hear their ideas and presented them to the rest of the group later. Each idea was discussed, the pros and cons were analysed, and eventually we came to a decision we all agreed on. The biggest element that needed to be explored before being 100% certain was the location: the News Steps.

February 7th, 2023 – Scans of the News Steps.

LiDAR scans of the News Steps with Daniela, Molly, and David. We experimented at night with the flash enabled on the scanner, and also tested how the scanner would align two different scans on different levels of the stairs. The scans came out really well and gave us an idea of how we could keep developing the project. We were able to link both scans, even though they were at different heights on the stairs.

LiDAR Scanning The News Steps

February 11th, 2023 – Team meeting with Asad.

This team meeting allowed the group to touch base with Asad prior to the first submission to verify that the project idea is realistic, achievable, and interesting.

February 13th, 2023 – Team meeting for Submission 1.

The team got together to figure out the final details of the submission. We had a good record of our overall process but had to create a clean workflow for our blog. We worked on finishing blog posts covering our previous meetings and our research development, and we assigned each team member’s role for the next submission.

February 23rd, 2023 – Sound meeting

The first Sound department meeting took place; all members of the sound team attended and took part. The session was structured around each sound task, as described in the latest sound post. Each team member had the opportunity to catch up and show individual progress on their assigned task, and a collective effort allowed for planning the future steps of each job.

The meeting played out in the following order:

  1. Soundscape Capture with Chenyu Li – proposal and planning;
  2. Place Sonification with David Ivo Galego – data-reading method demonstration and future creative approaches;
  3. Sound Installation with Yuanguang Zhu – proposal review and further planning;
  4. Interactive Sound with Xiaoqing Xu – resources overview.

DMSP Sound meeting #1 (2).vtt

March 3rd, 2023 – Team meeting with Asad

In this meeting we decided to book a short-throw projector to test how the projection would look. We also purchased a shower curtain to try projecting onto, but once we tried it, we realised there was not enough brightness. This helped us understand what kind of projectors we would need for our exhibition, and we noted that we needed to find projection screens that fit the space we are going to be in.

March 9th, 2023 – Team meeting with Jules

In this meeting, we talked with Jules about our concept, but most importantly it was a more technical talk: how many projections we are planning to use, and what kind of sound equipment would be needed. Jules recommended we have a test day, so we can make sure the equipment we choose is correct and works properly.

March 10th, 2023 – Team meeting with Asad.

For this meeting, we met online with Asad and had a really interesting talk about how we can use ChatGPT in our work, as a collaborator and helper in developing our projects.

March 26th, 2023 – Team meeting with Asad.

During this meeting we met in the Atrium to test the sound and projections in the room where the exhibition was going to take place. One important thing we discovered was that there is a switch in the Atrium to close the blinds on the ceiling.

April 3rd, 2023 – Midnight Test

On this day, Molly and I went to the Atrium at night to test the projections and how they looked with no light outside. We also moved things around and worked with the objects already in the Atrium to create an optimal setup for the exhibition, and we made a video explaining where we planned to put the different elements, so we could use it as a guide on set-up day. We also discovered several big wooden boxes in a corner of the room, which we decided to use as stands and props in the space.

 

If you’re reading this in order, please proceed to the next post: ‘Design Methodology’.

Molly and Daniela

Workflow

Scan

The Leica BLK360 laser scanner: working together with an iPad, the scanner captures the scene as a point cloud. The iPad application can automatically register multiple scans of the same scene, and through the corresponding software on the computer and iPad, the scanned scene can be exported.

The new BLK360 3D scanner from Leica Geosystems - DEVELOP3D

Post-processing

CloudCompare: with this software, we can import the scanned point cloud, reduce the number of points in the scene, and modify the colour and saturation of those points. It can also create a camera, build keyframe animations for that camera, and export video in MP4 format.

TouchDesigner & Attraktors Designer: the point cloud can be imported into Attraktors Designer, which recognises the points and can control them. For example, it can move points within a certain range, or convert points into lines for movement. It can also completely scramble the points, forming new scenes with particular movements.

 

Reference links:

https://www.youtube.com/watch?v=SuFeM07ddPc&list=WL&index=2

https://www.youtube.com/watch?v=ssJUxwtR44o

Installation

Arduino & sensors: by combining a written program with sensors, we can realise immersive interaction between the audience and the scene. For example, a pressure sensor can recognise the user’s footsteps and trigger a scene change, or the audience can interact with the scene through gesture-recognition hardware (Kinect/Leap Motion).
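
As a rough sketch of the pressure-sensor footstep idea (the serial port, baud rate, and threshold are assumptions, and the Arduino is assumed to print one analogue reading per line):

import serial  # pyserial

PORT = "/dev/ttyACM0"  # assumed Arduino port
THRESHOLD = 600        # assumed analogue value indicating a footstep

with serial.Serial(PORT, 9600, timeout=1) as arduino:
    stepped = False
    while True:
        reading = arduino.readline().decode(errors="ignore").strip()
        if not reading.isdigit():
            continue
        pressure = int(reading)
        if pressure > THRESHOLD and not stepped:
            stepped = True  # rising edge: a footstep lands on the sensor
            print("Footstep detected - advance the scene")
        elif pressure <= THRESHOLD:
            stepped = False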

 

13th Feb 2023

yuxuan

 

Interaction Process

The interaction process is shown in the figure above. There are observers and operators in this exhibition: the operator interacts with the scenes, and narrative sounds accompany the scene as it moves forward or backward, while the observer can view the whole interaction process and be inspired by it in the end.

View the narrative part here: https://blogs.ed.ac.uk/dmsp-place23/2023/02/13/narrative/

 

Additionally, the colour and size of the point cloud will change as time changes. The default scene deforms and morphs with the actions of the operator; eventually, the “non-place” turns into a “place” by giving the audience a sense of place through different dimensions of human life: emotions, biographies, imagination, and stories.
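
As a toy sketch of how time could drive the point cloud’s colour and size (the mapping itself is an invented placeholder, purely to illustrate the idea):

import colorsys
import math

def time_to_appearance(hour: float):
    """Map an hour of day (0-24) to a point tint (RGB) and point size."""
    # Brightness peaks at midday and dims toward midnight.
    brightness = 0.2 + 0.8 * math.sin(math.pi * hour / 24.0)
    # Hue drifts gradually over the 24-hour cycle.
    hue = (hour / 24.0) % 1.0
    rgb = colorsys.hsv_to_rgb(hue, 0.4, brightness)
    size = 1.0 + 2.0 * (1.0 - brightness)  # points swell in the dark
    return rgb, size

print(time_to_appearance(12))  # midday: bright, small points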

View software support here: https://blogs.ed.ac.uk/dmsp-place23/2023/02/13/workflow/

Allison & Yijun

Building the Space

How is this going to look as an installation? As an exhibition?

We’ve got a vision of the outline of the space. Starting as a paper prototype and developing into a 3D mockup, we’re starting to see it come together. By visualising the space early in the planning stages, we can identify possible issues as well as start to find the best physical location to present the installation.


This structure can often be seen in museums and art galleries. Perhaps one of the more recent and recognised examples of projectors and sound being used together is the Van Gogh exhibition.

van gogh alive edinburgh
Traynor, S. (2022) We visited Edinburgh’s new Van Gogh Alive exhibit and got goosebumps, EdinburghLive. Available at: https://www.edinburghlive.co.uk/best-in-edinburgh/van-gogh-alive-edinburgh-visited-23414535 (Accessed: 10 February 2023).

The Vincent Van Gogh Experience is an immersive, multimedia exhibit centred on the artwork and life of the famous Dutch post-impressionist painter. Using cutting-edge technology such as virtual reality, augmented reality, and projections, it transports visitors into the world of Van Gogh and his paintings. The experience offers a unique and interactive way to understand and appreciate Van Gogh’s iconic works.

Christie NMK lifestyle 3
Digital Systems, C. (2021) Christie brings cultural artifacts to life at the National Museum of Korea, www.projectorcentral.com. Available at: https://www.projectorcentral.com/Christie-at-National-Museum-of-Korea.htm (Accessed: 13 February 2023).
The largest 5D immersive Ukiyo-e art exhibition opened in Shanghai (2022) Barco. Available at: https://www.barco.com/en/customer-stories/2022/q1/ukiyo-e-art-exhibition.

Other exhibitions show how projections can immerse an audience in a large area without reducing the quality of the art, but rather enhancing it, allowing the audience to perceive it in ways they might not perceive a 2D static piece. Combined with sound, this experience has the potential to fully immerse the user in the space.

As our installation is large and immersive, we don’t want to intrude on other groups’ presentations, or, on the other hand, have any other presentation affect the user’s immersion in ours.

Hence it is our preference that the space be set in the Atrium of Alison House. The metal frames are ideally spaced in proportion to where the projector screens would be hung, and they allow the projectors themselves to be placed behind the screens. There are also speakers already integrated into the room, along with plenty of plug points for the equipment.

After a conversation with our tutor Asad about this idea, we discovered that projecting LiDAR scans is a trial-and-error process, specifically regarding the brightness and contrast of the scans. We will need to test the quality of the LiDAR scans and how they show up on the screens themselves. It could be interesting to experiment with different screen materials, such as sheets, mesh, and thick or thin fabric. This is very much an iterative process.

The university offers a wide range of projectors to choose from. A minimum of three is required; best-case scenario, there will be four. These would be connected to a single computer that needs a good level of computing power to handle all the images and transitions.

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=4689

There will be two computers involved in this process, to lower the risk of either one crashing while running all the software for the sound, the images, and the interactions. To ensure that they stay in sync, we could use an Arduino with a counter that keeps them both on the same timing.
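
A minimal sketch of that sync idea, assuming the Arduino broadcasts an incrementing tick number over USB serial and each computer simply follows it (port name and baud rate are assumptions):

import serial  # pyserial

PORT = "/dev/ttyUSB0"  # assumed; each computer has its own link to the Arduino

def follow_clock(on_tick):
    """Call on_tick(n) whenever the shared Arduino counter advances."""
    with serial.Serial(PORT, 115200, timeout=1) as link:
        last = -1
        while True:
            line = link.readline().decode(errors="ignore").strip()
            if line.isdigit() and int(line) > last:
                last = int(line)
                on_tick(last)  # e.g. seek sound/image playback to this tick

follow_clock(lambda n: print(f"tick {n}"))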

The interactive control system consists of a stair set composed of three levels. The user (one at a time) will be instructed to step onto the intermediate step and then told to either step up or down. This action will define the dynamic switch of the exhibition: “step up, and go forward in time” or “step down, and move back in time”. The technical design of this control system is envisioned in two possible ways:

    • an Arduino system using sensors (e.g. distance, pressure, or sound sensors)
    • contact microphones fed into a Max/MSP system.

For either design concept, the stair set would ideally consist of wooden box-like structures, since these provide the structural consistency needed for contact sensors and also offer the desired acoustic properties for contact mics or sound sensors.
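
Whichever sensing method is chosen, the control logic itself stays simple; a sketch of the step-up/step-down decision (the step numbering is an assumption):

from enum import Enum

class Direction(Enum):
    FORWARD = 1    # stepped up: time moves forward
    BACKWARD = -1  # stepped down: time moves back
    HOLD = 0

def read_direction(prev_step: int, current_step: int) -> Direction:
    """Steps are numbered 0 (bottom), 1 (middle), 2 (top)."""
    if current_step > prev_step:
        return Direction.FORWARD
    if current_step < prev_step:
        return Direction.BACKWARD
    return Direction.HOLD

print(read_direction(1, 2))  # user steps up from the middle: Direction.FORWARD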

To see more about the specific sound equipment we will need, please refer to this post: YG – Preparation for Surround Sound Production


YG – Preparation for Surround Sound Production

In order to better realize our team’s ideas, surround sound preparation is essential. To do this, I searched the school’s equipment library for equipment that we could use. The list is as follows.

Pre-production

 

  1. Microphone

Sennheiser AMBEO VR mic

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=4149

  2. Field recorder

Zaxcom Nomad Lite

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=1695

Zoom F8

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=4175

Post-production and live sound reinforcement

  1. Interface

RME Fireface UC

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=1147

RME Fireface UCX

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=2635

  2. Speaker

Genelec 8030A

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=1727

Genelec 7060 Active Subwoofer

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=6927

 

In addition to this, our sound design could produce sound effects to enrich the scenes’ storytelling. I think the focus of this audio work is on bringing the sounds we have created into the scene in surround sound as faithfully as possible, so we need to have more discussion about the implementation of surround sound.

We could consider the speaker position setup like this.

Narrative

The LiDAR scanner will record the News Steps over 24 hours. Time will move forward as the audience climbs up the steps, and vice versa.

Some narrative phrases, working together with the audio, could enhance the audience’s immersive experience, letting them imagine their own story in their minds:

Early morning 

“The fresh air and quietness of the morning are just what I needed.”

“Is there any coffee shop open?”

 Morning 

“Do you know which classroom we are going to today?”

“Nooo, I am late”

“I will be there in 5 mins”

“I’m a bit winded, but the peace and solitude of this place is worth it.”

Afternoon

“Be careful with your steps”

“The steps kill me”

“I guess the dungeon is over there”

Night

“It is a long day”

“What’s the next place we are going?”

“See, the street lights are on”

“Just arriving! Would you please leave a door for me?”

Midnight

A drunkard singing

A couple quarrelling

 

Ambient Sound/Sound List

In the morning, you can hear the sound of cyclists carrying bicycles up and down the stairs (such as the sound of bicycles hitting the railings).

At night, fireflies gather on the street lamps. When you accidentally disturb them, they will fly away.

If you stay on a step for a long time, a weird noise will be generated by the steps.

Future Plan

As a group, we are thinking of asking random people on the News Steps to share their story about this place, or their final destination, to enrich our content and illustrate the relationship between people and place.

 

13th Feb 2023

Allison, Chenyu, Yijun, Xiaoqing

The Sound Task Force

After some weeks of discussion, and having decided on the project’s foundational concepts, it seems appropriate to understand and structure its sonic possibilities and define the respective tasks. To quickly sum up the foundations: the project will be composed of an exhibition in which an immersive experience of a pre-captured space (recorded with LiDAR) will be displayed, and users will explore it through interactive methods. From this brief, and after careful thought, the collective can identify four main sound-work domains:

      • Soundscape Capturing
      • Place Sonification
      • Interactive Sound 
      • Installation.

This is handy since the group comprises 4 Sound Design students, to whom these sound-work domains can be later assigned as roles.

Soundscape Capturing

The relationship between the concept of place and soundscape is crucial and interdependent. A place can be defined as a location with unique physical, cultural, and social characteristics. On the other hand, the soundscape refers to the acoustic environment that a person experiences in a particular place, including natural and human-made sounds. The soundscape contributes to our perception of a place, creating a sense of identity and atmosphere.

Capturing and reproducing the soundscape along with the place (through LiDAR) will play a critical role in shaping perception and understanding of the presented place; it will therefore probably be one of the first sound tasks performed for this project. The defined capture location is sonically vibrant, with an urban profile sitting alongside nature. It comprises an extensive collection of elements that must be carefully thought out across all capture stages: planning, recording, and editing. That said, through professional field-recording technology and carefully considered techniques, we propose to explore how best to capture this soundscape, keeping in mind its high-fidelity representation as well as the needs of the project as a whole and its final form.

Place Sonification

One of the project’s core concepts, capturing a location as point cloud data, opens up a new sound opportunity. A critical rule taught through the course applies greatly here: where there is audio, there is data, and where there is data, there can always be audio.

Grain River – Project that sonifies recorded GPS/Accelerometer data (Galego, 2021)

One aspect of sound art is data sonification, which involves transforming data into sound to make it more accessible and to enhance its representation. In data sonification, data points are mapped to specific sound parameters, such as pitch, frequency, and volume. This creates a sonic representation of the data that can reveal patterns, trends, and relationships in ways that might not be immediately apparent through visual means. Sound art that employs data sonification can range from immersive installations to interactive pieces and has the potential to offer new perspectives on data and provide a unique and engaging form of artistic expression.
This being said, point cloud data can be turned into quantifiable formats such as XYZ and RGB and exported as an .xml file. The resultant .xml file can be converted into a .txt file and fed to a Max/MSP patch that reads these numbers. From here, through computational sound composition, the data will generate a meaningful sound-art piece that represents the recorded place; in a sense, the physical shape of the site performs a new soundscape. This method could also play along with the recorded soundscape audio, whose high-fidelity representation can be processed and re-shaped according to the point cloud data sets.
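
A rough sketch of that conversion step (the input row format and the pitch/amplitude mappings are assumptions; the output is a plain .txt of indexed lines in the style a Max/MSP coll object could read):

# Convert exported point cloud rows ("x y z r g b") into a .txt stream
# of sound parameters for a Max/MSP patch to step through.
def xyz_to_sound_params(in_path="scan_points.txt", out_path="sonify.txt"):
    with open(in_path) as src, open(out_path, "w") as dst:
        for i, row in enumerate(src):
            parts = row.split()
            if len(parts) < 6:
                continue  # skip headers or malformed rows
            x, y, z, r, g, b = map(float, parts[:6])
            pitch = 36 + (z % 48)          # height -> pitch (assumed mapping)
            amp = (r + g + b) / (3 * 255)  # brightness -> amplitude 0..1
            dst.write(f"{i}, {pitch:.2f} {amp:.3f};\n")  # coll-style "index, data;"

xyz_to_sound_params()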

Interactive Sound

As stated previously, the user will perform some degree of interactivity with the exhibition. Users will interact through a control system that takes in a relatively simple gesture (stepping up or down). Although simple, this represents a decision-making process by the user, which in turn presents multiple sets of data:

    • outcome A
    • outcome B
    • similarity to a past outcome
    • dissimilarity to a past outcome

Such parameters can be represented by quantified data and sonified through Max/MSP. One other form of sonification would be to use Unity/Unreal for this purpose, which, along with middleware like Wwise or FMOD, could work sound in exciting and varied ways. This task, however, hinges on the crucial point of finding the optimal way to read live data from this simple control and managing a system that is optimal for such a purpose.

Installation

As stated previously, the final presented product of this project will be an exhibition, in the sense that it will use space to deliver an audio-visual product. This premise demands the installation factor for both Audio and Visual domains. Now, it is also clearly stated that the project proposes an immersive experience. On the one hand, this “immersive” label narrows down the installation options. On the other hand, it demands innovative and creative solutions.

Installation is the task that is, for the most part, treated as the last step, when it is not. It involves various factors that must be planned, measured, tested, and improved. It accounts for the type of audio to play back, the space being used, and most importantly, the listener, both as an individual and as an audience. The installation must be thought out early in the project and improved as the project develops, and it must account for all the different needs that might occur over time.

Two possible solutions come to mind for the “immersive exhibition” concern of the project: a multichannel surround sound speaker set-up or an ambisonic solution with a binaural format. However, although ambisonics provides powerful ways to create immersive experiences, it also demands more resources and would not offer a simple solution for the different possible orientations of the listener. Therefore, a multichannel surround sound system seems more appropriate as this can more easily be adapted and tuned to the ongoing project’s needs.

 

Molly – Draft Idea

As a group, we decided that in order to narrow down our final idea, it would be useful for all team members to come up with how they could see this project unfolding. We had agreed in a previous meeting that the physical presentation of this project is going to be in the form of an installation/exhibition – a space (i.e. place!) that the user can explore and be immersed in.

I’ll get into the specifics of my idea in a moment; first I want to show what inspired it. In Portsmouth, they have a series of installations as a part of We Shine Portsmouth, where artists display their exhibitions around the city. In 2021, British artists Anna Heinrich and Leon Palmer used 3D laser scanning technology, a voile screen, projected film, and sound and lighting effects to create an installation that could be folded down and moved to another location, like a mythical vessel.

https://heinrichpalmer.co.uk/project/ship-of-the-gods/

The way that they projected this on such a scale was really interesting. The movement of the ship, the surround sound and the lighting come together to create an engaging experience.

My Idea

So – how do I envision this project? During our first group brainstorm, one idea that was mentioned was experiencing and scanning the same place at different times of the day, or over multiple days. I loved this concept and thought that, with the vast array of skills this group has, it has the potential to be incredibly immersive.

Where?

The News Steps that go from the top of the Royal Mile down to Waverley station almost broke me when I walked up them with a very heavy rucksack one day – so logically, I would like to spend the whole day running up and down taking LiDAR scans.

Photo by Molly Munro

These are a winding set of steps in the heart of the city, broken up by consistent landings where visitors can often be seen catching their breath. These landings could also be very useful for placing the LiDAR scanner on.

Something to keep in mind is that it can get busy, and this is a very expensive set of equipment – are risk assessments needed? Does one of us stand guard in a hi-vis jacket?

When?

As I want to capture the passing of time, scans and field recordings would be taken periodically throughout the day, possibly even over two days, as it would be interesting to capture the “changing of the day”. Either way, I would record over a minimum of 24 hours, from early morning to late at night (12am-11pm?), either taking it in shifts OR recording on multiple days and splicing it together as if it were one.

What?

What is this going to look like as an installation? I envision this having 3 rear-projection screens surrounding the user. Front view: POV facing up the stairs; side views: the LiDAR split down the middle, left and right respectively.

Paper prototype visualisation
POV of user – 1
POV of user – 2

In front of the user, near the centre of the three screens, would be the interactive control that allows for moving forward or backward in time. As time moves forward, the images change accordingly to the scans at that time while simultaneously moving up the stairs. As time goes backwards, it reverses.

Accompanying the images of the scans, there is lighting in the room that reflects the lighting at the time of day – sunrise, midday, sunset etc. There would also be a couple of speakers in the corners that are outputting the sounds recorded.

It could also be really interesting to have a button that transforms the ‘normal’ place into a distorted reality of sorts – into a ‘non-place’ where we have edited, morphed, twisted, and warped the LiDAR scans. Colours and sounds change to their opposites, like being transported into a twilight zone.

How?

Equipment that I know we need:

We would be creating the movie/animation, and the user would essentially just be scrubbing through the footage.
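
A toy sketch of that scrubbing idea, assuming the day is pre-rendered as a single video file and the control just moves the playhead (OpenCV; the file name and step size are assumptions):

import cv2

video = cv2.VideoCapture("news_steps_day.mp4")  # assumed pre-rendered animation
total = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
position = total // 2  # start mid-day

def scrub(direction: int, step: int = 30):
    """direction +1 = step up (forward in time), -1 = step down (backward)."""
    global position
    position = max(0, min(total - 1, position + direction * step))
    video.set(cv2.CAP_PROP_POS_FRAMES, position)
    ok, frame = video.read()
    if ok:
        cv2.imshow("News Steps", frame)
        cv2.waitKey(1)

scrub(+1)  # one step up the stairs nudges time forward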

Take a LiDAR scan on each landing of the steps and then manually align them (due to the height difference).

Molly Munro

6th February 2023

Xiaoqing – Draft idea

Background:

Use interactive devices supported by 3D printing technology to display the cultural heritage of a special place with a Scottish cultural background, and use an immersive display experience to explore the possible interactive relationships between individuals and spaces.

 

Form:

The multi-dimensional scanned scene is displayed through projection, and the audience can experience, from a first-person perspective, the changes in the scanned image data caused by individual behaviour and movement in the exhibition space. The progression of the entire installation over time is guided by a well-established storyline, so the audience can have an immersive experience of the spatial environment and of the cultural significance and aesthetics behind it.

 

Place:

Mary King’s Close

Mary King’s Close is an underground close in the Old Town of Edinburgh, Scotland. It dates back to the 17th century and is one of the oldest areas of the city. It is named after Mary King, a prominent figure in Edinburgh’s history who lived there in the 1630s. The close was once a bustling centre of activity, with merchants selling their wares, artisans working, and families living there. However, when the plague hit Edinburgh in 1645, it was sealed off for decades; it later reopened as a tourist attraction in 2003, complete with mythical tales of its ghosts.

 

Project extension:

  1. Non-realistic space: realise the transformation between real space and non-realistic space through a specific user trigger, with corresponding display forms for picture and sound;
  2. Use programming to generate sound from visual movement.

 

Sound Plan:

  1. Large ambient sound that matches the environment (simultaneous recording or screen generation)
  2. Scene music: multiple musical motifs in different areas of the location that enhance the immersive first-person experience, triggered by individual triggers/sensing (experimental electronics/Scotland/medieval)
  3. Voiceover: narrative that quickly explains the background and meaning of the project

 

My project story proposal:

The protagonist accidentally meets a ghost (a performance artist) in the mirror after straying into the alley. Through the story the ghost tells, the protagonist personally experiences the artist’s struggling and thinking about the relationship between life and art when the plague hit Edinburgh in 1645. The space the character experiences develops with the sound narrative, perhaps adding our members’ individual thoughts in different spaces.

 

First practice

Time: 31/01/2023

Place: ECA West Court

Equipment: Leica BLK360

 

Purpose:

To understand how LiDAR technology and its devices work in a space, including how the result is presented, and how to extend the scanning range.

Method: change the scanning angle of the equipment in an enclosed indoor space, and observe the 3D results the device produces in response to human motion by changing the motion trajectory of the human body.

 

Summary:

LiDAR scanning can acquire a large amount of data in a short time and provide an objective database for our final mapped space, accurately representing the terrain or structure. Compared with traditional scanning methods, it provides more detailed results. However, through this practice we found that the equipment has certain requirements for the spatial topography being scanned: for areas with a large amount of trees or other vegetation, the collected data may not be as detailed or accurate, and the accuracy of the results may also be affected when data is collected in wet conditions such as rain and snow.

Yijun Zhou – Draft idea

Inspiration

The first time we used the LiDAR scanner, we tried walking in a circle around the location being scanned, and the results showed particular shapes based on our movement. This inspired me to think that we can do some design exploration and expression based on our behaviours and actions in space.

The trajectory of our movements on our first scan
Idea

As shown in the diagram, we can map the scanned shape onto a score to derive an exact track.

By transforming the shape of our actions, their distance and depth, and their positional coordinates into unique sound effects, we can express our relationship with the space and our emotional connection to it.

At the same time, it can combine the sounds of other people in this space, such as the sound of conversation and footsteps.

Combined with Allison’s ideas, we can translate body language into music, expressing how the environment affects us unconsciously, as well as our emotions and perceptions of space. It is possible that the physical space outside the mirror corresponds to one kind of music and the space inside the mirror to another. Through this project, we can reflect on how the environment shapes people, and how people reshape the environment.
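
As a toy sketch of that mapping, assuming the walked trajectory is a list of (x, y) coordinates and using an invented pentatonic mapping (every numeric choice here is illustrative):

import math

PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees (assumed musical choice)

def trajectory_to_notes(points, base_note=60):
    """Map (x, y) positions to (MIDI pitch, duration) pairs."""
    notes = []
    for x, y in points:
        degree = PENTATONIC[int(abs(x) * 10) % len(PENTATONIC)]
        octave = int(abs(y)) % 3            # y position shifts the octave
        duration = 0.25 + (abs(x) % 1) / 2  # x fraction stretches the note
        notes.append((base_note + degree + 12 * octave, duration))
    return notes

# e.g. a small circular walk like our first scan:
circle = [(math.cos(t / 5), math.sin(t / 5)) for t in range(32)]
print(trajectory_to_notes(circle)[:4])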

Some attempts

I tried providing ChatGPT with a few simple numbers to generate a basic melody, and I found that it worked.
