
From Raw Data to Unity

The scanner-to-Unity pipeline is, in theory, quite straightforward. In practice, however, there are many individual elements and constraints that need to be taken into account.

The biggest worry when it comes to point cloud data is the number of points and the size of the data files. In an ideal world, we would have the computational power to handle complex clouds with millions of points, rendered at a moment's notice. Unfortunately, despite the capabilities of today's laptops and MacBooks, this is not a quick process.

Starting at the beginning

As we know, when scanning the steps, the data is sent to the Cyclone app on the iPad provided by uCreate, which allows us to preview and pre-align multiple scans. It also allows us to export the data as a bundle in the .e57 file format. However, as discovered by Yuhan Guo, the Cyclone app contained upwards of 56GB of data saved from previous scans, which is only erasable if the app is reinstalled. As uCreate did not solve this issue before we obtained the scanner for the first round of final scans, we had to split the scanning day in two to limit the amount of data to what would fit. This also meant that exporting the files was almost impossible, as the app needs some free space to execute the function. Therefore, for the set of 10 morning scans, it went: export scan 1 > delete scan 1 > export scan 2 > delete scan 2 > etc…

With over 400,000,000 points of data to export, this was time-consuming, but successful. As the complete bundle could not be exported, it also took more time to manually re-align the scans in the Cyclone REGISTER 360 software on the uCreate laptop. It turned out that this laptop was also full, with 1GB of total available storage, preventing us from uploading the scans to the computer.

Took a trip into New College for warmth and WiFi!

We solved this by connecting my external hard drive as the source of storage for the software.

Thankfully, the next time we returned and picked up the scanner, we explicitly told uCreate to DELETE and REINSTALL the iPad app (not just delete the projects from the app, which they kept insisting we do; we did, and it did nothing to fix the problem). Suddenly there was so much space that we could redo a couple of the scans in addition to finishing the second half of the staircase in the afternoon portion of our timeline.

Total points over the two days: 1.224 Billion!

Alignment

The .e57 files, which had to be exported individually outside their bundle, were imported and manually aligned in Cyclone REGISTER 360. The scans could then be exported as a bundle and brought into CloudCompare, where the afternoon and morning scans were aligned to complete the scan of the staircase.

CloudCompare to Blender: Giving it geometry

If I were to import the point cloud directly into Unity at this point, it would have no physical geometry, and therefore no texture; it would be invisible to the camera.

The in-between step to resolve this is to export the scans (after subsampling to 100,000 points!) in .ply format and bring them into Blender, where we work with Geometry Nodes and then assign a texture.
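
A minimal sketch of this subsampling step, done here with the open3d Python library rather than CloudCompare's GUI (open3d is my assumption – `pip install open3d` – and the file names are placeholders; note that open3d does not read .e57 directly, so this assumes the scan has already been converted to .ply):

```python
import open3d as o3d

# Load the dense cloud and report its size.
cloud = o3d.io.read_point_cloud("news_steps_dense.ply")
print(f"loaded {len(cloud.points):,} points")

# Randomly subsample down to roughly 100,000 points, matching the target
# used in CloudCompare before the Blender import.
target = 100_000
ratio = min(1.0, target / len(cloud.points))
small = cloud.random_down_sample(ratio)

o3d.io.write_point_cloud("news_steps_100k.ply", small)
print(f"wrote {len(small.points):,} points")
```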

Lighting was placed in the same positions as the street lamps.
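
As for the Geometry Nodes step itself, we set it up by hand in Blender's node editor, but for reference, here is a hedged script version of the same idea (Blender 3.x Python API; the file path, names, and sphere radius are my own placeholders): the imported vertices are converted to points, a small icosphere is instanced on each one, and a material is assigned so the cloud is visible to the camera.

```python
import bpy

# Import the subsampled cloud (vertices only, no faces). Path is a placeholder.
bpy.ops.import_mesh.ply(filepath="//news_steps_100k.ply")
obj = bpy.context.selected_objects[0]

# Minimal node tree: vertices -> points -> icosphere instances -> material.
tree = bpy.data.node_groups.new("PointsToSpheres", "GeometryNodeTree")
tree.inputs.new("NodeSocketGeometry", "Geometry")
tree.outputs.new("NodeSocketGeometry", "Geometry")

n_in = tree.nodes.new("NodeGroupInput")
n_out = tree.nodes.new("NodeGroupOutput")
to_points = tree.nodes.new("GeometryNodeMeshToPoints")
sphere = tree.nodes.new("GeometryNodeMeshIcoSphere")
sphere.inputs["Radius"].default_value = 0.01      # ~1 cm per point
instance = tree.nodes.new("GeometryNodeInstanceOnPoints")
realize = tree.nodes.new("GeometryNodeRealizeInstances")
set_mat = tree.nodes.new("GeometryNodeSetMaterial")
set_mat.inputs["Material"].default_value = bpy.data.materials.new("PointMat")

links = tree.links
links.new(n_in.outputs["Geometry"], to_points.inputs["Mesh"])
links.new(to_points.outputs["Points"], instance.inputs["Points"])
links.new(sphere.outputs["Mesh"], instance.inputs["Instance"])
links.new(instance.outputs["Instances"], realize.inputs["Geometry"])
links.new(realize.outputs["Geometry"], set_mat.inputs["Geometry"])
links.new(set_mat.outputs["Geometry"], n_out.inputs["Geometry"])

# Attach the tree to the imported object as a Geometry Nodes modifier.
mod = obj.modifiers.new("PointsToSpheres", "NODES")
mod.node_group = tree
```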

Blender to Unity

There are two options to go from Blender to Unity: export as FBX, or simply drag and drop the Blender file into Unity. I found that the Blender file was better accepted by Unity, with faster importing and more options available for edits, such as textures and lights.

NOTE: This is all done in Unity 2021. We experimented with Unity 2019, as that version is compatible with Arduino, but it DOES NOT LIKE the point clouds. So we stuck with 2021.

If you’re reading this in order, please proceed to the next post: ‘An exploration of Kinect, TouchDesigner and Max data transfer’.

Molly Munro

The Day(s) of the Lidar Scans

 

Day 1 – The Morning and Halfway up

Friday 3rd March from 6am – 10:30am

Waking up bright and early at 5am for a 6am field recording at the steps – the city was quiet, dark, and peaceful… if you don’t count all the delivery trucks and bin lorries.

We arrived at The News Steps with an ok-ish attitude, a considerable amount of caffeine, and a plan: to record the scans for the first section of the steps. Deciding to start from the bottom of the steps, we set up everything, took out our companion and biggest ally for the day – our buddy Liam the LiDAR (yes, we named him; it was a long day) – and hit record.

The initial plan was to take one scan on every landing of the steps, alternating the scanner to the right or to the left after each one. But after the first couple of tries, we discovered that we were missing quite a lot of information from the middle of the steps, making the scans look quite empty.

So, very carefully, we adjusted one of the legs of the tripod and placed the scanner in the middle space between the landings. Because we know how expensive the scanner is, and we were really afraid of someone walking down the steps in a rush and accidentally kicking it, Molly took on the role of bodyguard, sitting down right under the scanner to make sure it would stay in place. In some scans you can see a bit of blue flashing right next to where the scanner was placed – this is the top of Molly’s hat peeking through.

Around our sixth scan, the iPad showed 10% battery left and the storage was completely full, making us unable to keep scanning. So we went looking for somewhere to charge the electronics, and to warm up for a bit as well. We ended up at New College, where The School of Divinity is located. After asking at the main entrance where we could sit and work for a bit, we headed to Rainy Hall and settled down there to export the scans from the iPad onto our external hard drive, and then import them into Cyclone.

Here was when we realised just how much data we had gathered so far: we could not export the bundle from the iPad because there was no storage left on it, so we had to export each file individually, then delete it, over and over. Once we had all the files, we put them into Cyclone and manually aligned each and every scan. And when we were ready to export them so we could move to CloudCompare, we could not. Why? Because the laptop also had no storage left (just our luck).

So back to the steps we went to finish scanning the four places we were missing. Once we were done, we waited for the sound team to finish their recordings, and we all headed to Alison House to finish processing the data. Once in the Atrium, we exported the rest of the scans and added them to the bundle in Cyclone. After a bit of trial and error, we were finally able to connect our external hard drive to Cyclone and export our first bundle of scans, yey!

The overall experience for this first day of fieldwork was quite nice; once we had all the scans together and could see just how much amazing data we had gathered in a few morning hours, we were excited to keep working and looking forward to experimenting with it.

Day 2 – The Afternoon/Evening and the Second Half

Tuesday 7th March from 1:30pm – 7pm

One would have thought that going out at 5am would be way harder than doing the same thing during the middle of the day, but boy, were we wrong. It was much colder this day, making it extremely hard for us to feel comfortable standing on the steps while doing the scans.

We did the first batch of scans from 1pm to around 3pm, as we wanted to record the mid-day rush, and we quickly noticed just how much more movement there was on the steps, with a good number of tourists going up and down. We knew they were tourists because once they hit the middle of the steps, they all looked lost and tried to go through the gates that lead somewhere else, thinking it was the exit because that is where Google Maps tells you to go. So we kindly acted as guides, telling them that they still had another big stretch of stairs to climb.

We went to charge the iPad and then came back around 4pm to do a couple of scans for the evening part. Then we needed to wait until the sun was setting, so we went to find refuge from the cold in a coffee shop. This day felt never-ending: we sat drinking our coffees, looking out the window, waiting for the sun to go down, and it would just not do it. It wasn’t until about 6:30pm that we could see the sky changing, so we ran back to the stairs and did the final part of it. And just like that, we were done with all our scans. We still had to process the data and join the two bundles together, but this was way easier than the first time we did it.

We knew we had gathered a huge amount of data, but I don’t think we were ready when we did the math and found out we had more than one billion points. So it really makes sense why our computers sounded like they were dying while we were processing the data. After some subsampling of our point cloud, we created different versions of it: some to use in Blender, Unity, or TouchDesigner, and some for our sound teammates to process using Max/MSP.

Timetable

Scan   Location                  Time
1      Bottom gate               7:00am
2      Bottom stairs base        7:13am
3      Flight 1 mid              7:30am
4      Landing 1                 7:42am
5      Flight 2 mid              8:00am
6      Landing 2                 8:10am
7      Flight 3 mid              10:00am
8      Landing 3                 10:11am
9      Flight 4 mid              10:22am
10     Landing 5                 10:30am
11     Flight 4 mid              1:30pm
12     Landing 5                 2:00pm
13     Flight 5 mid              2:10pm
14     Landing 6                 2:20pm
15     Flight 6 mid              2:25pm
16     Landing 7                 2:43pm
17     Flight 7 mid              2:48pm
18     Landing 8 top             4:40pm
19     Top (tip top), outside    4:45pm
20     Flight 7 mid              6:30pm
21     Landing 8 top (2)         6:40pm

If you’re reading this in order, please proceed to the next post: ‘Field Recording Report’.

Molly and Daniela

Meetings

 

Gantt chart

In order to document the project journey and keep track of team meetings, practical sessions, and meetings with tutor Asad Khan, meeting notes and documentation are handled by the two team members assigned to them (Molly and Daniela), who are familiar with using the collaborative software “Notion” for note keeping.

January 26th, 2023 – First (non-official) Meeting.

This was the first time all group members met one another. Taking place during Thursday’s DMSP lecture, the members traded names, backgrounds, and why they chose the topic of Places. Following a general group discussion about the topic, a Google Form was created and sent out to determine when everyone was free during the week, in order to establish a standing weekly meeting.

January 27th, 2023 – First Contact with Asad.

 

The first meeting with everybody present, together with Asad. Since he was not at the first meeting, we covered introductions, why we have chosen this topic, what we envision for Places, and how we conceive the possible portrayals of this theme. ChatGPT was explored with questions such as ‘is a non-place the same as a liminal place?’
We are to keep posting in the Teams chat for a constant record of events, ideas, stream of consciousness, etc. The focus at this stage is to collect the data from the scans.

January 30th, 2023 – Workshop with Asad.

The first workshop was run by Asad. The team explored the data-processing software CloudCompare and how it manages point cloud data. Some points learned:

  • Merging between two LiDAR scans is possible, by defining a start state for the point cloud and an end state.
  • We would need to import into CloudCompare and subsample before exporting to Unity, to reduce the number of points and avoid crashing.
  • You could convert the scans into sound (see the toy sketch after this list).
  • Use a medium setting on the scanner – a scan will take 5 minutes.
  • Define the places where you want to scan before you go to the site – scout these places.
  • We can make 3D objects, convert them into point clouds, and place them into a scan.
  • Microscan something in detail with a handheld scanner and make it into a hologram?
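
On the “scans into sound” point: the sound team’s actual pipeline uses Max/MSP, but as a toy illustration of the idea, here is a hedged Python sketch that maps each point’s height to the pitch of a short sine grain (numpy and soundfile assumed installed; file names are placeholders):

```python
import numpy as np
import soundfile as sf  # pip install numpy soundfile

points = np.loadtxt("news_steps_sample.xyz")                # columns: x, y, z
z = points[:, 2]
pitches = np.interp(z, (z.min(), z.max()), (110.0, 880.0))  # height -> Hz

sr, dur = 44100, 0.05                         # one 50 ms grain per point
t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
grains = [0.2 * np.sin(2 * np.pi * f * t) for f in pitches[:400]]
sf.write("steps_sonified.wav", np.concatenate(grains), sr)
```
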
January 31st, 2023 – Team meeting.

For this meeting, we met up in the Common Room in Alison House; after our last meeting with Asad, we wanted to discuss where to take the project going forward.

One thing we all agreed upon was to develop the project around an important place in Edinburgh, so we worked in Miro and added different places we could scan for the project. Some of the places that came up were: The Royal Mile, Mary King’s Close, the Innocent Railway Tunnel, the Royal Botanic Gardens, the Armchair Bookshop, the Banshee Labyrinth, and The News Steps.

One of the topics that came up was how we could incorporate a physical aspect into the exhibition; we discussed creating a 3D-printed scaled-down version of the places we scan, and also a holograph effect from micro-LiDAR scanning.

The next meeting will be in ECA, as it will be our first time with the LiDAR scanner, and we want to learn to use it and start scanning our environment as an exploration of the technology.

February 2nd, 2023 – First Scans.

In the ECA exhibition hall, the team started to take their first LiDAR scans. We discovered that the best and most accurate linking between scan positions happens when the scanner is in line of sight of the last scan. It does technically work when moved up a level; however, more manual alignment is required.

Read the blog post about ECA.

The day before, David had tested out the scanner in his room at home. This is where the mirror phenomenon was discovered: the scanner takes the reflection as if it were a doorway. Read David’s exploration blog post.

February 6th, 2023 – LiDAR Training at uCreate.

We had the induction training in the uCreate Studio, where we learned the safety protocols for working in the workshop. We also had an introduction to the different machines available for us to use, such as 3D printers, laser cutters, CNC machines, and thermoforming machines.

Afterward, we went to the LiDAR workshop, where they showed us the correct way to use the LiDAR scanner, as well as the procedure we need to follow to transfer the data from the iPad to the computer, and the software we need to use to work with the data.

February 7th, 2023 – Individual project proposals and final decision.

Each member presented their own idea of how they envisioned the project taking shape. Some members were unable to attend at the same time, so those who were free for most of the day met with them first to hear their ideas and present them to the rest of the group later. Each idea was discussed, pros and cons were analyzed, and eventually we came to a decision we all agreed on. The biggest thing that needed to be explored before being 100% certain was the location: The News Steps.

February 7th, 2023 – Scans of the News Steps.

LiDAR scans of the News Steps with Daniela, Molly, and David. We experimented at night with the flash enabled on the scanner, and also tested how the scanner would align two different scans on different levels of the stairs. The scans came out really well and gave us an idea of how we could keep developing the project. We were able to link both scans, even though they were at different heights on the stairs.

LiDAR Scanning The News Steps

February 11th, 2023 – Team meeting with Asad.

This team meeting allowed the group to touch base with Asad prior to the first submission to verify that the project idea is realistic, achievable, and interesting.

February 13th, 2023 – Team meeting for Submission 1.

The team got together to figure out the final details of the submission. We had a good record of our overall process but had to create a nice workflow for our blog. We worked on finishing blog posts covering our previous meetings and our research development, and we assigned each team member’s role for the next submission.

February 23rd, 2023 – Sound meeting

The first Sound department meeting took place; all members of the sound team attended and took part. The session was structured around each sound task, as mentioned in the latest sound post. Each team member had the opportunity to catch up and show individual progress on their coordinated task. A collective effort also allowed for planning the future steps of each job.

The meeting played out in the following order:

  1. Soundscape Capture with Chenyu Li – Proposal and Planning;
  2. Place Sonification with David Ivo Galego – Data-Reading method demonstration and future Creative approaches;
  3. Sound Installation with Yuanguang Zhu – Proposal Review and further planning;
  4. Interactive sound with Xiaoqing Xu – Resources overview.

DMSP Sound meeting #1 (2).vtt

March 3rd, 2023 – Team meeting with Asad

In this meeting, we decided to book a short-throw projector to test how it would look. We also purchased a shower curtain to try projecting onto it, but once we tried, we realised that there was not enough brightness. This helped us understand what kind of projectors we would need for our exhibition, and we noted that we needed to find projection screens that fit the space we are going to be in.

March 9th, 2023 – Team meeting with Jules

In this meeting, we had a talk with Jules about our concept; most importantly, it was a more technical talk about how many projections we are planning to use and what kind of sound equipment would be needed. Jules recommended we have a test day, so we can make sure what we choose is correct and works properly.

March 10th, 2023 – Team meeting with Asad.

For this meeting, we met online with Asad and had a really interesting talk and explanation of how we can use ChatGPT in our work, as a collaborator and helper in developing our projects.

March 26th, 2023 – Team meeting with Asad.

During this meeting, we met in the Atrium to test the sound and projections in the room where the exhibition is going to take place. One important thing we discovered was that there is a switch in the Atrium to close the blinds on the ceiling.

April 3rd, 2023 – Midnight Test

On this day, Molly and I went to the Atrium at night to test the projections and see how they looked with no light outside. We also moved things around and worked with the objects already in the Atrium to create an optimal setup for the exhibition, and we made a video explaining where we planned to put the different elements, to serve as a guide for set-up day. We also discovered, in a corner of the room, several big wooden boxes, which we decided to use as stands and props in the space.

 

If you’re reading this in order, please proceed to the next post: ‘Design Methodology’.

Molly and Daniela

LiDAR Scanning The News Steps

In order to be certain that our chosen place, The News Steps, was feasible for the project, it was important to test how the scanner responded to the environment. There was also the worry of space on the landings – whether there was enough room for the scanner and for the public to pass, for example.

It was late evening when group members Daniela, Molly, and David went to the steps. An advantage of it being dark at the time of scanning was the opportunity to use the flash embedded in the scanner and explore how many points could be produced, and at what level of detail.

Overall, it was successful. The scans aligned despite the height differences between the landings; there is plenty of room for the scanner, as long as we spread out when there are more than two group members at the site; and there is enough light for a good number of points in the dark.

Video of the scans:

https://uoe-my.sharepoint.com/:v:/g/personal/s2272270_ed_ac_uk/Eb6CPpOa45lAjBPjfTL7tXUBCG_JMZqhRUzJJnKDbjK8HQ?e=Juljjq

         

If you’re reading this in order, please proceed to the next post: ‘The Day(s) of the Lidar Scans’.

Molly Munro

Building the Space

How is this going to look as an installation? As an exhibition?

We’ve got a vision of the outline of the space. Starting as a paper prototype and developing into a 3D mockup, we’re starting to see it come together. By visualising the space early in the planning stages, we are able to identify possible issues, as well as start to find the best physical location to present the installation.


This structure can often be seen in museums and art galleries. Perhaps one of the more recent and recognised examples of projectors and sound being used together is the Van Gogh Alive exhibition.

Traynor, S. (2022) We visited Edinburgh’s new Van Gogh Alive exhibit and got goosebumps, EdinburghLive. Available at: https://www.edinburghlive.co.uk/best-in-edinburgh/van-gogh-alive-edinburgh-visited-23414535 (Accessed: 10 February 2023).

The Vincent Van Gogh Experience is an immersive, multimedia exhibit that brings the artwork and life of the famous Dutch post-impressionist painter to life. Using cutting-edge technology such as virtual reality, augmented reality, and projections, visitors are transported into the world of Van Gogh and his paintings. The experience offers a unique and interactive way to understand and appreciate Van Gogh’s iconic works.

Digital Systems, C. (2021) Christie brings cultural artifacts to life at the National Museum of Korea, www.projectorcentral.com. Available at: https://www.projectorcentral.com/Christie-at-National-Museum-of-Korea.htm (Accessed: 13 February 2023).
The largest 5D immersive Ukiyo-e art exhibition opened in Shanghai (2022) Barco. Available at: https://www.barco.com/en/customer-stories/2022/q1/ukiyo-e-art-exhibition.

Other exhibitions show how the use of projections can be immersive for an audience in a large area without reducing the quality of the art, but rather enhancing it, allowing the audience to perceive it in ways they might not perceive a 2D static piece. Combined with sound, this experience has the potential to fully immerse the user in the space.

As our installation is large and immersive, we don’t want to intrude on other groups’ presentations – or, on the other hand, have any of the other presentations affect the immersion of the user in ours.

Hence it is our preference that the space be set in the Atrium of Alison House. The metal frames are ideally distanced in proportion to where the projector screens would be hung, and they allow for the projectors themselves to be placed behind the screens. There are also speakers already integrated into the room, along with plenty of plug points for the equipment.

After a conversation with our tutor Asad about this idea, we discovered that projecting LiDAR scans is a trial-and-error process, specifically regarding the brightness and contrast of the scans. We will need to test the quality of the LiDAR scans and how they show up on the screens themselves. It could be interesting to experiment with different materials for the screens, such as sheets, mesh, and thick or thin fabric. This is very much an iterative process.

The university offers a wide range of projectors to choose from. A minimum of three is required; best-case scenario, there will be four. These would be connected to a single computer that needs a good level of computing power to handle all the images and the transitions.

https://bookit.eca.ed.ac.uk/av/wizard/resourcedetail.aspx?id=4689

There will be two computers involved in this process, to lower the risk of either one crashing with all the software running the sound, the images, and the interactions. To ensure that they stay in sync, we could use an Arduino with a counter that keeps them both on the same timing.
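
To make the idea concrete, here is a hedged sketch of what each computer could run, assuming the Arduino simply prints an incrementing tick count over USB serial (pyserial assumed installed; the port name and baud rate are placeholders):

```python
import serial  # pip install pyserial

# Follow the shared tick counter broadcast by the Arduino so both machines
# stay on the same timeline position.
port = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)

while True:
    line = port.readline().strip()
    if not line:
        continue  # read timed out with no data; keep waiting
    try:
        tick = int(line)
    except ValueError:
        continue  # ignore partial or garbled lines
    # ...advance the visuals (or the sound) to timeline position `tick`...
    print("tick", tick)
```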

The interactive control system consists of a stair set composed of three levels. The user (one at a time) will be instructed to step onto the intermediate step, and then to either step up or down. This action will define the dynamic switch of the exhibition: “step up, and go forward in time” or “step down, and move back in time”. The technical design of this control system is envisioned in two possible ways:

    • An Arduino system using sensors (e.g. distance sensors, pressure sensors, or sound sensors)
    • Contact microphones fed into a Max/MSP system.

For either of the two design concepts, the stair set would ideally be built from wooden box-like structures, since these provide the structural consistency needed for contact sensors. They could also offer the desired acoustic properties to be read through contact mics or sound sensors.
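
Under the Arduino option, the dynamic switch itself reduces to a very small piece of logic. A hedged sketch, assuming one pressure reading per step (the sensor values and threshold are hypothetical):

```python
def time_direction(top: float, mid: float, bottom: float,
                   threshold: float = 0.5) -> int:
    """Map three hypothetical pressure readings to a time direction:
    +1 = step up (forward in time), -1 = step down (back), 0 = hold."""
    if top > threshold:
        return +1
    if bottom > threshold:
        return -1
    return 0  # still on the intermediate step (mid), or nobody on the stairs


# Example: the user steps up from the intermediate step.
print(time_direction(top=0.9, mid=0.1, bottom=0.0))  # -> 1
```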

To see more about the specific sound equipment we will need, please refer to this post: YG – Preparation for Surround Sound Production


Molly – Draft Idea

As a group, we decided that, in order to narrow down our final idea, it would be useful for all team members to come up with how they could see this project unfolding. We had agreed in a previous meeting that the physical presentation of this project is going to be in the form of an installation/exhibition – a space (i.e. place!) that the user can explore and be immersed in.

I’ll get into the specifics of my idea in a moment; first I want to show what inspired it. Portsmouth has a series of installations as part of We Shine Portsmouth, where artists display their work around the city. In 2021, British artists Anna Heinrich and Leon Palmer used 3D laser-scanning technology, a voile screen, projected film, sound, and lighting effects to create an installation that could be folded down and moved to another location, like a mythical vessel.

https://heinrichpalmer.co.uk/project/ship-of-the-gods/

The way they projected this at such a scale was really interesting. The movement of the ship, the surround sound, and the lighting come together to create an engaging experience.

My Idea

So – how do I envision this project? During our first group brainstorm, one idea that was mentioned was experiencing and scanning the same place at different times of the day, or over multiple days. I loved this concept and thought that, with the vast array of skills this group has, it has the potential to be incredibly immersive.

Where?

The News Steps, which go from the top of the Royal Mile down to Waverley Station, almost broke me when I walked up them with a very heavy rucksack one day – so, logically, I would like to spend a whole day running up and down them taking LiDAR scans.

Photo by Molly Munro

These are a winding set of steps in the heart of the city, broken up by consistent landings where visitors can often be seen catching their breath. These landings could also be very useful for placing the LiDAR scanner on.

Something to keep in mind is that it can get busy, and this is a very expensive set of equipment – are risk assessments needed? Does one of us stand guard in a hi-vis jacket?

When?

As I want to capture the passing of time, scans and field recordings would be taken periodically throughout the day, possibly even over two days, as it would be interesting to capture the “changing of the day”. Either way, I would record over a minimum of 24 hours, from early morning to late at night (12am–11pm?), either taking it in shifts OR recording on multiple days and splicing it together as if it were one.

What?

What is this going to look like as an installation? I envision this having three rear-projection screens surrounding the user. Front view: POV facing up the stairs; side views: the LiDAR split down the middle, left and right respectively.

Paper prototype visualisation
POV of user – 1
POV of user – 2

In front of the user, near the centre of the three screens, would be the interactive control that allows for moving forward or backward in time. As time moves forward, the images change according to the scans at that time while simultaneously moving up the stairs. As time goes backwards, it reverses.
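
As a toy, self-contained sketch of that scrubbing behaviour (the duration, scrub rate, and simulated step inputs are all made up):

```python
# A playback position that moves forward or backward through the day's
# footage depending on the user's step direction; inputs here are simulated.
DURATION = 17 * 3600   # e.g. 6am-11pm of captured material, in seconds
SPEED = 600            # scrub 10 minutes of footage per input tick

position = 0
for direction in [+1, +1, +1, -1, 0, +1]:  # +1 step up, -1 step down, 0 hold
    position = max(0, min(DURATION, position + direction * SPEED))
    print(f"timeline: {position // 3600:02d}:{(position % 3600) // 60:02d}")
```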

Accompanying the images of the scans, lighting in the room reflects the lighting at that time of day – sunrise, midday, sunset, etc. There would also be a couple of speakers in the corners outputting the recorded sounds.

It could also be really interesting to have a button that transforms the ‘normal’ place into a distorted reality of sorts – into a ‘non-place’ where we have edited the LiDAR scans: morphed, twisted, and warped them. Colours and sounds change to their opposites, like being transported into a twilight zone.

How?

Equipment that I know we need:

We would be creating the movie/animation in advance; the user would essentially just be scrubbing through the footage.

Take a LiDAR scan on each landing of the steps and then manually align them (due to the height differences).

Molly Munro

6th February 2023

