We had already thought that TouchDesigner would probably give us the best way to make what we needed. We also knew that you can send data from Max into TouchDesigner, which would let us make the visual reactive. I started the process by looking for tutorials covering what we needed specifically, because neither of us has much experience with TouchDesigner (especially me), so it was going to be a real learning experience.
I looked extensively for the right tutorials, starting with general beginner TouchDesigner ones (though we soon realised we didn't have enough time to start from the basics) and then trying to find more specific ones. It became clear quite quickly that what we were attempting was not something commonly done, so I found just one tutorial that might work. It used multiple attractors that pulled particles around a spherical base object.
I started following the tutorial and it was definitely a long, hard process, but it produced some really interesting visuals along the way that looked quite like the one we had in mind. The problem was figuring out where we would be able to feed in our own data as the positional information for the attractors: the tutorial uses randomly generated coordinates for the attractors, which are also constantly changing.
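We haven't tried this yet, but one likely route for getting our own data in is OSC: Max can send the values with a [udpsend] object, an OSC In CHOP in TouchDesigner receives them, and a small Python callback (e.g. in a CHOP Execute DAT) writes them onto the attractor's translate parameters. The sketch below assumes operator names like 'oscin1' and 'attractor1' and a three-channel x/y/z layout, all of which are placeholders rather than our actual network.

```python
# Hypothetical CHOP Execute DAT callback in TouchDesigner (Python).
# Assumes an OSC In CHOP is receiving x/y/z channels for one attractor from Max,
# and a Geometry COMP named 'attractor1' whose translate parameters position it.
# Operator and channel names are placeholders, not our actual setup.

def onValueChange(channel, sampleIndex, val, prev):
    target = op('attractor1')      # geometry acting as the attractor
    if channel.name.endswith('x'):
        target.par.tx = val        # write the incoming value to translate X
    elif channel.name.endswith('y'):
        target.par.ty = val
    elif channel.name.endswith('z'):
        target.par.tz = val
    return
```

If this works, swapping the tutorial's random coordinates for our incoming values should mostly be a matter of pointing the attractor at these parameters instead of the noise that generates them.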
Yanis had used TouchDesigner a little before, so he started by trying to make something himself first, and then moved on to some tutorials that showed how to make wavy, sea-like visuals.
We had a small meeting with our team leader Lulu to tie up some loose ends, which for the most part were the small visual holes we still had, but one big thing was the actual layout of our exhibit. We knew we were going to use tracks, on a smaller scale and handmade, after concluding that our original idea was indeed too expensive and complicated, but we hadn't decided on the scale. Originally the plan was to have the spheres be quite large, but that would be not only hard to move but also hard to make in such a short time. Dave had already suggested scaling everything down, even to something like a switchboard, so the focus could be on the digital visual. However, we were determined to still make a model big enough to really feel like an interactive exhibit that people could move around.
We had talked to Jules briefly about the layout of the Atrium and how it could be set up for our work. We wanted to project a lot of things, and Jules said we could drape sheets down the sides of the Atrium scaffolding. He also suggested projecting the visuals from behind the sheets, so the projectors wouldn't be in the audience's way.
I made some basic layout sketches in Illustrator to demonstrate all these conclusions. First, the whole set-up was scaled down considerably. Then we decided to add three screens surrounding the central planet set-up, with ambient visuals projected on the left and right screens, and the central front one showing our reactive visual overlaid with the overhead projection of the water basin.
This is a very basic representation of the layout, and will probably change a bit after some tests and experiments. It definitely excited us to see how we would be setting it up because it made it all feel very real and imminent.
Yanis fleshed out his sketches a little more, clearly defining the overarching attribute of each planet (now moon).
We had researched different types of hypothetical planets and based these designs off them. There are many already defined planet types, e.g. gas giants, but there are also many that are only theoretical, such as entirely metal planets and lava planets.
He noted what type of materials we would need for each, and what look we were going for. After our online meeting where we concluded we would have to redefine them as moons, we discussed what materials we would definitely need for each, as our next order of business was buying them to then finally start making them. We also thought about how to incorporate the sensor into the design. We opted for a small window that the sensor could sit in, that would point at the central “earth”.
We are very excited to get started with this; our other courses haven't offered this much opportunity to physically make things, so it's a nice change of scenery. It has also been a truly collaborative approach to reach these conclusions, which has been really nice. We want the same for the rest of this project, and we know where our own and each other's strengths lie, which has been really helpful in informing our decisions.
We had some online meetings with Dave to discuss the development of the project. Most of them consisted of Dave telling us that we were probably quite behind and needed to shoot into action, which we did know, but because we did a 180 on our concept, it was hard to know where to start.
I brought up what I had discussed with my astronomy student friend, and presented the options for our planets or moons. We quickly concluded that having the central sphere be a star wouldn't make sense, because our main focus was the tides and that would mean changing our concept once again. It was good to get that out of the way, but it did mean that Yanis and I had to slightly change our perception (no pun intended) of the spheres. Yanis had in the meantime sketched out ideas for each of the three orbiting spheres. We had already discussed wanting them all to be the same, but thought this might not be the most visually striking. His sketches built off this, giving each sphere a different attribute/concept, which also meant we could be quite creative. With the change to classifying them as moons, we just kept in mind to keep the physical models small enough to justify their "moon status", as it were.
We talked a bit more about the other aspects of the process, like the sound development and the interactive side of the exhibit: how people would be encouraged to touch and interact, how we could express these interactions through the sound too, and so on.
I have a friend who studies astrobiology and planetary science for her master's degree, so I thought it would be a good idea to talk with her about the actual science related to our topic. Even though our concept is fictional, it is rooted in reality through all the different gravity and tidal equations that were used.
We mainly spoke about how to make certain elements more accurate, the biggest being what would orbit our central planet. In solar or planetary systems, planets orbit a star because it has the bigger mass (and a bigger mass doesn't always mean a bigger-sized body). Yanis and I had mainly been discussing planets orbiting another planet, but that's not really something that happens. The solution she suggested was to either change the planets to moons, as they do orbit planets and influence the tides, or make the "earth" a star, which would be gravitationally correct and give us the opportunity to make planets, but then we would lose the tide aspect (which is integral). She also told me that there are different types of moons, which would work with our idea of having orbs with different characteristics.
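As a note to ourselves on the physics she described: the tidal (differential) acceleration a body exerts scales as 2·G·M·R / d³, so distance matters far more than mass, which is exactly why a small, nearby moon can drive the tides. The quick check below uses the real Moon/Sun/Earth numbers purely as an illustration; our fictional system has no defined values yet.

```python
# Quick sanity check of the tidal acceleration formula a_tide ≈ 2·G·M·R / d³,
# comparing the Moon's and the Sun's pull on Earth's oceans. All constants are
# standard physical values, nothing specific to our fictional system.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
R_EARTH = 6.371e6    # Earth radius, m

def tidal_acceleration(mass_kg, distance_m, radius_m=R_EARTH):
    """Differential (tidal) acceleration across a body of the given radius."""
    return 2 * G * mass_kg * radius_m / distance_m**3

moon = tidal_acceleration(7.35e22, 3.84e8)     # Moon's mass (kg) and distance (m)
sun = tidal_acceleration(1.989e30, 1.496e11)   # Sun's mass (kg) and distance (m)

print(f"Moon: {moon:.2e} m/s^2, Sun: {sun:.2e} m/s^2, ratio ~ {moon / sun:.1f}")
# The 1/d³ fall-off is why the much lighter Moon still dominates Earth's tides.
```

The roughly two-to-one ratio in the Moon's favour matches the familiar fact that lunar tides are about twice as strong as solar ones.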
Lastly, she showed me a website that can map planets and gravitational pull, so you can see exactly how they would orbit based on mass, size, etc. This might be useful, but we don't have any defined values for any planet right now, so we will see how it can factor in.
For the sound design, and as an alternative option for our visual, we got a short tutorial on Max/MSP and Jitter. For the digital visual, we really wanted a reactive particle system where the particles would gravitate towards the orbs as they moved. We had discussed using TouchDesigner for this, but Dave wanted us to think of an alternative solution just in case, and it also meant that if the physical models proved too difficult, we had something to fall back on.
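Whichever tool we end up using, the core of the reactive visual is the same attraction logic, so here it is as a rough Python sketch rather than Max or TouchDesigner code: every particle is pulled towards each orb with a force that falls off with distance, and the orb positions would eventually come from the sensor data. The particle count, strength and damping values are placeholder tuning numbers.

```python
# Minimal sketch of the attraction behaviour we want, independent of Max or TouchDesigner.
# Each particle accelerates towards every orb (attractor); the orb positions would
# ultimately come from the sensor data. All constants are placeholder tuning values.
import random

NUM_PARTICLES = 200
STRENGTH = 0.5    # overall pull towards the orbs
DAMPING = 0.98    # stops velocities growing without bound
DT = 1 / 60       # time step, assuming roughly 60 fps

particles = [{'pos': [random.uniform(-1, 1) for _ in range(3)],
              'vel': [0.0, 0.0, 0.0]} for _ in range(NUM_PARTICLES)]

def step(particles, attractors):
    """Advance the particle system one frame towards the given attractor positions."""
    for p in particles:
        acc = [0.0, 0.0, 0.0]
        for a in attractors:
            diff = [a[i] - p['pos'][i] for i in range(3)]
            dist = max(sum(d * d for d in diff) ** 0.5, 0.05)   # avoid dividing by ~0
            for i in range(3):
                acc[i] += STRENGTH * diff[i] / dist**3          # inverse-square style pull
        for i in range(3):
            p['vel'][i] = (p['vel'][i] + acc[i] * DT) * DAMPING
            p['pos'][i] += p['vel'][i] * DT

# Example frame: three orbs around a central "earth" at the origin.
step(particles, attractors=[(-0.8, 0.0, 0.0), (0.0, 0.0, 0.8), (0.8, 0.0, 0.0)])
```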
I personally was quite ill during this workshop and was really trying my best to understand what was going on, but it wasn’t really working. Luckily, Lulu, Fraser and Jackson understood what was going on.
It was really interesting to see how Max could potentially help us with the visuals. Jules also showed us some other resources for creating interesting visuals that would fit our theme of the tidal force. Yanis and I are still quite set on having a reactive visual, which didn't immediately seem possible with Max, but we will explore it further if TouchDesigner doesn't agree with us.
Fraser was able to link our sound Max patch to a visual of three spheres orbiting a central sphere. This gave us a solid base that visualised all the sound work the rest of the group did.
We had a meeting to further flesh out what we spoke about in our meeting with Dave. We mainly tried to answer the questions we had noted down during the last meeting.
We had to decide how we would set up our exhibit. Originally we had discussed hanging our planets from the ceiling and moving them around that way, and in the meeting with Dave, he said it would be nice for the audience to be able to move them wherever they wanted.
We also discussed that we could make a completely fictional planetary system, as that would give us more freedom to create something interesting in terms of sound, visuals and concept. We settled on having a central “earth” planet with a few moons or planets “orbiting” it. This way we could also manipulate the tide more so the effect would be greater.
Yanis and I tasked ourselves with working out how to set this up. The orbs would have to be able to move, but only along one line, and they couldn't be moved out of that line because the sensor would always have to face the central orb. We thought we could use tracks on the ground to put the spheres on. This seems quite hard though, and rather expensive and complicated, so we will think more about the system.
We booked the sensors we are planning to use for the final piece, so we tried to work out the best use for them. We concluded that they would need to move along a determined trajectory, pointing towards a central "earth" planet. The data would be more stable, as these are ultrasonic sensors, so they detect an object rather than an inherent location. We did discuss using a GPS sensor, and also booked and tested one, following Dave's suggestion to give the audience more agency to move the planets wherever they wanted. The problem that came up was accuracy: it would only measure to within about 2 metres, and we were probably going to be working with much smaller distances.
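For our own reference, here is roughly how we expect to turn an ultrasonic reading into something the visual and sound can use: the sensor reports an echo round-trip time, which converts to a distance, which we then normalise into a 0–1 "pull" value (closer orb, stronger pull). The track length and minimum distance below are placeholder numbers, and the microsecond echo time is an assumption about whichever sensor we end up using.

```python
# Sketch of converting an ultrasonic echo time into a normalised "pull" value.
# Assumes the sensor reports a round-trip echo time in microseconds; the distance
# range is a placeholder for however long our track ends up being.

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 °C
MIN_DIST = 0.05          # m, closest an orb can sit to the central "earth" (placeholder)
MAX_DIST = 1.00          # m, far end of the track (placeholder)

def echo_to_distance(echo_us: float) -> float:
    """Convert a round-trip echo time (microseconds) to a one-way distance in metres."""
    return SPEED_OF_SOUND * (echo_us * 1e-6) / 2

def distance_to_pull(distance_m: float) -> float:
    """Map distance onto 0-1: the closer the orb, the stronger the 'gravitational pull'."""
    clamped = min(max(distance_m, MIN_DIST), MAX_DIST)
    return 1.0 - (clamped - MIN_DIST) / (MAX_DIST - MIN_DIST)

# Example: an echo of about 2900 microseconds is roughly 0.5 m, i.e. a pull of ~0.53.
print(distance_to_pull(echo_to_distance(2900)))
```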
Yanis and I discussed how we would make our "planet" models while the others were testing out all these sensors and their capabilities. We wanted to use this course as an opportunity to be creative and use different materials, and we concluded that having three "planets" orbiting the central one would give us enough scope to do this. We looked into different aesthetics for the "planets". Because our planetary system is fictional, we could also create our own interpretation of a planet. We really liked the idea of having a hollow sphere with a light inside it; we thought this would look visually striking while not looking too much like an actual planet, which would help clarify how fictional this all is.
We had our first meeting with Dave after our submission. We went over our submission and discussed our feedback. Most of it was about the lack of experimentation. We hadn’t done too much experimentation, also due to changing our concept very close to the deadline. Because of this we had a lot to discuss about the small details surrounding it.
I made notes with all the questions that would need answering or researching in the next few weeks.
One big thing that Dave emphasised a lot was to experiment as much as possible. We know we need to do this; however, because we changed our topic so spontaneously, it's hard to anticipate right now what we would need to experiment with.
Once Dave had left, Yanis and I discussed our visuals and how we should attack those, while the rest discussed the maths that we would need to calculate all the different forces and relative distances to represent the gravity and tide in the right way. To be honest, we didn’t understand any of it but they seemed really happy when they finally cracked it.
We met for the first time and discussed our backgrounds and interests. Our initial ideas revolved around perception, including the movie Soul and its exploration of the flow state, the relationship between sound, time, and scale, the changing tide as a metaphor influenced by the moon, and how individual perception varies from person to person.
27/01/2025 – Thinking phase
We explored perception, time, and human experience through interactive and immersive mediums. Inspired by Soul, the concept delves into abstract ideas of creation and consciousness, using VR/3D environments, light, colour, and texture to represent thoughts and ideas inside the brain.
Notes:
Music visualization as a journey, where movement and environment respond to beats and tempo.
Time as an abstract shape, expressed through lines, motion, particle effects, and light.
Tide and cosmic forces, drawing from the traditional Chinese calendar and the moon’s influence.
The Great Beyond and flow state, creating an immersive installation where people “disappear” into focus.
Social distancing reimagined, using sound and nostalgia to redefine personal space.
Life flashing before death, visually exploring the universally known concept of seeing one’s life in a moment.
28/01/2025 – Time phase
We're interested in how people react to and reflect on a shared, universal experience: time.
1 minute
To explore the moments before death, questioning whether brain activity has a visual representation and how this experience should feel. Different cultures interpret this transition uniquely, offering a compelling perspective. It considers whether passing on is a cycle, with memory looping and time repeating, encouraging deep reflection on one’s own life.
Time and universe
An interactive system where life experiences shape visual, auditory, and textural outcomes. Trees, with their rings as a record of life, could serve as a central metaphor. Techniques like projection mapping, interactive environments, and motion capture may be used, though challenging to execute. Motion capture could also depict life cycles, while lines could represent personal choices, prompting visitors to reflect on different lived experiences and perspectives. The focus is on life paths and perception, questioning whether the experience should center on science, religion, or regeneration. Narrowing it to a small time frame could deepen the emotional impact, emphasizing how people feel in fleeting moments.
Our first team picture, Shuzhang couldn’t make it so we added a little drawing to represent her 🙂
02/02/2025 – Interactive wall phase
We had a short meeting to talk about our ideas based on what we discussed in the previous phase. Lulu and Yanis presented the most developed ideas, so we discussed these in particular. Lulu's focused more on the tech side of things, proposing an interactive wall for people to touch. Yanis' focused on the concept of our project, talking about the perception of things rather than human beings. We discussed perhaps combining these ideas and what that would look like. What tech would we need and have to learn? What would we still have to research based on the concept? We were happy to have an idea, and excited that our thinking matched up; it made us eager to make the output. We finished the meeting chatting a bit about last semester and the sound designers' projects (which dealt with perception).
04/02/2025 – Intense brainstorm phase
We first went to the wrong room; once we found each other, we moved some tables around to make a good thinking environment. Once we had settled in, Dave explained to us how the workshop would go. He felt that we were still quite undecided on a topic for our project, and this workshop could hopefully help us narrow down what our big idea would be. He set some challenges, followed by discussion that involved drawing a mind map on a whiteboard. We ended up settling on the most popular concept ideas: (CULTURAL) MEMORIES + FLOW OF TIME + MUSIC. None of us really knew where this could lead us, or to what BIG idea, so we went away from the meeting with these topics in mind, to research them for the next meeting.
08/02/2025 – Scatter phase
This is a random collection of things that were talked about in our meeting:
Fraser said use different types of tech representative of time – old projector for old show
Water holding memory feels like pseudoscience, but we can twist it and own it for our project
Fraser talked about art installations – show resources shared through pics
Talked about water
We talked about nfc’s and interactive art
What’s the element that changes over time? How to set ourselves apart from normal exhibition?
Time zones -> falling asleep and waking up at the same time. Time dilation?
Talked about water more and the flow of time
Agreed to go away and research more
We’re showing them where our studio is before the meeting starts
10/02/2025 – Water phase + Dave
Water was a big topic during this meeting. We had already talked about using water in the previous meeting, but now it was in full force. All the ideas for tech or installations were based on using water as our "shepherd", in a way. Through the meeting we concluded on using different forms of water to show different life scenarios/feelings like love, loss, joy, and so on, and how these could be interactive. We had the idea of having five different feelings represented in different places in a big room, and having each place be interactive in some way. This was a very BIG idea. We knew it was extremely broad, but it was the closest we had gotten to a final idea; we never seemed able to find something that fit quite right with us.
It started from a meeting with Jules and snowballed in all sorts of directions, beginning with this conclusion: an installation that explores our perceptions of the dimensions of time:
Flow, motion, continuity
Ripples and waves in time
Moments, frozen in time
Which then evolved into this:
Sun & Shadow – lights and projectors moving to lengthen and shorten shadows
Flow & Ripples – interactive ripple wall – Yanis – Chaotic – container/cupboards with different reactions in light colour and sound
Tides & Moon – interactive moon influences the tides (projected)
Objects representing the transitions between sections
Still, it didn't feel complete or finished.
12/02/2025 – Resolve Phase
Lots of ideas were being thrown into the group chat, and it all felt very broad. Lulu then texted us saying that she and Hasse had sat together and tried to figure out how to attack this. They proposed a completely new idea about the perception of the tides, which is something we had discussed all the way back in our initial meeting: a project about how the tides can be perceived differently, with an interactive installation/exhibition where the audience can control elements like lighting and music by "altering" the push and pull of the gravitational forces. This would use movement sensors, sculptures, projectors, ambient soundscapes and a whole lot of water. We all looked over the proposal, and it was the first time the project actually made some sense and had a central theme. We were able to tie together all the small bits of ideas we had discussed over the past weeks, and it feels like our interests are all equally represented in our finished big idea! It maybe wasn't the most efficient way to get to our end project idea, but it was really nice to explore ideas this much and consider very broad ones as well as very small ones. Each week we were thinking of something new, and it was very exciting to imagine where we could go with this.
We met in Q.25 in the Hunter Building in the ECA main campus, which is the DDM studio so it felt nice to have a change of scenery.
Lulu wasn’t able to make it sadly, but we kept going nonetheless.
Fraser showed us a sketch that he made of an installation based off of what we discussed in the last meeting:
It came down to the perception of the flow of time through life stages. It included draped fabric coming down from the ceiling dividing a room into different sections and also had physical objects to display the passing of time.
We realised we all had separate ideas, so we needed to unify them.
The question I was wondering about then was: what is the common denominator in our idea? What ties it all together?
We talked about whether we wanted the installation divided into different, strict stages. Yanis argued that it should be one big room, because time and emotion (life) don't pass the same way for everyone. It is chaotic, and we should let our audience decide where they want to go, what they will feel and what they will take from it. We shouldn't suggest a right way.
Hasse mentioned not telling the audience what stage is where.
This made us shift our focus a bit to stage rather than age, and how that would relate to our installation.
Through discussion a theme started forming. Water was a big thing that kept being mentioned so that got established as our vessel to communicate our message. Hasse had considered making large suspended interactive drips of water, that when interacted with, the environment of the installation would change. We really enjoyed this idea and decided to look into the tech of this to inform our outcome.
Eventually we ended up with the flow of time through emotional moments, using water to represent these emotions. That brought us to:
Installation that represents moments in time through forms of water.
LOVE – RIPPLES – TWO PEOPLE INTERACT
UNCERTAIN – RAIN – UNDETERMINED
LOST – EVAPORATION – INTERACTIVE WALL HUMIDIFIER FOG
JOY – RAINBOW – PROJECTOR
FOCUS – FLOW – INTERACTION
We all really liked this idea, and how interactive it would be for the audience. I think we still need to fully understand what we’re saying.