
Digital visual regroup

Yanis had taken over figuring out TouchDesigner and had gotten the tutorial to work; the next step was to find a way to input our real-time data. He and Lulu tried to figure it out, but it wouldn't work. Since TouchDesigner was not working out very well for what we needed, we had to rethink how we were going to make our visual.

When we got our MAX tutorial from Jules, he mentioned that Jitter was able to create interesting visuals. I looked into this, as we already had all our sensor data, sound data, and orbiting planet data in MAX, so it would probably be an easier process to input our data and create a reactive visual system.

Again, Yanis and I were not familiar with MAX or Jitter, so I looked for tutorials that could help us. I found a tutorial series on YouTube that, again, worked with particle attractors driven by coordinates.

Tutorial series used

This is the only picture I took of me following the tutorial. I just really felt like this man was my saviour, as we had been struggling so much with TouchDesigner and something was finally working in our favour.

The patch broke a few times while I was following the tutorial, so it was a real trial-and-error process.

How far I got with the tutorial:

I got to a good base with the visual; the rest of the tutorials started exploring constantly moving attractors and different colours and textures, and I didn't want to jump too far ahead in case it wouldn't work with our data.

The patch included a node for the attractor coordinates, and I had checked the later tutorials to see whether he incorporated multiple attractors, which he did. I tried to figure out where I could customise these coordinates, but I didn't know enough about MAX or our sound patch to manage it.
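For anyone unfamiliar with the idea, the particle-attractor behaviour the tutorial builds can be sketched outside Max. This is a minimal Python illustration, not our actual patch: each particle accelerates toward one or more fixed attractor points, with damping so it doesn't fly off. The coordinates and constants are made up for the example.

```python
import random

# Hypothetical attractor coordinates -- in the Jitter patch these came
# from a node we wanted to customise with our own data.
ATTRACTORS = [(0.0, 0.0), (2.0, 1.0)]
STRENGTH = 0.05   # pull toward each attractor per step
DAMPING = 0.9     # velocity decay keeps the motion stable

# A handful of particles at random starting positions
particles = [{"pos": [random.uniform(-3, 3), random.uniform(-3, 3)],
              "vel": [0.0, 0.0]} for _ in range(5)]

def step(particles):
    """Advance every particle one frame toward the attractors."""
    for p in particles:
        for ax, ay in ATTRACTORS:
            p["vel"][0] += (ax - p["pos"][0]) * STRENGTH
            p["vel"][1] += (ay - p["pos"][1]) * STRENGTH
        p["vel"][0] *= DAMPING
        p["vel"][1] *= DAMPING
        p["pos"][0] += p["vel"][0]
        p["pos"][1] += p["vel"][1]

for _ in range(200):
    step(particles)
```

With two equal attractors like this, the particles spiral in and settle around the midpoint between them; in the visual, moving the attractor coordinates is what makes the swarm react.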

I asked Lulu for help at this stage, as she had done a lot of the maths for our patches and knew how they were set up. I handed my patch over to her with the tutorial link, and she was able to figure out how to make it react to our data changes.

Lulu’s revision of the patch using our sensor data:

I'm not great at maths and could not wrap my head around how she managed it, but it worked. She was also able to customise it to represent the different moon types by changing the colour.
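Conceptually, the kind of remapping involved can be sketched like this: an incoming sensor value gets scaled into the coordinate range the attractor expects (much like Max's [scale] object does), and each moon type gets its own colour. Every name, range, and colour here is illustrative, not taken from our actual patch.

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linear remap of a value from one range to another,
    mirroring what Max's [scale] object does."""
    ratio = (value - in_lo) / (in_hi - in_lo)
    return out_lo + ratio * (out_hi - out_lo)

# Hypothetical colours (RGB, 0-1) for the different moon types
MOON_COLOURS = {
    "rocky": (0.6, 0.5, 0.4),
    "icy": (0.7, 0.9, 1.0),
    "volcanic": (1.0, 0.4, 0.1),
}

def sensor_to_attractor(sensor_value, moon_type):
    """Map a (made-up) 0-1023 sensor reading onto a -2..2 attractor
    axis, and pick the colour for the current moon type."""
    x = scale(sensor_value, 0, 1023, -2.0, 2.0)
    colour = MOON_COLOURS.get(moon_type, (1.0, 1.0, 1.0))
    return x, colour
```

A mid-range sensor reading lands near the centre of the attractor axis, so as the live data moves, the attractor (and the particles chasing it) moves with it.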
