I’ve begun to experiment with different ways in which I can use the chair to create sound and music.

As a fun starting point, I began to play around with the site melobytes.com, which has several features that generate music from an image you upload, so I fed in some of my chair pictures from the previous posts to see what it would generate.

‘AI image to song’

‘AI image to sound’

‘Image to music’

‘Image spectrogram’

All of these had unique results: the more tuneful ones draw some data from the image, but also use random events, such as a time signature, to structure the music. The first clip contains some ‘singing’ (although it’s not entirely audible in this output), made up of random adjectives as well as descriptions of what the AI can identify within the image, forming a kind of computer poetry. My favourite was the last one, which stretches out the image and uses it as a spectrogram, meaning the entire output depends on the shapes and colours in the picture. I might try that feature again with different close-ups of the chair, so the little scratches and bumps become part of a soundscape.
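To show roughly what "using an image as a spectrogram" means, here is a minimal Python sketch of the general idea: each column of pixels becomes a slice of time, each row is assigned a frequency, and pixel brightness sets how loud that frequency is. The frequency range, slice length and mapping here are my own illustrative choices, not melobytes’ actual parameters.

```python
import math

def image_to_audio(image, sample_rate=8000, slice_dur=0.05,
                   f_min=200.0, f_max=2000.0):
    """Sonify a grayscale image as a simple spectrogram.

    `image` is a list of rows (top row first); each pixel is a
    brightness in [0, 1]. Each column becomes one time slice, and
    each row is assigned a frequency (top of the image = highest
    pitch), so bright shapes in the picture turn into audible tones.
    """
    n_rows = len(image)
    n_cols = len(image[0])
    # Top row maps to f_max, bottom row to f_min, linearly spaced.
    freqs = [f_max - (f_max - f_min) * r / max(n_rows - 1, 1)
             for r in range(n_rows)]
    samples_per_slice = int(sample_rate * slice_dur)
    audio = []
    for c in range(n_cols):
        for n in range(samples_per_slice):
            t = (c * samples_per_slice + n) / sample_rate
            # One sine per row, weighted by that pixel's brightness.
            s = sum(image[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(n_rows))
            audio.append(s / n_rows)  # keep samples within [-1, 1]
    return audio

# A tiny 3x4 "image": a bright diagonal produces a falling sweep.
img = [[1.0, 0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 1.0, 1.0]]
audio = image_to_audio(img)
```

Because every sample is computed from the pixels, any mark on the chair that shows up in the photo (a scratch, a shadow) changes the resulting sound.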


Here are some sounds I recorded using different parts of the chair.

Here are all four recordings combined:

These recordings of analogue sounds have a much more natural tone than the AI-generated noise. However, since this chair isn’t designed to produce sound by itself, we wouldn’t necessarily ascribe any of these sounds to it without the context. Nevertheless, my recordings have specific attributes based on the materials of the chair, just as the melobytes-generated noise is dependent on the algorithm and computer hardware.

I feel more drawn to the electronic soundscapes produced from the photos of the chair, but I also appreciate the build-up of layers I created by combining my recordings, as all the little sounds complement each other.