Exploring Structure, Combining Visuals
Another week in the studio, another week of incremental developments across the project. As I move towards delivery, I'm spending much more time fine-tuning elements of the system than conducting grand experiments with new techniques. In this entry, we see three visual networks combined into one cohesive execution as the body representation is inserted into the interactive particle network, then the total render is fed through the visual sequencer. Sonically, the ruminations on structure in last week's entry have been channelled into experiments in programming song sections with the Beatstep sequencer.
Sound
Following on from the in-depth musical discussion last week, my main sonic focus in this session was structure. In practical terms, macro-structure was made possible through long-form sequencing on the Beatstep, taking advantage of the hardware's 'Pattern' function, which strings sequences together. By programming empty or populated sequences within patterns, I could sequence gates over full bars to mute the bass and trigger samples (sent to either of the resonators or the Databender), composing various combinations of elements to be explored over the duration. Additionally, a field-recording 'riser' was added to the last bar of each sequence to signal the change.
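To make the bar-level logic concrete, here is a schematic sketch of how empty and populated sequences combine into a macro-structure. This is not Beatstep configuration, just hypothetical Python standing in for the hand-programmed patterns:

```python
# Schematic sketch of the hand-programmed Beatstep arrangement.
# Names and structure are hypothetical; each pattern is one bar,
# and None marks an empty gate sequence (i.e. a muted element).
arrangement = [
    {"bass": None,   "sample_dest": "resonator"},   # intro: no bass
    {"bass": "gate", "sample_dest": "resonator"},   # bass enters
    {"bass": "gate", "sample_dest": "databender"},  # new texture
    {"bass": "gate", "sample_dest": None},          # strip back
]

for bar, pattern in enumerate(arrangement, start=1):
    riser = bar == len(arrangement)  # riser on the last bar signals the change
    active = [name for name, gate in pattern.items() if gate]
    print(f"bar {bar}: active={active or ['none']} riser={riser}")
```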
The structure of body movement sequences was approached much more conceptually, beginning with the Oliveros-style 'meditation' directions I alluded to in the previous post, such as 'start as an egg and hatch'. In the demonstration, I begin in a kneeling position, which is mapped to produce low frequencies and little sonic activity. Emerging from this position, I reveal higher-frequency activity, before finally standing with the introduction of the bass and drums. While this 'composition' would need work before presentation, it is a promising example of what could be achieved on stage without having to touch the synthesiser.
However, I would like to revisit the example recordings as I continue with this method of composition, as moments in the performance made me feel the models were not as responsive as in previous iterations, and I struggled to create sonic energy to match the rhythmic stage of the composition. I imagine that introducing more dynamic control of the temporal parameters (e.g. bass LFO, Databender repeats), trained on new, more radically different examples, will unlock a wider range of performative intensity.
Tonally, I moved completely to algorithmically generated movement via the Marbles module. By patching the Disting EX Resonator's 'Chord' input to the 'Y' output of Marbles, I ensured that chord changes would synchronise with the generated bass line, a timing challenge I had struggled with when chords were controlled by the clocked LFO. Only one additional model was created, for the 'Resonance' parameter of the Spectral Multiband Resonator, allowing me to modulate between the dry field recording (now wave sounds) and the resonant chord. The dry field recording can be heard at the beginning of the video.
In the end, the programming of sequences formed a piece that built and changed over about six minutes (the demonstration is a shortened version). Concluding the session, I wished I had more time to iteratively explore the structure, as I had finally reached the point where choreography and composition intersect. After all these months of experimentation, I was at last coordinating movement sequences with musical variation.
Vision
In the visual network, the aforementioned combination of the point-cloud body representation with the interactive particle network was achieved by rendering the respective geometries within the same Render TOP and duplicating position modulation across both cameras. The result is a convincing combination of the two environments: particles nearer the camera obscure the body, all objects undergo shared depth-of-field effects, and interactions between the two geometries feel causal.
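As a rough sketch of the TouchDesigner side (all operator names here are hypothetical), listing both Geometry COMPs in one Render TOP gives them a shared depth buffer, and the position modulation can be duplicated by expression-linking the second camera's transform to the first:

```python
# Minimal TouchDesigner sketch; operator names are assumptions.
# One Render TOP draws both geometries, so depth-of-field and
# occlusion are shared between body and particles.
render = op('render1')
render.par.geometry = 'geo_body geo_particles'  # space-separated paths
render.par.camera = 'cam_body'

# Duplicate position modulation across both cameras by expression,
# so the two environments move in lockstep.
particles_cam = op('cam_particles')
for name in ('tx', 'ty', 'tz', 'rx', 'ry', 'rz'):
    p = getattr(particles_cam.par, name)
    p.mode = ParMode.EXPRESSION
    p.expr = f"op('cam_body').par.{name}"
```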
Stylistically, new possibilities have been found in providing custom shapes to constitute the body and particle network, exemplified by the square particles in the overview and the circular ones in the demo. During the demonstration recording, I found the round shapes to be more congruent with the soft, ambient soundscape. Shadows and texture shading were made possible by applying Phong MATs to the respective Geometry COMPs, with realistic shadow behaviour accentuated by an SSAO TOP following the Render TOP. The combination of these refined shapes with the texturing and shadowing has resulted in a much more realistic and aesthetically appealing visual element in the system.
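A minimal sketch of the instancing and shading setup, again with hypothetical operator names, might look like this:

```python
# Sketch: instance a custom shape across the particle positions and
# shade it with a Phong MAT. All operator names are assumptions.
geo = op('geo_particles')
geo.par.instancing = True
geo.par.instanceop = 'particle_positions'  # source of instance positions
geo.par.material = 'phong1'                # Phong MAT for texture/shadows

# A Switch SOP inside the Geometry COMP selects the instanced shape:
# input 0 holds the square used in the overview, input 1 the circle
# favoured in the demo.
op('geo_particles/shape_switch').par.input = 1
```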
One significant breakthrough did occur while redesigning the body representation this week: reducing the resolution of the point-cloud TOP network. Initially a 640 × 360 pixel image generating just as many depth points, reducing this to around a quarter of the resolution not only provided a more minimal and abstracted body visualisation but also significantly reduced strain on the GPU. Fewer body points meant fewer instances of shapes, making the subtle movement of each shape more obvious, which led me to add slow rotations and size modulations to the shapes constituting the body. The contrast between full and reduced resolution is evident between the overview below and the demonstration above.
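Assuming a Resolution TOP (here called res1) sits in the point-cloud chain, the reduction itself is a small change, and the slow per-shape motion can be driven from absolute time; this is a sketch under those assumptions:

```python
# Sketch: quarter the point-cloud resolution to cut the instance count.
# 'res1' and 'shape_xform' are hypothetical operator names, and the
# quarter-per-dimension factor is one reading of 'around 1/4'.
res = op('res1')
res.par.outputresolution = 'custom'
res.par.resolutionw = 640 // 4   # 160 points across vs. 640
res.par.resolutionh = 360 // 4   # 90 points down vs. 360

# With fewer instances, slow motion on the shape itself reads clearly:
# a gentle spin and a breathing size modulation on the source SOP.
xform = op('shape_xform')
xform.par.rz.expr = 'absTime.seconds * 6'  # one turn per minute
xform.par.scale.expr = '0.8 + 0.2 * math.sin(absTime.seconds * 0.5)'
```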
Revisiting the visual sampler network this week provided a final element of audiovisual synchresis that really topped the demonstration off, particularly when used as a structural device beginning with the addition of drums. In this instance, I found that a more minimal approach to the sampler suited the increased activity in the visuals. Two channels of the sampler were activated (synced to kick and snare), though even just the kick channel may have sufficed. New recordings were taken each bar, visually bolstering the four-beat bar form and striking a balance between repetition and progression. Posterisation in the post-processing of the samplers also made for a more distinctive appearance, a process I suggest is necessary when using the visual sampler technique, much like low-pass filtering an audio delay.
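As a sketch of the capture logic (names hypothetical, and the Cache TOP pulse parameter is an assumption), a CHOP Execute DAT watching the drum triggers can grab a new frame into each sampler channel as its bar begins:

```python
# CHOP Execute DAT callback; 'cache_kick', 'cache_snare' and the
# 'replacepulse' parameter are assumptions for a Cache-TOP-style
# sampler that replaces its stored frame on a pulse.
def onOffToOn(channel, sampleIndex, val, prev):
    if channel.name == 'kick':
        op('cache_kick').par.replacepulse.pulse()
    elif channel.name == 'snare':
        op('cache_snare').par.replacepulse.pulse()
    return
```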