Generating Triggers from Continuous Data

16/05/23

Triggering samples with the body

Problem seeking solution:

How can I generate triggers from continuous movement to activate samples within a sampler in a way that is intuitive, reliable and repeatable?

This week, I am trying to find a solution to triggering samples with the body, using the system I have developed so far for synthesizer control with the Kinect, TouchDesigner and Wekinator. Prior to this research project, I made attempts at this with similar systems, with little success. Those attempts included creating points in space that triggered a specific MIDI note when crossed by a body part (e.g. a hand), and triggering samples when the velocity of a body part exceeded a threshold. While these techniques should certainly be reconsidered, their failure at the time does not inspire me with great confidence.
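For reference, the velocity-threshold approach might be sketched as below. This is not the code from those earlier attempts; it is a minimal Python reconstruction, with two hypothetical refinements (hysteresis and a refractory period) that address the erratic retriggering such a naive threshold tends to produce. The threshold values are placeholders to be tuned against real Kinect data.

```python
import math

class VelocityTrigger:
    """Fire a trigger when hand speed crosses a threshold.

    Sketch of the thresholding approach described above, with two
    hypothetical refinements: hysteresis (separate on/off thresholds)
    and a refractory period to suppress rapid double-triggers.
    """

    def __init__(self, on_thresh=1.5, off_thresh=0.8, refractory=0.2):
        self.on_thresh = on_thresh    # speed (m/s) that fires a trigger
        self.off_thresh = off_thresh  # must fall below this to re-arm
        self.refractory = refractory  # minimum seconds between triggers
        self.armed = True
        self.last_fire = -1e9

    def update(self, velocity, t):
        """velocity: (vx, vy, vz) in m/s; t: timestamp in seconds.

        Returns True on the frame a trigger should fire.
        """
        speed = math.sqrt(sum(v * v for v in velocity))
        if self.armed and speed > self.on_thresh and t - self.last_fire > self.refractory:
            self.armed = False
            self.last_fire = t
            return True
        if speed < self.off_thresh:
            self.armed = True  # re-arm only once the hand slows down again
        return False
```

In use, a fast hand movement fires once, then stays silent until the hand slows below the re-arm threshold, which is exactly the repeatability the bare threshold version lacked.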

Primarily, my inspiration for this technique comes from Tim Murray-Browne’s artwork Sonified Body, in which he used IML code to create latent body representations that triggered and manipulated samples. Through my contact with Tim, I found that he developed his own onset detection algorithm based on this project by Luke Dahl (https://zenodo.org/record/1178738#.ZFhAkexBwUE) in which data streams are processed by two filters with short and long buffer windows respectively. 

While I understand this technique in principle, I have struggled to replicate it in practice in TouchDesigner. My mathematical understanding does not reach the level required to follow Dahl's equations; however, I have not given up yet. A key difference between Dahl and Murray-Browne is that Dahl applies onset detection to body points, whereas Murray-Browne (I believe) applies it to the latent values generated by his variational autoencoders. I am unsure which method will be most effective for my uses.

Dahl recommends calculating a 3D velocity vector of the hand, then passing it through two separate one-pole filters: one with a 5 ms time constant and one with 100 ms. He then calculates the angle between the two filtered vectors, which grows at an onset because the shorter filter responds to the change in motion much faster than its counterpart. When this angle exceeds a certain threshold, that is the point to trigger a sample. This algorithm is somewhat at odds with what Tim communicated to me; however, he was working from memory and hadn't discussed the technique in some time. I may revisit this, but for the time being I am looking at another solution which, realistically, should be explored:
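As a working note to myself, the two-filter idea as I currently understand it can be sketched in Python. This is my interpretation rather than Dahl's actual implementation: the time constants (5 ms / 100 ms), the frame rate, and the trigger threshold are all assumptions, and the angle calculation is the standard dot-product formula between the two filtered vectors.

```python
import math

def one_pole(prev, x, dt, tau):
    """Single-pole low-pass filter with time constant tau (seconds)."""
    a = math.exp(-dt / tau)
    return [a * p + (1.0 - a) * v for p, v in zip(prev, x)]

def angle_between(u, v):
    """Angle in radians between two 3D vectors (0 if either is near zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    if nu < 1e-9 or nv < 1e-9:
        return 0.0
    # clamp before acos to guard against floating-point drift
    c = max(-1.0, min(1.0, dot / (nu * nv)))
    return math.acos(c)

class OnsetDetector:
    """Two-filter onset detection on a 3D hand-velocity stream (my sketch)."""

    def __init__(self, dt=1 / 30, tau_fast=0.005, tau_slow=0.100, threshold=0.5):
        self.dt = dt                  # Kinect frame period (assumed 30 fps)
        self.tau_fast = tau_fast      # 5 ms filter
        self.tau_slow = tau_slow      # 100 ms filter
        self.threshold = threshold    # radians; placeholder to be tuned
        self.fast = [0.0, 0.0, 0.0]
        self.slow = [0.0, 0.0, 0.0]
        self.armed = True

    def update(self, velocity):
        """Feed one 3D velocity sample; returns True on a detected onset."""
        self.fast = one_pole(self.fast, velocity, self.dt, self.tau_fast)
        self.slow = one_pole(self.slow, velocity, self.dt, self.tau_slow)
        angle = angle_between(self.fast, self.slow)
        if self.armed and angle > self.threshold:
            self.armed = False  # wait for the angle to fall before re-arming
            return True
        if angle < self.threshold * 0.5:
            self.armed = True
        return False
```

Steady motion keeps both filtered vectors pointing the same way (angle near zero); a sudden change of movement swings the fast filter ahead of the slow one and the angle spikes, which is where the trigger fires. Each frame of this could live in a Script CHOP or CHOP Execute callback in TouchDesigner.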

Gesture Recognition

Wekinator supports gesture recognition, which to this point I have left unexplored. I think this may be the most effective way to meaningfully trigger samples, in that the intention and result will be clearest to the audience with this technique. My initial experiments with gesture recognition in Wekinator were focused on simple movements of a single limb, e.g. moving my hand in a circle, but these proved ineffective. I believe this is because most body points output a near-constant value throughout the motion, so standing still is already 90% of the way to the gesture. This implies that more complex, whole-body movements will be better identified by the software as distinct from other movements. Tonight, I will test.

Until then,
