Body Visualisation with Point Clouds
09/05/2023
Visualisation of the body // Using Wekinator for visual environment control
Problem seeking solution:
How can I effectively visualise the body in TouchDesigner with input from the Kinect?
In keeping with my effort to produce a system for live audiovisual performance in the vein of Louise Harris's audiovisuality, I am exploring body tracking as the common source of data from which to construct both visual and sonic material. While using body data to control abstract visual environments last week (the particle system) proved to be an engaging experience as a user, the technique left body representation wanting. In this experiment I am searching for a way to creatively depict the body from the array of data the Kinect supplies: silhouette and point cloud images, as well as body points in space.
To begin with, I revisited techniques for generating silhouettes from the Player ID output of the Kinect TOP. This proved effective for rudimentary body representation in 2D space; the silhouette responds well to feedback, edge and displacement processes, and can be easily integrated into other visual environments (such as the 2D representation in the particle system).
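As a minimal illustration of what the Player ID image provides, the sketch below (plain NumPy, outside TouchDesigner) turns a per-pixel player index into a binary silhouette mask; in the network itself this is just the Kinect TOP followed by a Threshold or Level TOP. The ID convention in the comments is an assumption, as it varies between Kinect versions.

# Illustrative sketch, not the TouchDesigner network itself: a Kinect
# player-index image becomes a binary silhouette mask by testing each
# pixel against a chosen player ID. (Exact ID conventions differ by
# sensor version; 0 = "no player" is assumed here.)
import numpy as np

def silhouette_mask(player_index, player_id=1):
    """player_index: HxW array of per-pixel player IDs.
    Returns a float mask, 1.0 where the chosen player is present."""
    return (player_index == player_id).astype(np.float32)

# Example: a tiny 4x4 frame where player 1 occupies the centre.
frame = np.array([[0, 0, 0, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
mask = silhouette_mask(frame, player_id=1)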
A notable result of this initial experiment came from adding a Cache TOP to a feedback loop fed with an edge-detected silhouette. Feeding the signal back into the Cache created a visual delay effect in which echoes of the body trailed each movement for minutes afterwards. Manipulating the Cache TOP's parameters produced different delay times and could recall buffers of different lengths, calling back body positions from various points in the sensor's history.
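To make the mechanism explicit, here is a minimal ring-buffer sketch of what that Cache TOP feedback is doing: frames are written into a fixed-length history, a delayed frame is mixed back into the input, and because the mix itself is cached, every echo gets re-echoed and fades on each pass. The cache size, delay and feedback amount are arbitrary values of mine, not settings from the patch.

# Ring-buffer sketch of the Cache TOP feedback delay.
import numpy as np
from collections import deque

class FeedbackDelay:
    def __init__(self, cache_size=120, delay=30, amount=0.85):
        self.buffer = deque(maxlen=cache_size)   # fixed frame history
        self.delay = delay                       # echo offset in frames
        self.amount = amount                     # feedback gain per pass

    def step(self, frame):
        if len(self.buffer) > self.delay:
            # Mix an older frame back in; the mix is then cached, so
            # each echo is re-echoed and decays by `amount` every loop.
            frame = frame + self.amount * self.buffer[-1 - self.delay]
        self.buffer.append(frame)
        return frame

delay_line = FeedbackDelay()
out = delay_line.step(np.zeros((480, 640)))  # one silhouette frame per tick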
However, these 2D techniques ultimately did not achieve the artistic quality I was searching for, so I began to explore the other outputs the Kinect supplies, including the point cloud image. Because I had little experience with point clouds, I searched online and came across this tutorial on interpreting the Kinect's point cloud output.
Following this tutorial, I discovered how to use the Kinect's point cloud output to instance geometry and create a 3D representation of the body made of particles whose size, colour, material and other attributes can be manipulated.
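For anyone unfamiliar with the format, the point cloud image stores a camera-space XYZ position in each pixel's RGB; instancing then places one copy of a small piece of geometry at each recovered position, which is what happens when the instance translate channels are mapped to the point cloud's R, G and B. The sketch below shows the underlying unprojection from depth via the pinhole model, with placeholder intrinsics rather than real Kinect calibration values.

# What a point cloud image encodes: per-pixel camera-space XYZ,
# recovered from depth. FX/FY/CX/CY are hypothetical placeholders.
import numpy as np

FX, FY = 365.0, 365.0        # focal lengths in pixels (assumed)
CX, CY = 256.0, 212.0        # principal point (assumed)

def depth_to_point_cloud(depth):
    """depth: HxW array of metres. Returns HxWx3 array of XYZ positions."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) / FX * depth
    y = (v - CY) / FY * depth
    return np.stack([x, y, depth], axis=-1)

# One instance of geometry would be placed at each non-zero position.
points = depth_to_point_cloud(np.random.uniform(0.5, 4.0, (424, 512)))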
I expanded on this technique through experimentation, introducing elements I often use in other projects: audio reactivity, noise generation and audio visualisation. The result was an engaging and artistically effective representation of the body.
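As a minimal sketch of the kind of audio-reactive mapping used here, the RMS level of each audio block can be smoothed and scaled into a point size for the instanced geometry; the smoothing factor and size range below are arbitrary choices of mine, not values from the project file.

# Audio block level -> instance point size, with one-pole smoothing
# so the points swell with the music rather than flicker on transients.
import numpy as np

class AudioReactiveScale:
    def __init__(self, base=0.01, spread=0.05, smoothing=0.8):
        self.base, self.spread, self.smoothing = base, spread, smoothing
        self.level = 0.0

    def update(self, block):
        rms = float(np.sqrt(np.mean(np.square(block))))
        self.level = self.smoothing * self.level + (1 - self.smoothing) * rms
        return self.base + self.spread * self.level

scaler = AudioReactiveScale()
point_size = scaler.update(np.random.uniform(-1, 1, 512))  # one audio block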
Additionally, Wekinator found a purpose in this patch when I wanted a creative way to control the position of the camera. By mapping Wekinator's outputs to the camera's coordinates, I could train the camera to move depending on the orientation of my body. I trained a handful of examples, including wide poses with my arms above my head to trigger wide camera angles, and tight, compressed dance poses with my hands near my chest that brought the camera in close to the body. During rapid, rhythmic movement, the camera moved in time with the dance moves and thus the music.
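Wekinator's OSC conventions are simple: it listens for feature vectors on /wek/inputs at port 6448 and streams its trained outputs on /wek/outputs, port 12000 by default. The sketch below shows that round trip with python-osc; the specific joint features and the three-output camera mapping are my assumptions about the setup, not the exact patch (in TouchDesigner itself this would typically be OSC Out and OSC In CHOPs).

# Wekinator round trip via OSC, using python-osc.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

wek_in = SimpleUDPClient("127.0.0.1", 6448)

def send_pose(joints):
    # joints: flat list of floats, e.g. hand/head/chest positions
    # pulled from the Kinect body tracking each frame (assumed features).
    wek_in.send_message("/wek/inputs", [float(v) for v in joints])

def on_outputs(address, *values):
    # Three continuous outputs trained to camera x, y, z: wide poses
    # trained to pull the camera back, compressed poses to bring it close.
    cam_x, cam_y, cam_z = values[:3]
    print(f"camera -> ({cam_x:.2f}, {cam_y:.2f}, {cam_z:.2f})")

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_outputs)
server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)
server.serve_forever()  # update the camera as Wekinator streams outputs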
This proved very effective as an installation during the Brisbane Art Design festival, where it was featured at Warehouse 25 as an interactive dance-floor work.
Using the Kinect and TouchDesigner in this way to represent the human body is a very effective approach to body visualisation for audiovisual performance, and I believe it will become one of the options I include in the final work.