
Visiting workshop: STOCOS

#STOCOS, #interactiveshoes, #computervisuals, #stochasticsynthesis


As part of the ARTEC seminar series, I had invited Stocos, a Spanish/Swiss artist trio creating interactive dance pieces, to give a two-day workshop and an artist talk with demonstrations. The trio consists of Pablo Palacio, composer and interactive music instrument designer; Muriel Romero, choreographer and dancer; and Daniel Bisig, visual artist, sensor technician and programmer. The group combines research into embodied interaction, artificial intelligence, and simulation of swarm behavior with artistic exploration of music, visuals and dance.


Stocos (Pablo, front left; Daniel, front center; Muriel, far right) with workshop participants

The workshop was held at DansIT's (Dance in Trøndelag) studio at Svartlamon in Trondheim, as a collaboration between ARTEC and DansIT: the former covered travel and accommodation for the group, while the latter offered me the venue free of charge as a residency. A total of fourteen participants (including me as the organizer) took part, though not all were present at all times. The group had widely varying backgrounds: composers, musicians, dancers, actors, media/video artists, stage designers, dramaturgs, philosophers and music technologists. Eight of the participants were employees or students at NTNU, and another three had recently completed their degrees (Master/PhD) at NTNU and were now working in the art sector in Trondheim. Because the number of participants was capped, additional applicants remained on a waiting list.


The first workshop day started with the members of Stocos presenting themselves, their artistic aims and statement, followed by an overview of their work over the last ten years or so. A core concept for the group, also reflected in their name, is the use of different types of stochastic algorithms, both at signal level (in the form of noise) and in the various ways movement parameters are mapped to visuals and sound. In addition, swarm properties play a central role, especially in mappings and visual projections. These ideas were first applied in the piece that shares its name with the group, where the swarms appeared primarily as minimalistic visual projections, mostly in black and white, showing either moving entities or the trajectories of invisible moving entities, interacting to a greater or lesser degree with the dancers. Another work, Piano-Dancer, explored the same core principles of stochastics and swarms, but here a single dancer's movements, processed by algorithms, controlled a motorized piano. In the piece Neural Narratives, they created virtual extensions of the dancing body with computer visuals: tree-like entities that followed the movements of the dancers while retaining a semi-autonomous character. Lastly, they presented their most recent work, The Marriage of Heaven and Hell, based on the literary work by William Blake. Here they continued their exploration of interactive control of voices, this time using phrases from Blake's poem. Particular to this work was the use of Laban-derived bodily expressive aspects, such as energy, fluidity, symmetry or impulsivity, translated in real time into audible and visual digital abstractions. The piece also featured body-worn interactive lasers, where the movements of the dancers controlled whether the lasers were active or not. The presentation spurred many questions and discussions about everything from technicalities to aesthetic issues.
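To give a flavor of what signal-level stochastics can mean in practice, here is a minimal sketch of Xenakis-style dynamic stochastic synthesis in Python, where the breakpoints of a waveform take bounded random walks in amplitude and duration. It illustrates the general family of techniques only; all parameter names and ranges are my own assumptions, not Stocos's actual code.

```python
# Minimal dynamic stochastic synthesis sketch (Xenakis-style random walks).
# Illustrative only: parameters and ranges are assumptions, not Stocos's code.
import numpy as np
from scipy.io import wavfile

SR = 44100          # sample rate
N_BREAKPOINTS = 12  # breakpoints per waveform cycle
N_CYCLES = 2000     # number of cycles to render
AMP_STEP = 0.05     # max random-walk step for breakpoint amplitudes
DUR_STEP = 2        # max random-walk step for segment durations (samples)

rng = np.random.default_rng(1)
amps = rng.uniform(-0.5, 0.5, N_BREAKPOINTS)  # breakpoint amplitudes
durs = rng.integers(20, 60, N_BREAKPOINTS)    # segment lengths in samples

out = []
for _ in range(N_CYCLES):
    # Each cycle, every breakpoint drifts in amplitude and duration,
    # so pitch and timbre wander stochastically over time.
    amps = np.clip(amps + rng.uniform(-AMP_STEP, AMP_STEP, N_BREAKPOINTS), -1, 1)
    durs = np.clip(durs + rng.integers(-DUR_STEP, DUR_STEP + 1, N_BREAKPOINTS), 10, 200)
    for i in range(N_BREAKPOINTS):
        a0, a1 = amps[i], amps[(i + 1) % N_BREAKPOINTS]
        out.append(np.linspace(a0, a1, int(durs[i]), endpoint=False))

signal = np.concatenate(out)
wavfile.write("stochastic.wav", SR, (signal * 32767).astype(np.int16))
```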


The second part of the first workshop day was more technical and focused on the different types of sensors the group uses. They started by showing technology they had used in earlier years, namely the Kinect. Via the open-source driver implemented in EyesWeb, the sensor gave them access to both the contour and the skeleton of the dancer. Even though the Kinect had relatively high latency and the skeleton tracking was quite buggy, they had still used it in recent years. More and more, however, they have moved to IMUs for their tracking: a Bosch/Adafruit BNO055 connected to an Arduino with a Wi-Fi shield, everything housed in a 3D-printed case and mounted on an elastic band attached to the arm or the lower leg. This avoids several of the problems associated with video-based tracking (occlusion, IR light from other sources, etc.), even if the IMUs come with specific challenges of their own. The IMUs also provide absolute orientation data in 3D, making it possible to track the orientation of the extremities, not just their movement. Finally, the group showed us their most recent development: sensor shoes with seven pressure sensors in the sole.
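As a rough illustration of the host side of such a setup, the sketch below receives orientation data over the network and converts it to Euler angles, which are easier to map onto sound or visual parameters than raw quaternions. It assumes the Arduino streams BNO055 quaternions as OSC messages over Wi-Fi; the /imu/quat address, the port, and the use of the python-osc library are assumptions made for the example, not Stocos's actual protocol.

```python
# Host-side receiver for wireless IMU data, assuming the Arduino streams
# BNO055 quaternions as OSC messages. Address and port are assumptions.
import math
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_quaternion(address, w, x, y, z):
    # Convert the absolute-orientation quaternion to Euler angles (ZYX).
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    print(f"roll {math.degrees(roll):7.1f}  "
          f"pitch {math.degrees(pitch):7.1f}  "
          f"yaw {math.degrees(yaw):7.1f}")

dispatcher = Dispatcher()
dispatcher.map("/imu/quat", on_quaternion)
BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()
```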



Daniel Bisig explaining the construction of the sensor shoe.

Having so far only monitored the sensor values in visual graphs, we spent the last part of the first workshop day on mapping those values to sound. Here, the participants got to try out the IMU bands and the sensor shoes with a range of different mappings: synthetic singing voices, sampled speaking voices, stochastically modulated noise, and more.
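A mapping of this kind can be quite compact in code. The sketch below assumes the shoe broadcasts its seven pressure readings as a single OSC message, rescales the values, and forwards them as control parameters to a sound engine; all addresses, ranges and parameter names are invented for illustration, and the actual mappings demonstrated in the workshop were Stocos's own.

```python
# Sketch: rescale the shoe's seven pressure values and forward them as
# synthesis parameters over OSC. All names and ranges are assumptions.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

synth = SimpleUDPClient("127.0.0.1", 57120)  # e.g. a SuperCollider server

def scale(v, in_lo, in_hi, out_lo, out_hi):
    """Clip and linearly rescale a sensor value to a synthesis range."""
    v = max(in_lo, min(in_hi, v))
    return out_lo + (v - in_lo) / (in_hi - in_lo) * (out_hi - out_lo)

def on_pressure(address, *values):       # seven pressure readings, 0..1023
    total = sum(values)
    heel, toe = values[0], values[-1]    # assumed sensor ordering
    synth.send_message("/noise/amp", scale(total, 0, 7 * 1023, 0.0, 1.0))
    synth.send_message("/noise/cutoff", scale(toe - heel, -1023, 1023, 200, 4000))

dispatcher = Dispatcher()
dispatcher.map("/shoe/pressure", on_pressure)
BlockingOSCUDPServer(("0.0.0.0", 9001), dispatcher).serve_forever()
```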

The second day of the workshop started with a movement session led by Romero, focusing on exercises related to Laban's theories of the individual choreographic space surrounding the dancer, and on how this space can be divided into planes cutting in all directions. Many of the exercises directly articulated these planes.

After the movement session, the focus shifted to the computer-generated visuals. Most of these were in one way or another derived from swarm simulations, and Daniel Bisig spent quite some time explaining how they were programmed and manipulated to create different visual expressions. Among other things, we saw how using the individual agents' traces as visual elements, combined with panning and zooming the space they inhabit, created a multitude of different patterns and effects. Some of the participants also got to try an interactive algorithm that used the contour around the body to attract the swarm so that it delineated the user's silhouette. We also saw how the swarms could articulate letters using much the same technique, and how damping the attraction forces made the contours around the letters less distinct.
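The sketch below is a toy version of the contour-attraction idea: agents are pulled toward sample points on a silhouette, their accumulated traces render the shape, and lowering the attraction gain blurs the delineated contour, much like the letter demo. It is a deliberately simplified stand-in for Stocos's far more elaborate swarm engine, with a circle replacing the camera-derived body contour.

```python
# Toy contour-attraction swarm: agents are pulled toward points on a
# silhouette, and their accumulated traces render the shape. A circle stands
# in for the camera-derived body contour; all parameters are illustrative.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
N_AGENTS, STEPS = 200, 400
DAMPING, ATTRACT, JITTER = 0.9, 0.03, 0.005  # lower ATTRACT -> blurrier contour

theta = rng.uniform(0, 2 * np.pi, N_AGENTS)
contour = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # target points

pos = rng.uniform(-2, 2, (N_AGENTS, 2))  # agent positions
vel = np.zeros_like(pos)                 # agent velocities
traces = []

for _ in range(STEPS):
    # Spring-like pull toward each agent's contour point, velocity damping,
    # and stochastic jitter; the traces, not the agents, are what gets drawn.
    vel = DAMPING * vel + ATTRACT * (contour - pos) + rng.normal(0, JITTER, pos.shape)
    pos = pos + vel
    traces.append(pos.copy())

pts = np.concatenate(traces)
fig, ax = plt.subplots(figsize=(5, 5), facecolor="black")
ax.scatter(pts[:, 0], pts[:, 1], s=0.2, c="white", alpha=0.15)
ax.set_aspect("equal")
ax.set_axis_off()
plt.show()
```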



Workshop participant Elena Perez trying out the interactive swarm visuals.

There was also time to try out the interactive lasers (with sound) from The Marriage of Heaven and Hell, which, naturally enough, was very popular. All the lights were turned off and a hazer was activated, so that the trajectories of the laser beams became visible. The lasers all had lenses spreading the beam vertically by about 30°, so that it appeared as a line when projected on the floor. We also saw how moving the lasers in different ways generated different effects, like "scanning" and "strobe". We then did further tryouts in the darkened space, now with the sensors controlling stage lights (via MIDI/DMX) together with transformed vocal sounds. Even though the lights had significant latency, it was nice to see how they could be driven by the IMU sensors.
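As a simple illustration of such a mapping chain, the sketch below turns an incoming orientation value into a MIDI control change, which a MIDI-to-DMX converter would then translate into a dimmer level. The OSC address, MIDI port name and controller number are assumptions for the example, not details of Stocos's setup.

```python
# Sketch of an IMU-to-light mapping: an orientation value becomes a MIDI
# control change for a MIDI-to-DMX converter. Names are assumptions.
import mido
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

midi_out = mido.open_output("MIDI-to-DMX")  # hypothetical converter port

def on_pitch(address, pitch_deg):
    # Map arm pitch (-90..90 degrees) onto a 0..127 dimmer value.
    value = max(0, min(127, int((pitch_deg + 90.0) / 180.0 * 127)))
    midi_out.send(mido.Message("control_change", control=1, value=value))

dispatcher = Dispatcher()
dispatcher.map("/imu/pitch", on_pitch)
BlockingOSCUDPServer(("0.0.0.0", 9002), dispatcher).serve_forever()
```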

At the end of the workshop, Stocos presented longer extracts from several of their pieces, answering a question many participants had posed: how do you put it all together? This gave us a much better understanding of the artistic contexts and intentions in which the different technologies and mappings were used, and of how the resulting artistic expressions could be transformed with different techniques.

All in all, the workshop participants seemed highly inspired, and several of them spoke of new artistic ideas the workshop had given them. Stocos, for their part, were impressed with the engagement the participants showed. Several people also expressed a wish to bring Stocos back to Trondheim for a full performance.
