Deus Ex Machina III

Musical Metacreation: Creative Software and Software Creativity

October 18, 2013 – Goldcorp Centre for the Arts, SFU Woodwards, Fei and Milton Wong Experimental Theatre.

Live duets between improvising musicians and live algorithms. Featuring Brian Nesselroad, percussion; Lisa Cay Miller, keyboard; François Houle, clarinet with autonomous software systems by Arne Eigenfeldt, Toby Gifford, Martin Gotfrit, Andrew Brown, and Oliver Bown.

Deus Ex Machina III presents live duets between improvising musicians and “live algorithms”, which are computer applications that engage with human performers. The software is autonomous, acting impulsively and unpredictably; neither automatic nor controlled, it participates in the creative performance. Each piece brings together an improvising musician with an improvising piece of software, in some cases for the first time, in others as the culmination of a long-term collaboration.

This is the third event in the musical metacreation series at SFU. The first presented works created by machines, performed by humans; the second presented works created by machines, performed by robots and humans; this event presents works generated by machines in collaboration with humans during performance.

The Indifference Engine vs. Brian Nesselroad (Eigenfeldt)

My software is often built around the concept of negotiation, in which virtual musical agents attempt to reach some understanding of what they want to achieve musically and how to get there. This can be translated into the notion of desires and intentions. In this particular work, the eight virtual agents have to deal with a live performer, who has his own desires and intentions, unknown to them. The agents must decide whether to try to follow the live performer, or continue with their own plans. To make things more complicated, each agent is given only a short “view” of the outside world (a quarter second, every two seconds) in order to form its individual beliefs about what the performer is doing. Since these beliefs will often be contradictory, the agents end up spending a lot of time arguing, resulting in the occasional indifference to the live performer.
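The observe-and-negotiate loop described above can be sketched in miniature. This is not Eigenfeldt's implementation; it is a toy illustration under stated assumptions, where a "tempo" number stands in for the performer's activity, every class and parameter name is hypothetical, and noisy glimpses stand in for each agent's quarter-second view of the world.

```python
import random

class Agent:
    """Toy agent with its own intention and a belief formed from brief glimpses."""
    def __init__(self, intention):
        self.intention = intention   # the tempo this agent wants to play at
        self.belief = None           # its last noisy guess at the performer's tempo

    def observe(self, performer_tempo, noise=10):
        # A brief glimpse yields only a noisy estimate of what the performer
        # is doing; the noise stands in for the limited quarter-second view.
        self.belief = performer_tempo + random.uniform(-noise, noise)

    def decide(self, threshold=15):
        # Follow the performer if the believed tempo is close enough to this
        # agent's own intention; otherwise continue with the original plan.
        if self.belief is not None and abs(self.belief - self.intention) < threshold:
            return self.belief
        return self.intention

def negotiate(agents, performer_tempo):
    """One observe/decide round; contradictory beliefs mean contradictory votes."""
    for a in agents:
        a.observe(performer_tempo)
    decisions = [a.decide() for a in agents]
    # The ensemble settles on the mean decision: when most agents stick to
    # their own plans, the result is largely indifferent to the performer.
    return sum(decisions) / len(decisions)

random.seed(1)
agents = [Agent(intention=random.choice([90, 120, 150])) for _ in range(8)]
ensemble_tempo = negotiate(agents, performer_tempo=120)
```

The point of the sketch is the tension it reproduces: agents whose intentions already lie near the performer's tempo are pulled toward it, while the rest ignore it, so the negotiated result only partly tracks the human.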

Trois Agents (Brown, Gifford)

As an exploration of interaction and collective expression, Trois Agents is both obvious and mysterious. The work is for two human performers and one virtual performer, but expands texturally to multiple voices through the amplification of gestural expression by computational reflexivity. The trio format is well established in music, and this work continues a recent trend toward the active and deliberate inclusion of a technological musical participant in the ensemble. Andrew Brown and Toby Gifford are both developers of CIM, the virtual musician, and are performers in the trio. They use their unique insights into their virtual partner to provoke and invoke the expressive interaction, but are constantly surprised by the indeterminate responses of CIM which, in turn, stimulate their own improvisational evolution (AB & TG).

Sélah (Gotfrit)

Although it is used 71 times in the Psalms, Sélah is a difficult word to translate. It may be either a liturgico-musical mark or an instruction on the reading of the text, something like “stop and listen” or “pause, and think of that”. Sélah may also have been used to indicate a musical interlude. But Sélah also appears in prayers that were written much later, perhaps even after the meaning was lost. No one really knows what Sélah means.

CIMetrical (Brown, Gifford)

This work explores the interplay between human and virtual keyboard players. It utilizes the CIM software, an autonomous musical agent created by the Queensland Conservatorium music technology research group. This improvisatory performance combines ‘conversational’ and simultaneous interactions to create both a sense of independent musical agency and cohesive ensemble unity.

Zamyatin (Bown)

Zamyatin is a simple improvising system that has been creatively hacked together by its maker in a bricolage manner. It is part of an ongoing study into software systems that act in performance contexts with autonomous qualities. The system comprises an audio analysis layer, an inner control system exhibiting a form of complex dynamical behaviour, and a set of “composed” output modules that respond to the patterned output from the dynamical system. The inner system consists of a bespoke “Decision Tree” that is built to feed back on itself, maintaining both a responsive behaviour to the outside world and a generative behaviour driven by its own internal activity. The system has been tweaked to find interesting degrees of interaction between this responsivity and internal generativity, and then ‘sonified’ through the composition of different output modules. Zamyatin’s name derives from the Russian author whose dystopian vision included machines for systematic composition that removed the savagery of human performance from music. Did he ever imagine the computer music free-improv of the early 21st century?
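The three-layer shape described above (audio analysis feeding a self-feedback decision structure, whose patterned output drives composed sound mappings) can be illustrated with a deliberately tiny sketch. This is not Bown's code; every function, branch, and mapping here is a hypothetical stand-in for the architecture, not its behaviour.

```python
def analyse(sample):
    # Stand-in for the audio analysis layer: reduce raw input to a coarse feature.
    return 1 if sample > 0.5 else 0

class InnerSystem:
    """A tiny decision structure whose output feeds back into its own state,
    mixing responsiveness to input with self-driven (generative) activity."""
    def __init__(self):
        self.state = 0

    def step(self, feature):
        # Branch on both the outside world (feature) and the internal state,
        # then feed the decision back in as the next state.
        if feature and self.state % 2 == 0:
            decision = self.state + 1        # responsive branch
        else:
            decision = (self.state * 2) % 7  # internally generative branch
        self.state = decision
        return decision

def output_module(decision):
    # A 'composed' mapping from patterned decisions to a sound parameter,
    # here simply a MIDI pitch.
    return 60 + (decision % 12)

inner = InnerSystem()
notes = [output_module(inner.step(analyse(s))) for s in [0.9, 0.1, 0.8, 0.2]]
# notes -> [61, 62, 63, 66]
```

Even this toy shows the property the paragraph describes: identical inputs can yield different outputs depending on the system's internal state, so the behaviour is neither purely reactive nor purely self-contained.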