Ch. Z.0: Technological wormholes


Portals of interconnectivity


In this subchapter we will look into how routing solutions and interactivity not only mediate musical interactions, but actively facilitate the dynamic exchange of musical agency among the various participants: musicians, audience and digital systems.


Expanding musical agency through technology

In live performances, approaches such as MIDI routing and tempo synchronization across performance rigs serve as conduits for expanding musical agency. For instance, routing MIDI signals from Alessandra’s electric piano to control virtual instruments in my setup allows for an intertwining of musical expressions that blurs the lines between individual creativity and collective output. Such setups enable performers to dynamically alter each other's soundscapes in real time, facilitating a co-creative and interactive musical experience. For this to work in a purposeful way, the remixician needs to consider the interoperability of the system: the data streams, note values and sync messages must be compatible, and the setup should scale pragmatically and offer flexibility in how control of the musical environment is distributed.
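As a rough illustration of this kind of cross-rig routing, the sketch below (assuming the Python library mido and hypothetical port names) forwards note messages from one performer's MIDI output into a virtual instrument in the other's setup. It is a minimal stand-in for what such routing does, not a description of our exact configuration.

```python
import mido

# Hypothetical port names; the real names depend on the interfaces in each rig.
PIANO_IN = "Alessandra Electric Piano Out"
VIRTUAL_INSTRUMENT_OUT = "Remix Rig Virtual Instrument In"

with mido.open_input(PIANO_IN) as inport, mido.open_output(VIRTUAL_INSTRUMENT_OUT) as outport:
    for msg in inport:
        # Forward only note events, remapped to another channel so her playing
        # drives a designated virtual instrument in my setup.
        if msg.type in ("note_on", "note_off"):
            outport.send(msg.copy(channel=1))  # mido channels are 0-based
```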


One man's output is another's input

When I consider what I have available to remix in a performance, I do not only focus on the audio and how I can manipulate it sonically. I also look at the type of movements, frequencies and dynamic behaviours that occur in the signal, to evaluate the potential of creating a transductive response in my own rig. The most familiar example of this in electronic music performance is probably sidechain compression, where e.g. a kick drum signal is used to duck the volume of a bass line in a DAW, avoiding low-end masking and ensuring that the impact of the kick shines through without running out of headroom in the mix. If we extend this approach by using audio to directly control parameters like an effect chain's dry/wet balance, stereo width or channel volumes, we achieve a transductive interagency that can drastically improve the way musical gestures influence the sonic landscape, introducing a sort of gravity of influence between processes and elements. For more on signal transduction and the use of audio channels to generate parameter control across rigs, see subchapter Y.0 - Transductive strategies, as this pertains more to the operational aspects of signal processing.
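As a simplified sketch of this idea (not any specific plug-in's implementation), the following Python snippet uses an envelope follower to translate the level of incoming audio blocks into a dry/wet control value, so that a percussive hit momentarily pushes an effect wetter and lets it relax again between hits.

```python
import numpy as np

def envelope_to_dry_wet(audio_block, prev_env, attack=0.3, release=0.05, depth=1.0):
    """Map the level of an incoming audio block to a dry/wet control value (0..1).

    A rough envelope follower: it rises quickly on loud material and falls
    more slowly, so a kick or percussion hit pushes the wet level up and
    lets it settle back between hits.
    """
    rms = float(np.sqrt(np.mean(audio_block ** 2)))
    coeff = attack if rms > prev_env else release
    env = prev_env + coeff * (rms - prev_env)
    dry_wet = min(1.0, env * depth)  # clamp to a valid parameter range
    return dry_wet, env

# Example: a decaying burst of noise standing in for a percussion hit.
env = 0.0
for i in range(8):
    block = np.random.randn(512) * (0.5 ** i)
    wet, env = envelope_to_dry_wet(block, env, depth=4.0)
    print(f"block {i}: dry/wet = {wet:.2f}")
```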


Adaptive synchronization and collective timing

Utilizing features like Ableton's tempo follow can dynamically synchronize multiple performance setups, allowing for an adaptive response to live musical inputs. This approach works by designating an audio channel input from which Ableton calculates the most plausible bpm for the master transport's tempo. Given the approximate, fluctuating nature of this technology, the chosen audio source for this function will drastically change the range of temporal shifts in the live set. The upside to this is that, depending on what I choose to route to this channel, I can deliberately make the tempo follow function struggle to catch the tempo, creating fluid acceleration and deceleration in the flow of timed events, arpeggiators, recorded loops and periodical parameter controllers like LFOs and envelope generators.
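The sketch below is not Ableton's actual algorithm, only a toy illustration of why the choice of audio source matters: a bpm is estimated from the spacing of detected onsets and smoothed against the previous estimate, so steady quantized material gives a stable reading while loose polyrhythmic playing makes it drift and jump.

```python
from statistics import median

def estimate_bpm(onset_times, smoothing=0.5, prev_bpm=120.0):
    """Very rough tempo-follow illustration (not Ableton's actual algorithm).

    Takes detected onset times (in seconds), derives the median inter-onset
    interval and smooths the resulting bpm against the previous estimate.
    """
    intervals = [b - a for a, b in zip(onset_times, onset_times[1:]) if b > a]
    if not intervals:
        return prev_bpm
    bpm = 60.0 / median(intervals)
    return prev_bpm + smoothing * (bpm - prev_bpm)

# A steady pulse at 100 bpm (0.6 s between onsets) vs. an uneven phrase.
steady = [i * 0.6 for i in range(8)]
uneven = [0.0, 0.4, 1.1, 1.5, 2.6, 2.9]
print(estimate_bpm(steady))  # 110.0: pulled from the default 120 toward the true 100 bpm
print(estimate_bpm(uneven))  # 135.0 here, and it keeps jumping as new onsets arrive
```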


If Alessandra receives this audio channel for her tempo follow function, I can decide to send quantized, rhythmically stable events like sequenced drums, arpeggiated percussion or synth melody lines, aligning her Digital Audio Workstation in a temporal relation that is closer to my actual bpm. If, however, I choose to route my live percussion audio, where I deliberately oppose the "grid" of time by playing polyrhythmically or even subverting the sense of tempo completely, her DAW will jump significantly between different tempo configurations.

This fluid approach to the temporal dimension can be taken further through bpm tapping during the performance, which, instead of tempo follow, means manual tap control of the DAW's transport tempo. This is not only useful for syncing effects in our rigs to each other's actions, but can also be interacted with creatively through polyrhythmic tapping and extreme nudging (incremental, temporary acceleration or deceleration of the transport speed to offset our Ableton grids relative to each other).
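A minimal sketch of the arithmetic behind these two manoeuvres (hypothetical helper names, not a DAW API): tap tempo averages the most recent tap intervals into a bpm, and a nudge applies a temporary percentage offset to the transport speed.

```python
import time

class TapTempo:
    """Minimal tap-tempo sketch: average the last few tap intervals into a bpm."""

    def __init__(self, max_taps=4):
        self.taps = []
        self.max_taps = max_taps

    def tap(self, t=None):
        self.taps.append(time.monotonic() if t is None else t)
        self.taps = self.taps[-self.max_taps:]
        if len(self.taps) < 2:
            return None
        intervals = [b - a for a, b in zip(self.taps, self.taps[1:])]
        return 60.0 / (sum(intervals) / len(intervals))

def nudge(bpm, percent):
    """Temporary speed offset, e.g. nudge(bpm, +3) while the nudge button is held."""
    return bpm * (1 + percent / 100.0)

tapper = TapTempo()
for t in (0.0, 0.6, 1.2, 1.8):       # taps 0.6 s apart -> 100 bpm
    bpm = tapper.tap(t)
print(bpm, nudge(bpm, +3))           # 100.0, momentarily 103.0 while nudging
```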


A practical example is how I sometimes recontextualize a pulse, loop or periodical movement in Alessandra's output by tapping my tempo in another time signature, not aligning to her quarter notes, but polyrhythmically aligning to the tempo that the other time signature implies.

For instance, she will be playing a motif in 4/4 at 100 bpm. I can then tap into a 4:5 tempo relation to her DAW, fitting five of my beats in the time of four of hers, ending up in 5/4 at 125 bpm. This has to be practiced extensively to ensure enough precision to arrive at the intended relative tempo, but with experience it becomes easier to recognize when success is achieved. This creates co-existing rhythmical layers where even the quantized delays, arpeggiators, loops and other timed events of my rig operate in a challenging, yet logical and intuitive relation to Alessandra's rig.
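The underlying arithmetic can be checked with a tiny helper (hypothetical, purely for illustration): the tempo I need to tap is the base tempo scaled by the ratio of my beats to hers.

```python
from fractions import Fraction

def related_tempo(base_bpm, my_beats, their_beats):
    """Tempo to tap so that `my_beats` of mine fill the same time as `their_beats` of theirs."""
    return base_bpm * Fraction(my_beats, their_beats)

# Alessandra plays in 4/4 at 100 bpm; five of my beats against four of hers:
print(related_tempo(100, 5, 4))   # 125 -> my 5/4 layer runs at 125 bpm
# The inverse relation, four of my beats against five of hers:
print(related_tempo(100, 4, 5))   # 80
```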


From Control to Collaboration

These technological facilitations shift the traditional roles of musicians from sole controllers of their instruments to collaborators in a broader auditory experiment. Try to imagine this less as controlling each other and more as distributing creativity, sharing the responsibility for where the music is heading and how it is expressed. These approaches can help us arrive at more complex and challenging soundscapes and musical narratives in a spontaneous way. Remember that it might require performative cues, hand gestures or other communication to ensure that rerouting of signal flows or drastic interagency happens as a collaborative effort.

Electronic music software and hardware usually operate in a master/slave paradigm, so one should make it possible to change this configuration on the fly, so as not to lock one musician into a static hierarchy of reduced agency.
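As a sketch of what such an on-the-fly role change could look like at the MIDI clock level (assuming the Python library mido, with port handling left aside), the same rig can either emit clock as tempo leader or derive its tempo from incoming clock as follower, and the two roles can be swapped mid-set when a cue signals handing over tempo leadership.

```python
import time
import mido

CLOCKS_PER_BEAT = 24  # standard MIDI clock resolution

def lead_clock(outport, bpm, beats):
    """Act as tempo leader: emit MIDI clock messages at the given bpm."""
    interval = 60.0 / (bpm * CLOCKS_PER_BEAT)
    for _ in range(beats * CLOCKS_PER_BEAT):
        outport.send(mido.Message("clock"))
        time.sleep(interval)

def follow_clock(inport, beats):
    """Act as follower: estimate bpm from incoming MIDI clock messages."""
    ticks, start = 0, time.monotonic()
    for msg in inport:
        if msg.type == "clock":
            ticks += 1
            if ticks >= beats * CLOCKS_PER_BEAT:
                break
    elapsed = time.monotonic() - start
    return 60.0 * (ticks / CLOCKS_PER_BEAT) / elapsed

# Swapping roles mid-set then simply means calling the other function
# on the agreed cue, instead of leaving one rig permanently in charge.
```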