Ch. X.0:

Facilitation for the unknown

Above is an example of "virtual order and hierarchy" in Ableton. The "Live in group" gives me the option to make my looper plugin "listen" to different audio sources on the fly by turning the individual tracks on and off - an internal, virtual equivalent of Jan Bang's method of using an external analog mixer to decide which audio channels enter his Akai sampler.
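In conceptual terms (this is not Ableton's internal code, only an illustration), that routing logic amounts to a gated sum: the looper receives the mix of whichever tracks are currently switched on. The track names and on/off states below are hypothetical.

```python
import numpy as np

# Hypothetical per-track audio blocks (mono, one processing block each)
rng = np.random.default_rng(0)
tracks = {
    "wavedrum": rng.standard_normal(512) * 0.1,
    "synth": rng.standard_normal(512) * 0.1,
    "contact_mics": rng.standard_normal(512) * 0.1,
}

# Turning tracks on/off acts like the channel switches on an analog mixer
active = {"wavedrum": True, "synth": False, "contact_mics": True}

# The looper "listens" only to whatever is currently switched on
looper_input = sum(tracks[name] for name, on in active.items() if on)
print(looper_input.shape)  # (512,) - the summed block entering the looper
```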

Notice the A and B crossfader designations of the Looper group's track layers (L1, L2, L3, ...), enabling gradual morphing between chronological layers of loops. In this way, I can return to the origin of the initial musical idea (position A) or remove the starting points and focus only on the last added layers (position B).
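The morphing itself can be thought of as a crossfade between two summed banks of layers. Below is a minimal sketch assuming an equal-power crossfade law (Ableton's actual crossfader curve is configurable and may differ); the layer assignments are illustrative.

```python
import numpy as np

def crossfade_banks(bank_a: np.ndarray, bank_b: np.ndarray, x: float) -> np.ndarray:
    """Equal-power crossfade: x = 0.0 -> only bank A (early layers),
    x = 1.0 -> only bank B (later layers)."""
    gain_a = np.cos(x * np.pi / 2)
    gain_b = np.sin(x * np.pi / 2)
    return gain_a * bank_a + gain_b * bank_b

# Hypothetical loop layers: L1-L2 assigned to A, L3-L4 assigned to B
layers = [np.sin(2 * np.pi * f * np.linspace(0, 1, 48000)) for f in (110, 220, 330, 440)]
bank_a = layers[0] + layers[1]   # the origin of the musical idea
bank_b = layers[2] + layers[3]   # the latest additions

halfway = crossfade_banks(bank_a, bank_b, 0.5)  # both banks at roughly 0.707 gain
```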

Virtual proprioception


The ability to conceptualize and visualize what is happening in highly interconnected environments is paramount for success in handling a live remix. The placement of gear needs to be ergonomic and intuitive in the space, the signal flow in input/output matrices logically accounted for, chair and table heights adjusted, strings tuned and drum skins tightened; the list goes on.


The virtual internal spaces we interact with require, in addition, a similar awareness and embodied facilitation. Internalizing all the options of choice “inside” the computer/DAW/audio interface can be seen as analogous to exercising one's virtual proprioception: the awareness of where one is moving in the virtual space.


When operating a Digital Audio Workstation in a live performance, the less one is required to look at the computer screen, the more visual capacity one has available for interacting with the other musicians in the performance, as well as with the audience. Ideally, one should know exactly what happens inside the DAW when a specific gesture is performed on the controller, so that the gesture eventually becomes part of muscle memory and requires no visual attention.


For this degree of tactility to be possible, the mapping design of one’s controller must be made intuitively understandable for the body: choosing the control type (fader, potentiometer, pressure pad, etc.) that best approximates the feeling of the effect or gesture, scaling the parameters correctly, and considering ergonomics and practicality in different settings and situations. Tip: see subchapter Y:0 - Transductive strategies.
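"Scaling the parameters correctly" usually means more than a straight linear map of the controller's 0-127 range: for gain- or frequency-like parameters, a logarithmic/exponential curve tends to feel more even under the fingers. A minimal sketch of the two mapping curves (the ranges here are illustrative, not my actual mappings):

```python
def scale_linear(cc: int, lo: float, hi: float) -> float:
    """Map a 7-bit MIDI CC value (0-127) linearly onto [lo, hi]."""
    return lo + (cc / 127.0) * (hi - lo)

def scale_exponential(cc: int, lo: float, hi: float) -> float:
    """Map 0-127 onto [lo, hi] with an exponential curve, useful where
    equal finger travel should feel like equal musical change."""
    return lo * (hi / lo) ** (cc / 127.0)

# Example: a filter cutoff mapped linearly vs. exponentially
print(scale_linear(64, 20.0, 20000.0))        # ~10089 Hz - most of the travel is wasted up high
print(scale_exponential(64, 20.0, 20000.0))   # ~650 Hz - a musically even sweep
```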


Below is a screenshot of how I distribute the signals from a selection of open audio channels in an input group into operationally distinct tracks, each with its own processing chain. Open is my completely dry channel; Cutter has three chains that produce stuttering, repeating, temporally cutting effects on the signal; Freeze combines a real-time half-time-inducing delay with a granular freeze effect engaged above a certain amplitude threshold (the detection principle is sketched below); PAD creates spacious reverberations and granular feedback sustains; and Rytmik combines two experimental granular delay plugins that bring rhythmic clouds of a more rhizomatic kind.
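The amplitude-threshold condition on the Freeze track is, in essence, an envelope follower compared against a threshold. The sketch below shows only that detection stage, assuming a simple one-pole follower; the actual freezing is done by the plugins in the chain.

```python
import numpy as np

def freeze_trigger(signal: np.ndarray, threshold: float, coeff: float = 0.999) -> np.ndarray:
    """Return a boolean per sample: True where the smoothed amplitude
    envelope exceeds the threshold and the freeze effect should engage."""
    env = 0.0
    triggers = np.zeros(len(signal), dtype=bool)
    for i, x in enumerate(signal):
        env = max(abs(x), coeff * env)   # fast attack, slow exponential release
        triggers[i] = env > threshold
    return triggers

# Hypothetical input: quiet noise with a loud burst in the middle
sig = np.concatenate([np.random.randn(1000) * 0.01,
                      np.random.randn(200) * 0.8,
                      np.random.randn(1000) * 0.01])
print(freeze_trigger(sig, threshold=0.3).any())  # True - the burst engages the freeze
```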

These tracks are then routed to return channels G and H in order to separate the dry and wet signals into two "banks", Rød and Blå (red and blue). They can be further processed at the group level, for example applying a tremolo effect to all the wet, effected signals at once. Lilla (purple) is the return of my external effect pedal loop, which can receive signal from the input group or other groups, and can also introduce a partial feedback loop by being re-routed back into the input group.
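Re-routing the Lilla return into the input group is, in signal-flow terms, a feedback loop; keeping it partial - a gain comfortably below unity somewhere in the loop - is what keeps it controllable rather than runaway. A minimal block-based sketch of that principle, with hypothetical gain values:

```python
import numpy as np

def process_block(block: np.ndarray, feedback_buf: np.ndarray, fb_gain: float = 0.4):
    """Mix the previous pedal-loop return back into the input group.
    fb_gain < 1.0 keeps the loop 'partial' so it decays instead of running away."""
    input_group = block + fb_gain * feedback_buf
    pedal_return = input_group * 0.9        # stand-in for the external pedal chain
    return input_group, pedal_return

block_size = 256
silence = np.zeros(block_size)
fb = np.zeros(block_size)

# Feed one impulse in, then silence: the loop's energy decays block by block
out, fb = process_block(np.eye(1, block_size).flatten(), fb)
for _ in range(5):
    out, fb = process_block(silence, fb)
    print(round(float(np.max(np.abs(out))), 4))  # shrinking peaks -> a stable loop
```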

Templating a virtual nodular system

Nodular is a hybrid term I coined to signify the combination of the multidirectionality of node-based signal routing with the interconnected flexibility of modular systems.

In modular systems, you are typically limited to the inputs and outputs of each module. In contrast, a nodular approach allows for more creative routing, tapping into and redirecting signals in unconventional ways, creating technological "wormholes" of dynamic response within one's rig.
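One way to picture the difference is as a routing graph: in a strictly modular patch, connections only run from one module's output to another module's input, whereas a nodular layout lets any point be tapped and sent anywhere, including backwards. A minimal sketch of such a graph as a plain adjacency structure; the node names are hypothetical, loosely echoing the tracks described above.

```python
from collections import defaultdict

# Each key is a tap point; each value lists where its signal is sent.
# Any node can feed any other, including back towards the "input" side.
routing = defaultdict(list)
routing["input_group"] += ["cutter", "freeze", "pad", "master"]
routing["cutter"] += ["return_blaa", "pedal_loop"]   # wet bank + external loop
routing["freeze"] += ["return_blaa"]
routing["pad"] += ["return_blaa", "freeze"]          # a "wormhole": effect feeds effect
routing["pedal_loop"] += ["input_group"]             # partial feedback re-entry
routing["return_blaa"] += ["master"]

def downstream(node: str, seen=None) -> set:
    """Everything a node's signal can eventually reach (cycles allowed)."""
    seen = set() if seen is None else seen
    for dest in routing[node]:
        if dest not in seen:
            seen.add(dest)
            downstream(dest, seen)
    return seen

print(sorted(downstream("pad")))  # ['freeze', 'master', 'return_blaa']
```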

 

Facilitating a full-spectrum live musical experience in an improv duo format requires flexible, multidirectional design of signal paths, clever mapping of controllers, and the ability to dynamically change tempo, tonal centers and timbres, to tame or boost frequencies, and to push dangerously close to peak volumes and chaotic feedback while retaining headroom for nuance. Being able to quickly swap between the instruments at hand, or to control several parameters at once, demands transductive efficiency, a term I use to describe how much effort it takes to transduce a gesture, intentional movement or sensory signal into its new medium. All of these aspects, regardless of how the music ends up sounding, determine whether operating such a system feels chaotic or organized.

Creating music in this way induces deep presence and sensitive listening, encourages bold leadership, the capacity for following, multitasking and responsivity, and challenges one's ability for proactive and intuitive choice-making.

Dedication to evolving one's rig design is what increases performative capacity. Retaining intent eye contact whilst managing complex sets of parameters in both virtual and physical space can only truly happen once these movements are coded into one's muscle memory after years of integration and trial and error, mixed with great ambition and, simultaneously, an awareness of one's own and the rig's limitations and boundaries.


In my performance rig, there are several layers of signal chains available to me, from both external (outside the DAW/interface) and internal (within the DAW/interface) origins.

The interface: Antelope Discrete 8 Synergy Core
It has a Thunderbolt connection, resulting in 4-5 ms overall latency at its best configuration with my DAW (low buffer size and 48 kHz sample rate). Responsiveness is crucial for a high enough temporal "resolution" to achieve complex polyrhythms and precise interaction.
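The 4-5 ms figure is consistent with a small buffer at 48 kHz plus converter latency. A rough back-of-the-envelope calculation, with assumed (not manufacturer-published) buffer and converter values:

```python
sample_rate = 48_000          # Hz
buffer_size = 64              # samples (an assumed "low" setting)
converter_ms = 1.5            # assumed combined AD/DA converter latency

buffer_ms = buffer_size / sample_rate * 1000      # ~1.33 ms per buffer
round_trip_ms = 2 * buffer_ms + converter_ms      # input buffer + output buffer + converters

print(f"{round_trip_ms:.1f} ms round trip")       # ~4.2 ms, in the quoted 4-5 ms range
```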

Inputs (analog):
Channel 1 - 2 - stereo return from the outboard FX pedal loop.
Channel 3 - 4 - stereo Korg Wavedrum (electronic drum instrument).
Channel 5 - 6 - Contact microphones for percussion/triggers.
Channel 7 - Waldorf Pulse 2
Channel 8 - piezo mic on a percussive notebook pad

Inputs (ADAT):
Channel 1 - 8 - Return channels of Alessandra's external sources (main audio stereo channel, microphone, Leaf Audio Soundbox, Yamaha CP, piano microphones) and internal Ableton Live channels routed out via ADAT.


Outputs (analog):

Channel 1 - 2 - stereo master out

Reamp 1 - mono send to outboard FX loop (guitar effect pedals, preamps, etc.)

Reamp 2 - mono send of the dry kick drum signal to the studio (for post-production mix emergencies)


Outputs (ADAT):

Channel 1 - 2 - stereo master channel for Alessandra to live sample or process my output

Channel 3 - 4 - live aux sends for various elements or loops from my DAW, or for sending sidechain signals and a tempo-following audio signal to Alessandra.
Channel 5 - 6 - sends from the return channels of my DAW

In the screenshot below, you can see the Ableton routing of the different sources and the return channel group to outputs corresponding to my ADAT virtual channels (11/12, 13/14, 15/16).