Embodied/Encoded:

A Study of Presence in Digital Music

4.     Live Performance

 

While physical presence is an affordance of the acousmatic scenario given its disposition toward virtual space, live performance situations involving human bodies require attention to different modes of presence. In certain situations, the performer can be ‘transplanted’ through digital-spatial modifications or the incorporation of pre-recorded sounds. But the sense of ‘being somewhere’ in this context is anchored by the embodied subject, whose social presence encompasses and guides layers of meaning. Of central importance, then, is the relation between the performer—their movements, expressions, and sounds—and disembodied, acousmatic sound. Within this framework, electronic components can function as tightly coupled extensions of the instrument, invisibly emanating from the performer’s actions, or as autonomous agents with separate identities, inviting ‘other’ spaces and sources into the experience. This collision of real and virtual entities is both the promise and the pitfall of the ‘mixed’ medium.

 

In an influential article, Simon Emmerson states: “in live works the instrument is the anchor and we can never for long leave the realm of its influence: it cannot tolerate digressions into anecdote. We always refer back to its presence.” Furthermore, “we expect a type of behaviour from an instrument that relates to its size, shape, and known performance practice.”39 Biocca suggests social presence occurs when:

 

users feel that a form, behavior, or sensory experience indicates the presence of another intelligence. The amount of social presence is the degree to which a user feels access to the intelligence, intentions, and sensory impressions of another.40

 

In my collaborations with performers, presence is no longer confined to virtual experience; it emerges from connections between real bodies in space: exhibited visual performance gestures, the sounds of a live acoustic instrument, disembodied electronics, and the acoustic environment.

 

My concerns in this genre have been torn between ‘classical’ models of composer/performer relationships41 and a desire for improvisation—for autonomy on the part of all collaborators and the possibility of de-coupling the electronic medium from other sounding agents. This does not necessarily involve digressions into anecdote, but it does bring into question the nature and scope of the influence instruments hold in relation to the electronics, and the musical integrity of the combination.

 

I am most inspired by live performances from artists who cultivate personal, idiosyncratic practices on their instruments, resisting formalization; many work in free-improvisation traditions. The branch of this artistic research concerned with performance can be viewed as a succession of efforts to negotiate these diverging practices (composer/performer vs. distributed agency) with specific collaborators. Projects will be presented chronologically, with media from rehearsal and concert settings.

 

4.1.  Fall 2021

 

The following describes my initial conception for a live electronic music performance system:


The embodied performer drives a digital system which extracts and reduces information from the performance to generate digital responses. The relationships between performance gesture and computer response are not always linear and tend to elicit interesting dynamics as the performer attempts to predict or influence what will happen. I have the ability to intervene and guide the system – to push it in different directions.

 

The first instance of live performance for Embodied/Encoded took place on October 2, 2021, in concert with mezzo-soprano Marika Schultze (Example 18). The Max instrument featured real-time comb filtering, pitch-shifting, variable echo with onset triggering, recording and granulation with non-real-time onset analysis, and spatialisation in HOA with the Spat5 package from IRCAM. At the end of the signal chain, I used an amplitude gate to correlate sustained input with electronic textures in a direct and simple way.
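The gate logic can be summarized outside of Max. The following Python sketch is a minimal approximation of the idea, not the concert patch: an envelope follower tracks the live input, and the electronic texture is scaled by a smoothed open/closed gate. All names, values, and the equal-length-array assumption are illustrative.

```python
import numpy as np

def amplitude_gate(live, texture, sr=48000, threshold=0.05,
                   attack_ms=10.0, release_ms=200.0):
    """Pass the electronic texture only while the live input sounds.

    Sketch only (illustrative parameters): a peak follower tracks the
    live signal, and the texture is scaled by a smoothed gate so the
    electronics sound with, and decay shortly after, sustained input.
    Assumes `live` and `texture` are equal-length float arrays.
    """
    a_att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env, gate = 0.0, 0.0
    out = np.zeros_like(texture)
    for n in range(len(texture)):
        env = max(abs(live[n]), env * a_rel)      # peak follower with decay
        target = 1.0 if env > threshold else 0.0  # gate open/closed
        coeff = a_att if target > gate else a_rel
        gate = coeff * gate + (1.0 - coeff) * target  # de-click smoothing
        out[n] = texture[n] * gate
    return out
```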

In the excerpt, a short vocal passage is recorded and ‘sliced’ according to onsets. The slices are then iterated in a pseudo-random order, circling the audience. Coming in and out of focus with variable spatial movement is a percussive, granulated texture with which the vocalist interacts, supported by an ambient bed of recently sung pitches. Occasionally the variable echo is triggered by a strong attack. The excerpt closes with the amplitude gate on the accumulated electronic texture, creating a stark and dramatic correlation between live and electronic components.

 

One challenging aspect of the performance was my limited ability to interact with the voice in a performative way. Due to severe time constraints, I could not integrate a meaningful control system such as a MIDI device, and was restricted to the basic laptop interface. This limitation is addressed in later projects.


Example 18

Marika Schultze, excerpt from Presence & Pattern concert, Norwegian Academy of Music, October 2, 2021

4.2.  Spring 2022

 

An important turning point in my live performance instrument design was the use of FluCoMa tools to extract different sound parameters from the performer to shape modulations or trigger events. Parsing out performance information in a modular way provided a wide range of possibilities for electro/acoustic interaction. While the non-real-time fluid.bufonsetslice~ object was used to slice recorded passages, a real-time onset trigger (fluid.onsetslice~) could be routed to any module, as could running analyses of pitch and spectral content (fluid.pitch~ and fluid.spectralshape~).
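As a point of reference, the offline slicing stage can be approximated with a simple spectral-flux detector. The sketch below is a simplified stand-in for fluid.bufonsetslice~, not the FluCoMa algorithm itself; the threshold, window sizes, and omission of peak-picking are illustrative simplifications.

```python
import numpy as np

def onset_slices(x, sr=48000, n_fft=1024, hop=256, thresh=1.5):
    """Slice a recorded buffer at onsets (simplified stand-in for
    fluid.bufonsetslice~, not the FluCoMa algorithm).

    An onset is taken wherever the half-wave-rectified spectral flux
    exceeds `thresh` times the median flux across the buffer; a real
    detector would also enforce a minimum spacing between onsets.
    """
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
    flux = np.maximum(mags[1:] - mags[:-1], 0.0).sum(axis=1)
    onset_frames = np.where(flux > thresh * (np.median(flux) + 1e-12))[0] + 1
    onset_samples = (onset_frames * hop).tolist()
    # cut the buffer into slices between consecutive onsets
    bounds = [0, *onset_samples, len(x)]
    return [x[a:b] for a, b in zip(bounds[:-1], bounds[1:]) if b > a]
```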

 

These techniques were devised specifically for a collaboration with percussionist Ingar Zach in February 2022. Example 19 features two excerpts from our first session. In this modular Max arrangement, sections of audio are recorded, sliced according to onsets, and sent to a granular synthesizer which is also triggered by incoming onsets. Other modules, which could be freely interconnected, included a comb filter tuned to a multiple of the spectral centroid, a delay-chain which shifts pitch wildly with spectral fluctuations of the input, an FM unit, and a phase-vocoder for spectral ‘freezing’ effects. Modules were generally responsive to onset triggers, each with independent parameters for the probability of triggering.
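The trigger routing reduces to a simple probabilistic fan-out, sketched below in Python; module names and probabilities are hypothetical, and the actual arrangement was built in Max.

```python
import random

# Stand-ins for the patch's effect modules (names and values illustrative).
MODULES = {
    "granulator": 0.8,
    "comb filter": 0.3,
    "fm unit": 0.15,
    "spectral freeze": 0.05,
}

def dispatch_onset(modules=MODULES):
    """Fan one detected onset out to every module; each responds
    independently according to its own trigger probability, as in the
    modular arrangement described above."""
    return {name: random.random() < p for name, p in modules.items()}

# e.g. one onset might fire the granulator and comb filter but not the rest
print(dispatch_onset())
```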

Example 19 

Ingar Zach and Robert Seaback, Norwegian Academy of Music - Studio 1, February 16, 2022


front-end of modular Max instrument

I later tested this system more comprehensively with electric guitar and increased the degree of interactivity through automated responses. Example 20 is a collection of short improvisations from late February 2022 in which I did not manipulate the system in any way other than with the sounds from my instrument.

 

A MIDI controller was added before a second session with Ingar so that I could engage more spontaneously and musically with the system and with collaborators. The Akai MPD226 is a simple unit with a bank of faders and knobs, and touch-sensitive pads instead of a keyboard. Example 21 demonstrates the mapping of collections of data (in the form of patch presets) to knob positions in order to interpolate between different effect settings. I am performing the interpolations with Ingar on percussion and electronics.
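The interpolation itself is straightforward: a knob position is mapped to a linear blend between two stored parameter sets. A minimal sketch with hypothetical parameter names follows; frequency-like parameters would arguably blend more naturally on a logarithmic scale.

```python
# Two stored patch presets (parameter names are hypothetical).
preset_a = {"grain_size_ms": 40.0, "comb_freq_hz": 220.0, "feedback": 0.2}
preset_b = {"grain_size_ms": 400.0, "comb_freq_hz": 1760.0, "feedback": 0.85}

def interpolate_presets(knob, a=preset_a, b=preset_b):
    """Map a MIDI knob value (0-127) to a linear blend of two presets."""
    t = knob / 127.0
    return {k: (1.0 - t) * a[k] + t * b[k] for k in a}

# e.g. a knob at its midpoint yields parameters midway between the presets
print(interpolate_presets(64))
```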

Example 20.1

Example 20.2



Example 21

Ingar Zach and Robert Seaback, Norwegian Academy of Music - Studio 1, April 20, 2022

4.3.  Fall 2022

 

My collaboration with Ingar Zach culminated in an improvised performance at the Norwegian Academy of Music on November 4, 2022.

 

The instrument I developed for the concert diverged from the modular assemblage explored earlier in the year. Instead of processing a live audio input and re-presenting it in some way, my approach was to use non-sounding data streams from the input to drive an otherwise autonomous synthesis engine. This came after a period of experimentation with recordings of my sessions with Ingar, in which I used audio excerpts to audition different acoustic/electronic combinations. Because Ingar already generates an immersive, broadband sound with his instrument, I envisioned a digital counterpart that would map onto his sonorities without crowding the primary resonances of the drums, filling out spectral and physical space.

rehearsal with Ingar Zach, Norwegian Academy of Music, November 3, 2022

photos: Ingo J. Biermann

My initial intention was to find a way to transition gradually between looping patterns and complex synthetic timbres. I implemented a type of wave-terrain synthesis using the 2d.wave~ object in Max. Wave terrain is an extension of ordinary wavetable synthesis, adding an additional axis through which the wavetable can be scanned. My instrument took a soundfile input and sliced it according to onset times. Each slice was treated as a ‘2D’ wavetable with a small scan window (the x-axis) which could move (via the y-axis) across partitioned sections of the slice. A sawtooth wave (phasor~ in Max) was used to scan the x-axis at regular intervals, while low-frequency noise was used to scan the y-axis. The input sample could thus be treated as a shifting, looping pattern or, at scan rates in the audio range, as a synthetic tone. The scan rate of the y-axis affected the inner activity of the sound, at high rates generating broadband noise. Each wavetable contained a window function and overlap parameters. Sounds were either sent straight to the system output or further processed with filters and delay-based effects.
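The scanning scheme can be sketched in code. The following Python approximation illustrates the principle only and is not the 2d.wave~ implementation (it omits the window function and overlap parameters, and all rates and sizes are illustrative): a slice is partitioned into rows, a phasor scans within a row, and slow filtered noise drifts across rows.

```python
import numpy as np

def wave_terrain(slice_, sr=48000, dur=2.0, scan_hz=110.0,
                 rows=16, noise_hz=0.5):
    """Minimal sketch of the 2D-wavetable scan described above.

    The slice is partitioned into `rows` wavetables (the y-axis); a
    phasor scans each row at `scan_hz` (the x-axis) while slow,
    lowpass-filtered noise drifts across rows. Audio-rate scan_hz
    yields a tone; slow rates give a shifting, looping pattern.
    """
    n = int(sr * dur)
    row_len = len(slice_) // rows
    table = slice_[:rows * row_len].reshape(rows, row_len)
    # x-axis: phasor (0..1) at the scan frequency
    x = (np.arange(n) * scan_hz / sr) % 1.0
    # y-axis: noise smoothed by a one-pole lowpass at noise_hz
    y = np.zeros(n)
    state, coeff = 0.0, np.exp(-2 * np.pi * noise_hz / sr)
    for i in range(n):
        state = coeff * state + (1 - coeff) * np.random.uniform(-1, 1)
        y[i] = state
    y = (y - y.min()) / (np.ptp(y) + 1e-12) * (rows - 1)
    # bilinear read: interpolate within a row and between adjacent rows
    xi = x * (row_len - 1)
    x0 = xi.astype(int)
    x1 = np.minimum(x0 + 1, row_len - 1)
    fx = xi - x0
    r0 = y.astype(int)
    r1 = np.minimum(r0 + 1, rows - 1)
    fy = y - r0
    s0 = table[r0, x0] * (1 - fx) + table[r0, x1] * fx
    s1 = table[r1, x0] * (1 - fx) + table[r1, x1] * fx
    return s0 * (1 - fy) + s1 * fy
```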

In the concert version, the fundamental pitch of the wavetable is determined by a mic input, and a modulable delay-chain reacts to spectral fluctuations. We were positioned in the center of the concert hall—the intention being an open directional focus as sounds emanating from the center collided with those from the perimeter. An 8.2 channel system was arranged for this performance.

Robert Seaback and Ingar Zach

concert excerpt, Norwegian Academy of Music, November 4, 2022

video: Ingo J. Biermann

4.4.  Spring 2023

 

I produced a concert at Levinsalen, Norwegian Academy of Music, on March 8, 2023, featuring acousmatic works in ambisonic format. This event also served as a test ground for the generative or ambient properties of my wave-terrain synthesis instrument, presented under the title stasis (see Appendix A). This involved setting up three to four sonic layers within the system, which sustained and subtly evolved as the audience entered the concert hall.

 

The gradual changes were the result of three parameters: independent amplitude modulation of each wavetable sub-voice (however many are overlapping), amplitude modulation of each primary wavetable voice, and performed changes to the patch via MIDI control. The use of AM on sub-voices was inspired by the parameter of octaviation found in Csound’s fof synthesis unit generators. In an fof grain stream, the gradual attenuation of alternate grains causes a pitch-drop of one octave.42 In my approach, each ‘grain’ is amplitude modulated by a sine wave at a user-specified frequency. With slow and dissonant modulation rates, interesting entanglements occur between pitch and pulse as voices fuse and pull apart, as heard in Example 22.
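The following Python sketch illustrates the sub-voice AM in isolation (it is not the Max patch, and all parameters are illustrative): each windowed sine grain takes its amplitude from a sine LFO. The octave drop of fof-style octaviation follows directly: silencing alternate grains halves the effective repetition rate, while slow, non-integer-related LFO rates instead pull the stream between pitch and pulse.

```python
import numpy as np

def grain_stream(sr=48000, dur=2.0, f0=200.0, grain_hz=100.0, am_hz=3.3):
    """Sketch of sub-voice amplitude modulation in a grain stream.

    Overlapping windowed sine grains repeat at `grain_hz`; each grain's
    amplitude is sampled from a sine LFO at `am_hz`. With am_hz at half
    of grain_hz, alternate grains vanish and the pitch drops an octave
    (fof-style octaviation); dissonant, slow rates entangle pitch/pulse.
    """
    n = int(sr * dur)
    out = np.zeros(n)
    period = sr / grain_hz                 # samples between grain onsets
    grain_len = int(period) * 2            # 2:1 overlap between grains
    t = np.arange(grain_len) / sr
    grain = np.sin(2 * np.pi * f0 * t) * np.hanning(grain_len)
    start = 0.0
    while start + grain_len < n:
        i = int(start)
        amp = 0.5 * (1 + np.sin(2 * np.pi * am_hz * start / sr))
        out[i:i + grain_len] += amp * grain
        start += period
    return out
```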

Example 22.1

Example 22.2

 

Approximately ten minutes of this material saturated the space before the formal concert program of fixed-duration works. I was inspired by experiences in my home studio, where I would audition a few combinations and sit for many minutes with the slowly shifting texture or walk around the room to find different resonances.


The work diverges aesthetically from the others of Embodied/Encoded because it does not function as a window into somewhere, but rather performs the absolute neutralization of space through multi-leveled microsonic deconstruction and resynthesis. The sound world can be described as self-contained and the form as non-dialectic.43

concert setup for New Electronic Music, March 8, 2023

Self-presence could be related to this introspective listening, but as a kind of ‘awareness’ it bypasses the unconscious type of body response, which arises from corporeal sound structures in the presence or absence of recognizable sources. Just as certain expressions of American minimalism mirror natural processes of the environment, stasis embodies organicism in its formal unfolding, akin to waves crashing or wind patterns.


4.5.  Fall 2023

 

A central practice within Embodied/Encoded research has been the translation of ambisonic works to different site-specific speaker configurations for concert presentation. Works were presented over the Fellowship period in three concerts at NMH and at international events in France, Ireland, Austria, and Portugal. The Ph.D. concert, titled Being Somewhere, was held on September 17, 2023 at NMH as part of Ultima Contemporary Music Festival, Oslo (see Appendix A). 

Sixteen loudspeakers and two subwoofers were employed for the Ultima performance, which allowed ambisonic resolution up to the 7th order in a 2D configuration. I measured the speaker distances and angles during setup to construct a 6OA decoder (limited for practical reasons) in Max with the Spat5 package, calibrated with appropriate time delays and gain adjustments. 
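The delay and gain calibration follows the standard distance-compensation scheme: nearer speakers are delayed so that wavefronts align at the listening position, and attenuated to the level of the farthest speaker. A minimal sketch follows; the concert decoder itself was realized with Spat5 in Max, and the speaker distances below are hypothetical.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def calibrate(distances_m):
    """Per-speaker delay and gain compensation for an irregular array.

    Nearer speakers are delayed so all wavefronts arrive together at
    the reference point, and attenuated (1/r law) to match the level
    of the farthest speaker.
    """
    d = np.asarray(distances_m, dtype=float)
    d_max = d.max()
    delays_ms = (d_max - d) / SPEED_OF_SOUND * 1000.0
    gains_db = 20.0 * np.log10(d / d_max)   # negative for nearer speakers
    return delays_ms, gains_db

# e.g. three speakers at 3.0 m, 4.5 m, and 5.0 m from the listening point
print(calibrate([3.0, 4.5, 5.0]))
```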


Following the principles of open science, HOA files of Embodied/Encoded acousmatic works can be downloaded here from the NMH Brage digital repository. As described in 2.1., final mixes of acousmatic works consisted of 3OA stems from the SPS200 and 5-6OA stems from multichannel files synthesized as virtual ambisonic sources. Care should be taken at the decoding stage to ensure correct, high-fidelity reproduction. Refer to Appendix B for details on proper decoding strategies for the files in Brage.

Being Somewhere concert

photo: Ingo J. Biermann

Being Somewhere featured the premiere of World Within One Step for flute, piccolo, and electronics, performed by Alessandra Rombolà. Conceptually, World Within One Step is about finding a world within a world—about complexity observed in spaces already conceived as singular or elementary units, hinting at the limited scale of human perception. Small, repetitive gestures undergo constant changes in inflection and shade. Subtle movements in pitch unfold and blossom into new sonorities. A parallel world appears to exist below our level of sensibility and control. The title is also suggestive of a near-immediate encounter with something vast, immersive, and possibly unknown.


I started working with Alessandra in May 2023. I had seen her perform in Oslo on a few different occasions in both improvised and composed contexts. Fluent in both modes, Alessandra also commands a wide range of extended techniques. Our collaboration began with me recalling certain parts of her concert improvisations. She demonstrated various sounds and techniques based on my verbal descriptions, and directed me to related repertoire. I narrowed the focus to some specific musical ideas around glissando, multiphonic tremolo, percussive articulation, and breath noise. We made recordings of this material at NMH, Studio 1, on May 15 and June 8, 2023. Some improvised ‘etudes’ from the sessions were based on my hand-drawn graphic notation (see Figure 2) and were fleshed out in the next stages.

 

I composed a fixed electronic part with the recordings and other sounds in the summer months that followed, and developed a delay-based instrument in Max for real-time processing. I then created a text/graphic score for flute and piccolo over the timeline of the fixed electronic part, responding to notes from the studio sessions. This score was workshopped intensively in the weeks prior to the performance, with a number of new ideas introduced by Alessandra, all of which increased the synergy between acoustic and electronic elements. The original score and edited performance versions are shown in Figure 3. The most critical reworkings came toward the end of the piece: at time marker 5:20 on page 6, for example, my suggestions for multiphonics and trumpet embouchure were replaced with low-register pitches and pizzicato techniques, cohering more effectively with the percussive electronic articulations.

promotional image for Being Somewhere concert, 2023 Ultima Oslo Contemporary Music Festival


Figure 2

graphic notation sketch for studio recording session


Figure 3

graphic score for World Within One Step. Original version (left) and edited version by Alessandra Rombolà (right).


World Within One Step

Alessandra Rombolà - flute, piccolo

  September 17, 2023

video: Ingo J. Biermann

Figure 4.1

ten delay units in series with modulation parameters

Figure 4.2

individual delay cell

In addition to samples of the flute, the fixed electronic part was derived from a chaotic analog synthesizer and drum machine (the SOMA synths listed in section 3.5). Two Max instruments were used as the primary sound processing/generating tools: the first was the previously described wave-terrain synthesizer, the other a specially constructed modulable delay-chain. In the latter case, the signal is sent to a chain of ten delay units in series—each unit contains feedback with an envelope function triggered by an external metronome (Figure 4). Modulations to delay time, envelope rate, and feedback parameters create an expansive palette of sounds. This technique can be heard in World Within One Step in the gestural materials of the fixed part and in the live processing. Part of the performance involved changing the delay presets in real time to elicit different responses from the performer. Presets, each consisting of a large collection of parametric data, were mapped to knobs on the MPD226, enabling interpolation from one to another.
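The structure of Figure 4 can be summarized in code. The Python sketch below approximates one delay cell and the series chain; it is a simplification of the Max instrument, with illustrative parameters and a one-pole decay standing in for the envelope function. Each cell is a feedback delay whose feedback amount is scaled by an envelope retriggered by a metronome pulse.

```python
import numpy as np

class DelayCell:
    """One cell of the series chain (simplified sketch, not the Max patch).

    A feedback delay whose feedback amount is scaled by an exponential
    envelope that a metronome pulse retriggers.
    """
    def __init__(self, sr, delay_ms, feedback, env_rate_hz):
        self.buf = np.zeros(max(1, int(sr * delay_ms / 1000.0)))
        self.idx = 0
        self.feedback = feedback
        self.env = 0.0
        self.decay = np.exp(-env_rate_hz / sr)  # per-sample envelope decay

    def trigger(self):
        self.env = 1.0  # metronome pulse reopens the feedback path

    def process(self, x):
        y = self.buf[self.idx]
        self.buf[self.idx] = x + y * self.feedback * self.env
        self.idx = (self.idx + 1) % len(self.buf)
        self.env *= self.decay
        return y

def process_chain(cells, signal, metro_period):
    """Run a signal through the cells in series, retriggering every
    envelope each `metro_period` samples."""
    out = np.zeros_like(signal)
    for n, x in enumerate(signal):
        if n % metro_period == 0:
            for c in cells:
                c.trigger()
        for c in cells:
            x = c.process(x)  # output of each cell feeds the next
        out[n] = x
    return out
```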


4.6.  Future

 

A limitation of my methodology for collaborative projects was the site-specificity of the artistic components—that is, projects manifested in particular concert presentations and cannot easily be reproduced by other musicians. The most reified example, World Within One Step—containing both a score and a fixed audio component—is still highly specific to Alessandra’s performance practice and would require further score revision to communicate well to another performer. In a positive way, this limitation has forced me to cultivate adaptability as a collaborator and improviser. I reconfigured existing tools while building upon them for specific projects. From a compositional standpoint, different performance scenarios required different means of integrating electronics.

 

Techniques for live performance over the fellowship period evolved from a model in which instruments both generate and modify material in reflexive couplings, to one in which instruments influence autonomous digital processes. A more holistic integration of the two modes is an area for future research, encompassing both themes of reflexivity and digital disconnection.

 

A planned marker of closure for Embodied/Encoded has been the production of acousmatic works in a standard audio format for distribution. Care was taken in the last stage of the fellowship period to produce stereo versions using virtual microphones in Harpex. Harpex features decoding presets that simulate typical stereo miking techniques such as XY, ORTF, and Blumlein. Stems were prepared for each piece based on the decoding scheme, as not all material was well reproduced by a single preset. These were professionally mastered for digital formats by Rashad Becker and are included in their respective sections, transcoded to 256 kbps mp3 as per the limitations of this platform.
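As an illustration of virtual stereo miking from ambisonics, a Blumlein pair can be derived from first-order components with two virtual figure-of-eight responses crossed at plus and minus 45 degrees. The sketch below shows the principle only; Harpex’s actual decoding is parametric and considerably more sophisticated, and the sketch assumes matching channel normalization between X and Y.

```python
import numpy as np

def blumlein_decode(X, Y):
    """Stereo from first-order ambisonics via two virtual figure-8s
    crossed at +/-45 degrees (the Blumlein technique); a simplified
    illustration of virtual miking, not Harpex's parametric algorithm.

    A figure-8 aimed at azimuth theta responds as cos(theta)*X +
    sin(theta)*Y; the omni component W is unused because a pure
    figure-8 has no omnidirectional term.
    """
    c = np.cos(np.pi / 4)    # cos(45 deg) = sin(45 deg) = 1/sqrt(2)
    left = c * (X + Y)       # figure-8 at +45 degrees
    right = c * (X - Y)      # figure-8 at -45 degrees
    return left, right
```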


5.     Conclusion

 

On the question of what distinguishes electronic music from other media, Demers argues “that it is a concern with the meaningfulness of sound. To an extent unrivaled in all previous forms of music, recent electronic music is obsessed with the question of whether sound, in itself, bears meaning.”44 While this project may be yet another example of this obsession, I view its contribution as lying in the unique constellation of discursive points that emerged gradually from the distillation of my artistic practice. Moving beyond the question of whether sound bears meaning "in itself," I am concerned with how meaning is formed around sound, with context expanding outward from subjectivity and creative acts.


The artworks of “Embodied/Encoded” reconcile certain institutional biases I have experienced in my career, as the Fellowship afforded me time to examine my own feelings and beliefs in relation to electroacoustic and acousmatic music discourse. At one point, my co-supervisor assigned a simple, therapeutic task: write down five words representing what I actually cared about (in music). I wrote:

 

Feeling, visceral, atmosphere

Connection, social presence


Despite my deep involvement in technology, my artistic motivations come from meaning formation and community. This further reinforced my research emphasis on embodied meaning as it related to personal practices of field recording and digital design (the former working title of the project being The Semiotics of Presence). The stylistic turn of the composition Still Life was significant, and became a reference point for deeper research into specific materials (i.e., my local soundscape and digitally modified performance) and longer forms developing texture, sonority, spatial profile, and tonality.  


My original entry point to the concept of presence was Hayles’s semiotics of virtuality, in which presence represents original plenitude in a network crossing between materiality and digitization. The centrality of the concept in XR provided another vantage point that mapped well to aspects of my artistic practice. I sought to re-map XR concepts of presence to the domain of digital music, in which physical presence equates to “being there,” self-presence links with embodied music cognition, and social presence encompasses live performance. The creative merging of sound technologies of presence with these concepts eventually became encapsulated by being somewhere, a play on the XR trope, used here to describe the non-symbolic potentials of sound reproduction. In this sense, meaning derived from site-specificity recedes to the background in favor of non-conceptual meaning abstracted/absorbed by the body.


Hearkening back to the epigraph leading section 1.5., our bodies can be thought of in virtual terms—as mediating the environment just like the technologies of representation we use for such purposes. Biocca suggests, “the mediation of virtual environments leads us to reconsider how the active body mediates our construction of the physical world.”45 This leads to questions about the nature of truth and reality. “If the senses can be so easily fooled, then how can we trust the day-to-day experience of physical reality? This is the century old insight born of all illusions, especially in dreaming where we directly experience interaction of the body and the mind as the primordial simulator.”46


I take this insight as an invitation to creatively explore sonic virtual worlds; their potential to mirror our embodied experience and reflect how the imagination continually reconfigures this experience. They serve as test sites for the boundaries of our perception and channels to presence—be it there, somewhere, or the nowhere of digital disconnection.

39 Simon Emmerson, “Acoustic/Electroacoustic: The Relationship with Instruments,” Journal of New Music Research 27, no. 1-2 (1998): 148.

40 Biocca, “The Cyborg’s Dilemma,” 19.

41 The widely critiqued hierarchy in which the composer/performer relationship is practically transactional, with the composer prescribing all activity in a written score to be interpreted by the performer.

42 Michael Clarke, “FOF and FOG Synthesis in Csound,” in The Csound Book, ed. Richard Boulanger (Cambridge: MIT Press, 2000), 296-297.

43 Non-dialectic as described by Wim Mertens in “Basic Concepts of Minimal Music.” Wim Mertens, “Basic Concepts of Minimal Music,” in Audio Culture: Readings in Modern Music, ed. Christoph Cox and Daniel Warner (New York: Continuum, 2009), 307-312.

44 Demers, Listening Through the Noise, 13.

45 Biocca, “The Cyborg’s Dilemma,” 15.

46 Ibid.