Conclusion


The initial premise of this research trajectory – to understand what constitutes performance practice in computer music, how it differs from traditional instrumental practices, and how it can be further challenged and developed – informed a series of case studies in which the notion of the computer music practitioner as a multi-threaded role of composer, performer and instrument builder was, first, challenged by attempting to focus on one of these three roles alone, and, second, accepted and reinforced in three ways: by allowing the composer’s and performer’s mindsets to influence the technical decision-making of instrument design; by applying compositional and instrument-building skills to inform interpretative choices in pre-existing repertoires; and, finally, by developing compositional systems derived from instrument design, mapping, and performance traditions.

 

During this research trajectory I have looked at the performance practice of computer music in the mirror of traditional instrumental interpretation. In doing so, a misconception kept reappearing: the pairing of “interpretation” with “performance”. I did not want to present traditional music interpretation as a “higher” form of performance, or to mount a defence of the notion of Werktreue. Rather, the set of skills required to decode and present someone else’s musical ideas in front of an audience, and, furthermore, to add an interpretation to a body of renderings of a given musical work, offers a potential for creative and technical development that most trained traditional instrumentalists know very well, but to which electronic musicians are rarely, if ever, exposed.

 

In the current discourse on interpretation in computer music, one often assumes that the focus of our attention is the interpreter of traditional instruments and his or her interaction with the electronic system; most historical approaches to human–electronic interaction in contemporary music take this as a basic truth. After all, many pieces for instruments and fixed media (such as Karlheinz Stockhausen’s Kontakte, 1958–1960, for piano, percussion and electronic sounds, or Luigi Nono’s La fabbrica illuminata, 1964, for soprano and tape) have been created since the beginning of the era of electronic music composition, and this compositional format remains among the most commonly used. Recent examples include works by composers such as João Pedro Oliveira (Lâminas líquidas, for marimba and tape, 2002), Ton Bruynèl (Brouillard, for piano and two soundtracks, 1994) and Mario Davidovsky (Synchronisms, a series of pieces for solo instruments and electronic sounds, the latest, for clarinet, composed in 2006).

 

While the new possibilities for digital sound synthesis and transformation have been thoroughly explored, the reliability of fixed media has never been abandoned. The development of technology and the possibility for real-time processing, analysis and synthesis of sounds have opened a window for the realisation of a new kind of music production, where both the traditional and the electronic sound elements of a piece can be controlled on stage by a human performer.


Early works exploring the use of live electronics to enhance the timbral qualities of music include Stockhausen’s Mikrophonie I (1964) for large tam-tam, two sound-exciters, two microphonists and two filter-control operators, and Nono’s body of works created in collaboration with the Experimentalstudio in Freiburg (e.g., Das atmende Klarsein, 1980, and Prometeo, completed in 1985). While these approaches brought new real-time sonorities to the musical world (e.g., ring modulation and filtering, or extensions in time and space by means of reverberation and spatialisation), they remain dependent on the original sounds of the traditional instruments involved in the piece, leaving the impression that, in order to gain real-time qualities, the electronics must compromise their potential for timbral complexity.
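This dependency can be made concrete in code. Ring modulation, one of the transformations named above, simply multiplies the incoming instrumental signal by a carrier oscillator, so the electronics fall silent whenever the instrument does. The following minimal Python sketch is purely illustrative; the carrier frequency and function name are assumptions, not drawn from any of the works cited.

```python
import numpy as np

def ring_modulate(x, sr, carrier_hz=440.0):
    """Multiply the live input by a sine carrier (classic ring modulation).

    Because the output is the product of the two signals, it is entirely
    derived from the incoming sound: where the instrument is silent,
    the electronics are silent too.
    """
    t = np.arange(len(x)) / sr
    return x * np.sin(2.0 * np.pi * carrier_hz * t)
```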

 

For the last twenty-five years, one of the main interests of IRCAM has been to develop innovative ways of using technology to achieve real-time interaction between the instrumental and the electronic elements of a musical system. A primary goal was to liberate performers from the rigidity of pre-recorded material and to give them the means to control more complex, timbre-independent electronic material. Examples of this approach are Pierre Boulez’s Répons (1981–1985) for ensemble and computer system, John Chowning’s Voices (2005–2009) for soprano and computer, and Cort Lippe’s Music for Septet and Computer (2013).

 

Although this approach achieves considerable flexibility and timbral complexity, it still maintains a kind of master–slave relationship between instrumentalist and electronics: the electronic material may be triggered by the instrumental part, but its musical gestures and nuances remain independent of the articulations of that part.

 

Leaving this approach behind would involve a dedicated performer for the electronic elements of a piece, someone with the freedom to articulate the predefined parameters of the synthetic sounds and/or of the instrumental sound manipulation.

Towards a definition of the computer music performer


To this day, the role of the electronic performer is sometimes confused with that of the sound technician behind the mixing desk who maintains the balance between the electronic and the traditional instrumental levels. That role, however, requires the technician to sit at an ideal audience location, far from the stage and far from the performers. A technician may thus require additional cues to follow a specific passage in the electronic part (if it is pre-recorded). In the case of a live processing system, where accurate notation for the traditional performer matters less than the exploration of the gestures that the computer system is able to identify and react to, the musicians on stage may need to be cued when a particular kind of playing is required: for example, the computer system may require that the incoming signal remain free of vibrato for a certain time in order to react as expected.
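To illustrate the kind of condition such a system might test, here is a minimal Python sketch that flags a passage as vibrato-free when its pitch track barely wavers. The crude autocorrelation pitch estimator, the frame handling and the 20-cent tolerance are all assumptions made for illustration, not a description of any actual score-following system; the point is only the shape of the test (measure, threshold, react).

```python
import numpy as np

def estimate_pitch(frame, sr, fmin=80.0, fmax=1000.0):
    """Crude per-frame pitch estimate via autocorrelation (illustration only).

    Frames should be longer than sr / fmin samples for the search to work.
    """
    frame = frame - np.mean(frame)
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    return sr / (lo + int(np.argmax(corr[lo:hi])))

def is_vibrato_free(frames, sr, max_dev_cents=20.0):
    """True if the pitch track stays within a small tolerance in cents."""
    pitches = np.array([estimate_pitch(f, sr) for f in frames])
    cents = 1200.0 * np.log2(pitches / np.median(pitches))
    return float(np.std(cents)) < max_dev_cents
```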

 

These quasi-conductor-like gestures, together with other actions more closely related to those of a traditional instrumentalist, make up the role of what could be called the computer performer. Among the characteristics of this new performer would be the capability to resolve a number of technical challenges during a performance, dividing the logistical responsibilities between the performer and a sound engineer, and, in particular, the ability to contribute to the performance as an instrumentalist does, with his or her own sound-print and articulations, so as to be able to interact with the other performers in a piece at both the timbral (sound) and the gestural (control) level.

 

We can already find examples of music creators who might serve as models for such a computer performer. These individual developments have been greatly enhanced by the broadening of access to computer technology in recent years, which has spawned a whole generation of laptop artists as well as more complex human-gesture-controlled systems. Examples of the latter include Michel Waisvisz’s The Hands and the LiSa system, both developed at the STEIM institute in Amsterdam, and Atau Tanaka’s BioSensors system.

 

The undeniable contribution of these new artist-creators, immersed as they are in finding new ways of controlling complex electronic music systems in a performance situation, raises one final question: is it possible to separate the roles of composer and performer when creating and using such complex systems?

 

The quest for “beauty” in electronic music forces composers first to generate an imaginary cosmogony of sound elements before justifying – through creative structuring – how this cosmogony works, mutates and exposes a musical meaning. The computer performer faces similar challenges. In a way, the alchemical role often connected to the work of composers or to innovative improvisers must be adopted by this new interpreter, not only to create the instrument on which he or she will perform – with its possibilities and limitations – but also to generate a consistent set of performance skills that will allow a creative interaction with composers and fellow music interpreters.

 

Once this is achieved, the next step is to collaborate with composers on the design of software and hardware (musical instruments) that can help define a work’s musical identity in a form other than that of a traditional score, and thereby to promote the dissemination of these new works by encouraging other computer practitioners to perform them.

This in turn necessitates accurate research into the status quo, to avoid reinventing already efficient tools that can be reused. It also demands a renewed focus on the development of new general-purpose tools in less explored areas, such as real-time convolution. At the same time, researchers should be encouraged to produce dedicated practical implementations, in the form of software instruments and physical controllers for particular projects.
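As a sketch of what such a general-purpose tool involves, the following Python fragment implements the FFT overlap-add scheme on which block-based real-time convolvers are built. The names and block handling are assumptions; a production convolver would additionally partition long impulse responses across several blocks to bound both latency and per-block FFT cost.

```python
import numpy as np

def make_block_convolver(ir, block_size):
    """Convolve a stream of fixed-size audio blocks with an impulse response
    using FFT overlap-add, the usual basis of real-time convolution."""
    n_fft = 1 << int(np.ceil(np.log2(block_size + len(ir) - 1)))
    spectrum = np.fft.rfft(ir, n_fft)
    tail = np.zeros(n_fft - block_size)  # overlap carried between calls

    def process(block):  # block must contain exactly block_size samples
        nonlocal tail
        y = np.fft.irfft(np.fft.rfft(block, n_fft) * spectrum, n_fft)
        y[: n_fft - block_size] += tail  # add the previous blocks' tail
        tail = y[block_size:].copy()     # save the new tail for next time
        return y[:block_size]

    return process
```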

 

The nature of these musical systems leans towards an electronic equivalent of traditional instrument building, rather than towards the development of interactive systems such as those created in the 1970s by, among others, Joel Chadabe, who defines his interactive computer systems as follows (Chadabe 1984: 23):

 

An interactive composing system operates as an intelligent instrument – intelligent in the sense that it responds to a performer in a complex, not entirely predictable way, adding information to what a performer specifies and providing cues to the performer for further actions. The performer, in other words, shares control of the music with information that is automatically generated by the computer, and that information contains unpredictable elements to which the performer reacts while performing. The computer responds to the performer and the performer reacts to the computer, and the music takes its form through that mutually influential, interactive relationship. 

 

My argument, as well as my whole thesis, aims towards a methodology in which traditional and computer musicians would share a comparable set of skills, challenges and creative responsibilities. In this scenario (pace Chadabe), the computer is a tool for expression, not an autonomous expressive component of the musical system.

 

From these considerations I have developed strategies for production and performance with new electronic voices, such as Timbre Networks, in which musical aspects such as material exchange, gestures, interlocking and control of layer density both challenge and grant the interpretative freedom that this new performer should demand: a kind of music in which the electronic medium is a musical source capable of standing on its own.

We need to find a set of organisational rules that determine the intention and the extent of the interpretative influence on the material, and to discover efficient and artistically meaningful ways of delivering this control in a performance situation. This does not mean that composers should think of the electronic medium (synthetically produced sounds or systems for real-time processing) as a traditional instrument. Similarly, an electronic performer should not aim to emulate gestures and performance conventions that are the product of hundreds of years of tradition. By revisiting some of the positive constraints of traditional instruments and performance, however, it is possible to aim for a successful interaction between the composers and interpreters of this electronic medium.

 

One possibility is simply to divide the roles of composer and interpreter. Currently, composers tend to perform the electronic parts of their own pieces, for a variety of reasons. The main one, however, is that they are the people most familiar with the piece and, since we are far from reaching a consensual standard for electronics (regarding not only the tools used but also the description of parameters), the effort of defining a notation meaningful enough to deliver sufficient information to a potential interpreter can simply be bypassed. The cost, of course, is that these pieces can be performed only by their composers, limiting the dissemination of their music. Some composers have adopted the approach of creating event-based scores and self-contained computer applications that can be performed by a technician.
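The event-based approach can be pictured as a linear cue stack. The following hypothetical Python sketch – not any composer’s actual application – shows how each press of a single “go” control fires the next predefined action, which is all the agency such a score leaves to the person operating it.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Cue:
    label: str
    action: Callable[[], None]  # e.g. start a sound file, switch a preset

# Hypothetical event list for an imaginary piece: the operator's only
# interpretative decision is *when* to advance to the next cue.
cues = [
    Cue("bar 1: start tape part", lambda: print("play part_A.wav")),
    Cue("bar 24: open reverb send", lambda: print("reverb on")),
    Cue("bar 51: final fade", lambda: print("fade out over 8 s")),
]
cue_iter = iter(cues)

def go():
    """Fire the next cue in order, as a single 'go' button would."""
    cue = next(cue_iter, None)
    if cue is not None:
        print(f"GO: {cue.label}")
        cue.action()
```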

 

While this approach solves the dissemination issue, in concert the contribution of such an electronic part to the overall piece is not far removed from that of a traditional pre-recorded tape. Yet perhaps the greatest benefit of being able to divide these roles is to regain the collaborative potential of music. After all, it is through the interaction between composer and performers that music has continuously evolved throughout history: by expanding the sound palette of instruments through the development and formalisation of extended techniques, and by combining the personal backgrounds of particular performers with the structural ideas of composers. Skill development and focused understanding are reached only if we are then able to reintegrate these separate roles. As has been pointed out elsewhere, “Creative duality is concise duality, in which a plurality of meanings or functions is achieved not by simple addition, but by fusion and compression” (Veale, Feyaerts and Forceville 2013: 55).

 

Once we succeed in dividing and fully developing these roles, we can move forward to our final goal: to recombine the roles of composer, performer and instrument maker in a productive blend with new, emergent properties. In effect, I aim to create a new, multi-threaded role in the guise of the computer musician, in whom the creative singularities of different aspects of musical practice will intertwine and fuse together. The limits of individual roles as we understand them today will be thoroughly blurred, allowing us to discover new and emergent pathways to creative musical performance. 

 

By using computers, and timbre networks in particular, to dis-integrate and creatively re-integrate our conception of what it is to be a musical performer, I aim to recover, for practitioners and audiences alike, the fragility, surprise, and unexpectedness that presenting music on stage is all about.

References

 

Chadabe, Joel (1984). “Interactive Composing: An Overview.” Computer Music Journal 8/1: 22–27.

 

Veale, Tony, Kurt Feyaerts and Charles Forceville (2013). “E Unis Pluribum: Using Mental Agility to Achieve Creative Duality in Word, Image and Sound.” In Tony Veale, Kurt Feyaerts and Charles Forceville (eds.), Creativity and the Agile Mind: A Multi-Disciplinary Study of a Multi-Faceted Phenomenon (pp. 37–58). Berlin: De Gruyter Mouton.