We will hear the Ciconia canon pass through a variety of digital processes in a search for sound abstraction.
The recordings were presented directly in spatialized versions, alongside versions alienated through digital processing. Sound processing software carries its own implied sound aesthetics, and choosing a type of processing is like building an instrument. Helmut Lachenmann has emphasized reflecting on the means, and on building an instrument in both a narrow and the widest possible sense, as part of the composition process.
"Komponieren heißt: über die Mittel Nachdenken. (...) Mittel - gemeint ist damit das musikalisches Material zunächst im engeren Sinn: jener Vorrat von Möglichkeiten, auf den der Komponist verwiesen ist, also jenes Weithin vergesellschaftete Instrumentarium von Klängen, Klangordnungen, Zeitordnungen, Klangquellen: Instrumente also in engeren und im weiteren Sinn, ihre Spielpraxis, Notationspraxis, Aufführungspraxis, bis hinein oder hinaus in die Institutionen und Vermittlungsrituale - dieses ganze Musikmobiliar, wie es der Komponist nicht nur um sich herum, sondern ebenso in sich selbst vorfindet, kurz: jenes polypenhaft alles umschlingende und in alles sich hineinschlingende Monstrum, das ich anderswo den "ästhetischen Apparat" genannt habe." 1
When composing for an orchestra of musicians, it is part of the process to imagine which instrumental sounds are part of the musical project, and which are not.
I used Csound as my main tool for sound treatment, building a Csound orchestra of instruments that can be performed through Csound scores.
For my Csound orchestra, I needed to take into consideration the format of the sound sources (mono, stereo, or first- or third-order Ambisonic recordings), a long menu of possible processes, each with its set of parameters and control curves, and finally nuanced spatial distributions for each sound, to be decoded for many possible concert venues and speaker setups.
Csound is developed as free, open-source software used by a large community. For interested readers, I will provide technical details and sound examples from the "Wheels within wheels" sound installation, and finally attach the Csound code of this instrument. I will refer to the online Csound manual for each function, which should allow the reader to carry out similar explorations. Readers with less specific interest in these sound treatments may listen to the sound examples and move ahead to other chapters of this exposition.
The final Csound instrument had 81 potential parameters for each note. A processing function (a Csound 'opcode') often has parameters that change dynamically through each sound. I have used randomized curve shapes within computer-assisted composition to shape materials. Multidimensional flux has been key to finding richer treatments, and it seemed natural to extend this flux to how the sounds themselves develop over time. Ircam released the application Cleese in 2017-2018, referred to as a "Ministry of Silly Talks"; I find this approach to random flux related. It is a common approach within sound synthesis to add randomized elements to make sound phenomena richer.
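As a minimal sketch of this kind of flux, assuming the 'rspline' opcode and invented ranges (not the actual curves of my instrument), a control parameter can drift as a band-limited random spline:

    ; hypothetical drifting control curve: the value moves between 0.2 and 0.8,
    ; changing direction between 0.1 and 2 times per second
    kdepth rspline 0.2, 0.8, 0.1, 2
    ; such a curve could modulate any processing parameter,
    ; for instance a transposition ratio fluctuating around unison
    kscal = 1 + kdepth * 0.1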
Let us follow, step by step, what my attached Csound instrument does.
Initially a sound is read. The options are mono, stereo, quad (first-order Ambisonics) or 16 channels (third-order Furse-Malham Ambisonics). The sound can be read in two different ways, sketched after this list.
- At different speeds, with sample interpolation, using 'diskin2'. 2 Lower pitch always means stretching the sound in time.
- With independent treatment of time and pitch, using 'sndwarp'. 3 A time pointer can create granulation effects. This works by windowed, overlapping reading of the sound, and the settings of window size and overlap give the chance either to restore the original sound as closely as possible, or to introduce artifacts intentionally.
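A minimal sketch of the two reading modes, assuming a mono source file "source.wav" and invented parameter values (my actual instrument takes these from the 81 score parameters):

    ; GEN01 table for sndwarp; the size must cover the file length in samples
    giSrc ftgen 0, 0, 1048576, 1, "source.wav", 0, 0, 1
    ; half-sine grain window
    giWin ftgen 0, 0, 4096, 9, 0.5, 1, 0

    instr 1 ; 'diskin2': speed and pitch are coupled
      a1 diskin2 "source.wav", 0.5   ; half speed = an octave down, twice as long
      out a1
    endin

    instr 2 ; 'sndwarp': independent time pointer and pitch
      ktime line 0, p3, 1            ; time pointer in seconds (itimemode = 1)
      ; amp, time, pitch ratio, table, offset, window size, randomization, overlaps
      a1 sndwarp 1, ktime, 1.5, giSrc, 0, 4410, 441, 8, giWin, 1
      out a1
    endin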
Following this initial reading, a series of options is available.
0: The sound passes directly to spatialization.
All the following alternatives are spectral processes, where a second analysis and resynthesis is carried out; the skeleton they share is sketched after the list. I tested all the configurations and evaluated their usefulness for my musical ideas. These are highly personal preferences and may differ for other musicians.
1: Analysis and resynthesis without additional transformation, except for possible intentional artifacts coming from the analysis settings.
2: Pitch scaling (transposition) with 'pvscale'. 4 The first reading already deals with the score pitch; this second transposition changes the harmony of the pitch materials, sometimes in flux through chaotic pitch curves. Transposition to very high or very low registers can create further artifacts.
3: Pitch shifting with 'pvshift'. 5 Adding or subtracting a value in the frequency domain (hertz) distorts the spectral relations within the sounds and creates more artificial sound qualities. I found this very effective.
4: Spectral envelope warping with 'pvswarp'. 6
5: Spectral blurring with 'pvsblur'. 7
6: Smoothing with 'pvsmooth'. 8 In many cases this is a subtle transformation.
7: Spectral arpeggio with 'pvsarp'. 9
8: Spectral freeze with 'pvsfreeze'. 10 The freeze happens suddenly where the control curve exceeds a threshold. The overall duration is harder to control, and fade-outs may become necessary to avoid clicks. Timbres stop in chords of sine waves, sounding much more artificial than the input sound. In some cases certain frequency ranges became too loud over a large speaker setup, so I often used additional highpass or bandpass filters on the results. This required critical attention and editing, while many unexpected and interesting sounds emerged.
9: Bandpass filter with 'pvsbandp'. 11
10: Bandreject filter with 'pvsbandr'. 12
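The skeleton shared by options 1 to 10 is one 'pvsanal' analysis, one transformation, and a 'pvsynth' resynthesis. A minimal sketch with 'pvshift' (option 3), using invented analysis settings:

    instr 3
      asrc diskin2 "source.wav", 1
      ; analysis: FFT size 1024, hop size 256, window size 1024, Hanning window
      fsrc pvsanal asrc, 1024, 256, 1024, 1
      ; shift the whole spectrum up by 150 Hz,
      ; keeping formants with the true-envelope method (kkeepform = 2)
      fout pvshift fsrc, 150, 0, 2
      aout pvsynth fout
      out aout
    endin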
Many of these processes created results I enjoyed, but I also wished to push further towards alienation and abstraction from the original sound. This can be done by combining the processes in a chain. Ideally these would be modules where any combination of the 10 possibilities is available in any order. Unfortunately, frequency-domain signals cannot be redefined in Csound, so each combination needs to be coded manually, which would dramatically extend an already large body of code. My temporary solution was to select 8 different combinations. The order of the processes always makes a difference to the results; a sketch of one such hard-coded chain follows the list below.
11: Pitch shift, freeze, blur, smooth, warp, arpeggio. The freeze part often created a buzzing sound.
14: Transpose, pitch shift, warp, blur, smooth, arpeggio, freeze. The results alternated between abrupt and static; every chain involving freeze has this contrast between frozen and unfrozen.
15: Pitch shift, warp, bandreject, arpeggio. The result is a thinner and more alienated sound.
16: Bandreject, arpeggio, warp. Thin, brilliant and clearly pitched sounds.
17: Blur, warp, arpeggio. Like an acid echo; I did not use this much.
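As an illustration of such a hard-coded chain, here is a sketch in the spirit of option 15 with invented parameter values; each frequency-domain signal feeds the next opcode directly:

    fsrc pvsanal  asrc, 1024, 256, 1024, 1
    f1   pvshift  fsrc, 120, 0, 2            ; pitch shift by 120 Hz
    f2   pvswarp  f1, 1.2, 0                 ; stretch the spectral envelope
    f3   pvsbandr f2, 300, 400, 2000, 2200   ; reject a band between 400 and 2000 Hz
    f4   pvsarp   f3, 0.5, 0.8, 2            ; spectral arpeggio around the middle bin
    aout pvsynth  f4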
Pitch scaling and pitch shifting both have a parameter called "kkeepform", which makes a huge difference to the results. Let us compare the technical descriptions from the manual with how I experienced the sound; a comparison in code follows the list.
- kkeepform=0: "do not keep formants". 13 The result was closest to the original sound; it did not create much of a special effect.
- kkeepform=1: "keep formants using a liftered cepstrum method". 14 I got strong fundamental sounds, as from a didgeridoo. By personal taste, I did not use this configuration at all.
- kkeepform=2: "keep formants by using a true envelope method". 15 The results were more fragile and transparent, close to Luigi Nono's sound aesthetic (for instance in the piece "Post-prae-ludium per Donau" for tuba and live electronics, where the tuba player performs sounds in a high register that cannot be controlled exactly). I used this setting frequently, as transparent sounds were more to my liking than strong and massive ones.
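In code, the three variants differ only in the last parameter; a sketch with 'pvscale' transposing up a fifth (only one of these lines would be used at a time):

    fout pvscale fsrc, 1.5, 0   ; formants move with the transposition
    fout pvscale fsrc, 1.5, 1   ; formants kept by the liftered cepstrum method
    fout pvscale fsrc, 1.5, 2   ; formants kept by the true envelope method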
I first set out to use dynamic morphings between different sound sources, meaning each note would use two sounds and transpose them in unison. The morphing-related functions in Csound are named 'pvscross', 'pvsmorph', 'pvsmix', 'pvsvoc' and 'pvsfilter'; they are found in the same online Csound manual. After some testing I found the results more normal and realistic than what I would use in this project; the processing chains above brought me much further away from the original sound. In my previous research project, "Between instrument and everyday sound", I mentioned morphing as an area I would like to explore. I later found that the sense of orchestration in combining sounds was just as interesting to me as technically performing a morph within a spectral process. I do not reject the idea of testing it further in the future, using more effective combinations of sounds.
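For completeness, a minimal sketch of the kind of morph I tested, using 'pvsmorph' to interpolate amplitudes and frequencies between two analysed sources (the file names and the morph curve are invented):

    asrc1 diskin2 "voice.wav", 1
    asrc2 diskin2 "bell.wav", 1
    fsig1 pvsanal asrc1, 1024, 256, 1024, 1
    fsig2 pvsanal asrc2, 1024, 256, 1024, 1
    kmrph line 0, p3, 1                 ; glide from the first sound to the second
    fout  pvsmorph fsig1, fsig2, kmrph, kmrph
    aout  pvsynth fout
    out aout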
For the final spatializations I took on the challenges of high-order Ambisonic rotations, which I will discuss in a separate chapter.
I will finally attach the standard Csound setup developed for this project. It consists of two parts. The first defines the 81-parameter instrument and a system for impulse responses; after this first large section, a Csound score generated within Open Music using the OM-Ruben library is inserted. The second file simply contains the tags needed to complete a .csd file which Csound can understand. An automated system allows rendering a large number of variations of an idea, with difference ensured by the large number of random variables. It is not fully random, though: a lot of compositional thought needs to go into finding the ranges of each parameter. The composer moves to the macro level.
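Schematically, the assembled file follows the standard .csd layout, with the generated score pasted between the two hand-written parts (the skeleton below is generic, not the actual 81-parameter code):

    <CsoundSynthesizer>
    <CsOptions>
    -o rendered.wav
    </CsOptions>
    <CsInstruments>
    ; part one: the 81-parameter instrument and the impulse response system
    </CsInstruments>
    <CsScore>
    ; score events generated in Open Music with the OM-Ruben library,
    ; one line per note: i <instr> <start> <dur> <p4> ... <p81>
    </CsScore>
    </CsoundSynthesizer>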
This Csound setup contains many open questions and unresolved issues, yet it gave results that were acceptable to me. The detailed processing of multiple 16-channel sounds was not the most computationally efficient; tasks would often need to be split and mixed back together. The OM-Ruben library provides example patches for rendering scores with such external Csound setups, with solutions for splitting dense granulation into multiple renderings.
A sound installation generated live would need continuous variation by these principles. However, the processes involved are too heavy to happen in real time: I wanted a higher density of timbral and spatial treatments than real time would allow. I largely abandoned real-time performance, except for some simple variable delays and variable filters integrating the live performers into the projected sounds, as sketched below.
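The remaining real-time layer was on this order of complexity; a sketch with a drifting delay and a moving lowpass filter (opcodes and ranges chosen for illustration, assuming 'rspline' again):

    instr 10
      ain   inch 1                       ; live microphone input
      ktime rspline 10, 200, 0.05, 0.3   ; delay time drifting between 10 and 200 ms
      atime interp ktime                 ; smooth the control curve to audio rate
      adel  vdelay ain, atime, 500       ; variable delay, maximum 500 ms
      kcf   rspline 400, 4000, 0.05, 0.5 ; drifting cutoff frequency
      aout  tone adel, kcf               ; first-order lowpass
      out aout
    endin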
"How many composers pretend that what is not real time is merely not at all music, without inquiring, first, what music is, second, where the problems lie, and considering then whether or not this technology provides an adequate solution? (...) So, once the "demo" effect has subsided -- it is indeed much fancier to show some action going on a screen than to play the result of a complex computation, no matter how banal the former and sophisticated the latter! - the question is still appropriate: what are the repercussions on composition and musical aesthetics? " 16
The sonograms are made from normalized sounds, to make the spectral contents as visible as possible. Some of the processed sounds had strong and wide spectral contents, sometimes with a metallic sound. I often reduced their volume dramatically, as they sounded brutal and lacked the transparency I was looking for.
I continued adding sound versions until shortly before the "Mouvance III: Distortion" concert.