1. Concrete sounds
This article discusses the exploration and use of concrete sound in my research project. I have chosen the term 'concrete sound' rather than 'environmental sound' because, within my genre, the tradition for treating this material is in many ways closer to musique concrète and Pierre Schaeffer's acousmatic listening[1] than to what is implied by 'environmental sound', a term often used within soundscape composition. I use recorded sounds as objects independent of their cause, which differs from soundscape composition, where the cause of the sound is often an important part of the expression[2]. Another reason for this choice is that 'concrete sound' better describes sounds that are intentionally triggered by human interaction (for example hitting a metal bowl, or rattling a matchbox). The final reason is the level of abstraction applied to the sounds, either when placing them in a musical context or when processing the original source.
The use and exploration of concrete sounds within my field was limited before the introduction of the DAW. Since the treatment and postproduction of these sounds has traditionally been done in a sound studio, exploring this field within my genre would have led to considerable studio expenses. In addition, the knowledge needed to operate the technology for these tasks would normally be in the hands of a technician.
Amongst my sources of inspiration, one of the bands that experimented with incorporating concrete sounds into their compositions was Pink Floyd, with the song "Money" from 1973[3]. They made a tape loop from a selection of concrete sounds and used it as a rhythmical basis for the rest of the instruments. Parallel to the analogue reproduction of concrete sounds, Einstürzende Neubauten in the early eighties built their own instruments out of metal parts and found objects. These were amplified and used as part of the instrumentation. The idea of building your own "noise" instruments can be traced back to Luigi Russolo and his noise instruments in the early part of the 20th century[4].
My first experience within this field was through my band The 3rd and the Mortal, which experimented with concrete sounds on two of our albums, in 1994[5] and 1997[6]. In these experiments the concrete sounds were used as an extra element in the instrumentation, but this direction was abandoned because of our increasing interest in electronically generated sound in the late nineties. It was not until 2002 that this work was developed further, through the founding of The Soundbyte, now with the concrete sounds as an integrated part of the musical instrumentation[7][8]. In other genres, such as electronica, the integration and utilization of concrete sounds became apparent earlier; examples can be found in much of the Icelandic artist Björk's work[9].
The aim within this artistic research was to use concrete sounds as an integrated part of a musical expression. From a soundscape perspective, one could say that the sounds that might be labelled "sound pollution" within a certain soundscape are precisely the sounds that are most interesting within my genre's aesthetics. For example, sounds in urban environments such as machines, industrial noise and electrical tools are interesting in my compositional work, both because of their dynamic range and structure, and because they can easily be incorporated as rhythmical elements. During my first "sound walks", equipped with headphones and a recorder, it was the noise that suppressed the environmental sound that I found most musically interesting (for example the unexpected start of an angle grinder, or a container being lifted). The demand for attention, together with the sonic movement, dynamics and contrast against the environmental sound, led to a search for different sounds within this "family". This reversed the usual situation in postproduction: instead of editing unwanted noise out of the recording, I was separating and clarifying the "noise pollution" in the recording.
1.2 Recording
Using a portable recorder, I have made recordings both in my local environment and while travelling during the project period. In addition, several of the recordings were made in a sound studio environment. These recordings consist of sounds that are controlled by direct human interaction (for example hitting objects, or rattling a box with metal parts), sounds that are controlled indirectly by human interaction (for example trains, boats and machines), and different nature sounds (such as water, wind etc.).
The main focus during the recordings has been the search for sounds that could be translated into musical parameters: pitch (for example the shrieking of train tracks, or electrical and mechanical tools and machinery with an even rotation), sounds with a clear attack point or inherent rhythmical structures (for example trains, machinery, or sounds from a construction site), sounds with a specific movement in the stereo image (for example cars passing), and sounds with a specific dynamic range (for example the starting or stopping of machines).
In addition, I have recorded soft sounds with close microphone placement, such as raindrops and matchsticks.
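The pitch parameter mentioned above (for instance a machine with an even rotation) can be recovered from a recording by autocorrelation. The following is only an illustrative sketch of that general technique, not the actual analysis used in the project; the function name, pitch range and the synthetic 440 Hz test tone are my own assumptions:

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=50.0, fmax=1000.0):
    """Estimate the dominant pitch of a quasi-periodic signal
    via autocorrelation (a common, simple method)."""
    sig = signal - np.mean(signal)
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    # Search only lags corresponding to the plausible pitch range.
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / lag

# Synthetic stand-in for an evenly rotating machine: a 440 Hz sine.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
pitch = estimate_pitch(tone, sr)  # within a few Hz of 440
```

The estimate is quantized to whole-sample lags, so it lands near, not exactly on, the true frequency; for slowly rotating machinery a longer analysis window would be needed.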
1.3 Postproduction
When entering the sound studio, there is a big difference between how concrete sounds and sounds from traditional rock band instruments are treated. In music production there are many conventions for how the different instruments should be placed in the mix in order not to interfere with each other. A normal procedure is to separate the instruments by balancing levels, adjusting individual frequency ranges, and placing them in the stereo field. For example, a traditional way of mixing the bass drum and the bass is to place both in the middle of the stereo field and then separate them in frequency, making the elements complementary but still separately audible. When a train recording is brought into the mix, the sound of the train naturally occupies a lot of space, both in terms of frequencies and in the stereo field. Since we already have a preconception of what a train should sound like, the recording is in many ways already produced when recorded. The use of time-based effects (for instance reverbs and delays), which are often used to "glue" different instruments together in a mix, is also problematic on such material. Because of this, it is an interesting challenge to mix the concrete and the instrumental content together without the final result sounding like two separate but parallel mixes played back at the same time.
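The frequency separation described above can be illustrated schematically as a complementary filter pair: one band for the bass drum's register, and its exact complement for the bass. This is only a minimal sketch of the principle; the one-pole filter and the 150 Hz cutoff are my own simplifications, not the mixing chain actually used in the project:

```python
import numpy as np

def one_pole_lowpass(x, cutoff_hz, sample_rate):
    """Very simple one-pole low-pass filter. Subtracting its
    output from the input gives the complementary high band."""
    a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)
    y = np.zeros_like(x)
    state = 0.0
    for n in range(len(x)):
        state += a * (x[n] - state)
        y[n] = state
    return y

sr = 44100
signal = np.random.default_rng(0).standard_normal(sr)  # stand-in for a full mix
low_band = one_pole_lowpass(signal, 150.0, sr)   # e.g. the bass drum's register
high_band = signal - low_band                    # the complementary band
# The two bands are complementary: they sum back to the original exactly.
assert np.allclose(low_band + high_band, signal)
```

The point of the sketch is the complementarity: each element gets its own region of the spectrum, yet nothing of the original signal is lost, which is why a broadband source like a train recording resists this kind of separation.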
2.1 Studio
As discussed in the article "Studio as a compositional tool", the first part of my project was done in a sound studio environment following a "vertical strategy". This means that the treatment of the concrete sounds in the compositions consisted of selecting fragments of rhythmical and tonal content from the recordings and using these as rhythmic and tonal building blocks together with the traditional band instruments.
Video example 1: Concrete sounds in postproduction
- click right side menu to watch video (Video Example 1)
This is an example where different concrete sounds are put together and used in a musical context. (Musicians are Arild Følstad, Trond Engum, Rune Hoemsnes and Kirsti Huke)
2.2 Dynamical interaction
The next strategy was based on a much larger degree of human interaction with the concrete sounds. The first thing I observed when listening to a recording of a train was that I was able to follow the sound's development despite its lack of rhythmical content. Because of the dynamic development and movement in the sound, I was able to improvise along with this development and interact with the sound in real-time performance.
- click right side menu to watch video (Video Example 2)
This is an example taken from a concert in Trondheim on 21 October 2011, where the guitar and the drums follow the development in the train recording. (Musicians are Trond Engum and Rune Hoemsnes)
2.3 Sampler instrument
At this point in the project, both the guitar and the drum set had been developed to a stage where it was possible to use them during rehearsals and concerts. The integration of the concrete sounds within the musical expression had been fruitful during the compositional studio work, but introducing this element into a real-time performance was more problematic without the use of fixed media. While the guitar and drums were being developed in a live-electronic direction, the concrete sounds still challenged the boundaries of what could be defined as a real-time performance. After working with the keypad solution (see Real time control) in a different constellation, it became clear that this solution did not reach the same potential for interaction with the concrete sounds that had been gained in the studio environment. Firstly, triggering different sounds from the number pad led to a static reproduction compared to the other instruments, and secondly, it was problematic to control a sound after it was released, since I was playing my own instrument at the same time. This led me back to the discussion of fixed media in real-time performances:
Is it possible to treat stored content like a traditional instrument, and what degree of control is needed through human interaction in order to bring conserved sounds back to life?
Two different directions in this project offer solutions to this question:
1. The first direction was to use the concrete sounds as "convolutors" for live instrumental input, a direction discussed in the article "Convolution".
2. The second direction shifted the focus towards the MIDI keyboard, which is still one of the most established and intuitive digital musical interfaces available. This gave several advantages in this particular part of the project, both because of its widespread integration in digital software, and because it gave me the opportunity to work closely with a professionally trained piano and synthesizer player when building up different instruments from stored audio content. During this work it was crucial that the musician gained as much intuitive control over the pre-recorded sounds as possible, in order to decrease the distance between the musician's gestures and the response from the instrument. Since all field recordings were done in stereo, and most sources consisted of a static pitch, the possibilities for adjusting pitch range and velocity response were limited compared to commercially available sample libraries. On the other hand, these sound sources give a larger degree of personal identity, and the limited range of each sample gives each instrument a specific place in the "orchestra". The possibility for musical phrasing is to a large degree maintained in the duality between the direct sound of the source and the processed output. Moreover, the musician's interaction with the sound is attached to velocity and real-time processing, determined by keystroke energy and control messages sent from a MIDI keyboard. In addition to basic functions like key on/off, note number and hold, the MIDI messages are also organized to be multitimbral and complementary in defining different sound ranges on the keyboard.
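The kind of multitimbral key-zone mapping described in the second direction can be sketched as a small note-to-sample map, where contiguous ranges of MIDI note numbers each trigger one stored concrete sound, scaled by key velocity. The zone boundaries, sample names and linear velocity scaling below are illustrative assumptions, not the actual instruments built in the project:

```python
# Hypothetical key zones: (lowest note, highest note, sample name).
# Each zone is complementary: the ranges do not overlap, so every
# key addresses at most one stored sound.
ZONES = [
    (36, 47, "train_brake_shriek"),
    (48, 59, "metal_bowl_hit"),
    (60, 71, "matchbox_rattle"),
]

def note_on(note, velocity):
    """Map a MIDI note-on (note 0-127, velocity 1-127) to a
    sample name and a linear gain in the range (0, 1]."""
    for low, high, sample in ZONES:
        if low <= note <= high:
            return sample, velocity / 127.0
    return None  # the key falls outside all defined zones

print(note_on(50, 127))  # → ('metal_bowl_hit', 1.0)
print(note_on(100, 64))  # → None
```

A real implementation would also handle note-off, hold, and continuous controller messages for the real-time processing mentioned above; the sketch only shows how the limited range of each sample gives it a fixed place on the keyboard.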
The use of the concrete sounds has so far mainly been based on musical parameters extracted from the basic recordings. This means that the recordings are edited and processed in order to shape the audio content into musical parameters such as rhythm or pitch, and the main use is therefore based on fragments of the original recordings. The possibility of using the concrete sounds as conductors for improvisation (video example 2) can be taken further. The experience of the various audible results of applying concrete sounds in different musical ways (rhythmical patterns, the sampler instruments and their use as convolutors) has been fruitful, and this approach will be developed further. Since the sounds used in this project are often termed "pollution", a by-product of our society, it seems fitting to "recycle" them.
[1] Cox, Christoph and Warner, Daniel (2004): "Audio Culture: Readings in Modern Music", The Continuum International Publishing Group, p. 76
[2] Truax, Barry (1996): "Environmental Sound Composition", Contemporary Music Review, Vol. 15, pp. 49-65
[3] Pink Floyd (1973): "Money", The Dark Side of the Moon, Harvest
[4] Kahn, Douglas (2001): "Noise Water Meat", The MIT Press, p. 56
[5] The 3rd and The Mortal (1994): "Oceana", Tears Laid in Earth, Voices of Wonder
[6] The 3rd and The Mortal (1997): "Hollow", In This Room, Voices of Wonder
[7] The Soundbyte (2004): Rivers of Broken Glass, Amaranth Recordings
[8] The Soundbyte (2007): City of Glass, Voices Music and Publishing
[9] One example is the soundtrack to Lars von Trier's film "Dancer in the Dark" (2000).