1. Studio as a compositional tool
The definition of what constitutes a sound studio has changed dramatically since the introduction of digital sound technology. Before and during the transition from analogue to digital, the studio was a physical location we entered during the last stage of the composing process, for the subsequent recording of every instrument layer by layer. The sound studio was designed for controlling and separating the individual instruments in order to prepare their output for post-production. Because the studio was well equipped with mixing desks and various outboard effects, it was a place where we could manipulate the recorded material sonically in order to "glue" the different layers together, but also explore unconventional techniques for creative purposes. The sound studio provided optimal listening conditions that made it possible to adjust both the details of the arrangements and the mix during the production process. Entering a professional studio was a luxury reserved for the few, and the position of studio musician had to be earned through several years of experience.
Since the introduction of digital sound technology, and in particular during the last decade, the democratisation of technology and the general accessibility of sound production devices have increased dramatically. Alongside this democratisation, the working procedures of sound production have evolved in step with the technological development.
This has pushed the boundaries of what can be defined as a sound studio when it comes to mobility, cost, functionality and efficiency compared to its predecessors. When Brian Eno[1] proposed in 1978 that the recording studio would become one of the most obvious characteristics of new music, and the main focus of compositional attention in the future, it is hard to imagine that he could have foreseen the expansion of the studio concept itself. Despite this development, the utilization of digital sound technology within my genre still follows the same mindset that was developed through the history of analogue sound technology.
In this article I shall address the following questions:
(A) What are the studio frameworks and conventions within my genre today, and is it possible to expand these boundaries?
(B) Is it possible to decrease the traditional gap between studio production and live performance within my genre using contemporary technology, and is it possible to perform this transition from studio to live performance without losing the element of human interaction?
Before embarking on answering these questions I would like to discuss the relationship I have had with the sound studio, and how this has changed alongside the development of music technology over the last two decades.
My role as a guitar player in a band during the early 90s was, in terms of music technology, quite a different experience from today. We composed our music in direct musical communication with each other, in real time. This opened up for improvisation based on a musical dialogue between the musicians; composing and arranging were done through a collective shaping of the sounding material, and the individual musicians were responsible for their own instrumental role in the aesthetic expression. Entering a sound studio was the last stage of this process, bringing the ideas together in cooperation with the sound technician and producer. This environment introduced us to several sound shaping opportunities, and gave many possibilities for further refinement of details in the compositions. Together with the individual performers' instrumental signatures, and the interaction between them, this was what made the band's unique sound. To further clarify my role through the shift in studio technology I will divide my studio relationship into two main directions based on two different recording procedures: linear recording and nonlinear recording.
1.2 Linear recording
The first direction is based mainly on techniques and strategies invented before the introduction of the digital audio workstation (DAW), and its general working procedures can be traced back to the introduction of multi-track recording. First of all, multi-track recording introduced the separation of instruments during recording. This way of working, before the extensive use of the more flexible digital copy/paste techniques, made the working process linear: the different takes/layers were recorded from start to end. The first layer was often drums or other rhythmical content, and this pre-determined the basic timeline and rhythmic structure of the whole composition. Working layer by layer enabled creative possibilities in both the production and the compositional process: dubbing instruments, recording different parts of a song separately, including instruments from outside the ensemble, using different guitars and amplifiers in the same song, and so on. This had a large impact on pushing the "sound" of the compositions one step further.
As also discussed in the article "Real-Time Performance", the consequence of exploring and pushing these possibilities to their outer limits was an increased complexity in recreating the compositions in live performance. These challenges can be traced back to several of my inspirational sources, as shown in an example from a Pink Floyd documentary from August 1980[2]. Even though Pink Floyd had integrated the tape machine as part of their live performance several years earlier, this was not an element that came into my band, The 3rd and The Mortal, before the mid-90s.
Within my specific genre there has been a tradition of pre-producing the musical material before the actual recording of the album. These pre-productions were used to decide which songs, and which production directions, to follow during the final recording. This strategy draws a strict division between pre- and postproduction that is quite different from how production teams work today, where composing and producing are part of the same process. Despite this shift in studio procedures, many bands, especially within the metal genre, still use the old strategy when entering the recording stage.
Within my genre, conventions from analogue recording technology are still valid, and the shift to digital recording has to a large extent been embraced with the same strategies that were built up through several years of analogue multi-track recording. In addition to the aforementioned recording procedures there is also a tendency to use analogue devices that can recreate known sound ideals. This is a major topic in most recording magazines and Internet forums, and has become of huge interest to musicians and producers alike. Besides the traditional way of exploring these devices, there is also a tendency to use old analogue equipment in new ways for creative purposes[3].
Because of this we could say that within my genre there is a general trend of working digitally, but with an analogue strategy or logic.
1.3 Nonlinear recording
The second direction I experienced came about with the introduction of the DAW. The opportunity to cut, paste and move recorded content back and forth in the timeline completely removed the necessity for linear thinking in studio procedure. This technique soon became a natural part of our recording process, and the composition process also became more fragmented. Freed from the storage limitations of the tape recorder, we no longer needed to attain one perfect linear take, but could put different parts together in order to construct an optimal performance. These techniques were embraced both by our inspirational sources and by our local peers, especially when it came to drum recording; examples can be seen in Metallica[4] and Atrox[5]. The pinnacle for us came with the trip hop genre of the late 90s, through bands like Portishead and Massive Attack. For the first time, this was music that incorporated electronic elements in a way that related to our genre, because of its extensive mix of traditional band instruments, which in our case consisted of electric guitars, bass and drum kit, with digital music technology. Massive Attack's 1998 album Mezzanine made a great impact, and eventually changed our way of working, gradually changing the way we used the sound studio. After this, the division between pre- and postproduction disappeared, since producing and composing became part of the same process. It created the possibility of individually expressing musical ideas far beyond the limitations and instrumentation of the band, with no preconception that the music would necessarily be brought to the stage by live musicians. This led us to introduce fixed media as part of our live set-up. The DAW provided us with direct contact with the processing tools and techniques found within the aesthetics of electroacoustic music, and the fascination for the DAW as a compositional tool has occupied most of my time since.
2. Studio aesthetics and framework
Within my genre the "sound" is as much about the combination of instrumentation, the instrumentalists' playing styles, and the interaction between the musicians, as about the actual sound production. The collective instrumental virtuosity is still an important factor in the expression.
What can be said, as a general point, is that the technology is to a large degree adopted as an extra element on top of an already established instrumentation. This means that the basic instrumentation still exists, and the unwritten conventions for it still remain. The same conventions do not necessarily apply to the electronic components adopted in this genre. I will highlight some of these distinctions through examples of well-known techniques in modern music production.
2.1 Cut, copy, paste
For traditional instruments within my genre, the nonlinear recording procedure can be compared to writing a digital text document. You are able to cut sections out of the document and paste them elsewhere, although you seldom copy, paste and reuse the same section several times within the same document.
Variations in playing are important and are preserved either by recording the whole layer in one take or by cutting and pasting together a master track, or composite track, from multiple recordings of the same section, as explained in the examples under 1.2 in this article. This does not necessarily apply to the "programmed" content.
When using electronic elements, and especially in rhythmical content, it is not unusual to copy and paste the same section throughout the whole song, making variations within this "package" by adding or removing elements in the section during the different parts of the composition[6].
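As a minimal sketch of this arrangement logic, the following Python fragment repeats one programmed section through a song and varies it per part by adding or removing elements. All names and patterns are hypothetical; in practice this is done graphically in a DAW rather than in code.

```python
# One bar of a programmed section: each element is a 16-step pattern.
section = {
    "kick":  [1, 0, 0, 0] * 4,
    "snare": [0, 0, 1, 0] * 4,
    "hats":  [1, 1, 1, 1] * 4,
}

def variation(base, add=(), remove=()):
    """Copy the section, then add or remove elements for a given song part."""
    part = {name: steps for name, steps in base.items() if name not in remove}
    for name, steps in add:
        part[name] = steps
    return part

# The same section pasted throughout the song, varied per part:
arrangement = [
    ("intro",  variation(section, remove=("snare",))),
    ("verse",  variation(section)),
    ("chorus", variation(section, add=[("clap", [0, 0, 1, 0] * 4)])),
    ("verse",  variation(section)),
]
```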
2.2 Looping
Even though the technique of looping is closely related to the copy and paste principle, there are some distinctions, especially when it comes to the difference between programmed electronic elements and the use of sampling. The programmed elements are normally made in software sequencers, and the beats within the bars are subdivided against a set value measured in beats per minute. The sampled loops, on the other hand, are normally repeated independently of a pre-set tempo value; the loop points are set based on the development in the recording. The use of looping is now an integrated part of the genre, especially in the rhythmical content, but usually in a more quantized style than, for example, Pink Floyd's use of it back in 1973 on the album The Dark Side of the Moon[7].
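The distinction can be made concrete with a small calculation (sample rate, tempo and loop points below are assumed values): a programmed loop's length falls out of the tempo grid, whereas a sampled loop's length falls out of loop points chosen by ear.

```python
SAMPLE_RATE = 44100  # assumed CD-quality sample rate

# Programmed loop: the length is derived from a set tempo grid.
bpm, beats_per_bar = 120, 4
programmed_loop = int(SAMPLE_RATE * 60 / bpm * beats_per_bar)   # one bar

# Sampled loop: the length is set by loop points chosen from the recording
# itself, independent of any tempo value.
loop_start, loop_end = 3.21, 5.87               # hypothetical points (seconds)
sampled_loop = int((loop_end - loop_start) * SAMPLE_RATE)

print(programmed_loop, sampled_loop)            # 88200 vs 117306 samples
```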
2.3 Quantizing
The division between played, performed and programmed elements is also present when it comes to quantization. Even after the introduction of elastic/flex audio[8] some years ago, which enabled us to perform quantize operations on audio in the same manner as we treat MIDI notes, the general adoption of this is still absent when it comes to recordings of traditional band instruments. Instead of quantizing, we still paste together recorded selections based on several fragmented takes.
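As an illustration, here is a minimal Python sketch of what a quantize operation does to MIDI note onsets; elastic/flex audio applies the same grid logic to slices of an audio recording. Grid size and onset values are assumed.

```python
def quantize(onsets, grid=0.25, strength=1.0):
    """Move each onset (in beats) toward the nearest grid line.
    strength=1.0 snaps fully; lower values keep some of the human timing."""
    return [t + strength * (round(t / grid) * grid - t) for t in onsets]

played = [0.02, 0.98, 2.07, 3.01]      # slightly loose quarter-note onsets
print(quantize(played))                # [0.0, 1.0, 2.0, 3.0]
print(quantize(played, strength=0.5))  # halfway: ~[0.01, 0.99, 2.035, 3.005]
```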
2.4 Sampling
The use of sampling within my genre is somewhat different from the tendency we have seen in other popular music genres in recent years. The use of samples taken from already recorded music is absent. In my tradition there is an unwritten law to make all music from scratch, and the aesthetic lies in how you play and perform original material; originality is not measured by the skill of re-contextualizing other people's work. Because of this it was a big disappointment to find that several of Massive Attack's[9] song parts had already been used in other music and on commercially available sample packages. This is not to undermine the art of cutting, pasting and arranging different samples together, but in my genre there is a tradition of constructing and building your own bricks, or at least crediting the constructor.
2.5 Master/slave
When connecting software from different manufacturers, you need to decide which device should run as the master and which as the slave. The master device determines the framework within which the slave operates. The tendencies explained through the examples above point to a situation where the performer is in danger of becoming a slave to the software through the way it is used. This relationship with the software is most obvious in my genre's extensive use of fixed media, which leaves a gap between the digital content and the traditional band instruments in terms of sound, interaction and performance.
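The technical relationship can be sketched as follows (hypothetical Python classes; real devices synchronize via MIDI clock or similar protocols): the slave has no tempo of its own, and everything it does is derived from the master's framework.

```python
class Master:
    """The device that sets the framework, e.g. the DAW's tempo."""
    def __init__(self, bpm):
        self.bpm = bpm

class Slave:
    """A device with no clock of its own; it follows the master."""
    def __init__(self, master):
        self.master = master

    def seconds_per_beat(self):
        return 60.0 / self.master.bpm   # everything derives from the master

daw = Master(bpm=140)
drum_machine = Slave(daw)
print(drum_machine.seconds_per_beat())  # ~0.429 s; changeable only via the master
```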
2.6 Authenticity and focus areas
There is a tendency for the production to become more important than the musical input, and it is tempting to adapt the traditional playing to fit within this framework.
I would therefore like to point out some of the premises that I have been exploring during the project in search of solutions that integrate the use of technology without losing the authenticity of the genre.
(1) First of all it is important to get access to the possibilities of the digital techniques without compromising the instrumentation or interaction between human performers.
(2) Secondly, it is important to decrease the distance between the instrument and the sound, in the gap between instrumental output and production output. Production output here means the whole transformation from recording, cutting, pasting and processing to mixing and the final result.
(3) There is a need for recreating the material live, and at the same time interacting directly with the technology.
(4) As also discussed in the article "Real-Time Control", it is important to democratize the technology within the band in order to maintain the individual expressions of the musicians.
(5) It is also important to find solutions where the musicians are more independent of the visual feedback from the computer screen during performance.
3. Composing strategies
The next part of this article describes how I have challenged my working procedures in the studio throughout this project. As I will explain, I have used the traditional sound studio in two different ways during the project: both as an active part of the composing/production stage, and as a reflective element when transforming the compositions for live performance.
The main body of work has consisted of playing, recording and producing music in different ensembles with musicians from different genres. In addition to the practical musicianship, these sessions have also been a forum for discussing the focus areas of this project. The essence of these experiences has been used in the compositional work for "The Soundbyte" trilogy, which has been the common thread running through the project.[10]
As further explained in the article "Real-Time Control", the augmentation of the electric guitar started quite early in the process, and in retrospect I believe that entering the different projects with this "new" instrument had a great impact in taking me out of my comfort zone, and in that way pushing the boundaries of my own preconceptions and conventions.
In this article I will point out the differences in playing/composing strategies between the different ensembles I took part in during the project period, and how this affected the project.
3.1 Vertical composing strategies
Working mainly with the DAW as a compositional tool over the last decade, we have adopted a vertical composing strategy for layering our instruments during studio production. Vertical here means that we layer part by part within a song before gluing the parts together in a desired order, as distinct from nonlinear recording, which is based on editing together different performances of the same instrument on the same part. A consequence of this vertical strategy can be that the different parts of the song, containing the whole instrumentation, are recorded before they are arranged together. The efficiency of this way of working is made possible by using a strict sequencer set-up in the DAW to limit variations in rhythmical content, so that parts of different content in the same time signature can be mixed together, not unlike how a DJ would make a mix tape. Even though the present project started in this direction, through playing and recording with The 3rd and The Mortal[11] and Atrox[12], this way of composing was gradually abandoned in proportion to the development of the instrument augmentations described in "Real-Time Control". However, this is not to undermine the vertical strategy in any way, and it is obvious that this direction is more in line with developments in other popular music genres when it comes to the utilization of a set timeline in different DAWs.
3.2 Horizontal composing strategies
The other composing strategy I followed through this project came as a consequence of playing improvised music in different ensembles[13] using the augmented guitar. In the recording sessions this meant that all instrumental layers were recorded from start to end, in a horizontal manner, and that the development of the structure was based on human interaction.
3.3 Turning point
In this project, the turning point in using the studio as a compositional tool came during a recording session in Berlin in March 2010. We were to record guitar and drums for several compositions, and started the session by using a mixture of linear and nonlinear recording techniques. The drum kit was equipped with 18 quality microphones in an excellent live room, and the studio had several different guitar amplifiers at our free disposal. This meant that we could choose which microphones and takes to use in postproduction in order to get the preferred sound. This was a perfect set-up for exploring and achieving the ultimate sound for the compositions. The recordings went well and the result sounded satisfying.
On the third day we realized that we were actually doing exactly what we had been doing in the studio over the last decade: we were copying our old procedures! The only difference was that we were recording new songs in surroundings offering better sound production.
This led us to reorganize the whole session for the fourth and final day in the studio.
Since the guitar set-up was augmented to a point where I was able to control several processes in the DAW at the same time, we ended up abandoning the guitar amplifiers, and instead ran a mix of the drum kit through the system, convolved in real time with a recording of a crane. By using linear recording, a horizontal composing strategy and augmented instruments, this solution enabled us to integrate conventional postproduction techniques into real-time interaction through our own instruments.
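The underlying technique is block-based convolution. As a minimal offline sketch of the principle (not the actual rig, which is described in the article "Convolution"), the following Python fragment convolves an incoming stream of audio blocks with an impulse, here a stand-in for the crane recording, using overlap-add; sample rate, block size and signals are assumed.

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_stream(blocks, impulse):
    """Overlap-add convolution of a stream of equal-length audio blocks."""
    tail = np.zeros(len(impulse) - 1)          # ringing carried between blocks
    for block in blocks:
        wet = fftconvolve(block, impulse)      # len(block) + len(impulse) - 1
        wet[:len(tail)] += tail                # add the previous block's tail
        yield wet[:len(block)]                 # emit one block's worth now
        tail = wet[len(block):]                # keep the rest for later blocks

# Hypothetical usage: drums arriving in 512-sample blocks, a crane recording
# standing in as the impulse.
sr, block_size = 44100, 512
crane = np.random.randn(sr) * np.hanning(sr)              # stand-in recording
drums = (np.random.randn(block_size) for _ in range(200)) # stand-in input
wet = np.concatenate(list(convolve_stream(drums, crane)))
```

A real-time implementation would use partitioned convolution to keep latency down to a single block, but the overlap-add bookkeeping is the same.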
This experience led to the augmentation of all instruments in "The Soundbyte" ensemble in order to move the sound studio closer to the individual performer, enabling us to access typical postproduction techniques during real-time performance. This specific recording was videotaped by Michael Tibes, and accompanies the article "Convolution" as video example 2.
4. Transformation from studio to live
The technical set-up for the different instrument augmentations is described in detail in the article "Real-Time Control". This section describes how we used the sound studio as a reflective element for calibrating the instruments for live performance. This work was done during one week in April 2011, and was the first step in transforming "The Soundbyte" from a studio band to a live band. Until then, and since 2002, "The Soundbyte" had only done studio recordings, and never played live concerts together.
4.1 The glass between the live room and the control room
In a traditional sound studio the musicians are placed in the live room and communicate with the technician in the control room on the other side of the glass. The musicians provide their musical output; the technician receives these signals, balances them, filters them, applies effects, and sends back individual mixes on foldback lines to the musicians' headphones. When both sides of the glass are satisfied, the recording starts.
The same principle applies in a live situation, except that the glass is replaced by PA speakers, and the headphones by monitor speakers.
Through the instrument augmentations we had already moved the sound production tools out of the studio, and adapted them for direct control of the instruments. The next step was therefore to maintain control over the sound production, both individually and collectively, in a studio as well as in a live setting.
4.2 The mirror between the live room and the control room
The strategy for this work is built upon a very basic premise:
- If the musicians are to perform both instrumental and sonic output at the same time, they need to be able to interact both individually and collectively in response to the mutual output of their instruments.
In principle this meant that we had to set up a double studio configuration for balancing and producing the instruments' output.
Since the drums, guitar and sampler each had their own studio attached to the instrument, the lines from these three were distributed to the control room studio. The control room studio functioned as a reflector, and its equipment chain was dissected and removed bit by bit during the process.
The control room's task was to detect where the input needed adjustment, and then mirror it back to the live room, where the required adjustments were made directly in the instrument software. This process was repeated until the control room mixer ended up at zero (reset), and the sound signal was identical in both rooms. The process was used for adjusting every device in each instrument's studio chain (gain, balancing, equalizing, effects etc.). As an example, if the control room received the wrong balance between the different drums, this was adjusted in the drum set-up until the right balance was achieved and the gain and fader settings on all channels in the control room were equal. In this way we could remove the studio equipment chain bit by bit as soon as each link reached its zero setting. The chronology of this dissection went from preamp (gain) to fader setting (balance), equalizing (frequency filters) and dynamic insert effects (compressor/gate etc.).
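Schematically, the procedure is an iterative nulling loop. The following Python sketch illustrates it with hypothetical stand-ins; in practice "required_adjustment" was a listening judgement made in the control room, not a computed value.

```python
class Instrument:
    """The instrument-side studio chain, adjusted directly by the performer."""
    def __init__(self, settings):
        self.settings = settings                    # e.g. {"gain": -3.0, ...}

    def adjust(self, stage, correction):
        self.settings[stage] += correction          # fix it at the source

class ControlRoom:
    """The reflector: detects deviations from the agreed sound."""
    def __init__(self, targets):
        self.targets = targets

    def required_adjustment(self, inst, stage):
        return self.targets[stage] - inst.settings[stage]

def calibrate(inst, room, chain=("gain", "fader", "eq", "dynamics")):
    """Mirror corrections back to the instrument, stage by stage, until the
    control room can sit at zero (reset) for the whole chain."""
    for stage in chain:
        while (c := room.required_adjustment(inst, stage)) != 0:
            inst.adjust(stage, c)                   # adjusted on the instrument

drums = Instrument({"gain": -3.0, "fader": 2.0, "eq": 1.5, "dynamics": 0.0})
room = ControlRoom({"gain": 0.0, "fader": 0.0, "eq": 0.0, "dynamics": 0.0})
calibrate(drums, room)
print(drums.settings)   # every stage nulled: the control room mixer is at zero
```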
Since the invention of the multi-track recorder, the profession and art of producing has been a significant part of the recording industry. In recent years we have seen a blending of the roles of composer, producer and musician, and involvement in modern music production demands skills on all of these levels. This is where I have ended up with my band "The Soundbyte": fulfilling the technological circle and arriving back where I once started, in the rehearsal room. Through this work we have moved the sound studio closer to our instruments, and in that way come closer to the sound production, independent of whether we are working in a studio or doing a live performance.
[1] Eno, Brian (1978): "Interview with Brian Eno", http://music.hyperreal.org/artists/brian_eno/interviews, last visited 10 April 2011
[2] Pink Floyd (1980): "The Lost Documentary", http://www.youtube.com/watch?v=olpdUr0o5g4&feature=related#t=2m18s, 2.18 - 2.41, last visited 10 November 2011
[3] Voorvelt, Martijn (2000): "New sounds, old technology", Organised Sound 5(2): 67-73, Cambridge University Press
[4] Metallica, http://www.youtube.com/watch?v=IJRyY5cyLts#t=1m12s, 1.12 - 2.30, last visited 10 November 2011
[5] Atrox, http://www.youtube.com/user/atroxband#p/a/u/0/U-4bjfmjDV8, last visited 10 November 2011
[6] Examples of this can be heard in The 3rd and The Mortal (2002): "Memoirs", VME Music Publishing; Atrox (2008): "Binocular", Season of Mist; and Manes (2003): "Vilosophe"
[7] Pink Floyd, http://www.youtube.com/watch?v=7NKDSDbipGU, last visited 10 November 2011
[8] This processing function is given different names in different software; these examples are taken from Pro Tools and Logic. The basic function is that you can time-stretch audio without changing the pitch.
[9] Mezzanine, http://www.guardian.co.uk/music/musicblog/2009/feb/26/sampling-epiphany-massive-attack, last visited 10 November 2011
[10] The Soundbyte (2011): "Trilogy"
[11] The 3rd and The Mortal (2009): "All sewn up – a tribute to Patrick Fitzgerald", Crispin Glover Records
[12] Concerts in 2009 as a session guitarist in the Czech Republic and at the Pstereo festival in Trondheim
[13] Ensembles such as Magnify The Sound and Trondheim Ensemble for electroacoustic music performance (T-Emp)