Reflection text #2


How to begin and when should we end?

From October 2017 till June 2018 I did quite a bit of improvising with Kim-Auto: at the office with no recording gear, in the studio with camera and sound recording devices on, and in the project’s LAB concerts. In other words, I have played with Kim-Auto both with and without an audience attending. The LAB concerts are informal experiments, and I have had very low artistic ambitions in terms of the musical outcome. A musical “disaster” in my ears could potentially be just as interesting for the project as a more “successful” one.

In July 2018 I was invited to perform at the Performance Studies Network Conference at the Norwegian Academy of Music. Even though an experiment like the LAB concerts would also have been welcome for this performance, I decided to prepare for this more formal occasion with a slightly higher artistic ambition than for a project LAB concert.


Questions before 

My main challenge in preparing for a more official performance with Kim-Auto at this stage of the project was the set duration. This performance was supposed to fill a 40-minute slot. As mentioned in my first reflection text, after about four minutes I usually get used to Kim-Auto’s behaviour and start losing attention. I certainly didn’t want to lose attention and interest in front of an audience four minutes into a 40-minute performance.


Here are some of my main questions before the concert:

How can I improvise with Kim-Auto for 40 minutes without losing attention? 

How should I distribute my own contribution? 

Should I plan for certain musical territories or should I just show up and improvise? 

Why am I afraid of improvising with Kim-Auto?    

Why did I agree to do such a performance this early in the project? 

How can I get Kim-Auto to offer more complex sounds, while keeping its responses and musical outputs fairly close to what I play? In other words, how can I make it clearer that we play together, not just next to each other? 

Do we play together or does it just appear to be like that? 

How should we start? 

How can we stop? 

 

I found it necessary, and artistically more interesting to me, to look into some of the ambitions for the further development of Kim-Auto – some of the features we aim to include in the near future. I decided to build a system where the main feature of Kim-Auto, namely the archiving module, played a core role in combination with other tools and algorithms. One of my project partners, Morten, had already shown me some of the potential of including pitch-tracker tools, MIDI instruments and a few other small processes. He helped me set up the system I ended up tweaking and preparing for this performance.


Here is an excerpt from the performance and below is a brief presentation of the setup. 


The setup

The sounding outputs are drums, piano, voice, pedal steel guitar, acoustic guitar and an analogue drum machine. Both guitars and the analogue drum machine are mostly played live by me during the concert. 


Drums

A pitch tracker “listens” to the voice, which is Kim-Auto playing off from project partner Sidsel Endresen’s sound archive. The machine listens in short fragments (windows) and tries to analyse the pitches within each window. Much of Sidsel’s material in this specific archive is textural, without much pitch. The accuracy level of the pitch tracker is also set low, so it makes a lot of highly questionable judgements. This information is sent to a MIDI drum kit that tries to imitate Sidsel’s voice based on the (mis-)information it receives from the pitch tracker. The sounding output is persistent and quite detached from Sidsel’s voice; it appears to be improvising, and it leaves a lot of space. I also think it is funny. 
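To illustrate the principle, here is a minimal sketch in Python of the drums chain. The window length, the deliberately low confidence threshold and the drum mapping are placeholder assumptions of mine, not the actual Kim-Auto implementation.

```python
# Hypothetical sketch: a sloppy, windowed pitch tracker whose (often wrong)
# guesses are mapped onto a small set of MIDI drum notes.
import random

WINDOW_SECONDS = 0.25          # assumed length of the short analysis fragments ("windows")
CONFIDENCE_THRESHOLD = 0.1     # deliberately low: questionable judgements still pass through

DRUM_NOTES = [36, 38, 42, 46, 49]  # kick, snare, closed/open hi-hat, crash (General MIDI)

def rough_pitch_estimate(window):
    """Stand-in for pitch analysis of one audio window.

    Returns (midi_pitch, confidence). On textural, unpitched material the
    confidence is low and the pitch is little more than a guess.
    """
    energy = sum(abs(sample) for sample in window) / max(len(window), 1)
    confidence = min(energy, 1.0) * random.random()
    midi_pitch = random.randint(40, 84)
    return midi_pitch, confidence

def drum_event_for(midi_pitch):
    """Map a (possibly wrong) pitch guess onto one of a handful of drum sounds."""
    return DRUM_NOTES[midi_pitch % len(DRUM_NOTES)]

def drums_from_voice(windows):
    """Yield MIDI drum notes 'imitating' the voice, window by window."""
    for window in windows:
        pitch, confidence = rough_pitch_estimate(window)
        if confidence >= CONFIDENCE_THRESHOLD:
            yield drum_event_for(pitch)
```

Because the threshold is so low, almost every window produces a drum hit, which is roughly how the persistent, detached character of the drums comes about.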

 

Piano 

Just like the drums, the piano receives its information from a pitch tracker. Here the pitch tracker listens to what I am playing during the performance – both what I play in the moment and what Kim-Auto has stored during the performance. The operating principle for this pitch tracker is the same as for the drums’ pitch tracker. However, I found that I needed the accuracy level to be much lower in order to avoid the piano simply becoming an echo of my instruments. The output signals are also delayed a few seconds. 
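A minimal sketch of the delay idea, again with assumed values (three seconds, a very low confidence threshold) rather than the real settings:

```python
def delayed_piano_notes(events, delay_seconds=3.0, min_confidence=0.02):
    """Shift each accepted pitch-tracker guess a few seconds later,
    so the piano answers the guitars rather than echoing them in real time.

    `events` is an iterable of (timestamp, midi_pitch, confidence) tuples.
    """
    for timestamp, midi_pitch, confidence in events:
        if confidence >= min_confidence:
            yield timestamp + delay_seconds, midi_pitch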

 

Voice

Two instances of Kim-Auto doing the same thing: playing off from the same archive of textural material from Sidsel. 


Pedal steel guitar and drum machine

Two chains of sound: one going directly from my instruments to the speakers, and one going into Kim-Auto’s archiving module, which starts to build a new archive. In this performance it took a long time before Kim-Auto recorded and played back any information at all from the steel guitar / drum machine. The recording must have been activated mostly during my pauses, with the effect that Kim-Auto’s output consisted largely of my pauses. 
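A toy sketch of the archiving idea, under my own assumptions about how fragments get sampled; it also shows why the module can easily end up archiving pauses:

```python
import random

class ArchivingModule:
    """Sketch of the archiving principle: occasionally grab a fragment of the
    live input, store it, and later play fragments back from the archive."""

    def __init__(self, record_probability=0.05):
        self.archive = []                         # stored audio fragments
        self.record_probability = record_probability

    def listen(self, fragment):
        # The module does not know whether the fragment is sound or silence;
        # if it happens to sample during a pause, the pause is what it keeps.
        if random.random() < self.record_probability:
            self.archive.append(fragment)

    def play_back(self):
        return random.choice(self.archive) if self.archive else None
```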


Acoustic guitar

No effects, no interference by Kim-Auto. 


All these instruments in combination offered a more complex output. The instruments seemingly played together. Some of them actually did, and others were completely detached, leaving space for me and the audience to create some kind of connection.


In order to make Kim-Auto’s output less predictable and to make the music develop during the performance, we programmed it to slowly change how often it outputs sound. The three parallel instances of Kim-Auto (two for voice and one for steel guitar / drum machine) individually change how frequently they spit out sound. Over cycles of 15 minutes, the three parallel algorithms gradually decrease the silences from about 20 seconds between each output to one second. 
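A minimal sketch of that scheduling, assuming a linear decrease over each 15-minute cycle (the actual curve may differ):

```python
CYCLE_SECONDS = 15 * 60      # one 15-minute cycle
START_PAUSE = 20.0           # ~20 s between outputs at the start of a cycle
END_PAUSE = 1.0              # ~1 s between outputs at the end of a cycle

def pause_between_outputs(elapsed_seconds):
    """Silence (in seconds) before the next output, shrinking over each
    15-minute cycle and then jumping back to the long pauses."""
    position = (elapsed_seconds % CYCLE_SECONDS) / CYCLE_SECONDS   # 0.0 .. 1.0
    return START_PAUSE + (END_PAUSE - START_PAUSE) * position

# The three instances (2 x voice + 1 steel guitar / drum machine) would each
# call pause_between_outputs() on their own, unsynchronised clock.
```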


 

Choice of musical material

With this I could prepare for Kim-Auto’s pauses to become gradually shorter, and I could expect longer pauses to come back at some point during the performance. I decided to focus on a few main areas of material in my own playing: ringing cluster chords, both from flageolets and played with the tone bar; silence; percussive sounds played with a mallet and scrub on top of the guitar pickups; short percussive events on the drum machine; and gong-like sounds played with clips on the strings. I also decided to mostly stick to the same type of material for quite a bit of time, so Kim-Auto would be able to record and play back similar types of material. This idea failed a bit, since Kim-Auto mostly seemed to record when I was silent. Additionally, I asked the sound engineer to mute the drums and the piano from the beginning, and to bring them in at his own discretion after a while. 


Reflections and questions after

In my view, these minor changes in Kim-Auto, combined with the pitch trackers and MIDI instruments, added much-needed complexity and development. I find it interesting how this setup, in combination with restricted musical material, dictates a peculiar form – one that develops, spreads and is shaped like a Chinese hand fan. 


I think I was listening for silences, for cues to introduce sound and to hear how Kim-Auto slowly chewed it, for a good moment to introduce new material, for a good moment to wait and to wait a little longer. I ended up quitting after about 25 minutes, not 40. The material was exhausted, or was it me, Kim-Auto, my ears, my ideas and imagination? 


Two major problems still remain: how to start and how to end? 

 

Ivar Grydeland, August 2018

(Full concert here)

© Goodbye Intuition


contact: igrydeland (at) nmh.no