Using CISP in collaboration

 

During the COVID pandemic, I felt a strong urge to escape the isolation of working completely alone, and was able to collaborate with Olaf Kerckhaert (also a Sonology alumnus). His most commonly used tool for creating music is Ableton, software optimized for creating live electronic music. Although it is most commonly used for making popular electronic music (dance, techno, etc.), it can certainly also be employed for more experimental forms of music, as Olaf's other work demonstrates.

 

Instead of just improvising together in parallel, we decided it would be more interesting if I did not generate any sound myself, but simply sent Olaf patterns for him to interpret. Although this meant adopting a more "instrumental" concept, the fact that I was not directly controlling the instrument introduced a certain amount of disconnection that made the process interesting for me.

 

The initial setup was as follows: I sent MIDI patterns on a number of channels, and Olaf assigned an instrument to each channel in Ableton.

Olaf's arrangement quickly went beyond this one-to-one mapping. For example, he would filter for certain pitch ranges, or transpose or remap the pitches.

 

Another effective method was assigning one of my channels to a sampler and not filling in all the sounds. If my pattern was an arpeggio, the gaps in the assignment could result in interesting rhythms, especially if the pattern changed which pitches it used over time. A similar strategy was to use a sampler with very tonal sounds (chords, bells, etc.), transposed to pitches other than the ones I was sending. In this way he was able to create new harmonic and melodic patterns.
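A rough sketch of the idea in Python (the pitches and sample names are hypothetical, and this is only an analogy of what the sampler mapping does, not actual code from the setup): an arpeggio arrives at a steady sixteenth-note rate, but only some of its pitches have a sample assigned, so the gaps in the mapping turn into a rhythm.

```python
# Deliberately sparse mapping: only some pitches trigger a sample.
sample_map = {60: "bell.wav", 67: "chord.wav"}   # hypothetical sample names
arpeggio = [60, 64, 67, 71, 60, 64, 67, 71]      # one pitch per sixteenth note

# Pitches without a sample become rests, so a rhythm emerges from the gaps.
rhythm = [sample_map.get(pitch) for pitch in arpeggio]
print(rhythm)   # ['bell.wav', None, 'chord.wav', None, ...]
```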

 

This led to an interesting dynamic on my side: my patterns were no longer designed with a particular harmonic or melodic structure in mind; what mattered more was the behaviour of the pattern: its degree of stability or randomness, various forms of looping, and rhythmic development created through complicated velocity patterns. This made the MIDI far more interesting, as in many cases the data was not performed as traditional notes, but formed a raw texture that could be shaped into a very wide range of musical results, often becoming completely unrecognisable.

 

Olaf was not just an instrument performing a live score, but a dynamically changing agent that used my input as raw material for his own compositional process. When this worked well, we felt we were engaged in a kind of distributed composition. To me, it showed that if you use MIDI in a more ambiguous way and let go of the specific meanings (pitch, velocity) normally assigned to the data, it can communicate musical material in a rewarding way.

 

Selected results

Audio Recording of Guitar & Flow

Harmonic 04-12-2019-2b-NEW MIDI-bii-13-12-2019

Challenges

 

During the process, several challenges showed up:

  • Synchronising the tempo and metre of CISP with Olaf's software
  • Easily storing and restoring past patches that "worked well"
  • Replacing parts of a program, without stopping it, in an easy and reliable way
  • Managing and keeping an overview of many channels running in parallel
  • Live debugging vs. live coding
 
 

 

Synchronisation

At first I attempted to control Olaf's transport (starting, stopping and pausing his programs) using the MIDI Clock protocol (MIDI Association 2024), which sends a timing clock message 24 times per quarter note and includes start and stop messages. However, I found that this did not really serve my purpose well, as metre was not something explicitly defined in CISP, but rather a result of manipulating the entry delays of events on an event-by-event basis.
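For reference, MIDI clock can be reduced to sixteenth-note pulses by simple counting: at 24 pulses per quarter note, every sixth clock message marks a sixteenth. A minimal Python sketch of such a listener, using the mido library (the library choice and the port name are my own illustration, not part of the actual setup, and a MIDI backend such as python-rtmidi is assumed):

```python
import mido

CLOCKS_PER_SIXTEENTH = 24 // 4   # MIDI clock runs at 24 pulses per quarter note

def follow_clock(port_name):
    ticks = 0
    with mido.open_input(port_name) as port:   # port name depends on the setup
        for msg in port:
            if msg.type == 'start':
                ticks = 0
            elif msg.type == 'clock':
                ticks += 1
                if ticks % CLOCKS_PER_SIXTEENTH == 0:
                    print('sixteenth-note pulse')
            elif msg.type == 'stop':
                break
```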

 

We found a much simpler solution: Olaf would send me a MIDI note (C3) every 1/16th note, and this would trigger the next value from my stream. Eventually it also became possible to use properties of that note to influence the execution of the stream. Later I also made filters, which allowed me to ignore some of the triggers according to a pattern, or even to generate additional triggers by feeding them into a MIDI delay. This step changed the situation considerably: changing the program on the fly became much less noticeable, because the timing was no longer controlled inside CISP but externalised.
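To make the trigger-and-filter idea concrete, here is a rough Python analogy (not CISP syntax): each element of `triggers` stands for one incoming C3 sixteenth note, and a repeating mask decides which triggers actually advance the stream.

```python
import itertools

def arpeggio():
    # A CISP-like stream: an endless cycle of pitches.
    return itertools.cycle([60, 64, 67, 71])

def masked(stream, triggers, mask):
    # Advance the stream only on triggers selected by a repeating mask;
    # ignored triggers become rests (None).
    mask = itertools.cycle(mask)
    for _ in triggers:
        yield next(stream) if next(mask) else None

# Sixteen incoming triggers (one per external C3 sixteenth note),
# filtered through a repeating keep/skip pattern.
print(list(masked(arpeggio(), range(16), [1, 0, 1, 1])))
```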

 

 

Another development was that Olaf started modifying the MIDI quite heavily and mapping it in unusual ways. For example, I would send a very rich and dense stream of MIDI, and he would map only a part of it to actual percussive sounds. This gave him a large amount of control over the result.

 

Documentation and reproduction

 

Another practical issue was storing and restoring successful programs. CISP runs as a kind of plugin to my text editor. A common workflow is that, instead of deciding on a new name for each version, I simply overwrite the old one, as this is faster. However, sometimes the improvisation enters a slow decline in which the musical result stops improving, and an earlier, better version is lost. I have currently resolved this by saving a copy of each "successful" program, although further work on categorising results still needs to be done.

 

Managing running programs

 

CISP allows running multiple programs at once on different channels, and because the sync trigger is external, I can easily start several programs in parallel. It is harder, however, to replace only part of a program. I can assign values to a kind of "bus", which means that part of the program can be redefined later. This bus is global, however: if I have multiple instances running, updating the bus affects all running threads.
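A rough Python analogy of the problem (not CISP code): two stream instances read from the same global bus, so redefining a bus value live changes both at once.

```python
# A single, global "bus": named values that running programs read from.
bus = {"interval": 7}

def rising(start_pitch):
    # Each running instance reads the bus on every step, so redefining
    # bus["interval"] changes all instances at once.
    pitch = start_pitch
    while True:
        yield pitch
        pitch += bus["interval"]

a, b = rising(60), rising(36)     # two "channels" running in parallel
print([next(a) for _ in range(3)], [next(b) for _ in range(3)])

bus["interval"] = 2               # redefine part of the "program" while it runs
print([next(a) for _ in range(3)], [next(b) for _ in range(3)])  # both change
```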

 

Live debugging

 

One problem with using software as a musical tool is that it is easy to fall into the trap of constantly changing and complicating the program, to the point where programming takes up more time than actually producing music. This is a bit of a paradox: you often get the best new ideas while making music, so it is very tempting to implement them immediately. However, a careful balance has to be struck between changing the tool and using it.

Joint Research Day 2024

 

Below is a recording of the lecture/performance given during the Joint Research Day, which took place on November 22nd, 2024.
