Live Coding

 

Live coding is the practice of writing and modifying code in real time, usually as part of a live performance. The idea is to write and rewrite the generative program while the performance is going on, and many live coders project their code on a screen as they do so, so that the audience can follow what the live coder is up to. The practice originated in the field of music, but has since extended to visuals, dance and many other forms of art. In some ways I inherited the live coding ethos by accident, from a programming language I have been using for composing electronic music for many years: ChucK (Wang 2006). My initial attraction to ChucK was not its on-the-fly live coding aspect, but rather that it very naturally allowed writing audio programs at the level of the individual audio sample, which can be more difficult in other languages; this is known as strongly-timed programming. Sometimes I have followed the convention of projecting my code in performance-like settings, but I have also employed live coding as a technique to generate materials for other forms of composition.

 

Live coding sound vs patterns


The most popular live coding languages (TidalCycles (McLean 2009), Orca (Orca 2016), Sardine (Forment 2023)) are designed for producing "note" patterns that are to be performed by instruments. In many cases the live coder produces MIDI or something similar and sends these notes to samplers, drum machines, synthesizers or other MIDI/OSC-based software, so most compositions focus on defining rhythm, chords or melodies. Although this has led to a very diverse field of music, I have always felt there is a fundamental limitation inherent in it: it confines the scope to a purely instrumental model of music and does not address the nature of the sounds themselves.

Alex McLean (Yaxu) performing with his own TidalCycles

Ever since Varèse ("I have always conceived of music as a form of organized sound… Musical ideas are, for me, only sonorous ideas." [citation needed]), an alternative approach has been possible in which the division between instrument and composition (notes) is no longer used: one can directly write programs or use electronics to produce sound. Instead of music being built up from notes, the composer concerns herself with directly composing or organizing sound.

 


So when I first wrote my own live coding tools, which are now called CISP, I intentionally did not start with MIDI. The model I settled on has its origins in the work of Xenakis, Koenig, Brün and others [citations], which is referred to as "non-standard" synthesis (Döbereiner 2011): composition is not seen as a process that produces a score, nor as a way of reproducing existing instruments or even analog synthesizers; it is about applying compositional ideas to generating a waveform directly. In CISP the composer creates programs that produce a changing amplitude over time.

 

CISP


CISP is a tool I have developed over many years. The source code and many of the programs I have written with it are available on GitHub. The founding idea of CISP was to create an algorithmic composition tool in the spirit of the AC Toolbox (Berg 2024), borrowing its LISP syntax and some of the same generators, but applied to directly synthesizing sound and optimized for live programming.

 

Its basic building blocks are streams. A stream is a series of values that are generated on demand, and in the case of CISP, infinitely long.

In contrast with, for example, the Patterns library in SuperCollider, it was very important to me that I could write programs without being distracted by details of the syntax. I also wanted to be able to modify parts of the program without having to be too aware of the complete program. This is why CISP has a very uniform structure: every static value can potentially be replaced by another stream. This allows me to grow a program from the inside out: I start with a very simple structure and keep replacing parts of it with more program, as sketched below.
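
For example (a sketch, using the seq and rv generators introduced below), I might start with a fixed sequence:

(seq 60 62 64)

and then replace the static value 62 with another stream:

(seq 60 (rv 55 70) 64)

Now the middle element becomes a fresh random value between 55 and 70 on every pass, while the rest of the program stays untouched.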

 

Initially, CISP only supported generating these streams of amplitudes, but it now also supports output over OSC and MIDI (more on that later).

My computations were mostly hierarchical, but most of the "good results" came out of introducing some kind of feedback of the output back into the generating material of the program.

Below you will find a demonstration of writing a basic program in CISP:

Writing a basic program in CISP. I use the MIDI output, as it makes it a bit simpler to follow the relation between the input program and the output that we hear.

Basic functions in CISP

 

One of the simplest generators is seq, which endlessly cycles through its arguments, so (seq 11 12 13) results in

 

11 12 13 11 12 13 11 12 etc..

 

Another is rv, which stands for random value, thus:

 

(rv 80 84) results in

 

80 81 80 84 83 80 81 81 81 82 etc..

 

Generators can also be nested:

 

(seq (seq 60 62) 64 (seq 75 80 85))

 

This results in the outer seq advancing one element per step, while each nested seq remembers its own position and yields its next value whenever it is visited:

60 64 75 62 64 80 60 64 85 62
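
Different generator types can be mixed in the same way. A small sketch along the same lines:

(seq (rv 60 64) 70)

which alternates a fresh random value between 60 and 64 with the constant 70.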

 

I have many types of "walk": a walk starts from a value and repeatedly adds a step drawn from a second stream, while ch randomly chooses one of its arguments:

 

(walk 60 (ch -5 5))

60 55 50 55 50 45 50 55 60 etc..

 

mup-walk is the multiplicative variant, where each step multiplies the current value instead of adding to it:

(mup-walk 4 (ch 2 0.5))

4 2 1 2 1 2 4 8 16 32 64 32 16 etc..
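
Because any static value can itself be a stream, the step source of a walk can be structured too. A hypothetical combination (assuming ch accepts nested streams, just as seq does):

(walk 60 (ch (seq -5 5) (rv -2 2)))

Each step would then be drawn either from the alternating sequence -5, 5 or from a small random interval, mixing regular and irregular motion.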

 

Timed functions

 

line interpolates towards the successive values of its first stream:

(line (seq 1 5) (st 10))

When evaluated once per second, this yields:

1 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0
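
Since the targets are themselves a stream, line can also glide between generated values. A sketch (assuming line accepts any stream as its targets, as in the example above):

(line (rv 40 80) (st 10))

This would ramp towards a new random target between 40 and 80 in each segment, producing a continuous glissando rather than discrete steps.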

 

 

 
