The software Kim-Auto has been part of our method in the project: a tool developed by programmers at Notam, based on wishes and suggestions from members of this project.

We are happy to share the software, which is a patch created in Max. However, please note that we have very limited time to support the software itself now that the project is finished.

 

The user manual can be found here, and the patch can be sent on request. 

Bringing up Kim-Auto

The initial programming of Kim-Auto started after an enquiry to our first programmer at Notam in September 2017:

 

I suggest a patch that records and archives what I do (each time the patch is on) and randomly throws abstracts from this archive back to me when I play. This means that I’m slowly building an archive each time I play with it. I’m not sure if it should "listen to me" and play back from the archive according to a specific parameter or if it should ignore me completely.

  

Goodbye Intuition (GI) needed a set of basic patches / algorithms that we could improvise with during the project’s first workshops. We asked for a computational improvising partner, rather than computational extensions of our instruments. We wanted Kim-Auto’s improvised musical output to be based on our sounds, and we wanted Kim-Auto to record our material and twist it to create its own. In other words, Kim-Auto holds elements of both the player and the instrument paradigms in Robert Rowe’s classification of interactive music systems (Rowe, 1992).

 

Translation from idea to algorithm is a complex process. The codes of the artistic ideas, of how we choose to articulate them, and of the programming language can be very different. When translating between artistic ideas and programming language, new implications, effects and artistic ideas may occur. We have to consider whether to keep strictly to the artistic idea and groom the algorithm accordingly, or to buy into and breed the side-effects of the programming process. This is also a consideration with ethical implications: should the machine just be the manifestation of the artistic idea, or should we let the programmer and the machine logic contribute equally? In GI we tend to lean towards the second strategy. We try to embrace and investigate what the process of making the algorithm holds. The “errors” and side-effects of the translation itself can add intriguing new features and ideas on how to proceed. A sensitivity towards these “errors” can generate interesting artistic and technical propulsions.

 

The autonomous machine

Even the simplest processes in our algorithm require many steps of complex programming. And for each step in these processes there are numerous parameters that could potentially be controlled by humans – both by the programmer during the design and by us, the players, before or while we play with Kim-Auto. Our aim is that Kim-Auto at some point will act like an autonomous machine in control of its own music-making. However, at this stage in the programming phase, Kim-Auto often exceeds the limits of what we, the humans in the group, find valuable as musical contributions. These limits vary within the project group, and they change and develop for each and every one of us. Contributions from Kim-Auto which we find valuable and musically compelling on one occasion may disturb us on another. At this point we still need to control certain parameters in the algorithm; we are not ready to let Kim-Auto loose. It is still early in the upbringing. Currently, 10 months into the project, Kim-Auto’s aesthetics are strongly directed by the initial enquiry to the programmer, the programmer’s decisions, our need for control, and the sounds it has recorded from us. We are looking towards that moment where we can consider Kim-Auto "grown up", where we can let it loose. Then we can hide the options we have to control it and let it mature on its own, develop its own style.

 

Kim-Auto’s building blocks

Roughly, Kim-Auto’s planned architecture consists of an archiving module, a listening/learning module and a generative module. In short, the archiving module records and stores musical material from the humans it plays with. This module is more or less finished and will be described below. The listening/learning module is not developed yet. However, the idea is that it should gather and store data from its input (either the direct input from the humans it plays with or the archiving module), and it should be capable of using this input to reconstruct sound or to control different parameters in the other modules. If we are capable of implementing an intelligence in a functional way, the learning module should be able to send reprogramming messages to the other modules, making Kim-Auto change its identity, personality or behaviour over time. The generative module is still in the pipeline. We imagine a generative module that can constitute an autonomous machine voice, not depending on a reconfiguration of human sound input. This sound output could be some kind of synthesis, or a mechanical generation of sound using for example solenoid hammers controlled by MIDI. There has already been some prototype research on these elements in the project, using a listening-based sound generator and also routing the archiver to generate MIDI controlling a solenoid hammer.
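
To make the division of labour between the three modules concrete, here is a minimal sketch of the planned signal flow, written in Python purely for illustration. All class names, methods and thresholds are our hypothetical rendering; the actual Kim-Auto is a Max patch, and the listening/learning and generative modules do not exist yet.

```python
import random

class Archiver:
    """Records incoming sound now and then and stores it in an archive
    (a stand-in for the more or less finished archiving module)."""
    def __init__(self, record_probability=0.1):
        self.archive = []
        self.record_probability = record_probability

    def maybe_record(self, audio_in):
        # The real patch decides to record based on its slider settings.
        if random.random() < self.record_probability:
            self.archive.append(audio_in)

class Listener:
    """Planned module: gathers data from the input and could steer,
    or eventually reprogram, the other modules."""
    def analyse(self, audio_in):
        loudness = sum(abs(s) for s in audio_in) / len(audio_in)
        return {"loudness": loudness}

class Generator:
    """Planned module: an autonomous machine voice (synthesis, or MIDI
    to a solenoid hammer) independent of human sound input."""
    def play(self, control):
        print(f"generating sound with control data {control}")

# One pass of the imagined signal flow.
archiver, listener, generator = Archiver(), Listener(), Generator()
audio_in = [random.uniform(-1.0, 1.0) for _ in range(512)]  # stand-in input block
archiver.maybe_record(audio_in)
generator.play(listener.analyse(audio_in))
```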

 

The archiver module

The archiver module of Kim-Auto is programmed in Max. It consists of four building blocks: a file pool section, a section for recording, one for playback and one for playback speed probabilities. The patch constantly monitors its designated sound input through an audio interface. One window of the patch is linked to one file pool and a stereo output. It is possible to have several windows linked to different file pools or to the same one.

 

Both the playback and the record section have sliders to control the length of the recording/playback (RECrandomwaitrange) and the wait time between recordings/playbacks (total REC time relative to RECrandomwaittime). The playback section also has a setting that decides how fast the playback jumps back and forth internally in the file loaded into the playback buffer. Depending on these settings, the patch now and then decides to record, and the recorded file is stored in an archive. The archive is saved to disk, ready to use in later encounters. When the patch has built an archive with one or more files, the playback section can start to choose files for playback according to the settings of the sliders. The sliders may also be randomized using the HighRandom button.

The probability section consists of five sliders. On each slider one can set a probability of getting a playback speed within a given range. For example: if one slider is set to 100% and its values are -1 and 1, there is a 100% chance that the playback speed will be between -1 and 1 (unpitched reverse and unpitched forward). If we set one slider to 80% with values -1 and 1, and another slider to 20% with the value 2 in both fields, there will be a 20% chance of getting a playback speed an octave higher (2) and an 80% chance of getting a speed between unpitched reverse and unpitched forward (-1 and 1).
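
The probability section can thus be read as a weighted choice between speed ranges. The following Python sketch reproduces the worked example above; the slider structure and function names are our hypothetical rendering, not the actual Max implementation:

```python
import random

# Each "slider" pairs a probability weight with a playback-speed range.
# The example from the text: an 80% chance of a speed between -1
# (unpitched reverse) and 1 (unpitched forward), and a 20% chance of
# exactly 2 (an octave up, since both range fields are set to 2).
sliders = [
    {"weight": 80, "low": -1.0, "high": 1.0},
    {"weight": 20, "low": 2.0, "high": 2.0},
]

def choose_playback_speed(sliders):
    """Pick a slider proportionally to its weight, then draw a speed
    uniformly from that slider's range."""
    chosen = random.choices(sliders, weights=[s["weight"] for s in sliders])[0]
    return random.uniform(chosen["low"], chosen["high"])

# Draw a few speeds; roughly one in five should land on 2.0.
for _ in range(5):
    print(round(choose_playback_speed(sliders), 2))
```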

 

Turbulent mirror

The archiver module constantly builds its source of material and outputs this when it improvises. These contributions to the improvisation affect the timing and add elements of disruption. This happens on a micro level: we get non-human entries from a machine, timing without human aesthetic considerations, that will “feel” different from what a human could or would do. We also get effects on a macro level: the archiving module’s output will be a mix of cut-up sound material from different moments in a longer timeline, starting from the moment where the archive is first established (5 minutes ago or 5 months ago) up until real time. The archiver can present material from the current moment, or it can throw back material from earlier encounters or even other performers, depending on how it is set up and the content of the archive. Kim-Auto chews the archived material, cuts it up, reshuffles it, slows it down or speeds it up, changes its direction. The output resonates with the term "turbulent mirror" used in chaos theory, and adapted by David Borgo in Sync or Swarm (2005) to describe the blurred mimesis that can appear between performers improvising together. Kim-Auto’s output is mimetic, yet abstracted. How can we make sure the machine is more than a mimetic response to a human, more autonomous, that Kim-Auto has a machine-like generative voice? What should the relation between the human improviser and the machine’s output be? How abstracted, and in which ways? It should feel like we are improvising together, human and machine, right?
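
To make the cut-up behaviour tangible, here is a minimal Python sketch of one playback step: pick any file from the archive, regardless of its age, cut a fragment out of it, and assign a speed and direction. File names, durations and the speed palette are invented for illustration; the actual processing happens on audio buffers inside the Max patch:

```python
import random

# Hypothetical archive spanning the whole timeline of the project.
archive = ["take_2017-09-12.wav", "take_2018-02-03.wav", "take_just_now.wav"]

def cut_up_step(archive, file_seconds=30.0, fragment_seconds=2.0):
    """Choose a file, a fragment within it, and a playback speed.
    Negative speeds play the fragment in reverse."""
    source = random.choice(archive)  # any moment, 5 minutes or 5 months old
    start = random.uniform(0.0, file_seconds - fragment_seconds)
    speed = random.choice([-1.0, -0.5, 0.5, 1.0, 2.0])
    return source, start, speed

source, start, speed = cut_up_step(archive)
print(f"play {source} from {start:.1f}s at speed {speed}")
```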

 

The displacement/replacement of time, place and person challenges the idea of improvisation as something happening in the moment, and promotes the idea of an “extended moment”. The replacement also forces the human counterpart to respond differently, relating to and taking responsibility for what has been presented earlier on in the long-term cooperation with the machine. We may learn new human improvisational behaviour from this situation, and this will continue to be one of our main focuses in our research and reflections in GI.

 

François Pachet (2006) introduced the term “Interactive Reflexive Music System” (IRMS) to describe music technology that is capable of generating material as a mimetic response, or intelligent mirror, to what is played by a human musician. According to Pachet, the conditions for an IRMS are:

 

  • It must produce an impression of similarity (to the human input)
  • It must conform incrementally to the personality of the user (the human)
  • It must be intimately controllable 

 

In Goodbye Intuition we want a more autonomous machine than the one described by Pachet. Our criteria may look like this:

 

  • It must produce an impression of listening to and communicating with the human improviser
  • It must question and contrast the personality of the user
  • It must not be intimately controllable, in order to increase the element of surprise in the improvisation. 

 

Configure to reconfigure

To create a true interplay partner in a machine, we have to program a machine that in one way or another is capable of changing itself without direct human reprogramming intervention. A machine that can learn or reprogram itself will constitute more of a surprise in the interplay, and therefore push our limits as improvisers further. Andrew Brown (2018), the constructor of the Controlling Interactive Music system (CIM), conducted a user study interviewing performers using CIM, a software-based duet partner with MIDI output. CIM listens and responds to human performance input data. Brown points out the same problem as the one we are trying to resolve: the short-term nature of the machine’s reflexive process, and the lack of long-term change or learning on the machine’s end. This premise may lead to a situation where the performer feels the urge to take control over the machine and take responsibility for the long-term development of the music.

 

However, as noted elsewhere, one challenge of the short-term nature of the reflexive process is to maintain longer-term interest and development. For the performers using CIM, a common approach to managing larger-scale form was to conceive of the performance in sections, and to use the ‘flush’ function, activated by one of the piano pedals, to clear CIM’s memory and set a new direction for the work.

 

As Brown states, a machine that doesn’t change over time constitutes an archetype that is easy to read. The sound of Kim-Auto is now easily interpreted, and we are frustrated by the fact that the machine doesn’t change its behaviour over time – neither during an improvisation nor between improvisations. The exchange of identities, the empathy, the fellow journey towards change and development are missing when we improvise with Kim-Auto. This far into the project, only one of the participants is changing.

 

We have experienced that once the machine has the ability to reconfigure during the improvisation, the dialogue with Kim-Auto gets more – for lack of a better word – "interesting", because it is not so easily read. How Kim-Auto should develop and reconfigure, and the balance between human-made and machine-made decisions in this process, are questions we will investigate further in our project.

 

Literature 

Rowe, Robert. 1992. Interactive Music Systems: Machine Listening and Composing. Cambridge, MA: MIT Press.

 

Brown, Andrew. 2018. "Improvisation with a Reflexive Musical Bot." Digital Creativity 29 (1): 5–18.
http://doi.org/10.1080/14626268.2017.1419979

 

Pachet, François. 2006. "Enhancing Individual Creativity with Interactive Musical Reflexive Systems." In Musical Creativity: Multidisciplinary Research in Theory and Practice. New York: Psychology Press.

Borgo, David. 2005. Sync or Swarm: Improvising Music in a Complex Age. New York: Continuum.

© Goodbye Intuition


contact: igrydeland (at) nmh.no