Piecing it back together by relying on notes from the process. Here’s one:
1:
Re-enact important parts
recognize progress
show pleasures you allow yourself in the practice
finish something
2:
use everyday sounds
resemble the world
use a protagonist
make a point
**
“What have I done?”
In my ongoing creative process, assembling fragments—sounds, words, ideas—offers a path to new artistic territory. It’s not about crafting a seamless whole from the start but gathering what’s at hand and letting it take shape. I’ve written somewhere in a note that creativity often means heading into darkness, piecing together bits without knowing the endpoint (#70). This isn’t a linear puzzle with a fixed solution; it’s a method where the act of combining parts reveals something unexpected. A lyric scribbled late at night, a rhythm tapped out in the studio, a stray thought—they don’t arrive as a plan but as fragments I stitch together, each collision sparking a facet I couldn’t predict.
This approach thrives on constraints. I’ve noted how boundaries, even silly ones like fitting instruments on a tabletop, funnel ideas into focus. Assemblage here isn’t random; it’s deliberate piecing within limits—vocals layered over a percussive pulse, a cut-up text reshuffled into song (#74). The result isn’t polished perfection but a living thing, born from tension.
I’ve found that narrowing down—stripping an instrument to its essence or splicing lyrics with noise—forces new outlets, not less creation (#85). It’s like building with broken stones: the cracks dictate the form, and the form becomes something of its own.
This method avoids the predictable.
I’ve mused over the notion that standard forms can (and should) be tweaked—skipping rests, jumping ahead—to break free, but I’ve never wanted to break the form completely (#140).
Piecing fragments together in this way keeps the work fluid and open to surprise. It’s not about good or bad; it’s about what emerges when parts meet—whether a soundscape from everyday clatter or a story from scattered words (#109). Each assembly is a gamble, a trust that meaning lies in the overlap, not the blueprint. By gathering what floats by and letting it melt together, I uncover artistic facets that feel alive, unscripted, and mine.
Fibosine is the title of my first Python script. It was made to generate sound based on a rule, originally a Fibonacci sequence*, hence the name. That idea was abandoned quickly, replaced by another rule. The rule itself is simple: generate a sine wave from a given frequency, then add overtones to it every X seconds. When adding the 6th overtone, fold down the frequency so it’s played within a given octave, and start the process over.
I later added a drone for bass, and even a soloist part.
* In music, a Fibonacci sequence uses numbers where each is the sum of the two before (1, 1, 2, 3, 5, 8...). It shapes rhythms, note durations, or phrase lengths, creating organic, expanding patterns that mirror nature’s growth and add a subtle, evolving structure.
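To make the rule above concrete, here is a minimal sketch of it in Python, assuming only numpy. The constants (base frequency, step length, fold-down octave) are placeholders of my choosing, not the actual script’s values, and the drone and soloist parts are left out.

```python
import numpy as np

SR = 44100            # sample rate
BASE_FREQ = 110.0     # starting frequency (placeholder)
STEP_SECONDS = 2.0    # "every X seconds" from the rule (placeholder)
OCT_LOW, OCT_HIGH = 110.0, 220.0   # the "given octave" to fold down into

def fold_into_octave(freq, low=OCT_LOW, high=OCT_HIGH):
    """Halve or double the frequency until it falls inside the octave."""
    while freq >= high:
        freq /= 2.0
    while freq < low:
        freq *= 2.0
    return freq

def fibosine_pass(base=BASE_FREQ):
    """One pass of the rule: stack overtones 1 through 6, folding the 6th."""
    t = np.arange(int(SR * STEP_SECONDS)) / SR
    chunks = []
    active = []                        # frequencies sounding so far
    for n in range(1, 7):
        freq = base * n                # the n-th overtone
        if n == 6:                     # at the 6th, fold into the octave
            freq = fold_into_octave(freq)
        active.append(freq)
        # mix everything active so far into the next STEP_SECONDS chunk
        mix = sum(np.sin(2 * np.pi * f * t) for f in active) / len(active)
        chunks.append(mix)
    return np.concatenate(chunks), active[-1]   # audio + folded frequency

audio, next_base = fibosine_pass()   # ~12 s of mono audio
```

Feeding the folded frequency back in as the next base is one reading of “start the process over”; the original rule may just as well restart from the same base.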
From note #15
Someone makes the tool
Someone makes practical items with the tool as it was intended
Someone has fun with the tool, and this sometimes leads to unexpected places
From note #5
Gear, and the rabbit hole of new gear, is a continuing source of inspiration and a key element in the search for new music and outputs.
The difference between coding it yourself and buying a new box is that the modularity of features, happenings and output becomes greater and less predefined, for better or worse.
In my situation, it was the next logical step, because I've never liked to press play on a drum machine, or start up a sequencer, and sit back and relate to it as an instrument that has its own value.
Coding the Fibosine, however, is different, because I’m part of the curation all the way through: there is no nice sound + an unused sequencer in this, no new algorithm + some feature I don’t understand. It’s a composition, really, and in that sense it’s the endgame of hunting for gear. It will take a lot for me to buy new stuff now.
**
At this stage, I build tools—mostly Python scripts—to break free from borrowed frames, GUIs and design decisions made by others.
Coding my own stuff cuts through that. It’s not about pressing play on their machine; it’s about piecing my fragments—loops, logic, needs—into something I can control.
Constraints still apply: there has to be a task to solve, a boundary of some kind. But within that, I assemble freely—no sequencer I don’t grasp, no feature I didn’t shape. The script is loaded with my intent only.
This is the first test of my Fibosine script, with another musician. It’s now in V5, meaning I have been through many stages and many variations. Mainly, these consist of:
- Abandoning the idea of using real-time machine-learning systems to generate musical output. Why? Because I don’t want to replace a human. Also because it requires prompting with audio, and not text, which isn’t really something I can solve. I am a musician, not a computer engineer.
- Simplifying the output. Why? Because I know now that I need to carve out spaces in which I, and my fellow musician, can connect the dots and contribute. In V3 I was building a complete piece, an algorithmic composition, that needed no additional input.
- Developing the code base a lot, one step forward, one step backward, using machine-learning systems for efficiency and for code output.
This particular version has flaws and drawbacks, but we manage to make something with it. As with any design, I don’t want to showcase what it is not. I want the result to be good, or not good. The key takeaway in this version is that it works, and it allows for something to be created.
Ensemble³ is a project where two musicians and a Python script collaborate in real-time, creating a dynamic system that listens, interprets, and responds to live musical input. This isn’t just about adding AI to the creative process—it’s about building an equal third voice in the ensemble, one capable of agency, unpredictability, and musical sensitivity.
At its core, and at the time of writing this, Ensemble³ uses three interconnected Python scripts: one generates an evolving soundscape sent as MIDI to a receiving synth, another handles rhythmic analysis and output, and a master script manages both, integrating AI for interpretation and real-time control. Live polyphonic audio is analyzed, extracting pitch, rhythm, amplitude, and envelope data, which the AI processes to generate meaningful musical responses.
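As a rough illustration of one link in that chain, the sketch below listens to a block of live audio, extracts an RMS amplitude and a crude FFT-peak pitch estimate, and answers with a MIDI note. It assumes the sounddevice and mido libraries; the analysis and the input-to-response mapping are deliberate simplifications, standing in for the project’s actual rhythmic analysis and AI layer.

```python
import numpy as np
import sounddevice as sd
import mido

SR = 44100
BLOCK = 4096   # samples per analysis window (~93 ms)

def rough_pitch(block, sr=SR):
    """Loudest FFT bin as a frequency: a crude stand-in for pitch tracking."""
    windowed = block * np.hanning(len(block))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(block), 1.0 / sr)
    return freqs[int(np.argmax(spectrum))]

def freq_to_midi(freq):
    """Convert Hz to the nearest MIDI note number, clamped to 0-127."""
    midi = int(round(69 + 12 * np.log2(max(freq, 1e-6) / 440.0)))
    return max(0, min(127, midi))

outport = mido.open_output()   # default MIDI output port

with sd.InputStream(samplerate=SR, channels=1, blocksize=BLOCK) as stream:
    while True:
        block, _ = stream.read(BLOCK)
        mono = block[:, 0]
        amplitude = float(np.sqrt(np.mean(mono ** 2)))   # RMS envelope
        if amplitude < 0.01:
            continue                                     # treat as silence
        note = freq_to_midi(rough_pitch(mono))
        velocity = max(1, min(127, int(amplitude * 400)))
        outport.send(mido.Message('note_on', note=note, velocity=velocity))
```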
This project isn't merely about novelty—it addresses critical questions about the relationship between human musicians and intelligent systems. How can an AI respond with emotional nuance? How can it maintain rhythmic and harmonic consistency without becoming repetitive or mechanical? Ensemble³ pushes these boundaries by prioritizing expressiveness, adaptability, and musicality in its AI responses.
Within the framework of EU cultural policy, Ensemble³ stands as an exemplar of cross-disciplinary and cross-border collaboration. Programs such as Creative Europe and Horizon Europe emphasize innovation, technological integration, and cultural resilience. AI is identified as a key focus area for future cultural and creative industries, and Ensemble³ directly addresses this by trying to use AI as a meaningful collaborator rather than a background tool.
Moreover, the project reflects broader cultural shifts. As live performance spaces increasingly adopt hybrid physical-digital formats, Ensemble³ offers a glimpse into how musicians and intelligent systems can share the creative space without diminishing human artistry. It promotes an equitable relationship between human intuition and machine intelligence, creating something neither could achieve independently.
In essence, Ensemble³ isn't just about technology—it’s about trying to build a new kind of creative partnership with the technology currently available. At present it’s a wireframe model for future collaborations where musicians and AI co-create, at some points crossing into traditions of algorithmic composition and composition through scripts that are self-sufficient in generating musical output.
I am still piecing my practice together, but I am more certain that I will be able to do so than I was six months ago. It’s not really significant to the research process as such, but I mention it because the certainty comes from acknowledging that information does not disappear, and from the realization that leaving out and holding back, as often noted by Karl Seglem, a long-time collaborator and friend of mine, do not mean abandoning prior ideas or ceasing to do something.
What it means is that the artistic practice, and the visible results from such a practice, form a palimpsest, where all prior knowledge remains visible in new creations. It’s there, even though it’s not there. It gives shape, even if it has been removed. In a way it allows for interplay with a historic, or archival, self, either when composing or when performing.
Ensemble³ is a project where two musicians and a Python script collaborate, in real time, to perform a coherent piece of music, partly improvised and partly composed. The codebase produces generative audio (improvisation) based on a set of programmed rules (composition). The result differs from one performance to the next, but stays within the framework of a pre-defined tonal and rhythmic language that makes it possible to respond musically. The musicians’ task is to improvise over the machine-generated sound, in addition to performing composed passages and retaining overall control, in order to create a meaningful sonic result.
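A minimal sketch of what “improvisation within a composed framework” can mean in code: the choices are random (improvisation), but only ever drawn from a fixed scale and a fixed set of durations (composition). The scale and durations below are placeholders, not the project’s actual material.

```python
import random

SCALE = [62, 64, 65, 67, 69, 72]    # a D dorian fragment, as MIDI notes
DURATIONS = [0.25, 0.5, 0.5, 1.0]   # in beats; repeats weight the choice

def next_event(prev_note=None):
    """Pick a note and a duration; neighbouring notes are favoured over leaps."""
    if prev_note in SCALE:
        i = SCALE.index(prev_note)
        candidates = SCALE[max(0, i - 1):i + 2]   # previous, same, next
    else:
        candidates = SCALE
    return random.choice(candidates), random.choice(DURATIONS)

phrase, note = [], None
for _ in range(8):
    note, dur = next_event(note)
    phrase.append((note, dur))
print(phrase)   # different every run, but always inside the framework
```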
Initially, the project was about adding AI to the creative process as an active component, but it has changed and is now about using available (AI) tools to build a third voice: a voice with a stubborn will, a certain degree of unpredictability, and musical autonomy. Since the interesting part of a collaboration between human and machine ultimately revolves around how each party’s performing interest in an ongoing musical process can be maintained, and how this can lead to an outcome of satisfactory artistic value, programmed algorithms are a sufficiently complex tool. This also means that I avoid a number of ethical issues.
Ensemble³ will have a permanent participant (Jonas Sjøvaag) and a participant with local roots, in addition to the programmed side. There is thus also a conceptual side, where the goal is to be able to perform the works with different musicians and to allow them to influence the artistic result.