Explorations in the generative control of notes and rhythms, scales and modes — from human to algorithmic gesture
Composer Giuseppe Tamborrino uses computer tools not just for timbre, but also for the production of events in deferred time or in real time. This idea forms the basis for generative music, examples of which can be found throughout music history, such as Mozart’s Musical Dice Game.
In his research, Tamborrino carries out this process in various ways and with different software, but the goal is always the same: the generation of instructions for instruments — which he calls an “Electronic score”.
Here’s an example from one of his generative scores:
As part of the process, Tamborrino has always designed in a certain degree of variability, using stochastic or totally random procedures to speed up the process of abstraction and improvisation; once launched, however, these processes are invariant. Often, this way of working resulted in small sections that he wanted to cut, correct, or improve.
This motivated him to use Kyma to pursue a new research direction — called post-controlled generative events — with the aim of being able to correct and manage micro events.
This is a three-step process (sketched below):
A generational setting phase (pre-time)
A performance phase, recording all values in the Timeline (real time)
A post-editing phase of the automatic “generative” events (after time)
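A minimal conceptual sketch of the three phases, written in Python purely for illustration (Tamborrino works in Kyma, and every name, range, and value below is invented):

```python
import random

# Conceptual sketch only: Tamborrino's implementation is built in Kyma,
# and every name, range, and value here is invented for illustration.

# 1) Pre-time: the generative settings.
settings = {
    "scale": [0, 2, 3, 5, 7, 8, 10],   # pitch-class offsets from the root
    "root": 60,                        # MIDI note number of the root
    "duration_range": (0.1, 1.5),      # seconds
    "num_events": 32,
}

# 2) Real time: generate the events and record every value, as a Timeline would.
def generate_events(cfg, seed=None):
    rng = random.Random(seed)
    events, t = [], 0.0
    for _ in range(cfg["num_events"]):
        dur = rng.uniform(*cfg["duration_range"])
        pitch = cfg["root"] + rng.choice(cfg["scale"]) + 12 * rng.randint(0, 2)
        events.append({"time": round(t, 3), "pitch": pitch, "dur": round(dur, 3)})
        t += dur
    return events

events = generate_events(settings, seed=42)

# 3) After time: because every value was recorded, individual micro-events can
#    be cut or corrected without re-running the whole generative process.
events = [e for e in events if e["dur"] > 0.15]   # cut the shortest events
events[0]["pitch"] = settings["root"]             # force the opening note
```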
Tamborrino shared some of the results of his research on the Kyma Discord, and he invites others to experiment with his approach and to engage in an online discussion of these ideas.
Motion Parallels, a collaboration between composer James Rouvelle and visual artist Lili Maya, was developed earlier this year in connection with a New York Arts Program artist’s residency in NYC.
For the live performance, Rouvelle compressed and processed the output of a Buchla Easel through a Kyma Timeline; on a second layer, the Easel was sampled, granulated, and spectralized, then played via a MIDI keyboard. The third layer of Sounds was entirely synthesized in Kyma and played via Kyma Control on the iPad.
For their performative works, Rouvelle and Maya develop video and audio structures that they improvise through. Both the imagery and the sound incorporate generative/interactive systems, so the generative elements can respond to the performers, and vice versa.
Rouvelle adds, “I work with Kyma every day, and I love it!”
Anssi Laiho is the sound designer and performer for Laboratorio — a concept developed by choreographer Milla Virtanen and video artist Leevi Lehtinen as a collection of “experiments” that can be viewed either as modules of the same piece or as independent pieces of art, each with its own theme. The first performance of Laboratorio took place in November 2021 in Kuopio, Finland.
Laboratorio Module 24, featuring Anssi performing a musical saw with live Kyma processing, was performed in the Armastuse hall at Aparaaditehas in Tartu, Estonia, and is dedicated to the theme of identity and inspiration.
Anssi’s hardware setup, both in the studio and live on stage, consists of a Paca connected to a Metric Halo MIO2882 interface via bidirectional ADAT in a 4U mobile rack. Laiho has used this system for 10 years and finds it intuitive, because Metric Halo’s MIOconsole mixer interface gives him the opportunity to route audio between Kyma, the analog domain, and the computer in every imaginable way. When creating content as a sound designer, he often tries things out in Kyma in real time by opening a Kyma Sound with audio input and listening to it on the spot. If it sounds good, he can route it back to his computer via MIOconsole and record it for later use.
His live setup for Laboratorio Module 24 is based on the same system. The aim was to keep the physical footprint as small as possible, because he was sharing the stage with two dancers. On stage, he had a fader controller for the MIOconsole (to control feedback from the microphones), an iPad running Kyma Control displaying performance instructions, a custom-made Raspberry Pi Wi-Fi footswitch sending OSC messages to Kyma, and a musical saw.
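For readers curious about the footswitch link: a Raspberry Pi can send OSC to Kyma with only a few lines of code. The sketch below is not Laiho’s actual script; the GPIO pin, IP address, port, and OSC address are all placeholder assumptions.

```python
# Hypothetical sketch of a Raspberry Pi footswitch sending OSC to Kyma.
# Requires the gpiozero and python-osc packages; the pin number, target IP,
# port, and OSC address below are assumptions, not Laiho's actual values.
from gpiozero import Button
from pythonosc.udp_client import SimpleUDPClient
from signal import pause

KYMA_IP = "192.168.1.50"   # address of the Kyma host on the Wi-Fi network (assumed)
KYMA_PORT = 8000           # assumed OSC port

client = SimpleUDPClient(KYMA_IP, KYMA_PORT)
footswitch = Button(17)    # GPIO pin wired to the switch (assumed)

def pressed():
    client.send_message("/footswitch/1", 1.0)   # hypothetical OSC address

def released():
    client.send_message("/footswitch/1", 0.0)

footswitch.when_pressed = pressed
footswitch.when_released = released
pause()   # keep the script running, waiting for switch events
```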
The instrument used in the performance is a Finnish Pikaterä Speliplari musical saw (speliplari means ‘play blade’), designed by the Finnish musician Aarto Viljamaa. The plaintive sound of the saw is captured by two microphones and routed to Kyma, where it is processed by a Kyma Timeline: a custom-made piezo contact microphone and preamp is used to create percussive and noise elements for the piece, and a small-diaphragm shotgun microphone is employed for the softer harmonic material.
The way Anssi works with live electronics is by recording single notes or note patterns with multiple Kyma MemoryWriter Sounds. These recordings are then resampled in real time or kept for later use in a Kyma Timeline. He likes to think of this as a way of reintroducing a motive of the piece, as is done in classical music composition. It also breaks the inherent tendency to keep adding layers when using looping samplers, which, in Anssi’s opinion, often becomes a burden for the listener at some point.
The Kyma Sounds used in the performance Timeline are focused on capturing and resampling the sound played on the saw, and on controlling the parameters of these Sounds live: through Timeline automation, presets, or algorithmic changes programmed in Capytalk.
Laiho’s starting point for the design was to create random harmonies and arpeggiations that could then be used as accompaniment for an improvised melody. For this, he used the Live Looper from the Kyma Sound Library and added a Capytalk expression to its Rate parameter that selects a new frequency from a predefined set of frequencies (intervals relative to a predefined starting note) to create modal harmony. He also created a quadraphonic version of the Looper and controlled the Angle parameter of each loop with a controlled-random Capytalk expression that makes each individual note travel around the space.
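The idea behind that Rate expression can be sketched outside of Kyma. The Python fragment below is only an illustration of the technique (random selection from a fixed interval set, plus a controlled-random angle drift); the actual interval selection and Capytalk code used in the performance are not shown here.

```python
import random

# Illustrative sketch, not the Capytalk expression itself: choose a playback
# rate from a fixed set of intervals relative to a starting note, so that every
# re-triggered loop lands on a note of the chosen mode. The interval set here
# is an assumption; the performance used its own predefined selection.
SEMITONE = 2 ** (1 / 12)
MODE_INTERVALS = [0, 2, 3, 5, 7, 9, 10, 12]   # semitones above the starting note

def next_rate(rng=random):
    """Playback-rate multiplier for the looper (1.0 = original pitch)."""
    return SEMITONE ** rng.choice(MODE_INTERVALS)

# Controlled-random spatialization: drift each loop's angle a little per step
# instead of jumping to an entirely new position.
def next_angle(current, max_step=0.15, rng=random):
    return (current + rng.uniform(-max_step, max_step)) % 1.0
```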
Another Sound used in the performance is one he created long ago, named Retrosampler. This Sound captures only a very short sample of the live sound and creates 4 replicated loops, each less than 1 second long. Each replicated sample has its own parameters, which he controls with presets. This, together with the sine-wave quality of the saw, creates a result that resembles a beeping analog sine-wave synthesizer. The Sound is replicated four times, so pressing “capture” 4 times gives him up to 16 samples to play.
The Retrosampler Sound is also quadraphonic, and its parameters are controlled by presets. His favorite preset is called “Line Busy”, which is exactly what it sounds like. [Editor’s note: the question is which busy signal?]
For the noise and percussion parts of the performance, he used a Sound called LiveCyclicGrainSampler, a recreation of an example from Jeffrey Stolet’s book Kyma and the SumOfSines Disco Club. This Sound consists of a live-looping MemoryWriter serving as a source for granular reverb, plus 5 samples with individual Angle and Rate parameter settings. These parameters were then controlled with Timeline automation to create variation in the patterns they produce.
Anssi also used his two favorite reverbs in the live processing: the NeverEngine Labs Stereo Verb, and Johannes Regnier’s Dattorro Plate.
Kyma is also an essential part of Laiho’s sound design work in the studio. One of the tracks in the performance, “Experiment 0420”, is his “laboratory experiment” of Kyma processing the sound of an aluminum heat sink from an Intel i5 3570K CPU played with a guitar pick. Another scene of the performance contains a song called “Tesseract Song”, composed of an erratic piano chord progression and synthetic noise looped in Kyma, accompanied by Anssi singing through a Kyma harmonizer.
The sound design for the 50-minute performance consists of 11-12 minutes of live electronics, music composed in the studio, and “Spring” by Antonio Vivaldi. The overall design goal was to create a kaleidoscopic experience where the audience is taken to new places by surprising turns of events.
Why is it that a Picasso painting can be widely known and understood by everyone, while sound abstractions are still considered academic and incomprehensible?
Tamborrino’s answer to that question, Acrylic Sounds, was born in January 2024 in Laterza, in the province of Taranto (Puglia, Italy), in the garage of the professor and composer of electro-acoustic music.
Between 2019 and 2021 (during the COVID-19 period), Tamborrino created a series of abstract paintings using some of the same algorithms he has been using to generate Csound scores for the past 10 years. His idea was to bring his students closer to the concept of sound abstraction by applying the same principles of abstraction to paintings as to musical scores.
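As a rough illustration of that kind of score generator (not Tamborrino’s own algorithm), a few lines of Python can emit a stochastic Csound score; the instrument number, p-field layout, and parameter ranges below are assumptions.

```python
import random

# Illustrative sketch of stochastic Csound score generation (not Tamborrino's
# algorithm). Assumes a simple instrument 1 whose p-fields are: start, dur, amp, freq.
def generate_score(num_events=40, seed=None):
    rng = random.Random(seed)
    lines, start = [], 0.0
    for _ in range(num_events):
        dur = rng.choice([0.125, 0.25, 0.5, 1.0])
        amp = round(rng.uniform(0.05, 0.4), 3)
        freq = round(rng.uniform(80.0, 1200.0), 2)
        lines.append(f"i1 {start:.3f} {dur} {amp} {freq}")
        start += rng.expovariate(2.0)          # stochastic inter-onset time
    return "\n".join(lines) + "\ne\n"          # 'e' ends a Csound score

with open("generated.sco", "w") as f:
    f.write(generate_score(seed=7))
```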
Tamborrino does not call himself a painter and has always argued that painting is painting, sound is sound, and sculpture is sculpture; each has a different role, but sometimes they use the same concepts.
While waiting for his score-generation algorithms to compile, Tamborrino engaged in creative outbursts with a sponge and a brush, drawing lines or experimenting with random color transformations obtained by sponging, likening the process to techniques of sound morphing.
Taking these 60 semi-random paintings as inspiration, he then realized them as Sounds in Kyma. For each painting, he created a formal pre-design and customized Smalltalk scripts to get closer to the meaning of the picture under analysis.
For the sonification of the painting “La Sinusoide”, he used a Capytalk expression that lets him control the formants of a filter, and the formant index, with the Dice tool of the Kyma Virtual Control Surface (VCS), generating several layers gradually and quickly with the Capytalk “smooth” function.
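The dice-then-smooth idea can be pictured as rolling a new random target and then gliding toward it a little on every control update. The sketch below is a generic Python rendering of that concept, with an assumed smoothing coefficient and formant range, not the Capytalk expression itself.

```python
import random

# Conceptual sketch of "dice a new target, then glide to it smoothly".
# The smoothing coefficient and formant range are assumptions; in Kyma this
# is done with the Dice widget on the VCS plus Capytalk smoothing.
class SmoothedRandom:
    def __init__(self, low, high, coeff=0.01):
        self.low, self.high, self.coeff = low, high, coeff
        self.target = self.value = random.uniform(low, high)

    def roll(self):
        """Equivalent of rolling the Dice: choose a new random target."""
        self.target = random.uniform(self.low, self.high)

    def tick(self):
        """Per-control-cycle update: move a fraction of the way to the target."""
        self.value += self.coeff * (self.target - self.value)
        return self.value

formant_hz = SmoothedRandom(200.0, 3000.0)   # hypothetical formant frequency range
```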
He also used the Kyma RE Analysis Tool for the generation of a Resonator-Exciter filter, creating transformations of the classic sine wave with the human voice.
He used a real melismatic choir, because the painting represents a talking machine…
For “Le Radio”, Tamborrino tried to simulate the search for the right radio station, transforming songs into one another. To do this, he reiterated sounds and music drawn from a group of songs in the same family, using a simple ring modulator to suggest AM radio, and used a Capytalk expression to emulate the gradual spectral transformation of switching radio stations, combined with random gestures to simulate the classic noise between the station and the music. Finally, he used granular synthesis to create glitchy rhythmic transitions and figurations, and combined the selected abstract material as multiple tracks of a Timeline.
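Ring modulation itself is simple enough to show directly. This NumPy fragment is a generic illustration of the effect (the carrier frequency is an arbitrary choice), not the Kyma Sound used in the piece:

```python
import numpy as np

# Generic ring-modulation sketch: multiply the input by a sine carrier,
# producing the sum and difference frequencies that give the "AM radio" color.
# The 455 Hz carrier is an arbitrary choice for illustration.
def ring_modulate(signal, sample_rate=44100, carrier_hz=455.0):
    t = np.arange(len(signal)) / sample_rate
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    return signal * carrier

# Example: ring-modulate one second of a 220 Hz test tone.
sr = 44100
tone = np.sin(2 * np.pi * 220.0 * np.arange(sr) / sr)
modulated = ring_modulate(tone, sr)
```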
“The Mask of the Seagulls” was inspired by the observation of an elderly man annoyed by his anti-COVID mask, and of some seagulls that repeatedly circled around him, squawking and crying out as if nature were making fun of him.
To express the annoyance of the man, Tamborrino simply recorded his own breathing through a mask.
For the creation of this synesthesia, Tamborrino emulated the behavior of cheerful and playful seagulls with a script that manages the density, frequency, and duration of the grains of granular synthesis in exponential mode, with decelerations, accelerations, and friction functions for physics-based control and swarming.
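One way to picture such a control script is as a grain scheduler whose density ramps exponentially over a phrase. The sketch below is a generic illustration with invented values; the real script drives Kyma’s granular synthesis parameters rather than returning a list.

```python
import random

# Illustrative grain scheduler, not Tamborrino's script: grain density ramps
# exponentially across a phrase, so the "swarm" accelerates or decelerates.
# All ranges and the phrase length are assumptions for illustration only.
def schedule_grains(phrase_dur=8.0, start_density=2.0, end_density=40.0, seed=None):
    rng = random.Random(seed)
    grains, t = [], 0.0
    while t < phrase_dur:
        # Exponential interpolation of grains-per-second across the phrase.
        density = start_density * (end_density / start_density) ** (t / phrase_dur)
        grains.append({
            "onset": round(t, 3),
            "freq": rng.uniform(800.0, 2500.0),   # gull-like register
            "dur": rng.uniform(0.02, 0.12),       # grain length in seconds
        })
        t += 1.0 / density                        # shorter gaps as density rises
    return grains
```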
All of these layers were then assembled in a multi-track Timeline.
Tamborrino plans to publish the work as a book of paintings with QR codes for listening and will exhibit the work as paintings paired with performances on an acousmonium.