Generative sound design in Kyma

By augmenting traditional sample-based sound design with generative models, you gain additional parameters that you can perform to picture or control with a data stream from a model world — like a game engine. Once you have a parameterized model, you can generate completely imaginary spaces populated by hallucinatory sound-generating objects and creatures.

Thanks to Mark Ali and Matt Jefferies for capturing and editing the lecture on Generative Sound Design in Kyma presented last November by Carla Scaletti for Charlie Norton’s students at the University of West London.

Sons of Chipotle at Saariaho Festival

Anssi Karttunen & John Paul Jones posing in front of a prickly pear cactus in New Mexico
Sons of Chipotle: Anssi Karttunen & John Paul Jones

John Paul Jones and Anssi Karttunen are the Sons of Chipotle, described as “what happens when two free spirits join forces for an unbridled sonic expedition…” Known for their curiosity and willingness to learn and explore, the Sons of Chipotle make boundaries and preconceptions disappear when they improvise.

Jones and Karttunen were both friends of Kaija Saariaho, and the admiration was mutual. Karttunen had been involved with all of Saariaho’s cello works since they had both moved to Paris in the 1980s.

In memory of Kaija, the Sons of Chipotle will perform with piano, cello, a wide variety of controllers, and live Kyma processing on Saturday 15 March 2025 at the Muziekgebouw in Amsterdam as part of the first Saariaho Festival (13-16 March 2025).

Metropolis in Geneva

Laurel and Hardy climbing an organ pipe
Poster for the 11th edition of The Organ Makes its Cinema festival in Geneva

Organist/composer Franz Danksagmüller will perform a live Ciné-concert combining Kyma electronics with a 1937 Wurlitzer organ to accompany Fritz Lang’s classic silent film Metropolis at the Collège Claparède in Geneva on 27 March at 8 pm.

The day before the concert, on 26 March 2025 at 2 pm, Danksagmüller will present a masterclass on organ and electronics improvisation for live cinema.

The Ciné-concert and masterclass are part of the eleventh edition of the “L’orgue fait son cinéma” festival, which promises a rich and varied program: great films, great organists, regional roots, international influences, a program for young people and families, original combinations, great repertoire, and contemporary creation. There is something for everyone at this festival centered on the 1937 Wurlitzer organ of the Collège Claparède (a middle school named for the Swiss neurologist and child psychologist Édouard Claparède).

L’orgue fait son cinéma’s fusion of tradition and innovation is perfectly reflected by the school’s motto,

Intelligence is the ability to solve new problems

At the Collège Claparède, chemin de Fossard 61, 1231 Conches, Genève

Public Art Biennial in Abu Dhabi

Kyma artist Hasan Hujairi was invited to join an event at Christopher Joshua Benton’s installation for the inaugural Public Art Biennial in Abu Dhabi: Where Lies My Carpet Is Thy Home (2024). Participating artists included Safeya Alblooshi, Hasan Hujairi, Manic Mundane, and Espervene.

Installed on land gifted to the carpet sellers of Abu Dhabi by city founder Sheikh Zayed bin Sultan Al Nahyan, Benton’s gigantic astroturf kilim is so large that it is best viewed from the air. It doubles as a public park and is surrounded by the carpet sellers’ shops and upstairs residences.

Photo from Architectural Digest article, courtesy of Abu Dhabi Culture and Tourism

Sound Unbound by Halo Halo Experiment, presented on 11 January 2025 at Christopher Joshua Benton’s installation, was described as four 30-minute sonic explorations that transcend the boundaries of sound and redefine the concept of “home”.

Hasan Hujairi’s setup for ‘Sound Unbound’ at Christopher Joshua Benton’s public art installation in Abu Dhabi: Where Lies My Carpet Is Thy Home (2024)

Hujairi writes,

“I wanted to challenge myself by making this the first time I performed live with Kyma. I created a very simple Kyma Multigrid, and since I love the possibilities of the TimeIndex, I had one sound object cycling through a folder of about 150 short sound samples from analog electronic equipment controlled from the Wacom tablet. I loved how sometimes it would sound like granular synthesis, sometimes it would sound like short loops. I could change the playback rate, giving many options to play with sound on the fly and make decisions while performing. I was assigned 30 minutes to perform and by using the Multigrid, I was able to perform at different tempos and with different tools.”

To begin the performance, Hujairi used the macOS text-to-speech function to invite the audience to come closer and listen closely.

Umano Post Umano

On 19 December 2024, composer Agostino Di Scipio used live Kyma signal processing in Umano Post Umano, an ambitious intermedia work premiered at the Nuova Consonanza festival.

Umano Post Umano is, in some sense, a review of Di Scipio’s artistic work, exposing the full range of performance practices he has explored over the last 25 years (since he first introduced Audible Ecosystemics). Umano Post Umano features music, electroacoustic environments and digital audio processing by Agostino Di Scipio, video projections and stage design by Matias Guerra, texts collected from job advertisements, legal and financial advertising and from Hesiod’s Theogony, as well as multiple performers on acoustic instruments, each of whom also operates a live camera.

The work has its origins in a 2008 commission by the Società dei Concerti “Barattelli” in L’Aquila, but the project had to be suspended as a result of the devastating L’Aquila earthquake of April 2009. Revived and rethought for the Nuova Consonanza 2024 festival and the Pelanda space, the chamber music theater project evolved and emerged with a new title and concept: Umano Post Umano.

Central to the work is the concept of sound as the interface. Sound events are never of solely human origin but are always hybrid and distributed according to ecosystemic dynamics in which each component is in contact with every other. Through the experience of sound, the piece reflects today’s aggravated conditions of precarious labor, an expression of a conception of the human being as a resource managed by mechanistic and algorithmic agents.

Each aspect of the performance – according to a reduced economy of instrumental, electroacoustic, computer and telematic means – evolves in relative autonomy but as a function of the specific performance space – “in real time” but above all “in real space”. What emerges is a coherent ensemble that is nevertheless precarious and subject to disorientation, drifts and possible failures. The overall whole emerges from the material interdependence of performers, shared spaces and creative appropriation of means.

Microphones and loudspeakers are irregularly scattered throughout the space and light, semi-transparent sheets cut the space irregularly. In the soft light, there are workstations with laptops, small percussion instruments and various accessories – as well as musical instruments: flutes, cello, bass clarinet, timpani (all equipped with electronic prostheses that “increase” but also “decrease” and over-determine the instrumental gestures). Each workstation is an autonomous instrumental-electroacoustic-computer chain, not subject to centralized direction or management.

The instruments themselves are mechanical components of a system, from whose functioning they are not independent. The performers have their own service lights (headlamps) and low-resolution webcams through which they watch (monitor?) each other. Lights and images pass through the sheets, projecting onto the walls, onto the performers, and onto the listeners. Some operators wander around in the dim light providing “emergency” technical maintenance to the various workstations.

In the economy of means thus designed, the occurrence and articulation of sounds remain tied to the here-and-now of performative circumstances and contingencies: is it possible that, from silence, from background noise, from the acoustic residues of the place, from the mere co-presence of humans and machines, frictions and contacts form, that signals and meaning arise? The sounds take shape from distributed relations, from uncertain and open co- and inter-dependencies, heard at times as atmospheric textures and, at other times, as clear transient gestures.

Music is made first of all by listening: listening is an active part of the performative dynamic. One acts and is constantly acted upon; one is bound by what one intends to bind: here “music” is the tension of this being more and less than oneself. It is a performative tension in unstable equilibrium, for whose (tragic) precariousness we must be grateful. In sound we listen to (welcome) the conditions of the happening of events and the conditions of our welcoming them (listening).

On 8 November 2024, Di Scipio performed in Essen, Germany, with his former student Dario Sanfilippo, presenting their joint project Machine Milieu as part of the Philharmonie Essen’s NOW! Laissez vibrer Late Night Concert series. Machine Milieu is a live-electronics project in which the two performers’ computer music systems are networked with each other. The idea is to view the human performer, the equipment, and the performance space as three sites connected to each other through the medium of sound. According to Di Scipio and Sanfilippo, Machine Milieu can “develop an integral and potentially autonomous performance ecosystem based exclusively on location-specific acoustic information.” For this performance, the acoustic details were provided by the RWE Pavilion in Essen.


Di Scipio is the chair of the Electronic Music department at the Conservatory of L’Aquila and chief coordinator of its doctoral board (supervising all PhDs in “artistic research in music”).

In 2020, he completed his PhD at the University of Paris VIII with the dissertation What is “living” in live electronics performance? An ecosystemic perspective on sound art and music creative practices, in which he explores the question of “liveness” in the performance of live electronic music, particularly in view of the fact that any performance approach today relies on a large set of heterogeneous technological infrastructures. He proposes a “systemic” view of liveness and describes the operational details of his own artistic research endeavors. Finally, moving from the “living” character of electronic performance to the “lived” experience of sound, he poses the question of an “ecosystem consciousness” in the cognitive process of listening, particularly as it relates to compositional and sound art practices based on a strict economy of means.

Kyma and the space of computable sound @ ADC24

“…each instrument, each tool… implies an imaginable and explorable universe” — Jacques Attali

To Symbolic Sound co-founder, Carla Scaletti, every tool — from a user interface to a programming language, to an LLM — is a “map” to some underlying functionality. How do the design choices we make affect what people can imagine creating with those maps? How can we begin to navigate an abstract, completely unknown (and potentially infinite) space — like the space of all computable sound? How (and why) is the Kyma Sound graph radically different from other “visual” programming languages for sound and music?

Find out by watching her keynote address, presented at the 2024 Audio Developer Conference (ADC 24).

Violins abducted by aliens

They come in peace!

Anssi Laiho’s Teknofobia Ensemble is a live-electronics piece that combines installation and concert forms: an installation, because its sound is generated by machines; a concert piece, because it has a time-dependent structure and musical directions for performers. The premiere was 13 November 2024 at Valvesali in Oulu, Finland.

Laiho views technophobia, the fear of new technological advancements, as a subcategory of xenophobia, the fear of the unknown or of outsiders. His goal was to present both of these phobias in an absurd setting.

The composer writes that “the basic concept of technophobia — that ‘machines will replace us and make us irrelevant’— is particularly relevant today, as programs using artificial intelligence are becoming mainstream and are widely used across many industries.”

Teknofobia Ensemble poses the question: What if there were a planet inhabited by a mechanical species, and these machines came to Earth and tried to communicate with us via music? What would the music sound like, and would they first try to learn and imitate our culture in order to communicate with us?

Laiho’s aim was to reproduce the live-electronics environment he would normally work in, but to replace the human musicians with robots — not androids or simulants but “mechanical musicians”.

He asked himself, “What would it mean for my music and creative process if this basic assumption were to become true? As a composer living in the 2020s, do I still need musicians to perform my compositions? Wouldn’t it be easier to work with machines that always fulfill my requests? Can a mechanical musician interpret a musical piece on an emotional level, as a human being does, or does it simply apply virtuosity to the technical execution of the task?”

He then set out to prove himself wrong!

Teknofobia Ensemble consists of five prepared violins, each equipped with a Raspberry Pi that controls various types of electronic motors (solenoids, DC motors, stepper motors, and servos) through a Python program. This program converts OSC commands received from Kyma into PWM signals on the Raspberry Pi pins, which are connected to motor drivers.
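In outline, that OSC-to-PWM conversion might look like the sketch below. This is a hypothetical illustration: the address scheme, motor names, and scaling are assumptions, not Laiho’s actual Python program, and the duty cycle is only recorded here rather than driving a Raspberry Pi GPIO pin.

```python
# Hypothetical sketch of an OSC-to-PWM mapping like the one described above.
# Address names, motor channels, and scaling are assumptions for illustration.

def osc_to_duty_cycle(value, out_min=0.0, out_max=1.0):
    """Clamp an incoming OSC float (expected 0.0-1.0) and scale it
    to a PWM duty cycle in [out_min, out_max]."""
    value = max(0.0, min(1.0, value))
    return out_min + value * (out_max - out_min)

class MotorDispatcher:
    """Routes OSC-style (address, value) pairs to per-motor PWM settings.
    On a Raspberry Pi the stored duty cycle would be written to a PWM pin
    wired to a motor driver; here we just record it, so the mapping can
    be tested off-device."""
    def __init__(self, motors):
        self.duty = {name: 0.0 for name in motors}  # current duty cycle per motor

    def handle(self, address, value):
        # e.g. hypothetical address "/motor/servo1" -> motor name "servo1"
        name = address.rsplit("/", 1)[-1]
        if name in self.duty:
            self.duty[name] = osc_to_duty_cycle(value)

d = MotorDispatcher(["solenoid1", "servo1"])
d.handle("/motor/servo1", 0.75)  # servo1 duty cycle becomes 0.75
```

In a real deployment, an OSC server library (such as python-osc) would receive the messages from Kyma and call `handle`, and the stored duty cycle would be applied to the motor-driver pins.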

In live performances, Kyma acts as the conductor for the ensemble, while Laiho views his role as primarily that of a “mixer for the band”.

The piece is structured as a 26-minute-long Kyma timeline, consisting of OSC instructions (the musical notation of the piece) for the mechanical violins. The live sound produced by the violins is routed back to Kyma via custom-made contact microphones for live electronic processing.

Toe and Shell (IRL)

Remember the lockdowns? Throughout those bleak days when musicians couldn’t perform in public, a small group of Kyma artists managed to find a way to continue meeting and making music together via Zoom in two virtual concerts they called Toe and Shell. In early November 2024 they decided to meet in person at Splendor, a former bathhouse that Anne La Berge has converted into a musical mecca in the center of Amsterdam, The Netherlands.

Setting up for a multi-system jam session

The first order of business was for the participants to collectively agree on a schedule of presentations, discussions, and public concerts. Then, over the course of the three-day meetup, everyone had a chance to ask questions, experiment, share expertise, and improvise together.

Pete Johnston, Anne La Berge, Charlie Norton sharing expertise

As part of Toe and Shell, Anne La Berge hosted three public concerts at Splendor.

Alan Jackson performing with Anne La Berge
Steve Ricks improvising with trombone and Kyma

Alan Jackson conducting the collective schedule-making session


Emergent life, mind, and music

At the IRCAM Forum Workshops @Seoul 6-8 November 2024, composer Steve Everett presented a talk on the compositional processes he used to create FIRST LIFE: a 75-minute mixed media performance for string quartet, live audio and motion capture video, and audience participation.

FIRST LIFE is based on work that Everett carried out at the Center for Chemical Evolution, an NSF/NASA-funded project spanning multiple universities that examined the possibility of the building blocks of life forming in early-Earth environments. He worked with stochastic data generated by Georgia Tech biochemical engineer Martha Grover and mapped them to standard compositional structures (not as a scientific sonification, but to help educate the public about the center’s work through a musical performance).

Data from IRCAM software and PyMOL were mapped to parameters of physical models of instrumental sounds in Kyma. For example, up to ten data streams generated by the formation of monomers and polymers in Grover’s lab were used to control parameters of the “Somewhat stringish” model in Kyma (such as delay rate, BowRate, position, and decay). Everett presented a poster about this work at the 2013 NIME Conference in Seoul and has uploaded some videos from the premiere of FIRST LIFE at Emory University.
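The kind of data-to-parameter mapping described above can be sketched as a simple linear rescaling of a raw data stream into a synthesis parameter’s range. This is a hypothetical illustration: the function name, the raw values, and the BowRate range are assumptions, not the actual mappings used in FIRST LIFE.

```python
# Hypothetical sketch: rescale a raw data stream into a synthesis parameter range.
# The values and the BowRate-like range below are invented for illustration.

def scale_stream(stream, lo, hi):
    """Linearly rescale a list of raw data values into [lo, hi]."""
    mn, mx = min(stream), max(stream)
    span = (mx - mn) or 1.0  # avoid division by zero for a constant stream
    return [lo + (v - mn) / span * (hi - lo) for v in stream]

# e.g. map an invented concentration-like series onto a BowRate-style range
raw = [0.2, 0.9, 0.4, 1.7, 1.1]
bow_rate = scale_stream(raw, 0.1, 2.0)  # rescaled values now span 0.1 to 2.0
```

Each of the ten data streams would get its own target range, so the same raw series can drive delay rate, position, or decay on different scales.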

Currently on the music composition faculty of the City University of New York (CUNY), Professor Everett is teaching a doctoral seminar on timbre in the spring (2025) semester and next fall he will co-teach a course on music and the brain with Patrizia Casaccia, director of the Neuroscience Initiative at the CUNY Advanced Science Research Center.

Anne La Berge at the Cortona Sessions

A highlight of this year’s Cortona Sessions for New Music will be Special Guest Artist, Anne La Berge. Known for her work blending composed and improvised music, sound art, and storytelling, Anne will be working closely with composers and will be coaching performers on improvisation with live Kyma electronics!

Anne La Berge at KISS2017

The Cortona Sessions for New Music is scheduled for 20 July – 1 August 2025 in Ede, Netherlands, and includes twelve days of intensive exploration of contemporary music, collaboration, and discussions on what it takes to make a career as a 21st-century musician.

Anne is eager to work with instrumentalists and composers looking to expand their solo or ensemble performances through live electronics, so if you or someone you know is interested in working with Anne this summer, consider applying for the 2025 Cortona Sessions!

Applications are open now (Deadline: 1 February 2025). You can apply as a Composer, a Performer, or as a Groupie (auditor). A full-tuition audio/visual fellowship is available for applicants who can provide audio/visual documentation services and/or other technological support.