By augmenting traditional sample-based sound design with generative models, you gain additional parameters that you can perform to picture or control with a data stream from a model world, such as a game engine. Once you have a parameterized model, you can generate completely imaginary spaces populated by hallucinatory sound-generating objects and creatures.
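If the model lives in Kyma or another OSC-aware engine, the data stream can be as simple as a few OSC messages per frame. Here is a minimal sketch in Python (using the python-osc package); the port and the /space/... parameter addresses are hypothetical stand-ins for whatever your engine actually exposes.

```python
# Minimal sketch: streaming game-world state to a parameterized sound model
# over OSC. The host/port and the /space/... addresses are hypothetical;
# adapt them to whatever your sound engine actually listens for.
# Requires the python-osc package (pip install python-osc).
import math
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)  # assumed engine address

def send_world_state(t: float) -> None:
    # Stand-ins for values a game engine would supply each frame.
    room_size = 0.5 + 0.5 * math.sin(t * 0.1)      # 0..1, "imaginary space"
    creature_excitement = abs(math.sin(t * 0.7))   # 0..1, drives timbre
    client.send_message("/space/size", room_size)
    client.send_message("/creature/excitement", creature_excitement)

t = 0.0
while t < 10.0:          # stream for ten seconds at ~30 Hz
    send_world_state(t)
    time.sleep(1 / 30)
    t += 1 / 30
```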
Poster for the 11th edition of The Organ Makes its Cinema festival in Geneva
Organist/composer Franz Danksagmüller will perform a live Ciné-concert combining Kyma electronics with a 1937 Wurlitzer organ to accompany Fritz Lang’s classic silent film Metropolis at Collège Claparède in Geneva on 7 March at 8 pm.
The day before the concert, on 6 March 2025 at 2 pm, Danksagmüller will present a masterclass on organ-and-electronics improvisation for live cinema.
The Ciné-concert and masterclass are part of the eleventh edition of the “L’orgue fait son cinéma” festival, which promises a rich and varied program: great films, great organists, regional roots, international influences, a program for young people and families, original combinations, great repertoire and contemporary creation. There’s something for everyone at this festival centered on the 1937 Wurlitzer organ of the Collège Claparède (a middle school named for the 19th-century Swiss neurologist Édouard Claparède).
L’orgue fait son cinéma’s fusion of tradition and innovation is perfectly reflected in the school’s motto.
In the 6 January 2025 ToneBenders sound design podcast, Timothy Muirhead and the sound team delve into the details of how sound design guides audience sentiment during each of the gladiatorial contests — with the “crowd sound” taking on a role typically served by the musical score.
Director Ridley Scott’s challenge to the sound team was that the crowd itself should function as a major character in the film, telling the story of the rise and fall of each of the other characters and symbolizing the fall of Rome as the populace loses faith in the emperors.
“So much detail and effort went into making the crowd reactions in the Roman Colosseum…narrate the emotions of the battles”
During the podcast, the sound team (Paul Massey, Dialog/Music Re-Recording Mixer; Danny Sheehan, Supervising Sound Editor; Matt Collinge, Supervising Sound Editor & SFX/Foley Re-Recording Mixer; and Stéphane Bucher, Production Sound Mixer) recount their unusual approach of including the post-production team in the production phase so they could coordinate their efforts. This gave the post-production sound designers an opportunity to “direct” the crowd extras and to ensure that they captured the raw material needed for post-production crowd enlargement.
In an IBC behind-the-scenes article, supervising sound editors Matthew Collinge and Danny Sheehan (co-founders of Phaze UK) describe how they generated the sound of 10,000 spectators by layering the sound of extras on the set with recordings of cheers and jeers from bullfights, cricket matches, rugby and baseball games and then “…transformed them into a cohesive roar using a Kyma workstation,” according to Sheehan.
During an interview with A Sound Effect, Matthew Collinge describes the sound design for the gladiatorial contests involving animals: “We manipulated actual animal sounds to highlight their aggression and power. For the baboons, we morphed chimp calls and then combined them with the screeches of other animals to create a unique and very intimidating sound.”
In that same interview, Collinge describes how Rob Prynne and the team enhanced the sounds of the arrow trajectories:
Rob Prynne used these recordings to model a patch in the Kyma where we took the amplitude variation between the L and the R in the recordings and used this to create an algorithm that we could apply to other samples. We then mixed in animal and human screams and screeches which had their pitch mapped to this algorithm and made it feel as one with the original recordings.
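Kyma specifics aside, the core idea Prynne describes, extracting the L/R amplitude variation from a recording and using it as a control function for other samples, can be sketched in a few lines of Python. The file names are placeholders, and this is only a rough analogue of the patch, not a reconstruction of it.

```python
# Rough analogue (not the actual Kyma patch): extract the moment-to-moment
# amplitude difference between the L and R channels of a stereo arrow-flyby
# recording, then use that envelope to drive the pitch of another sample.
# File names are placeholders; requires numpy and soundfile.
import numpy as np
import soundfile as sf

stereo, sr = sf.read("arrow_flyby.wav")        # shape (n, 2) expected
hop = 512
frames = len(stereo) // hop
env = np.empty(frames)
for i in range(frames):
    block = stereo[i * hop:(i + 1) * hop]
    l_rms = np.sqrt(np.mean(block[:, 0] ** 2))
    r_rms = np.sqrt(np.mean(block[:, 1] ** 2))
    env[i] = l_rms - r_rms                      # L/R amplitude variation

# Normalize to -1..1 and map to a pitch trajectory (+/- 7 semitones here).
env /= np.max(np.abs(env)) + 1e-12
semitones = env * 7.0

# Crude per-block resampling of a scream sample to follow that trajectory.
scream, _ = sf.read("scream.wav")
out = []
for i, st in enumerate(semitones):
    block = scream[i * hop:(i + 1) * hop]
    if len(block) == 0:
        break
    ratio = 2.0 ** (st / 12.0)
    idx = np.clip((np.arange(int(len(block) / ratio)) * ratio).astype(int),
                  0, len(block) - 1)
    out.append(block[idx])
sf.write("scream_mapped.wav", np.concatenate(out), sr)
```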
When you see Gladiator II — whether in a theatre or via your favorite streaming service — you’ll be rewarded with a bit of extra information if you stick with it through the closing credits!
Sound design & Kyma Consultancy credits. Photo submitted by an anonymous movie-goer who patiently sat through the end credits.
At the invitation of UWL Lecturer Charlie Norton, Carla Scaletti presented a lecture/demonstration on Generative Sound Design in Kyma for students, faculty and guests at University of West London on 14 November 2024. As an unanticipated prelude, Pete Townshend (who, along with Joseph Townshend, works extensively with Kyma) welcomed the Symbolic Sound co-founders to his alma mater and invited attendees to tour the Townshend Studio following the lecture.
After the seminar, graduating MA students Vinayak Arora and Sabin Pavel (hat) posed with Kurt Hebel & Carla Scaletti (center) and Charlie Norton (distant upper right background)
UWL Professor of Music Production Justin Paterson and Trombonist/Composer/Kyma Sound Installation artist Robert Jarvis discuss the extensive collection of instruments in the Townshend Studio
It seems that anywhere you look in the Townshend Studio, you see another rock legend. John Paul Jones (whose most recent live Kyma collaborations include Sons of Chipotle, Minibus Pimps, and Supersilent, among others) recognized an old friend from across the room: a Yamaha GX-1 (1975), otherwise known as ‘The Dream Machine’, the same model JPJ played when touring with Led Zeppelin and when recording the 1979 album “In Through The Out Door”. The GX-1 was Yamaha’s first foray into synthesizers; only 10 were ever manufactured, and it featured a ribbon controller and a keyboard that could also move laterally for vibrato. Other early adopters included ELP, Stevie Wonder and ABBA.
JPJ recollects his days of touring with the GX-1 and the roadie who took up temporary accommodation in its huge flight case, as Alan, two students, Bruno and Robert look on.
Charlie Norton with Alan Jackson (back), JP Jones (at Yamaha GX-1), Carla Scaletti & Kurt Hebel in the Townshend Studio
Alan Jackson and Pete Johnston pondering the EMS Vocoder
Composer Bruno Liberda (the tall one) with Symbolic Sound co-founder Carla Scaletti
Portland-based multimedia artist William Selman’s 8 November 2024 album, The Light Moves Between, poses the question: “Can scenes from the Pacific Northwest of America connect us to entities in distant universes, or are we listening to William’s unique processing of his own surroundings?”
Various Kyma techniques can be heard on each track, with cross-synthesis emerging as Selman’s personal favorite. Here are a few details you can listen for on the album:
“Outshone the Sun”
Sunspots are caused by intense magnetic fields emerging from the Sun’s interior. Image created by NASA.
Inspired by sunspots and by electromagnetic interference reminiscent of the popping air bubbles that barnacles make at low tide, Selman used a frequency shifter in Kyma to modulate the Serge New Timbral Oscillator, producing the sharp droning tones near the beginning that match the character of the shortwave radio sounds. Much of the bed of textures in the second half of the piece was made by cross-filtering field recordings of barnacles against shortwave radio recordings.
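Cross-filtering here means, roughly, imposing the time-varying spectrum of one recording on another. A simplified illustration of that idea (not Selman’s actual patch), using an STFT in Python with placeholder file names:

```python
# Illustrative cross-filter (a simplified take on cross-synthesis, not
# Selman's Kyma patch): impose the short-time magnitude spectrum of one
# recording on the phase/structure of another. File names are placeholders.
import numpy as np
import soundfile as sf
from scipy.signal import stft, istft

a, sr = sf.read("barnacles.wav")      # spectral "envelope" source
b, _ = sf.read("shortwave.wav")       # excitation source
n = min(len(a), len(b))
a, b = a[:n], b[:n]
if a.ndim > 1: a = a.mean(axis=1)     # fold to mono for simplicity
if b.ndim > 1: b = b.mean(axis=1)

_, _, A = stft(a, sr, nperseg=2048)
_, _, B = stft(b, sr, nperseg=2048)

# Keep B's phases, replace its magnitudes with A's.
cross = np.abs(A) * np.exp(1j * np.angle(B))
_, y = istft(cross, sr, nperseg=2048)
sf.write("cross_filtered.wav", y / (np.max(np.abs(y)) + 1e-12), sr)
```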
“Kept in Banks and Vessels”
For the triangle and singing bowl sounds, Selman designed a Jaap-Vink-inspired feedback Sound in Kyma to create the drones. For the final section, he cross-filtered Serge sounds with frog sounds that he recorded in Hawaii. The warbly animal call sounds are made in Kyma with an Oscillator and a LossyIntegrator.
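Kyma’s LossyIntegrator smooths whatever signal feeds it, so one plausible reading of the Oscillator-plus-LossyIntegrator combination (a guess at the spirit of the patch, not its actual structure) is a leaky integrator warbling an oscillator’s pitch:

```python
# A guess at the spirit of "Oscillator + LossyIntegrator" (not Selman's
# actual Sound): a leaky integrator smooths a stepped random control
# signal, and the result warbles an oscillator's pitch.
# Requires numpy and soundfile.
import numpy as np
import soundfile as sf

sr = 44100
dur = 4.0
n = int(sr * dur)

# Stepped random target pitch, ~12 steps per second.
steps = np.repeat(np.random.uniform(600.0, 1400.0, int(dur * 12)),
                  sr // 12)[:n]

# Leaky (lossy) integrator: y[i] = y[i-1] + k * (x[i] - y[i-1]).
k = 1.0 - np.exp(-1.0 / (0.02 * sr))   # ~20 ms smoothing time
freq = np.empty(n)
y = steps[0]
for i, x in enumerate(steps):
    y += k * (x - y)
    freq[i] = y

phase = 2 * np.pi * np.cumsum(freq) / sr
sf.write("warble.wav", 0.3 * np.sin(phase), sr)
```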
“Flutter at the False Light”
Here you can hear a Kyma frequency shifter modulating various electromagnetic field recordings made near the composer’s house. The bed of textures for the ambience is built from cross-filtered recordings made with contact mics in his studio space and on his windows.
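A frequency shifter, unlike a pitch shifter, moves every spectral component by a fixed number of Hz, which is why it mangles harmonic material so distinctively. The classic single-sideband technique behind it (the general method, not Kyma’s implementation) fits in a few lines, with a placeholder file name:

```python
# A frequency shifter in miniature: single-sideband modulation via the
# analytic signal shifts every component by a fixed number of Hz.
# Requires numpy, soundfile, and scipy.
import numpy as np
import soundfile as sf
from scipy.signal import hilbert

x, sr = sf.read("emf_recording.wav")
if x.ndim > 1:
    x = x.mean(axis=1)                 # mono for simplicity

shift_hz = 90.0                        # arbitrary shift amount
t = np.arange(len(x)) / sr
analytic = hilbert(x)                  # x + j * H{x}
shifted = np.real(analytic * np.exp(2j * np.pi * shift_hz * t))
sf.write("shifted.wav", shifted / (np.max(np.abs(shifted)) + 1e-12), sr)
```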
“New Topographics”
The last track was inspired by American landscape photographers and photos of empty, non-places primarily in the American West. Selman recorded sounds such as birds, wind storms, electrical lines, and water wells in the Central Oregon high desert. He used a cross filter patch made by Pete Johnston and Alan Jackson during the Kyma Kata sessions, which he finds particularly useful for creating a stream of sounds that flow seamlessly from one into another. Other instrumental elements include a Serge synthesizer, organ, and bowed (and hit) vibraphone.
The Light Moves Between represents Selman’s return to visual work following a long hiatus — this time around, he is bringing his visuals together with his sound work. The first three tracks on the album were written to accompany three short films. Although the first one (“Outshone the Sun”) has been set aside, the second and third pieces are complete and can be viewed on Vimeo:
In October 2024, Franz Danksagmüller and his students from the Royal Academy of Music performed a live soundtrack for pipe organ and Kyma to accompany the silent film Nosferatu twice: first on 4 October 2024 at the Royal Academy in London, and again on Halloween, to a sold-out audience, at the cathedral in St. Albans. The second performance was, by all accounts, a fantastic success (resulting in some very happy students).
St. Albans organ loft during live soundtrack for “Nosferatu” with Kyma Control on the iPad
Audience for the sold-out performance of Nosferatu at St. Albans (view from behind the projection screen)
Students of Franz Danksagmüller performing the live soundtrack for Nosferatu at St. Albans
Entryway to St. Albans Cathedral
At the Musikhochschule Lübeck, Danksagmüller’s students in the new master’s program in Hyper-organ, Improvisation, Composition & New Media are presenting a concert for organ, Kyma and live video on 8 December 2024 in the main concert hall at the MHL. Composer/performers include:
Sarah Proske: Emerson, Lake and Palmer’s “Fanfare for the Common Man” for organ, percussion, live electronics
Patrycja Olszewska: Liquid Motion for organ, live electronics, video
Lennart Pries: Improvisation for the silent film “The Fall of the House of Usher” (USA 1927) for organ and live electronics
Karin Lorenz: (Un-)Known for organ and electronics
Sarah Proske: smoke for organ, live electronics and video
Valentin Manß & Wojciech Buczyński: crowds, live & death, Music for (historical) silent film excerpts for organ, live electronics, video
Fabio Paiano: à la ELP for organ and live electronics
Did you know that you can study for a degree in sound design and work with Kyma at Carnegie Mellon University? Joe Pino, professor of sound design in the School of Drama, teaches conceptual sound design, modular synthesis, Kyma, film sound design, ear training and audio technology in the sound design program.
Sound design works in the spaces between reality and abstraction. Sounds are less interesting as a collection of triggers for giving designed worlds reality; they are more effective when they trigger emotional responses and remembered experiences.
“Ben Burtt is a pioneer and visionary who has fundamentally changed the way we perceive sound in cinema.”
In this interview, Burtt shares some of his experiences as Sound Designer for several iconic films, including his discovery of “a kind of sophisticated music program called the Kyma” which he used in the creation of the voice of WALL-E.
The interviewer asked Ben about the incredible voices he created for WALL-E and EVE:
Well, Andrew Stanton was the creator of the WALL-E character in the story; apparently he jokingly referred to his movie as the R2-D2 movie. He wanted to develop a very affable robot character that didn’t speak, or had very limited speech that was mostly sound effects of its body moving and a few strange kinds of vocals, and someone (his producer, I think: Jim Morris) said, well, why don’t you just talk to Ben Burtt, the guy who did R2-D2, so they got in touch with me.
Pixar is in the Bay Area (San Francisco) so it was nearby, and I went over and looked at about 10 minutes that Andrew Stanton had already put together with just still pictures — storyboards of the beginning of the film where WALL-E’s out on his daily work activities boxing up trash and so on and singing and playing his favorite music, and of course I was inspired by it and I thought well here’s a great challenge and I took it on.
This was a few months before they had to actually greenlight the project. I didn’t find this out until later but there was some doubt at that time about whether you could make a movie in which the main characters don’t really talk in any kind of elaborate way; they don’t use a lot of words. Would it sustain the audience’s interest? The original intention in the film that I started working on was that there was no spoken language in the film that you would understand at all; that was a goal at one point…
So I took a little bit of the R2 idea to come up with a voice where human performance would be part of it, but it had to have other elements that made it seem electronic and machine-like. But WALL-E wasn’t going to beep and boop and buzz like R2; it had to be different, so I struggled along for a few months trying different things and different voices, a few different character actors. And I often ended up experimenting on myself because I’m always available. You know, it’s like the scientist in his lab who takes the potion because there’s no one else around to test it: Jekyll and Hyde, I think that’s what it is. So I took the potion and turned into Mr. Hyde…
Photo from Comicon by Miguel Discart, Brussels, Belgium. This file is licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license.
But eventually it ended up that I had a program, a kind of sophisticated music program called the Kyma, and it had one sound in it, a process where it would synthesize a voice but [intentionally] didn’t do it very well; the voice had artifacts, funny distortions and extra noises. It didn’t work perfectly as a pure voice, but I took advantage of the fact that the artifacts and mistakes in it were useful and interesting, and I worked out a process where you could record sounds, starting with my own voice, and then process them a second time and do a re-performance where, as it plays back, you can stretch or compress or repitch the sounds in real time.
So you can take the word “WALL-E” and then make it have a sort of envelope of electronic noise around it; that gave it a texture that wasn’t human, and that’s where it really worked. And of course it was in combination with the little motors in his arms and his head and his treads: everything was part of his expression.
The idea was to always give the impression of what WALL-E was thinking through sound — just as if you were playing a musical instrument and you wanted to make little phrases of music which indicated the feeling for what kind of communication was taking place.
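Burtt’s “re-performance” workflow, stretching, compressing, and repitching a recording as it plays back, is the territory of granular and phase-vocoder resynthesis. As a toy illustration (emphatically not Burtt’s actual process, with a placeholder file name), a granular player can decouple time and pitch like this:

```python
# Toy granular "re-performer": stretch/compress time and repitch
# independently while playing back a recorded voice. Not Burtt's actual
# tool; file name is a placeholder. Requires numpy and soundfile.
import numpy as np
import soundfile as sf

voice, sr = sf.read("wall_e_take.wav")
if voice.ndim > 1:
    voice = voice.mean(axis=1)

stretch = 1.8        # >1 slows playback without changing pitch
pitch = 0.7          # <1 lowers pitch without changing duration
grain = int(0.05 * sr)
hop_out = grain // 2
env = np.hanning(grain)

out = np.zeros(int(len(voice) * stretch) + grain)
pos_in = 0.0
pos_out = 0
while pos_in < len(voice) - grain * 2 and pos_out < len(out) - grain:
    # Read a grain, resampled by the pitch ratio, and overlap-add it.
    idx = (pos_in + np.arange(grain) * pitch).astype(int)
    out[pos_out:pos_out + grain] += voice[idx] * env
    pos_in += hop_out / stretch    # input advances slower/faster than output
    pos_out += hop_out
sf.write("reperformed.wav", out / (np.max(np.abs(out)) + 1e-12), sr)
```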
In early 2025, listen closely to the soundtrack of Lady of the Dunes, the new true-crime series airing on the Oxygen Network. The series looks at a recently solved 50-year-old cold case.
The director wanted the music to have a haunting quality to tie into the mystery and brutality of the case. That’s when they called on composer/editor John Balcom.
Balcom started out by recording plaintive, mysterious sound sources, like wine glasses and a waterphone. He then took sections from those recordings and treated them using Kyma’s Multigrid. There was a performative aspect to all of this: he would manipulate the sounds going into Kyma as well as adjust sounds in real time in the Multigrid, resulting in some disturbingly haunting material. Balcom then incorporated those sounds to build out full compositions.
Here are some audio examples of the textures he created using Kyma, along with a couple of tracks that incorporate those textures (Warning: the following clips contain haunting sounds and may be disturbing for all audiences. Listener discretion is advised! ⚠️ ):
Here’s a screen shot of one of Balcom’s Multigrids:
Balcom concludes by saying: “The music would not have been the same without Kyma!”
Motion Parallels, a collaboration between composer James Rouvelle and visual artist Lili Maya, was developed earlier this year in connection with a New York Arts Program artist’s residency in NYC.
For the live performance, Rouvelle compressed and processed the output of a Buchla Easel through a Kyma Timeline; on a second layer, the Easel was sampled, granulated and spectralized, then played via a MIDI keyboard. A third layer of Sounds was entirely synthesized in Kyma and played via Kyma Control on the iPad.
For their performative works, Rouvelle and Maya develop video and audio structures that they improvise through. Both the imagery and the sound incorporate generative/interactive systems, so the generative elements can respond to the performers, and vice versa.
Rouvelle adds, “I work with Kyma every day, and I love it!”