Tormento and Depeche Mode M

Sound designer Javier Umpierrez has two sound-intensive films coming out this month and next.

Olallo Rubio’s Tormento stars Natalia Solián as an exhausted security guard with some guilty secrets on her first day on the job covering the graveyard shift at the morgue — what could go wrong?

Umpierrez’s sound design for the Tormento trailer is terrifying! He builds and builds and builds to a climax, then everything suddenly goes silent, followed by a strangely disembodied closeup of Solián’s scream. Tormento opens on 13 November 2025, only in cinemas.

Javier writes:

Tormento doesn’t have a lot of dialogue, so I had a lot of space to have fun with sound design. There are a lot of dissonant and tonal sounds, practically a score, and all the sounds were generated with Kyma. So fun to work with Kyma like this, with a lot of density and grit; the director is beyond happy with the results.

In his trailer for Depeche Mode M, Umpierrez captures the insane excitement of a live DM show: the sounds of the massive crowd, and the music. Reach out, touch me! Dance is the cure.

Directed by Fernando Frías, the film captures Depeche Mode’s 2023 Memento Mori tour and the three sold-out shows in Mexico City attended by over 200,000 fans. DEPECHE MODE: M delves into the profound connections between music, pain, memory, joy, and dance.

Limited screenings begin on 28 October 2025 in cinemas and IMAX.


Light Time Delay

“Light Time Delay” is a term of art for the communications delay between Earth and a spacecraft. It’s also the stage name for Kyma artist Théa-Martine Gauthier’s live performance project, recently featured on 13 April 2025 at Modular Seattle’s Modular Nights, a free monthly event at the Substation venue in Seattle’s Ballard neighborhood.

Opening with phasey delays and a rumbling sub, evocative of acceleration, the performance explodes into metallic rhythmic analog chirps paired with images evoking navigation, PC board layouts, and gravitational manifolds. Obsessively accelerating loops and fragments of text close in on us in ever-tightening spirals until the tension finally resolves into a watery, peaceful texture: floaty vocals underlaid by ominous sub rumbling, electric frog creaks, and shimmery ringing filter chords over a deep thrumming fundamental. The piece concludes with a sense of wonder and reflection on the improbability of this experience we call “life”:

Driving the user experience with sound

Interview with Lowell Pickett, Senior Audio Engineer at Zoox
Lowell Pickett at the Kyma International Sound Symposium in Brussels (KISS 2013)

Every company should have an audio professional on staff!

—Lowell Pickett, Senior Audio Engineer at Zoox / Sound Lab

Eighth Nerve (EN): Yes! I agree 100%! Could I ask you to reflect on the skills and experience that someone from professional audio can bring to other industries (i.e., alternative applications of sound and audio)?

Lowell Pickett (LP): I often take my skills for granted, but the difference between a good-enough and a well-polished audio asset can truly impact engagement and can help to define a product when done well.

Sound is a ubiquitous component of our day-to-day, media-filled experience, and we often simply accept whatever audio quality might be convenient at any given moment – but sound has the potential to offer so much delight…  In many situations where people have become accustomed to marginal audio quality – an old, well-used comms system perhaps or a family video that’s been watched many times – some applied audio knowledge can truly improve the (audio) quality of their life.

A well considered audio experience improves workplaces and recreational spaces alike – and a positive audio association with a brand or personal reputation can be a distinguishing characteristic.

Generative sound design in Kyma

By augmenting traditional sample-based sound design with generative models, you gain additional parameters that you can perform to picture or control with a data stream from a model world — like a game engine. Once you have a parameterized model, you can generate completely imaginary spaces populated by hallucinatory sound-generating objects and creatures.
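
As a toy illustration (not Kyma code), here is what a parameterized sound model can look like in Python: a filtered-noise “gust” whose brightness and level are driven by two hypothetical parameters, `size` and `distance`, standing in for values streamed from a game engine.

```python
import numpy as np

SR = 48_000  # sample rate in Hz

def wind_gust(size: float, distance: float, seconds: float = 1.0) -> np.ndarray:
    """Toy parameterized model: filtered noise whose brightness falls
    with object `size` and whose level falls with `distance`, both
    stand-ins for values streamed from a game engine."""
    n = int(SR * seconds)
    noise = np.random.default_rng(0).standard_normal(n)
    cutoff = 4000.0 / (1.0 + size)           # bigger object, darker sound
    a = np.exp(-2.0 * np.pi * cutoff / SR)   # one-pole lowpass coefficient
    out = np.empty(n)
    y = 0.0
    for i, x in enumerate(noise):
        y = (1.0 - a) * x + a * y            # simple lowpass filter
        out[i] = y
    out /= np.abs(out).max() + 1e-12         # normalize
    return out * (1.0 / (1.0 + distance))    # simple distance attenuation
```

Because the model is a function of its parameters rather than a fixed sample, the same object can sound near or far, large or small, from one frame to the next.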

Thanks to Mark Ali and Matt Jefferies for capturing and editing the lecture on Generative Sound Design in Kyma presented last November by Carla Scaletti for Charlie Norton’s students at the University of West London.

Sound design drives the narrative

Phaze UK sound designers — Rob Prynne, Matt Collinge & Alyn Sclosa along with Kyma consultant Alan M. Jackson — have been getting some well-deserved attention recently for their work on Ridley Scott’s film Gladiator II, work that has been shortlisted for an Academy Award nomination and a BAFTA.

In the 6 January 2025 ToneBenders sound design podcast, Timothy Muirhead and the sound team delve into the details of how sound design guides audience sentiment during each of the gladiatorial contests — with the “crowd sound” taking on a role typically served by the musical score.

Director Scott’s challenge to the sound team was that the crowd itself should function as a major character in the film — telling the story of the rise and fall of each of the other characters — and symbolizing the fall of Rome as the populace loses faith in the emperors.

“So much detail and effort went into making the crowd reactions in the Roman Colosseum…narrate the emotions of the battles”

During the podcast, the sound team — Paul Massey (Dialog/Music Re-Recording Mixer), Danny Sheehan (Supervising Sound Editor), Matt Collinge (Supervising Sound Editor & SFX/Foley Re-Recording Mixer), and Stéphane Bucher (Production Sound Mixer) — recount their unusual approach of including the post-production team in the production phase so they could coordinate their efforts. This gave the post-production sound designers an opportunity to “direct” the crowd extras and to ensure that they could capture the raw material needed for post-production crowd-enlargement.

In an IBC behind-the-scenes article, supervising sound editors Matthew Collinge and Danny Sheehan (co-founders of Phaze UK) describe how they generated the sound of 10,000 spectators by layering the sound of extras on the set with recordings of cheers and jeers from bullfights, cricket matches, rugby and baseball games and then “…transformed them into a cohesive roar using a Kyma workstation,” according to Sheehan.

During an interview with A Sound Effect, Matthew Collinge describes the sound design for the gladiatorial contests involving animals: “We manipulated actual animal sounds to highlight their aggression and power. For the baboons, we morphed chimp calls and then combined them with the screeches of other animals to create a unique and very intimidating sound.”

In that same interview, Collinge describes how Rob Prynne and the team enhanced the sounds of the arrow trajectories:

Rob Prynne used these recordings to model a patch in the Kyma where we took the amplitude variation between the L and the R in the recordings and used this to create an algorithm that we could apply to other samples. We then mixed in animal and human screams and screeches which had their pitch mapped to this algorithm and made it feel as one with the original recordings.
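
The Kyma patch itself isn’t public, but the underlying idea can be sketched in a few lines of NumPy: extract the short-time amplitude difference between the left and right channels of a stereo recording, then map that envelope onto the pitch of another layer. The window size, the mapping depth, and the sine tone standing in for a scream are all my own assumptions.

```python
import numpy as np

SR = 48_000  # sample rate in Hz

def lr_pan_envelope(left: np.ndarray, right: np.ndarray, win: int = 512) -> np.ndarray:
    """Short-time amplitude difference between channels, one value per
    window: positive means louder on the left, negative on the right."""
    n = min(len(left), len(right)) // win
    env = np.empty(n)
    for i in range(n):
        seg = slice(i * win, (i + 1) * win)
        env[i] = np.sqrt(np.mean(left[seg] ** 2)) - np.sqrt(np.mean(right[seg] ** 2))
    return env

def pitch_mapped_layer(env: np.ndarray, base_hz: float = 440.0,
                       depth: float = 12.0, win: int = 512) -> np.ndarray:
    """Drive the pitch of a new layer (a sine here, a scream in practice)
    from the envelope, up to `depth` semitones around base_hz."""
    semitones = depth * env / (np.abs(env).max() + 1e-12)
    hz = base_hz * 2.0 ** (np.repeat(semitones, win) / 12.0)
    phase = np.cumsum(2.0 * np.pi * hz / SR)
    return np.sin(phase)
```

Feeding an arrow flyby that pans left-to-right through `lr_pan_envelope` yields a falling envelope, so the added layer glides downward in pitch along with the original recording.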

GLADIATOR II: Behind The Glorious Sound – with Matthew Collinge & Danny Sheehan


When you see Gladiator II — whether in a theatre or via your favorite streaming service — you’ll be rewarded with a bit of extra information if you stick with it through the closing credits!

Sound design & Kyma Consultancy credits. Photo submitted by an anonymous movie-goer who patiently sat through the end credits.

Generative sound design at University of West London

At the invitation of UWL Lecturer Charlie Norton, Carla Scaletti presented a lecture/demonstration on Generative Sound Design in Kyma for students, faculty and guests at University of West London on 14 November 2024. As an unanticipated prelude, Pete Townshend (who, along with Joseph Townshend, works extensively with Kyma) welcomed the Symbolic Sound co-founders to his alma mater and invited attendees to tour the Townshend Studio following the lecture.

After the seminar, graduating MA students Vinayak Arora and Sabin Pavel (hat) posed with Kurt Hebel & Carla Scaletti (center) and Charlie Norton (distant upper right background)
UWL Professor of Music Production Justin Paterson and Trombonist/Composer/Kyma Sound Installation artist Robert Jarvis discuss the extensive collection of instruments in the Townshend Studio

It seems that anywhere you look in the Townshend Studio, you see another rock legend. John Paul Jones (whose most recent live Kyma collaborations include Sons of Chipotle, Minibus Pimps, and Supersilent among others) recognized an old friend from across the room: a Yamaha GX-1 (1975), otherwise known as ‘The Dream Machine’ — the same model JPJ played when touring with Led Zeppelin and when recording the 1979 album “In Through The Out Door”. The GX-1 was Yamaha’s first foray into synthesizers; only 10 were ever manufactured. It featured a ribbon controller and a keyboard that could also move laterally for vibrato. Other early adopters included ELP, Stevie Wonder, and ABBA.

JPJ recollects his days of touring with the GX1 and the roadie who took up temporary accommodation in its huge flight case, as Alan, two students, Bruno and Robert look on.
Charlie Norton with Alan Jackson (back), JP Jones (at Yamaha GX-1), Carla Scaletti & Kurt Hebel in the Townshend Studio
Alan Jackson and Pete Johnston pondering the EMS Vocoder
Composer Bruno Liberda (the tall one) with Symbolic Sound co-founder Carla Scaletti


Kyma Sound design studies at CMU

Did you know that you could study for a degree in sound design and work with Kyma at Carnegie Mellon University? Joe Pino, professor of sound design in the School of Drama at Carnegie Mellon University, teaches conceptual sound design, modular synthesis, Kyma, film sound design, ear training and audio technology in the sound design program.

Sound design works in the spaces between reality and abstraction. Sounds are less interesting as a collection of triggers for lending designed worlds reality; they are more effective when they trigger emotional responses and remembered experiences.

Thinking through sound — Ben Burtt and the voice of WALL-E

Ben Burtt was recently awarded the 2024 Vision Award Ticinomoda, whose citation reads:

“Ben Burtt is a pioneer and visionary who has fundamentally changed the way we perceive sound in cinema.”

In this interview, Burtt shares some of his experiences as Sound Designer for several iconic films, including his discovery of “a kind of sophisticated music program called the Kyma” which he used in the creation of the voice of WALL-E.

The interviewer asked Ben about the incredible voices he created for WALL-E and EVA:

Well, Andrew Stanton, who was the creator of the WALL-E character and the story, apparently jokingly referred to his movie as the R2-D2 movie. He wanted to develop a very affable robot character that didn’t speak, or had very limited speech that was mostly sound effects of its body moving and a few strange kinds of vocals, and someone (his producer, I think — Jim Morris) said well, why don’t you just talk to Ben Burtt, the guy who did R2-D2. So they got in touch with me.

Pixar is in the Bay Area (San Francisco) so it was nearby, and I went over and looked at about 10 minutes that Andrew Stanton had already put together with just still pictures — storyboards of the beginning of the film where WALL-E’s out on his daily work activities boxing up trash and so on and singing and playing his favorite music, and of course I was inspired by it and I thought well here’s a great challenge and I took it on.

This was a few months before they had to actually greenlight the project. I didn’t find this out until later but there was some doubt at that time about whether you could make a movie in which the main characters don’t really talk in any kind of elaborate way; they don’t use a lot of words. Would it sustain the audience’s interest? The original intention in the film that I started working on was that there was no spoken language in the film that you would understand at all; that was a goal at one point…

So I took a little bit of the R2 idea to come up with a voice where human performance would be part of it, but it had to have other elements to it that made it seem electronic and machine-like. But WALL-E wasn’t going to Beep and Boop and Buzz like R2; it had to be different, so I struggled along trying different things for a few months and trying different voices — a few different character actors. And I often ended up experimenting on myself because I’m always available. You know, it’s like the scientist in his lab takes the potion because there’s no one else around to test it: Jekyll and Hyde, I think that’s what it is. So I took the potion and turned into Mr Hyde…

Photo from Comicon by Miguel Discart (Bruxelles, Belgique). This file is licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license.

The idea was to always give the impression of what WALL-E was thinking through sound…

But eventually it ended up that I had a program — it was a kind of sophisticated music program called the Kyma — and it had one sound in it, a process where it would synthesize a voice, but it [intentionally] didn’t do very well; the voice had artifacts, funny distortions and extra noises. It didn’t work perfectly as a pure voice, but I took advantage of the fact that the artifacts and mistakes in it were useful and interesting, and I worked out a process where you could record sounds, starting with my own voice, and then process them a second time and do a re-performance where, as it plays back, you can stretch or compress or repitch the sounds in real time.

So you can take the word “Wall-E” and then you could make it have a sort of envelope of electronic noise around it; it gave it a texture that made it so it wasn’t human and that’s where it really worked. And of course it was in combination with the little motors in his arms and his head and his treads — everything was part of his expression.

The idea was to always give the impression of what WALL-E was thinking through sound — just as if you were playing a musical instrument and you wanted to make little phrases of music which indicated the feeling for what kind of communication was taking place.

Generative events

Explorations in the generative control of notes and rhythms, scales and modes — from human to algorithmic gesture

Composer Giuseppe Tamborrino uses computer tools not just for timbre, but also for the production of events in deferred time or in real time. This idea forms the basis for generative music — examples of which can be found throughout music history, for example Mozart’s Musical Dice Game.
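
The dice-game idea is simple enough to sketch in a few lines of Python. The real “Musikalisches Würfelspiel” attributed to Mozart uses a 16-column lookup table of precomposed measures; this toy version uses a 4-column table of made-up measure numbers, indexed by the sum of two dice.

```python
import random

# Each column lists 11 precomposed measure numbers, one for each
# possible sum of two dice (2..12). These measure numbers are invented
# for illustration; the real game's table maps to measures of a minuet.
TABLE = [
    [96, 22, 141, 41, 105, 122, 11, 30, 70, 121, 26],
    [32, 6, 128, 63, 146, 46, 134, 81, 117, 39, 126],
    [69, 95, 158, 13, 153, 55, 110, 24, 66, 139, 15],
    [40, 17, 113, 85, 161, 2, 159, 100, 90, 176, 7],
]

def roll_minuet(rng: random.Random) -> list[int]:
    """Roll two dice per column and look up which measure to play."""
    measures = []
    for column in TABLE:
        total = rng.randint(1, 6) + rng.randint(1, 6)  # sum in 2..12
        measures.append(column[total - 2])
    return measures
```

Every run yields a different but always well-formed sequence of measures: the composer fixes the material and the rules in advance, and chance chooses among them, which is exactly the “generation of instructions for instruments” idea.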

In his research, Tamborrino carries out this process in various ways and with different software, but the goal is always the same: the generation of instructions for instruments — which he calls an “Electronic score”.

Here’s an example from one of his generative scores:

 

As part of the process, Tamborrino has always designed in a certain degree of variability, using stochastic or totally random procedures to speed up the process of abstraction and improvisation; once launched, though, these procedures are invariant. Often, this way of working resulted in small sections that he wanted to cut, correct, or improve.

This motivated him to use Kyma to pursue a new research direction — called post-controlled generative events — with the aim of being able to correct and manage micro events.

This is a three-step process:

  • A generational setting phase (pre-time)
  • A performance phase, recording all values in the Timeline (real time)
  • A post-editing phase of the automatically generated events (after time)
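
The three phases can be sketched in Python. The event fields, ranges, and editing rules below are hypothetical; in Kyma the recording step would capture parameter values into a Timeline.

```python
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Event:
    time: float      # onset in seconds
    pitch: int       # MIDI note number
    velocity: int    # 1..127

# 1. Generational settings (pre-time): ranges and density chosen in advance.
def generate(rng: random.Random, n: int = 16, spacing: float = 0.25) -> list[Event]:
    return [Event(i * spacing, rng.randint(48, 72), rng.randint(40, 110))
            for i in range(n)]

# 2. Performance phase (real time): record the generated stream as it plays.
def record(events: list[Event]) -> list[Event]:
    return list(events)

# 3. Post-editing phase (after time): individual micro events can now be
#    corrected, e.g. dropping the quietest ones and clamping the register.
def post_edit(events: list[Event], min_velocity: int = 50) -> list[Event]:
    kept = [e for e in events if e.velocity >= min_velocity]
    return [replace(e, pitch=min(e.pitch, 67)) for e in kept]
```

The point of the third phase is exactly what Tamborrino describes: the generative run remains free and stochastic, but no single unwanted micro event is beyond repair afterwards.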

Tamborrino shared some of the results of his research on the Kyma Discord, and he invites others to experiment with his approach and to engage in an online discussion of these ideas.