Thinking through sound — Ben Burtt and the voice of WALL-E

Ben Burtt was recently awarded the 2024 Vision Award Ticinomoda. The award citation reads:

“Ben Burtt is a pioneer and visionary who has fundamentally changed the way we perceive sound in cinema.”

In this interview, Burtt shares some of his experiences as Sound Designer for several iconic films, including his discovery of “a kind of sophisticated music program called the Kyma” which he used in the creation of the voice of WALL-E.

The interviewer asked Ben about the incredible voices he created for WALL-E and EVE:

Well, Andrew Stanton, who was the creator of the WALL-E character and the story, apparently jokingly referred to his movie as “the R2-D2 movie.” He wanted to develop a very affable robot character that didn’t speak, or had very limited speech: mostly sound effects of its body moving and a few strange kinds of vocals. And someone (his producer, I think — Jim Morris) said, well, why don’t you just talk to Ben Burtt, the guy who did R2-D2, so they got in touch with me.

Pixar is in the Bay Area (San Francisco), so it was nearby, and I went over and looked at about 10 minutes that Andrew Stanton had already put together with just still pictures — storyboards of the beginning of the film, where WALL-E’s out on his daily work activities, boxing up trash and so on, and singing and playing his favorite music. Of course I was inspired by it, and I thought, well, here’s a great challenge, and I took it on.

This was a few months before they had to actually greenlight the project. I didn’t find this out until later but there was some doubt at that time about whether you could make a movie in which the main characters don’t really talk in any kind of elaborate way; they don’t use a lot of words. Would it sustain the audience’s interest? The original intention in the film that I started working on was that there was no spoken language in the film that you would understand at all; that was a goal at one point…

So I took a little bit of the R2 idea to come up with a voice where human performance would be part of it, but it had to have other elements that made it seem electronic and machine-like. But WALL-E wasn’t going to beep and boop and buzz like R2; it had to be different. So I struggled along for a few months, trying different things and different voices — a few different character actors. And I often ended up experimenting on myself, because I’m always available. You know, it’s like the scientist in his lab who takes the potion because there’s no one else around to test it: Jekyll and Hyde, I think that’s what it is. So I took the potion and turned into Mr. Hyde…

Photo from Comicon by Miguel Discart, Bruxelles, Belgique. This file is licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license.

But eventually it ended up that I had a program — it was a kind of sophisticated music program called the Kyma — and it had one sound in it: a process where it would synthesize a voice, but it [intentionally] didn’t do it very well; the voice had artifacts: funny distortions and extra noises. It didn’t work perfectly as a pure voice, but I took advantage of the fact that the artifacts and mistakes in it were useful and interesting, and I worked out a process where you could record sounds, starting with my own voice, and then process them a second time and do a re-performance where, as it plays back, you can stretch or compress or repitch the sounds in real time.

So you can take the word “WALL-E” and then make it have a sort of envelope of electronic noise around it; it gave it a texture that wasn’t human, and that’s where it really worked. And of course it was in combination with the little motors in his arms and his head and his treads — everything was part of his expression.

The idea was to always give the impression of what WALL-E was thinking through sound — just as if you were playing a musical instrument and you wanted to make little phrases of music which indicated the feeling for what kind of communication was taking place.
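For the curious, here is a rough, offline sketch of the two ideas Burtt describes: repitching a recorded word, then wrapping it in an “envelope” of electronic noise that follows the voice’s own loudness contour. It is emphatically not the Kyma process he used; the filename, pitch ratio, and mix levels are invented for illustration.

```python
# A minimal sketch, assuming a mono 16-bit WAV named "wall-e.wav" (hypothetical).
# Step 1: repitch the word; Step 2: gate noise with the voice's own envelope.
import numpy as np
from scipy.io import wavfile

rate, voice = wavfile.read("wall-e.wav")
voice = voice.astype(np.float64) / 32768.0

def repitch(signal, ratio):
    """Naive resampling repitch: ratio > 1 raises the pitch (and shortens)."""
    positions = np.arange(0, len(signal) - 1, ratio)
    return np.interp(positions, np.arange(len(signal)), signal)

shifted = repitch(voice, 1.3)                     # ratio chosen arbitrarily

# Envelope follower: rectify, then smooth with a one-pole lowpass.
envelope = np.zeros(len(shifted))
for i in range(1, len(shifted)):
    envelope[i] = 0.999 * envelope[i - 1] + 0.001 * abs(shifted[i])

# "Electronic noise" that breathes with the voice, mixed underneath it.
noise = np.random.uniform(-1.0, 1.0, len(shifted)) * envelope
out = np.clip(0.8 * shifted + 0.4 * noise, -1.0, 1.0)
wavfile.write("wall-e-processed.wav", rate, (out * 32767).astype(np.int16))
```

Real-time re-performance of the stretch and repitch controls, as Burtt describes, is the part Kyma handles natively; the sketch only shows the offline shape of the idea.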

Audience choice: SEAMUS 2023

You can hear Kyma sounds in two of the selections of Volume 33 of audience-selected works from the 2023 SEAMUS National Conference in New York City.

Composer Chi Wang describes her instrumentation in Transparent Affordance as a “data-driven instrument.” Data is derived from touching, tilting, and rotating an iPad, along with a “custom made box” of her own design. Near the end of the piece, Wang uses the “box” to great effect: she places the iPad in the box and taps on the outside. Data is still transmitted to Kyma even without direct interaction, though the sounds are now generated much more sparsely, as though the iPad needs to rest or the performer has decided to contain it. Finally, the top is placed on the box as one last flurry of sound escapes.
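The general pattern behind a data-driven instrument of this kind is simple to sketch: device motion data becomes a stream of control messages to the synthesis environment, which is why the iPad keeps “playing” even from inside the box. The host, port, OSC addresses, and scaling below are hypothetical, not Wang’s actual mapping.

```python
# A minimal sketch of a data-driven instrument: forwarding (simulated)
# iPad sensor readings to a synthesis host as OSC control messages.
# Host address, port, and OSC paths are assumptions for illustration.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.10", 8000)    # hypothetical Kyma host

def on_motion(tilt_x: float, tilt_y: float, rotation: float) -> None:
    """Send one frame of normalized (0..1) sensor data as control messages."""
    client.send_message("/tilt/x", tilt_x)
    client.send_message("/tilt/y", tilt_y)
    client.send_message("/rotation", rotation)

on_motion(0.25, 0.7, 0.5)                          # one simulated sensor frame
```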

Scott L. Miller describes his piece Eidolon as “…a phantom film score I thought I heard on a transatlantic flight…” Although the film may not exist to guide us through a narrative, the soundtrack does exist and takes the place of visual storytelling. Eidolon, scored for flute, violin, clarinet, and percussion with Kyma, blurs the line between what the performers are creating and what the electronic sounds are, an aspect of how we hear the world: we can alternately isolate and focus on sounds to identify them, or widen our ears to absorb a sonic texture or a soundscape. The electronic sounds were synthesized in Kyma using additive synthesis, mostly partials 17–32 of sub-audio fundamentals.
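That last detail is easy to hear in a sketch: with a sub-audio fundamental of, say, 8 Hz, partials 17–32 land between 136 Hz and 256 Hz, so the sounding spectrum is dense and closely spaced even though the fundamental itself is below the range of hearing. The fundamental and duration below are examples, not values from the piece.

```python
# Additive synthesis of partials 17-32 over a sub-audio fundamental.
# f0 and duration are illustrative, not taken from Eidolon.
import numpy as np

RATE, DURATION, F0 = 44100, 4.0, 8.0               # 8 Hz fundamental (inaudible)
t = np.linspace(0.0, DURATION, int(RATE * DURATION), endpoint=False)
tone = sum(np.sin(2 * np.pi * F0 * n * t) / n for n in range(17, 33))
tone /= np.max(np.abs(tone))                       # normalize to full scale
```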

Willful Devices in Dublin

Composer Scott Miller’s Kyma control surface

Ecosystemics, the guiding principle for much of composer Scott L. Miller’s work over the past two decades, constitutes an ecological approach to composition in which form is a dynamic process intimately tied to the ambience of the space in which the music occurs. In a live ecosystemic environment, Kyma Sounds are parametrically coupled with the environment via sound. As Miller explains in his two-part article for INSIGHTs magazine, Ecosystemic Programming and Composition:

In ecosystemic music, change in the sonic environment is continuously measured by Kyma with input from microphones. This change produces data that is mapped to control the production of sound. Environmental change may be instigated by performers, audience members, sound produced by the computer itself, and the ambience of the space(s) in general.
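In pseudocode terms, the loop Miller describes looks something like the sketch below: listen, measure change (not absolute level), and feed that change back in as control data. This is a standalone toy using the sounddevice library, not Miller’s Kyma implementation; the block size and scaling factor are arbitrary.

```python
# A toy ecosystemic loop: derive control data from *change* in room level.
import numpy as np
import sounddevice as sd

previous_rms = 0.0

def on_audio(indata, frames, time, status):
    global previous_rms
    rms = float(np.sqrt(np.mean(indata ** 2)))     # current room level
    change = abs(rms - previous_rms)               # environmental change
    previous_rms = rms
    control = min(1.0, change * 50.0)              # map change to 0..1
    # ...route `control` to a synthesis parameter (amplitude, filter, etc.)

with sd.InputStream(channels=1, blocksize=1024, callback=on_audio):
    sd.sleep(10_000)                               # listen for ten seconds
```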

Sam Wells and Adam Vidiksis, collaborators on Miller’s new album of telematic ecosystemic music, Human Capital, describe performing with Miller’s Kyma environments as “like interacting with a living entity”.

On 2 August 2024, Scott will be joined by clarinetist Pat O’Keefe to perform as Willful Devices (a clarinet plus Kyma duo) at ClarinetFest in Dublin.

Collaborators since 2003, the duo take their name from the fact that both of them manipulate devices — one a clarinet, one a computer — to generate music, and that, despite their best efforts, these devices are never fully under their control, at times almost seeming to have minds of their own. Rather than bemoaning this fact, Scott and Pat welcome the potential for unimagined sonic discoveries inherent in this unpredictability.

Friday’s setlist includes:

  • Piano – Forte I, Piano – Forte II, and Piano – Forte III telematic collaborations
  • Semai Seddi-Araban by Tanburi Cemil Bey, the premiere of the duo’s take on a classic Turkish semai.
  • Mirror Inside from Shape Shifting (2004), for clarinet and Kyma
  • Fragrance of Distant Sundays, the duo’s tribute to Carei Thomas, the Minneapolis improviser/composer who passed away in 2020

New vocal samples from Andrea Young

Composer/performer Andrea Young has contributed several vocalises (melodies without words) to the Kyma library — perfect for experimenting with feature extraction or as spectrally-rich source material for manipulation and analysis/resynthesis.

In her artistic work, Young explores the full range of potential interactions between voice and computer: acoustic, amplified, deconstructed/reconstructed, and feature extraction, in which the voice supplies control signals for synthesis and processing algorithms.

Composer/Performer Dr. Andrea Young uses Kyma in her live performances

Although amplification and live processing are widely used in both experimental and commercial music, the last option — feature-extraction and remapping to (potentially unrelated) sound parameter controls — is much less explored.
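To make the idea concrete, here is a small sketch of one such remapping: the amplitude envelope of a vocal recording is extracted frame by frame and reinterpreted as a filter-cutoff curve, a parameter with no inherent relation to loudness. The filename, frame size, and cutoff range are invented for the example.

```python
# Feature extraction and remapping: a voice's RMS envelope becomes a
# (hypothetical) filter-cutoff control curve. File and ranges are examples.
import numpy as np
from scipy.io import wavfile

rate, voice = wavfile.read("vocalise.wav")         # hypothetical vocal sample
if voice.ndim > 1:
    voice = voice[:, 0]                            # use one channel
voice = voice.astype(np.float64) / 32768.0

HOP = 512                                          # analysis frame size
frames = voice[: len(voice) // HOP * HOP].reshape(-1, HOP)
envelope = np.sqrt(np.mean(frames ** 2, axis=1))   # one RMS value per frame

# Remap: quiet singing -> 200 Hz cutoff, loudest singing -> 5 kHz cutoff.
cutoff_hz = 200.0 + (envelope / envelope.max()) * 4800.0
```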

You can find Andrea’s vocalises in your Kyma 3rd party Samples folder, in the sub-folder Andrea Young.

Dr. Young studied vocal performance and composition at The University of Victoria, electronic music at The Institute of Sonology in The Hague, and holds a Performer-Composer doctorate from The California Institute of the Arts.

Kybone excursions and precipitations

When composer/performer Steve Ricks upgraded his trombone to a B.A.C. Paseo and paired it with Kyma, it opened the door to a new world of improvisation with live electronics. You can hear some of the results on his new album Solo Excursions (listen for Kyma on tracks 2, 6, 8, 9, and 11).

Ricks is particularly fond of the SampleCloud, which he uses to create an overall texture in nearly every track. In track 6, “Kybone Study 16 Harrison Bergeron,” he added several ring modulation sounds with filtering and delay/reverb to create additional frequencies and a “metallic” sound, suggestive of the metallic “handicaps” forced on the protagonist of Kurt Vonnegut’s Harrison Bergeron: a short story set in a dystopian near-future where anyone who is too smart, too beautiful, or too athletic is required to wear “handicaps” to make them “more equal” to everyone else.

In the music, there is a sense of resistance that the performer has to push against as the sound is coming out of the trombone, through the microphone and into Kyma.

THE YEAR WAS 2081, and everybody was finally equal. They weren’t only equal before God and the law. They were equal every which way. Nobody was smarter than anybody else. Nobody was better looking than anybody else. Nobody was stronger or quicker than anybody else. All this equality was due to the 211th, 212th, and 213th Amendments to the Constitution, and to the unceasing vigilance of agents of the United States Handicapper General.
— Kurt Vonnegut, “Harrison Bergeron”
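Ring modulation earns that “metallic” description honestly: multiplying the input by a sine carrier replaces each input frequency f with the pair f + c and f − c, which is generally inharmonic with respect to the original. A sketch of the bare effect (not Ricks’ patch; the source file and carrier frequency are invented):

```python
# Bare ring modulation: input times sine carrier yields sum and difference
# frequencies (f +/- carrier), hence the inharmonic, "metallic" character.
import numpy as np
from scipy.io import wavfile

rate, bone = wavfile.read("trombone.wav")          # hypothetical source file
if bone.ndim > 1:
    bone = bone.mean(axis=1)                       # fold to mono
bone = bone.astype(np.float64) / 32768.0

t = np.arange(len(bone)) / rate
carrier = np.sin(2 * np.pi * 440.0 * t)            # carrier choice is arbitrary
ringmod = bone * carrier
wavfile.write("ringmod.wav", rate, (ringmod * 32767).astype(np.int16))
```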

On Precipitations (New Focus Recordings), Steve Ricks’ May 2023 album released in conjunction with Ron Coulter, track 2, Late Night Call, features Ricks improvising with a single SampleCloud Sound on one of his own audio files, while Coulter improvises on percussion and lo-fi electronic devices.

According to the liner notes:

Late Night Call revels in the subtle timbral distinctions in non-pitched electronic sounds; early dial-up modems, bad telephone connections, and poor TV or radio reception come to mind as we listen to this ecology of resistors, currents, and connections. Mechanics’ Choice establishes a four note ostinato as a pad over which Coulter improvises on found objects and gongs. A reverse processing effect turns the texture inside out, distorting it as the sound envelopes fold back upon themselves.

Precipitations tracks 4 and 6, Charming Ways and I-Se3m, also feature the same Kybone (Kyma + trombone) setup.

Composer/performer Steven Ricks trombowing

Another Coulter/Ricks project, Beside the Avoid, was released in 2022 on Coulter’s Kreating SounD label. According to Ricks, Kyma-created Sounds can be heard throughout the album. In a departure from the way Ricks typically uses Kyma, the track Wow, Why & Wot takes a random walk with a morphing Sound and Steve’s spoken recordings of the words “Wow,” “Why,” and “What.”

Kyma 7.43 delivers performance enhancements and expanded functionality

29 June 2024, Champaign, IL. The latest Kyma release (version 7.43) focuses on performance optimization and expanded functionality.

Performance Boost: Optimizations to the Smalltalk Interpreter and garbage collector have resulted in noticeable performance improvements for Kyma running on both macOS and Windows, especially under low-memory conditions.

Here’s a short video of the in-house tool we developed for monitoring and fine-tuning the dynamics of the garbage collector. (Note: when monitoring memory usage, Kyma runs at half speed — this is just for in-house tweaking, not a run-time tool.)

Expanded Functionality: Among other enhancements and additions, Kyma now allows you to bind Strings and Collections to ?greenVariables in a MapEventValues. This provides greater flexibility and modularity for “lifted” Sounds as well as signal-flow-wide references to features like file durations, names, collection sizes, and more.

Setting ?firstFileName to a String in all Sounds to the left of the MapEventValues
Binding a Collection of Strings to ?displayNames

The update is free: you can download it from the Help menu in Kyma. As always, a full list of changes and additions is included with the update.

Leaving a trace…

Composer Tom Williams used Kyma to extensively transform the sonic materials in his 2023 acousmatic composition Piano Trace, performed at multiple festivals, including Jauna Muzika 2023 in Vilnius; Noisefloor 2024 in Lisbon; the Spatial Audio Gathering at DMU, Leicester; and NYCEMF 2024 in New York. Williams, who has a doctorate in music composition from Boston University, currently heads the Music Production MA program at Coventry University.

Piano Trace is conceived as an unfolding of trace material that marks its original source: soundings made on an upright piano. Williams writes:

This is my piano. The piano that has been by my side as a tool for composition but never, until now, the actual source of my composition. A pocketful of playful recordings from the soundboard, piano keys, pedals and strings are the sonic roots. Throughout, and within the transformations and messing-up of the source sounds, there lies an inherent trace, a timbral DNA, a semblance of sonic integrity that is the ephemeral body of Piano Trace.

Most, but potentially all Composed by Jim O’Rourke

Jim O’Rourke & Eivind Lønning on Boomkat (Artwork, Blank Blank)

Longtime collaborators trumpeter Eivind Lønning and composer Jim O’Rourke have just released an intoxicating new album on Boomkat: Most, but potentially all Composed by Jim O’Rourke.

Lønning provides exquisite acoustic trumpet sounds, and O’Rourke’s Kyma transformations, described by Boomkat as “often gentle and illusory, and sometimes utterly lacerating,” lift the sounds into completely new territory.

The entire album was composed, mixed and mastered by O’Rourke and is based solely on material from Lønning’s virtuosic performance.

Trumpet virtuoso Eivind Lønning

In the words of the Boomkat reviewer, it’s a “piece that shifts the dial on contemporary experimental music; dizzyingly complex but never showy, it’s the kind of record you can spin repeatedly and hear something different each time… as a progression of electro-acoustic compositional techniques, it draws a deep trench in the sand, setting a new standard.”

Lønning is touring with the material, starting with a 29 May 2024 concert at the Edvard Munch Museum.

Interface to the flow state

The flow state (otherwise known as being “in the zone”) is what every real-time performer seeks. Charlie Norton‘s research focuses on how to design musical interfaces that optimize for the flow state.

Norton and his colleagues Daniel Pratt and Justin Paterson describe their work in “Performance mapping and control: Enhanced musical connections and a strategy to optimise flow-state”, a chapter in the new book Innovation in Music: Technology and Creativity, published by Routledge.

Norton’s goal is to design control interfaces that are independent of the target system and which do not require reconfiguration of the sound-generation structure for each new performance apparatus.

The system employs Symbolic Sound’s Kyma sound-design platform as a host for generating a dynamic set of sound structures with a variety of control types; Max and Node.js (JavaScript) servers then map, combine, and route control signals, which can be assigned, merged, and swapped in real time without interrupting the sound processing or the performer’s flow. This allows one or more performers and their director to interact with the same system, turning an offline logical configuration process into a real-time reflexive act.
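The heart of that architecture is an intermediary that owns a mutable routing table, so a control assignment can change without touching the sound engine. Below is a much-simplified sketch of that idea in Python; the published system uses Max and Node.js, and all addresses, ports, and parameter names here are hypothetical.

```python
# A simplified control-routing relay: incoming performer messages are
# forwarded according to a routing table that can be edited at run time,
# so assignments change without interrupting the sound host.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

sound_host = SimpleUDPClient("127.0.0.1", 8000)    # hypothetical synth address

# Mutable routing table: performer control -> synthesis parameter.
routes = {"/fader/1": "/amp", "/fader/2": "/cutoff"}

def forward(address, *args):
    """Relay any known control message to its currently assigned target."""
    if address in routes:                          # swap entries live to remap
        sound_host.send_message(routes[address], list(args))

dispatcher = Dispatcher()
dispatcher.set_default_handler(forward)
BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()
```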

Here’s a link to the abstract (and institutional access to the chapter text).

If you use Kyma and Max 8 and are interested in beta testing MaxVCS, a zeroconf, bidirectional portal that lets Max 8 control the Kyma VCS by parameter name, contact Charlie at this address.

Milani launches Cosmic Charges

COSMIC CHARGES, from Matteo Milani (Unidentified Sound Object), is a library of tonal blasts and impacts for video games, films, TV series, and more.

Each impact is split into layers, allowing a sound designer to decompose and reconstitute sub-timbres to create unique sounds.
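The workflow that layering enables is easy to picture: load an impact’s layers, weight each one, and sum them into a new composite hit. A minimal sketch, with made-up layer names and gains:

```python
# Recombining (hypothetical) impact layers with individual gains.
import numpy as np
from scipy.io import wavfile

gains = {"sub": 1.0, "body": 0.6, "tail": 0.3}     # invented layer names/weights
mix, rate = None, None
for name, gain in gains.items():
    rate, data = wavfile.read(f"impact_{name}.wav")
    data = data.astype(np.float64) / 32768.0 * gain
    mix = data if mix is None else mix[: len(data)] + data[: len(mix)]

mix /= max(1.0, np.max(np.abs(mix)))               # prevent clipping
wavfile.write("impact_custom.wav", rate, (mix * 32767).astype(np.int16))
```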

Milani was inspired by innovative sound designers Ben Burtt, Randy Thom, and Gary Rydstrom to continue to push the boundaries of his own work. It was through Ben Burtt’s book, Galactic Phrase Book & Travel Guide, that Milani first discovered Kyma, which he used in this project to add musical layers to each impact — thus imparting a unique sonic identity and intensifying the emotional resonance and dynamics.

In addition to their SFX libraries, Milani’s Unidentified Sound Object offers a range of services including complex sound design, composition, multilingual localization, games and VR development, location recording, and more.