Light Time Delay

“Light Time Delay” is a term of art for the communications delay between Earth and a spacecraft. It’s also the stage name for Kyma artist Théa-Martine Gauthier’s live performance project, recently featured at Modular Seattle’s Modular Nights on 13 April 2025, a free monthly event at the Substation venue in Seattle’s Ballard neighborhood.

Opening with phasey delays and a rumbling sub evocative of acceleration, the performance explodes into metallic, rhythmic analog chirps paired with images evoking navigation, PC board layouts, and gravitational manifolds. Obsessively accelerating loops and fragments of text close in on us in ever-tightening spirals until the tension finally resolves into a watery, peaceful texture: floaty vocals underlaid by ominous sub rumbling, electric frog creaks, and shimmery ringing filter chords over a deep thrumming fundamental. The piece concludes with a sense of wonder and reflection on the improbability of this experience we call “life.”

Driving the user experience with sound

Interview with Lowell Pickett, Senior Audio Engineer at Zoox
A man with light brown curly hair wearing a cap and orange hoodie, smiling and holding a microphone, posing a question after a conference presentation
Lowell Pickett at the Kyma International Sound Symposium in Brussels (KISS 2013)

Every company should have an audio professional on staff!

—Lowell Pickett, Senior Audio Engineer at Zoox / Sound Lab

Eighth Nerve (EN): Yes! I agree 100%! Could I ask you to reflect on the skills and experience that someone from professional audio can bring to other industries (i.e., alternative applications of sound and audio)?

Lowell Pickett (LP): I often take my skills for granted, but the difference between a good-enough and a well-polished audio asset can truly impact engagement and can help to define a product when done well.

Sound is a ubiquitous component of our day-to-day, media-filled experience, and we often simply accept whatever audio quality might be convenient at any given moment – but sound has the potential to offer so much delight…  In many situations where people have become accustomed to marginal audio quality – an old, well-used comms system perhaps or a family video that’s been watched many times – some applied audio knowledge can truly improve the (audio) quality of their life.

A well considered audio experience improves workplaces and recreational spaces alike – and a positive audio association with a brand or personal reputation can be a distinguishing characteristic.

Generative sound design in Kyma

By augmenting traditional sample-based sound design with generative models, you gain additional parameters that you can perform to picture or control with a data stream from a model world — like a game engine. Once you have a parameterized model, you can generate completely imaginary spaces populated by hallucinatory sound-generating objects and creatures.
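Outside of Kyma, the core idea (synthesis parameters performed by a data stream from a model world) can be sketched in a few lines of plain Python. Everything below is invented for illustration: the creature, its parameter mappings, and the toy "game engine" stream.

```python
import math

SAMPLE_RATE = 8000  # Hz; kept low so the sketch runs quickly

def creature_voice(distance, velocity, duration=0.25):
    """One 'call' from a hypothetical sound-generating creature.

    The model world supplies the performance parameters:
      distance -> amplitude (farther away is quieter)
      velocity -> pitch (faster creatures chirp higher)
    """
    base_hz = 220.0 + 40.0 * velocity           # pitch follows velocity
    amp = 1.0 / (1.0 + distance * distance)     # simple distance rolloff
    n = int(SAMPLE_RATE * duration)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        vibrato = 1.0 + 0.02 * math.sin(2.0 * math.pi * 6.0 * t)  # 6 Hz wobble
        samples.append(amp * math.sin(2.0 * math.pi * base_hz * vibrato * t))
    return samples

# A "game engine" streams world state; each frame re-performs the model.
world_stream = [(1.0, 0.0), (0.5, 2.0), (0.1, 5.0)]  # (distance, velocity)
calls = [creature_voice(d, v) for d, v in world_stream]
```

The point is the separation of concerns: the model exposes performable parameters, and the data stream (or a human performer) drives them.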

Thanks to Mark Ali and Matt Jefferies for capturing and editing the lecture on Generative Sound Design in Kyma presented last November by Carla Scaletti for Charlie Norton’s students at the University of West London.

Sound design drives the narrative

Phaze UK sound designers — Rob Prynne, Matt Collinge & Alyn Sclosa along with Kyma consultant Alan M. Jackson — have been getting some well-deserved attention recently for their work on Ridley Scott’s film Gladiator II, work that has been shortlisted for an Academy Award nomination and a BAFTA.

In the 6 January 2025 ToneBenders sound design podcast, Timothy Muirhead and the sound team delve into the details of how sound design guides audience sentiment during each of the gladiatorial contests — with the “crowd sound” taking on a role typically served by the musical score.

Director Scott’s challenge to the sound team was that the crowd itself should function as a major character in the film — telling the story of the rise and fall of each of the other characters — and symbolizing the fall of Rome as the populace loses faith in the emperors.

“So much detail and effort went into making the crowd reactions in the Roman Colosseum…narrate the emotions of the battles”

During the podcast, the sound team — Paul Massey (Dialog/Music Re-Recording Mixer), Danny Sheehan (Supervising Sound Editor), Matt Collinge (Supervising Sound Editor & SFX/Foley Re-Recording Mixer) and Stéphane Bucher (Production Sound Mixer) — recount their unusual approach of including the post-production team in the production phase so they could coordinate their efforts. This gave the post-production sound designers an opportunity to “direct” the crowd extras and to ensure that they could capture the raw material needed for post-production crowd enlargement.

In an IBC behind-the-scenes article, supervising sound editors Matthew Collinge and Danny Sheehan (co-founders of Phaze UK) describe how they generated the sound of 10,000 spectators by layering the sound of extras on the set with recordings of cheers and jeers from bullfights, cricket matches, rugby and baseball games and then “…transformed them into a cohesive roar using a Kyma workstation,” according to Sheehan.

During an interview with A Sound Effect, Matthew Collinge describes the sound design for the gladiatorial contests involving animals: “We manipulated actual animal sounds to highlight their aggression and power. For the baboons, we morphed chimp calls and then combined them with the screeches of other animals to create a unique and very intimidating sound.”

In that same interview, Collinge describes how Rob Prynne and the team enhanced the sounds of the arrow trajectories:

Rob Prynne used these recordings to model a patch in the Kyma where we took the amplitude variation between the L and the R in the recordings and used this to create an algorithm that we could apply to other samples. We then mixed in animal and human screams and screeches which had their pitch mapped to this algorithm and made it feel as one with the original recordings.
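This is not the actual Kyma patch, but the idea Collinge describes (extract the left/right amplitude trajectory of a real fly-by, then use it as a control signal for the pitch of other samples) can be roughly approximated in Python. The frame sizes, mapping depth, and the synthetic stereo "arrow pass" below are all invented stand-ins:

```python
import math

def frame_rms(samples, frame, hop):
    """Per-frame RMS amplitude envelope."""
    out = []
    for start in range(0, len(samples) - frame + 1, hop):
        chunk = samples[start:start + frame]
        out.append(math.sqrt(sum(x * x for x in chunk) / frame))
    return out

def pan_trajectory(left, right, frame=256, hop=128):
    """L/R amplitude difference per frame: the 'fly-by' control signal.

    Positive values mean louder on the left, negative on the right;
    an arrow crossing the stereo field sweeps through zero.
    """
    l_env = frame_rms(left, frame, hop)
    r_env = frame_rms(right, frame, hop)
    return [l - r for l, r in zip(l_env, r_env)]

def map_to_pitch(control, base_hz=440.0, depth=0.5):
    """Map the control signal to per-frame pitches for another sample."""
    return [base_hz * (1.0 + depth * c) for c in control]

# Fake stereo arrow pass: louder on the left first, then on the right.
n = 2048
left  = [math.sin(0.3 * i) * max(0.0, 1.0 - i / n) for i in range(n)]
right = [math.sin(0.3 * i) * (i / n) for i in range(n)]
pitches = map_to_pitch(pan_trajectory(left, right))
```

The extracted trajectory can then modulate any layered sample (screams, screeches) so the added material moves through the stereo field in lockstep with the original recording.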

GLADIATOR II: Behind The Glorious Sound – with Matthew Collinge & Danny Sheehan


When you see Gladiator II — whether in a theatre or via your favorite streaming service — you’ll be rewarded with a bit of extra information if you stick with it through the closing credits!

Sound design & Kyma Consultancy credits. Photo submitted by an anonymous movie-goer who patiently sat through the end credits.

Generative sound design at University of West London

At the invitation of UWL Lecturer Charlie Norton, Carla Scaletti presented a lecture/demonstration on Generative Sound Design in Kyma for students, faculty and guests at University of West London on 14 November 2024. As an unanticipated prelude, Pete Townshend (who, along with Joseph Townshend, works extensively with Kyma) welcomed the Symbolic Sound co-founders to his alma mater and invited attendees to tour the Townshend Studio following the lecture.

After the seminar, graduating MA students Vinayak Arora and Sabin Pavel (hat) posed with Kurt Hebel & Carla Scaletti (center) and Charlie Norton (distant upper right background)
UWL Professor of Music Production Justin Paterson and Trombonist/Composer/Kyma Sound Installation artist Robert Jarvis discuss the extensive collection of instruments in the Townshend Studio

It seems that anywhere you look in the Townshend Studio, you see another rock legend. John Paul Jones (whose most recent live Kyma collaborations include Sons of Chipotle, Minibus Pimps, and Supersilent among others) recognized an old friend from across the room: a Yamaha GX-1 (1975), otherwise known as ‘The Dream Machine’ — the same model JPJ played when touring with Led Zeppelin and when recording the 1979 album “In Through The Out Door”. The GX-1 was Yamaha’s first foray into synthesizers; only 10 were ever manufactured, and it featured a ribbon controller and a keyboard that could also move laterally for vibrato. Other early adopters included ELP, Stevie Wonder and ABBA.

JPJ recollects his days of touring with the GX1 and the roadie who took up temporary accommodation in its huge flight case, as Alan, two students, Bruno and Robert look on.
Charlie Norton with Alan Jackson (back), JP Jones (at Yamaha GX-1), Carla Scaletti & Kurt Hebel in the Townshend Studio
Alan Jackson and Pete Johnston pondering the EMS Vocoder
Composer Bruno Liberda (the tall one) with Symbolic Sound co-founder Carla Scaletti

 

Kyma Sound design studies at CMU

Did you know that you could study for a degree in sound design and work with Kyma at Carnegie Mellon University? Joe Pino, professor of sound design in the School of Drama at Carnegie Mellon University, teaches conceptual sound design, modular synthesis, Kyma, film sound design, ear training and audio technology in the sound design program.

Sound design works in the spaces between reality and abstraction. Sounds are less interesting as a collection of triggers for giving designed worlds reality; they are more effective when they trigger emotional responses and remembered experiences.

Thinking through sound — Ben Burtt and the voice of WALL-E

Ben Burtt was recently awarded the 2024 Vision Award Ticinomoda; the award citation reads:

“Ben Burtt is a pioneer and visionary who has fundamentally changed the way we perceive sound in cinema.”

In this interview, Burtt shares some of his experiences as Sound Designer for several iconic films, including his discovery of “a kind of sophisticated music program called the Kyma” which he used in the creation of the voice of WALL-E.

The interviewer asked Ben about the incredible voices he created for WALL-E and EVA:

Well, Andrew Stanton who was the creator of the WALL-E character in the story; apparently he jokingly referred to his movie as the R2-D2 movie. He wanted to develop a very affable robot character that didn’t speak or had very limited speech that was mostly sound effects of its body moving and a few strange kind of vocals, and someone (his producer, I think — Jim Morris) said well why don’t you just talk to Ben Burtt, the guy who did R2-D2, so they got in touch with me.

Pixar is in the Bay Area (San Francisco) so it was nearby, and I went over and looked at about 10 minutes that Andrew Stanton had already put together with just still pictures — storyboards of the beginning of the film where WALL-E’s out on his daily work activities boxing up trash and so on and singing and playing his favorite music, and of course I was inspired by it and I thought well here’s a great challenge and I took it on.

This was a few months before they had to actually greenlight the project. I didn’t find this out until later but there was some doubt at that time about whether you could make a movie in which the main characters don’t really talk in any kind of elaborate way; they don’t use a lot of words. Would it sustain the audience’s interest? The original intention in the film that I started working on was that there was no spoken language in the film that you would understand at all; that was a goal at one point…

So I took a little bit of the R2 idea to come up with a voice where human performance would be part of it but it had to have other elements to it that made it seem electronic and machine-like. But WALL-E wasn’t going to Beep and Boop and Buzz like R2; it had to be different, so I struggled along trying different things for a few months and trying different voices — a few different character actors. And I often ended up experimenting on myself because I’m always available. You know it’s like the scientist in his lab takes the potion because there’s no one else around to test it: Jekyll and Hyde, I think that’s what it is. So I took the potion and turned into Mr Hyde…

Photo from Comicon by Miguel Discart (Bruxelles, Belgique). This file is licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license.

The idea was to always give the impression of what WALL-E was thinking through sound…

But eventually it ended up that I had a program — it was a kind of sophisticated music program called the Kyma and it had one sound in it — a process where it would synthesize a voice but it [intentionally] didn’t do very well; the voice had artifacts that had funny distortions in it and extra noises. It didn’t work perfectly as a pure voice, but I took advantage of the fact that the artifacts and mistakes in it were useful and interesting, and I worked out a process where you could record sounds, starting with my own voice, and then process them a second time and do a re-performance where, as it plays back, you can stretch or compress or repitch the sounds in real time.

So you can take the word “Wall-E” and then you could make it have a sort of envelope of electronic noise around it; it gave it a texture that made it so it wasn’t human and that’s where it really worked. And of course it was in combination with the little motors in his arms and his head and his treads — everything was part of his expression.

The idea was to always give the impression of what WALL-E was thinking through sound — just as if you were playing a musical instrument and you wanted to make little phrases of music which indicated the feeling for what kind of communication was taking place.

Generative events

Explorations in the generative control of notes and rhythms, scales and modes — from human to algorithmic gesture

Composer Giuseppe Tamborrino uses computer tools not just for timbre, but also for the production of events in deferred time or in real time. This idea forms the basis for generative music — examples of which can be found throughout music history, for example Mozart’s Musical Dice Game.

In his research, Tamborrino carries out this process in various ways and with different software, but the goal is always the same: the generation of instructions for instruments — which he calls an “Electronic score”.
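Mozart’s Musical Dice Game, mentioned above, is itself a tidy example of generating an instruction list: two dice select each measure of a minuet from a table of precomposed fragments. A toy sketch of that mechanism (the fragment IDs are invented placeholders; the real game maps dice sums to Mozart’s numbered measures):

```python
import random

# Toy version of Mozart's Musical Dice Game: for each of 8 measures,
# the sum of two dice (2..12) selects one precomposed fragment from
# that measure's column of 11 options.
TABLE = [["m%d_roll%d" % (col, roll) for roll in range(2, 13)]
         for col in range(8)]

def roll_minuet(rng):
    """Generate one 'score': an ordered list of fragment IDs."""
    score = []
    for column in TABLE:
        roll = rng.randint(1, 6) + rng.randint(1, 6)  # two dice: 2..12
        score.append(column[roll - 2])
    return score

minuet = roll_minuet(random.Random(42))
```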

Here’s an example from one of his generative scores:

 

As part of the process, Tamborrino has always designed in a certain degree of variability, using stochastic or totally random processes to speed up abstraction and improvisation — but once launched, these processes run their course unchanged. Often, this way of working produced small sections that he wanted to cut, correct, or improve.

This motivated him to use Kyma to pursue a new research direction — called post-controlled generative events — with the aim of being able to correct and manage micro events.

This is a three-step process:

  • A generative setup phase (pre-time)
  • A performance phase, recording all values in the Timeline (real time)
  • A post-editing phase for the automatically generated events (after time)

Tamborrino shared some of the results of his research on the Kyma Discord, and he invites others to experiment with his approach and to engage in an online discussion of these ideas.

The Seeing is Good


Rick Stevenson is a technology entrepreneur with an impressive track record: he co-founded three successful tech startups and played an instrumental role in the growth of several others. In addition to maintaining a nearly five-decade relationship with the University of Queensland’s School of Information Technology and Electrical Engineering as a student, mentor, and industry advisor, Stevenson is also an accomplished astrophotographer, and one of his images was selected as the NASA Astronomy Picture of the Day.

NASA's Astronomy Picture of the Day


Eighth Nerve [EN] Your “Rungler” patch recently made a big splash on the Kyma Discord Community. Could you give a high-level explanation for how it works?

Rick Stevenson [RS]: The Rungler is a hardware component of a couple of instruments, the Benjolin and the Blippoo box, designed by Rob Hordijk. It’s based on an 8-bit shift register driven by two square wave oscillators. One oscillator is the clock for the shift register and the other is sampled to provide the binary data input to the shift register. When the clock signal becomes positive, the data signal is sampled to provide a new binary value. That new bit is pushed into the shift register, the rest of the bits are shuffled along and the oldest bit is discarded.

Rungler circuit

The value of the Rungler is read out of the shift register by a digital-to-analog converter. In the simplest version (the original Benjolin design), the oldest three bits in the shift register are interpreted as a binary number with a value between 0 and 7.

That part is fairly straightforward. The interesting wrinkle is that the frequency of each oscillator is modulated by the other oscillator and also the value of the Rungler. The result of this clever feedback architecture is that the Rungler exhibits an interesting controlled chaotic behavior. It settles into complex repeating (or almost repeating) patterns. Nudge the parameters and it will head off in a new direction before settling into a different pattern. Despite the simplicity of the design it can generate very interesting and intricate “melodies” and rhythms.
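Rick’s description maps almost line-for-line onto code. Here is a minimal, non-audio-rate Python sketch of that architecture; the frequencies and modulation depth are arbitrary placeholders, and a real Benjolin reads the register through an analog DAC rather than integer arithmetic:

```python
class Rungler:
    """Sketch of the Rungler architecture described above.

    Two square-wave oscillators cross-modulate; osc 1 clocks an 8-bit
    shift register, osc 2 supplies its data bit, and the oldest three
    bits are read out as a 0..7 value that feeds back into both
    oscillator frequencies.
    """

    def __init__(self, f1=0.013, f2=0.019, depth=0.004):
        self.p1, self.p2 = 0.0, 0.0      # oscillator phases in [0, 1)
        self.f1, self.f2, self.depth = f1, f2, depth
        self.bits = [0] * 8              # shift register, newest bit first
        self.prev_clock = 0

    def square1(self):
        return 1 if self.p1 < 0.5 else 0

    def square2(self):
        return 1 if self.p2 < 0.5 else 0

    def step(self):
        # "DAC": interpret the oldest three bits as a number 0..7.
        value = self.bits[5] * 4 + self.bits[6] * 2 + self.bits[7]
        # Each oscillator is modulated by the other and by the Rungler value.
        self.p1 = (self.p1 + self.f1 + self.depth * (self.square2() + value / 7.0)) % 1.0
        self.p2 = (self.p2 + self.f2 + self.depth * (self.square1() + value / 7.0)) % 1.0
        clock = self.square1()
        if clock == 1 and self.prev_clock == 0:            # rising clock edge:
            self.bits = [self.square2()] + self.bits[:-1]  # sample, shift, drop oldest
        self.prev_clock = clock
        return value

unit = Rungler()
sequence = [unit.step() for _ in range(2000)]  # a wandering "melody" of 0..7 steps
```

Nudging `f1`, `f2`, or `depth` changes where the feedback loop settles, which is exactly the controlled-chaotic behavior described above.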

NOTE: Rick shared his Rungler Sound in the Kyma Community Library!


Artist/animator Rio Roye helped Rick test the Rungler Sound and came up with a pretty astounding range of sonic results!

 


[EN]: What is your music background? Is it something you’ve been interested in since childhood? Or a more recent development?

[RS]: I didn’t learn an instrument as a child. I’m not sure why. We had a piano in the house and my mother played. At high school I taught myself to play guitar and bass and I kept that up for a couple of years at University but eventually gave up due to lack of time and motivation. Quite a few years later when work demands became manageable and our children were semi-independent, I took up guitar again and started tinkering with valve amp building. These days I’m learning Touch Guitar and tinkering with synths and computer music. I spent some time learning Max and Supercollider then decided to take the plunge into Kyma.

[EN]: What were the best parts of your “new user experience” with Kyma?

[RS]: I really liked the wide range of sounds in the library. It’s great to be able to find an interesting sound, play with it, and then dig inside and try to figure out how it works.

I also appreciated the power of individual sounds coupled with the expressiveness of Capytalk. I spent quite a bit of time learning Max and moving to Kyma was a bit like switching from assembler to a high level language.


[EN]: I think you might be the first astrophotographer I’ve ever met! Could you please introduce us (as total novices) to the equipment and software that you use to capture & process images like these

False color image of the Helix Nebula, resembling a human iris
Helix Nebula – Narrowband bi-color + RGB stars. Image provided by Rick Stevenson.

[RS]: You can do astrophotography with a digital camera and conventional lens. A tripod is sufficient for nightscapes but you need some sort of simple tracking mount to do longer exposures. At the other end of the spectrum (haha) are telescopes, cameras and mounts specifically designed for astrophotography. The telescopes range from small refractors (with lenses) to large catadioptric scopes combining a large mirror (500mm in diameter or even more) with specialized glass optics.

Cameras consist of a CCD or CMOS sensor in a chamber protected from moisture, thermally bonded to a multistage Peltier cooler. Noise is the enemy of faint signals so it is not uncommon to cool the sensor to 30 °C or more below ambient temperature. The cameras are usually monochromatic and attached to a wheel containing filters. Mounts for astrophotography are solid, high precision devices that can throw around a heavy load and also track the movement of the sky with great accuracy. Summary: there’s lots of fancy hardware used for amateur astrophotography, ranging from relatively affordable to the price of a nice house!

On the software side, the main components are capture software which runs the mount, filterwheel and the camera(s), and processing software which turns the raw data into an image. Normal procedure is to take many exposures of a deep sky object from seconds to many minutes long and “stack” them to increase the signal to noise ratio. We also take a few different types of calibration images that are used to remove artifacts and nonlinearities from the camera sensor and optical train. Processing the raw data can be an involved process and is something of an art in itself. There are quite a few software packages and religious wars between their proponents are not uncommon.
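The stacking step rests on a simple statistical fact: averaging N exposures with uncorrelated noise improves the signal-to-noise ratio by roughly the square root of N. A single-pixel simulation (the signal and noise levels are made up, and real calibration is far more involved):

```python
import math
import random

rng = random.Random(0)
TRUE_SIGNAL = 100.0   # flux from a faint object at one pixel (made up)
NOISE_SIGMA = 20.0    # per-exposure noise, simplified to one Gaussian term

def exposure():
    """One simulated sub-exposure of a single pixel."""
    return TRUE_SIGNAL + rng.gauss(0.0, NOISE_SIGMA)

def stack(n):
    """Mean-stack n sub-exposures."""
    return sum(exposure() for _ in range(n)) / n

def residual_noise(n, trials=400):
    """Empirical std-dev of the stacked estimate around the true value."""
    errors = [stack(n) - TRUE_SIGNAL for _ in range(trials)]
    mean = sum(errors) / trials
    return math.sqrt(sum((e - mean) ** 2 for e in errors) / trials)

sigma_1  = residual_noise(1)    # roughly NOISE_SIGMA
sigma_64 = residual_noise(64)   # roughly NOISE_SIGMA / 8
```

This is why hours of short sub-exposures, stacked, reveal detail that no single exposure could.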

[EN]: Speaking of software packages, what is PixInsight?

[RS]: PixInsight is the image processing and analysis software that I use. It has been developed by a small team of astronomers and software engineers based in Spain. It has a somewhat unfair reputation for being complicated and difficult to use. This is partly down to a GUI which is the same on Windows, Mac and Linux (but not native to any of them) and partly because it includes a wide range of different tools and the philosophy of the developers is to expose all of the useful parameters. When I started doing astrophotography I tried a few different software packages and PixInsight was the one that produced the best results for me. Some of the other commonly used packages hide a lot of the ugly details and offer a more guided experience which suits some types of imagers better. A more recent development is the use of machine learning in astronomical processing to do things like noise reduction and sharpening. I haven’t quite decided how I feel about that yet.

I think there are some interesting parallels between PixInsight and Kyma. Apart from the cross platform problem, both have to maintain that careful balance that offers a complex, highly technical set of features in a way that can satisfy the gamut of users from casual to expert.

[EN]: Do you have your own telescope? Do you visit telescopes in other parts of the world?

[RS]: I have a few telescopes. Unfortunately, I live in the city and the light pollution prevents me from doing much data collection from home. When I get the opportunity I have a few favorite locations not too far away where I can set up under dark skies. Apart from dark skies, a steady atmosphere is important for high resolution imaging. This is called the “seeing” and it’s usually not that great around here.

Rick Stevenson and his C300

For several years I have been a member of a few small teams with automated telescopes in remote locations. That solves a lot of problems apart from fixing things when they go wrong! My first experience was at Sierra Remote Observatory near Yosemite in California. I also shared a scope with some friends in New Mexico and now I’m using one in the Atacama Desert in Chile. The skies in the Atacama are very dark and the seeing is amazing! I haven’t ever visited any of the sites but I did catch up with some team mates on a couple of business trips.

[EN]: Does your camera give you access to data files? From some of the caption descriptions, it sounds like your camera has sensors for “non-visible” regions of the spectrum, is that true?

[RS]: The local and the remote scopes all deliver the raw image and calibration data in an image format called “FITS” which is basically a header followed by 2D arrays of data values. A single image will almost always be produced from very many individual sub-exposures. The normal process is to calibrate and stack the data for each filter before combining the stacks to produce a color image.

The camera sensors will usually detect some UV and near-infrared as well as visible light, but they aren’t commonly used in ground-based imaging. I use red, green and blue filters for true color imaging, or I image in narrowband. Narrowband uses filters that pass very narrow frequency ranges corresponding to the emissions from specific ionized elements, usually Hydrogen alpha (a reddish color), Oxygen III (greenish blue) and Sulphur II (a different reddish color). Narrowband has the advantage of working even in light-polluted skies (and to some extent rejecting moonlight) and can show details in the structure of astronomical objects not visible in RGB images. The downside is that you need very long exposure times and narrowband images are false color. Many of the Hubble images you’ve undoubtedly seen are false color images using the SHO palette (red is Sulphur II, green is Hydrogen alpha and blue is Oxygen III).
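At its core, the SHO palette is a channel assignment plus a per-channel stretch. A toy sketch on a few fake pixel values (real processing uses much more careful, nonlinear stretches):

```python
def normalize(channel):
    """Stretch a mono narrowband stack to the 0..1 display range."""
    lo, hi = min(channel), max(channel)
    return [(v - lo) / (hi - lo) for v in channel]

def sho_to_rgb(sii, h_alpha, oiii):
    """Hubble-palette mapping: SII -> R, H-alpha -> G, OIII -> B."""
    r, g, b = normalize(sii), normalize(h_alpha), normalize(oiii)
    return list(zip(r, g, b))

# Three tiny fake narrowband "stacks", flattened to 4 pixels each.
sii     = [10.0, 40.0, 80.0, 160.0]
h_alpha = [500.0, 900.0, 700.0, 1300.0]  # Ha is usually the strongest signal
oiii    = [5.0, 20.0, 35.0, 50.0]
pixels = sho_to_rgb(sii, h_alpha, oiii)
```

The per-channel normalization is also why Hydrogen alpha, though physically dominant, doesn’t simply wash the image green: each channel is stretched to its own range before combination.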

Tarantula region in SHO palette, where SHO is an acronym for Sulphur II (mapped to red), Hydrogen alpha (mapped to green) & Oxygen III (mapped to blue). Three images are captured sequentially using three narrowband filters and then combined to create a false color image.
Tarantula region in SHO palette. Image provided by Rick Stevenson

[EN]: What is your connection with the Astronomical Association of Queensland?

[RS]: I have been a member of the AAQ for well over a decade. From 2013 to 2023 I was the director of the Astrophotography section. I helped members learn how to do astrophotography, organized annual astrophotography competitions and curated images for the club calendar.


[EN]: Have you ever seen a platypus in real life?

[RS]: Several times at zoos. Only a handful of times in the wild. They are mostly nocturnal and also very shy!

[EN]: In New Mexico (where I grew up), we all learned a song in elementary school with the lyrics: “Kookaburra sits in the old gum tree, merry merry king of the bush is he, Laugh Kookaburra, laugh Kookaburra, gay your life must be”. Is it a pseudo-Australian song invented for American kids or did you learn it in Australia too?

[RS]: I don’t know the origin but we have the same song. I have a toddler grandson who sings it! We have Kookaburras in the bushland behind our house that start doing their group calls around 3 or 4am.

Speaking of the Kookaburra song, there was a story about it in the news a few years ago when the people who own the rights to the song won a plagiarism lawsuit against the band Men at Work.

Listen for the Kookaburra song in the flute riff!

[EN]: What was the best part about growing up in Australia that those of us growing up in other parts of the world missed out on?

[RS]: Vegemite, perhaps? 🙂

The area around Brisbane in South-East Queensland has a lot going for it. The climate is pleasant and mild (except for some hot and humid days in summer). There are great beaches within an hour or so as well as forested areas for hiking. The educational and health systems are good, even the public / “free” parts (true in most of the major centres in Australia). Brisbane is large enough to have some culture and work opportunities without being too busy and fast-paced. It’s pretty good, but not perfect.


[EN]: What was your favorite course to teach at the School of Information Technology and Electrical Engineering, University of Queensland St Lucia?

[RS]: I was an Adjunct Professor at UQ ITEE for 21 years but I didn’t teach any courses. I acted as an industry advisor, was involved in bids to set up research centers in embedded systems and security, and I hired a lot of their graduates!

[EN]: Do you have a “favorite” programming language or family of languages?

[RS]: I rather liked Simula 67 though I only used it for one (grad student) project. I think it was one of the first OO languages if not the first. Most of my work programming experience was in assembler on microprocessors and C on micro and minicomputers. I was very comfortable in C but wouldn’t say I loved it. These days I quite like the Lisp family of languages.

[EN]: Are you one of the founders of OpenGear? In that role, do you do a lot of international travel to the company’s other offices?

[RS]: I was one of the founders and worked in a few roles up until Opengear was acquired in late 2019. Having stayed on after acquisitions in a couple of previous lives I was pleased that the new owners didn’t want me to hang around and I exited just in time for COVID. While at Opengear and in previous startups I made regular business trips, mostly to North America but also the UK and Europe and occasional trips to South-East Asia.

[EN]: What do you identify as the top three “open problems” or “grand challenges” in technology right now?

[RS]: In no particular order and not claiming to have any deep insights…

  • Achieving AGI [artificial general intelligence] is still an open problem and one that, in my opinion, is not going to get solved as quickly as a lot of people think. Maybe that’s a good thing?
  • I spent about a decade working on computer security products starting in the early 2000s and despite a lot of money and effort spent on band-aid solutions, things have only become worse since then. Fixing our whack-a-mole computer security model is certainly a grand challenge, not helped by the incentives that security vendors have to protect their recurring revenues. The recent outage caused by the CrowdStrike security agent crashing Windows computers demonstrates the fragility of our current approach.
  • The ability to create practical quantum computers still seems a bit of a reach though I hear that Wassenaar Arrangement countries are all quietly introducing export controls on quantum computers. Perhaps they know something I don’t.

[EN]: What’s next on the horizon for you? What Kyma project(s) are you planning to tackle next?

[RS]: I have a long list of potential projects but I think the next two I’ll tackle are some baby steps into sonification of astronomical data (thanks for the encouragement) and a Blippoo Box – another of Rob Hordijk’s chaotic generative synths.

[EN]: What are you most looking forward to learning about Kyma over the next year?

[RS]: There are many areas of Kyma that I have only dabbled in and even more that I haven’t touched at all. I’d like to get a lot more fluent with Smalltalk and Capytalk, do some projects with the Spectral and Morphing sounds and also plumb the mysteries (to me) of the Timeline and Multigrid. That should keep me busy for a while!

[EN]: Outside of Kyma, what would you most like to learn more about in the coming year?

[RS]: I have a couple of relatively new synths, a Synclavier Regen and a Buchla Easel, that I would like to spend a lot more time learning my way around. I also want to keep progressing with my Touch Guitar studies.


[EN]: Rick, thank you for the thought-provoking discussion! Can people get in touch with you on the Kyma Discord if they have questions, feedback, or proposals for collaboration?

[RS]: Yes!