Driving the user experience with sound

Interview with Lowell Pickett, Senior Audio Engineer at Zoox
A man with light brown curly hair wearing a cap and orange hoodie, smiling and holding a microphone, posing a question after a conference presentation
Lowell Pickett at the Kyma International Sound Symposium in Brussels (KISS 2013)

Every company should have an audio professional on staff!

—Lowell Pickett, Senior Audio Engineer at Zoox / Sound Lab

Eighth Nerve (EN): Yes! I agree 100%! Could I ask you to reflect on the skills and experience that someone from professional audio can bring to other industries (i.e., alternative applications of sound and audio)?

Lowell Pickett (LP): I often take my skills for granted, but the difference between a good-enough and a well-polished audio asset can truly impact engagement and can help to define a product when done well.

Sound is a ubiquitous component of our day-to-day, media-filled experience, and we often simply accept whatever audio quality might be convenient at any given moment – but sound has the potential to offer so much delight…  In many situations where people have become accustomed to marginal audio quality – an old, well-used comms system perhaps or a family video that’s been watched many times – some applied audio knowledge can truly improve the (audio) quality of their life.

A well-considered audio experience improves workplaces and recreational spaces alike – and a positive audio association with a brand or personal reputation can be a distinguishing characteristic.

EN: What is Zoox? How long have you been there, and what is the accomplishment you’re most proud of so far?

LP: I’ve been working for Zoox, an autonomous ride-hailing company, for about 6 years – and the culmination of that effort is something that I can quite literally look back at now and appreciate.  Today, if you’re in the right part of San Francisco, Las Vegas or Foster City, you can hear these vehicles approach – broadcasting audio I’ve helped to develop – and turn around and look back at them!

In working on this project, my worlds have collided in ways that I never would have anticipated: I’ve been obsessed with cars/trucks/vehicles since I can remember, and I’ve been able to build off past audio explorations on a platform that would have been fantasy just a decade ago.

EN: What’s a typical day for you like at Zoox?

LP: On a typical day, I bounce back and forth between the studio at the company HQ in Foster City and assorted test platforms in the engineering wing, relating our designs from studio prototypes to real-life, on-hardware applications. I balance a modest schedule of meetings with hands-on audio/UX feature development projects.

Our exterior audio projects demand quite a bit of outdoor testing, and this brings a welcome shift from the typically indoor audio life as my teams regularly travel to vehicle testing venues.

EN: Can you describe a few of the sounds the vehicles make? What are some of your favorite sonic elements that people can be listening for? Are there different sets of sounds for the exterior versus the interior of the vehicle?

LP: We’ve been developing an interior experience for our riders as well as exterior audio behaviors that help our vehicle interact with the world at large.

One of my favorite sounds is one we call ‘Aura’ – it’s a relaxing, slowly swirling ambient bed that can be heard as a rider approaches the waiting vehicle.  The exterior elements are different from what you hear on the inside, but complementary – so the auditory experience of entering the vehicle is a smooth and welcoming transition. This sound bookends the rider experience, so you would hear it as you exit as well.

I’m also fond of our exterior door chime – it plays on either end of the vehicle, directed towards the operating side. It’s a short puff of a sound that sits pleasantly in urban environments while effectively alerting those nearby to the moving door.

EN: Do you think that non-speech audio can convey meaning? Please say yes! 😉 Can you give some examples?

LP: Yes!  And this is something my team is exploring! As our vehicle navigates, it will need to communicate with the world around it – and we aim to apply audio with as little speech as necessary.

EN: Does Kyma play a role in your current work environment?

LP: Yes!  Kyma plays a big role… a lot of the audio you hear on the vehicle was first prototyped using Kyma.

Kyma has always played a substantial role at Zoox’s SoundLab.  When I started in 2019, there was already a Pacarana in the studio – a lot of early audio concept work was created in Kyma which has influenced the sound design present on the vehicle today. I gladly continued that tradition, and as our vehicle came to life, I persisted in prototyping various audio features using Kyma.

These days I use Kyma in several ways. On a basic level, I use Kyma to model systems within various hardware constraints; by building audio prototypes that consider specific control signal refresh rates and quantizations, I can try to exploit the reactive space within their confines.

Kyma has also played a role in our mix process where specific multi-channel panning and reverb processing is applied to complement our interior carriage seating arrangement and associated speaker placement.

On the exterior, Kyma was used to develop panning effects across our multi-speaker arrays for our passenger ingress and egress sequences. In these cases, it’s convenient to use the associated Kyma Control app on the iPad to create triggers and dial in parameters while connected to and working around our vehicle. It feels nimble to be able to work in the cabin or walk around the vehicle with just a lightweight control surface, and because Kyma Control extracts its GUI from the Kyma VCS, this pairing makes for quick control-surface development for audio demonstrations.

EN: The symmetric design of the vehicle is super appealing! Did the symmetric cabin pose any special challenges (or offer opportunities) when it comes to audio?

LP: Yes! And actually, the inward facing arrangement of the cabin presented me with the perfect opportunity to apply some of the things I’d learned from my previous Kyma-based spatial performance environments!

EN: You mean like the portable immersive sound setup you presented at KISS 2010 in Vienna?

LP: Are you referring to ‘The Pentagon’?

EN: Yes! The Pentagon! Here’s an excerpt from your abstract (and a link to the video of your presentation):

The Pentagon – an experiment using Kyma for live 3D audio manipulation in the music venue of the future View Video

The purpose of 'The Pentagon' is to present a new type of performance space that provides the audience with a unique social and musical experience in surround sound. The performer is positioned at the center of The Pentagon (in the middle of the audience) and uses Kyma to spatially mix and manipulate audio - alternatively, an assortment of performers can be positioned around the edges of The Pentagon. This experiment aims to heighten the audience's awareness of the real and virtual space around them.

EN: How is Kyma different from other audio software that you’ve worked with? Is there a reason why you continue to use it?

LP: There is something about the process that Kyma enforces that speaks to me: build it in the software, get it working with the hardware, interact with what you made, repeat with refinements. You iterate on what you have been creating in a way that is unlike how you’d ever approach a DAW or other electronic instrument. I find that this is relatable to hardware development where firmware is updated such that features may be created and/or tuned to be more ideally reactive, improving how they express incoming control signals.

EN: Can you give an example of how Kyma has come in handy for prototyping and iterative tweaking?

LP: Sure!  When you’re working with automotive audio, many components run on a kind of local area network called the CAN bus. As a hypothetical, we might want to ensure control signals are going to work well at the lower update rate of the CAN bus. Something might sound great when the controls update once per millisecond, but we may also have to consider how that system sounds when updates arrive only once every 0.5 s. In Kyma, we could simulate “downsampling” the control signals and tweak the smoothing and filtering until it sounds great in the target environment.
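For illustration, here is a rough Python sketch of that kind of audition (the numbers and function names are ours, not Zoox’s actual tooling): a control gesture sampled-and-held at the bus update period, then smoothed with a one-pole filter.

```python
import math

SR = 48000                                   # audio sample rate

def control(t):
    # Stand-in for an expressive control gesture (slow sine sweep)
    return 0.5 + 0.5 * math.sin(2 * math.pi * 0.3 * t)

def simulate(update_period_s, smooth_tau_s, seconds=5):
    """Sample-and-hold the control at the bus update period, then smooth it."""
    alpha = 1 - math.exp(-1 / (smooth_tau_s * SR))   # one-pole smoothing coefficient
    step = int(update_period_s * SR)
    held, smoothed, out = 0.0, 0.0, []
    for n in range(seconds * SR):
        if n % step == 0:
            held = control(n / SR)           # new value arrives from the "bus"
        smoothed += alpha * (held - smoothed)
        out.append(smoothed)
    return out

fast = simulate(0.001, 0.002)   # ~1 kHz updates, light smoothing
slow = simulate(0.5, 0.1)       # one update per 0.5 s needs heavier smoothing
```

Comparing `fast` and `slow` by ear (or by plotting) shows why the smoothing and filtering must be retuned for the slower update rate.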

EN: Kind of like having a couple of small monitors in the studio to test that your music is still going to sound okay on broadcast radio?

LP: Exactly.

EN: Do you remember how you first heard about Kyma and why you decided to get it?

LP: Amongst the more curious publications that arrived at my house when I was growing up was the quarterly printing of Computer Music Journal from MIT Press – my father had collected them for years, and they were archived on our bookshelves at home.  I had started collecting my own music-making equipment while I was in high school, and I became interested in the multi-colored volumes.  While I admit that a lot of the articles went over my head at the time, I always found myself flipping to the back pages where an assortment of somewhat recently released synths, samplers, interfaces or software titles were pictured and given a short review.  In one issue (perhaps in ‘98?), [the] unique script on the Capybara 320 caught my eye and I was intrigued by the mysteries within its black box.

So I sent a letter in the mail to Champaign, IL to request a demo CD and figure out what Kyma was all about.  That CD really blew my mind… I played it for all my friends – Carla’s voice resynthesized by a bank of oscillators, an assortment of spectral morphs, exotic filtering…

EN: Is this around the time when you were finishing high school or were you already at Cal Arts?

LP: I think this was around the same time I started my freshman year at CalArts.  I was playing my electric cello a lot and looking for interesting ways to process my performance. My ‘beige’ 266 MHz G3 was a fine machine for the time, but it still often provided a bit of latency and a somewhat glitchy experience once you tried to stack up FX while processing in real time. I had been learning Max/MSP and a bit of SuperCollider, but I became convinced that Kyma’s processing capabilities were what I really needed for live performance. I spent the following summer working as a studio assistant at a jingle house and saved up for my first system…

EN: What’s the most important thing you learned from your time at CalArts?

LP: CalArts prides itself on helping its students become independent artists, and I believe that those skills can be applied universally. I think CalArts provided me with confidence to work independently towards succeeding in whatever endeavor I might choose to pursue, even if I need to educate myself more along the way.  I left feeling that I could coach myself through a creative process or other more technical developments. While I have faith in my ability to work independently, as a musician I’m also conditioned to work intuitively with peers as if we were in an ensemble.

EN: Any other fond memories of your time at CalArts?

LP: When I arrived on campus, I knew immediately that I was amongst my people. It was rumored that my freshman dorm room once was assigned to Tim Burton – the doorway had been uniquely stylized and the interior was equipped with student-built loft beds that made the room essentially a duplex.  My roommate Nolan (who also became a Kyma user!) and I transformed the place with camo netting, Christmas lights and an array of borrowed / purchased / liberated speakers and audio equipment.  It was a jam-room for live-electronics improv – I’m sure our neighbors hated us.

I had been playing classical cello since I was 7, but was looking for an exit plan.  My music skills brought me to the school as a performance major, but I quickly plotted a course to sidestep into the world of music technology.  I was fresh out of high school, armed with a small synth/sampler collection, a G3 computer, a new electric cello, and a ton of NYC attitude – bringing all that to the burbs of Valencia, CA was a bit of a culture shock.

Looking back, I remember training to achieve a state of flow while playing electric cello in Susie Allen’s improv classes – some of those sessions really surprised me in that there were moments where, out of nothing, a powerful musical moment was created that would only be captured in your personal memory – these are well carved into my mind.

EN: Do you identify primarily as an artist? As an audio engineer? Both?

LP: [I identify as] an enthusiast – perhaps a professional enthusiast now – but it’s my enthusiasm for new audio experiences that drives me.  I don’t really know that I would naturally call myself an artist OR an engineer – while I’ve worn these titles, they both feel a little uncomfortable to be honest.  I can get caught up in making music or considering cinema – I can also get lost in living-room speaker placement and DSP-assisted room tuning – or listening to the ambience with my head tilted in various positions while my fingers bend my ears around…

EN: You’ve had a long career in the audio industry, and some of your previous jobs placed you in the role of “tech support” for a sound artist. How do you balance this role with your own creative work? Does technical support have some aspects in common with the role of mental health counselor? (As software developers, we often get to interact with customers when they are under their most stressful conditions — like just before a performance. Were you often working under similar conditions?)

LP: I absolutely relate to the role of mental health counselor!  Unfortunately, I’m not academically qualified to list that on my resumé.  When you are in a private setting with an artist for long periods of time, you are not just their tech, or engineer, or whatever… Some people foster an environment that supports feedback and discussion while others just want you to do what you were hired to do – and otherwise, keep quiet.

I try to keep my personal work experientially unique compared to what I am employed to do. That way, I feel like burn-out from one is somewhat fire-walled from the other.

EN: What’s your advice about how best to work in a supportive role for an artist?

LP: Listen a lot.  Present feedback but not direction.

EN: Without naming names… describe your worst “client from hell” experience.  What happened? How did you handle it? Would you do anything differently now?

LP: At some point, I had developed a bit of a reputation where my employers thought I was conditioned to deal with some of their most difficult artists or clients!  I can be hard to crack, but I certainly have my limits. I don’t like to describe those experiences as “the worst” though; it’s more fun and less negative to think of them as the oddest…

EN: So I read that you sometimes did “house calls” for Westlake Pro (it almost sounds like a doctor 😉 ) Did you have any crazy or unexpected experiences making house calls and studio calls in LA (again, without naming names!)?

LP: I was frequently hired as a contractor by a music retailer – and did house calls to support setup or training on whatever a customer may have been encouraged to purchase. Not all were professionals, and often, customers had acquired items that they struggled to understand and operate. They had dreams of making masterpieces nonetheless.  I remember one such customer wanted me to teach them how to record with a mic into a DAW on their laptop – and following hours of slow and careful instruction, they procured a small crumpled leathery pouch.  This was a whale heart, they told me – and they wanted to introduce this rare instrument to the world. They began to hum into the pouch/wad and move it frantically around their lips and mouth – then this hum turned into more of a wail with a bit of a gargle. The energy in the room changed, and I felt a bit concerned…  After about 10 minutes, they were done with the take and put the ‘whale heart’ down. Happy with the results, they wanted me to come back the next week and record more of their songs with them, but I politely declined.  Maybe they have a hit now?

EN: Conversely, describe a mountaintop, peak experience you’ve had in the audio industry.

LP: I think one of my more appreciated industry achievements came while I was working as lead tech at Chalice Recording Studios in Hollywood.  After I started there, it became clear to me that our clients were critical of the main speaker systems in the flagship A and B mix suites.  I talked the manager into investing some money to change things up a bit – I pulled out all of the analog crossovers and EQs and integrated modern concert-sound subwoofers and digital processors with our large Augspurger/TAD in-walls. This was a big leap for a studio that was known more as a reflection of golden studio arts than a new technology adopter. Once everything was installed, cleaned up and tuned, the results spoke for themselves.

Chalice’s clients were very happy with the sound quality and clean bass in our rooms – word got out quickly that we’d made some improvements.  Our manager had the rooms booked solid and I was getting asked quite a bit about the changes we’d made. There was a short feature on our update in an industry magazine and after a while, some studios across town started installing  similar equipment.

EN: Dante or AVB? Do you have a favorite and why?

LP: Dante – there are more accessory switches that work with Dante right out of the box making for an easier setup – and it seems to have more support from manufacturers here in the US.

EN: Do you have a favorite audio interface? What makes it your favorite?

LP: The latest generation of NTP/DAD/MTRX Thunderbolt interfaces with FPGA have been game-changers for multichannel routing and level control across multiple I/O formats. The ability to use a Stream Deck and DADman to control these interfaces has made them flexible and configurable to match personal workflow preferences.

A DAD Core256 can really tie a studio together – especially considering the Pacamara! I use the inexpensive MiniDSP MCHStreamer Box with the Pacamara – and work with 8 channels of 48k ADAT I/O from Kyma. The Core256 (or larger interfaces) can bring that ADAT stream back into my DAW over Thunderbolt and/or route the signals to any other MADI/Dante connection point with remarkably low latency – all at the touch of a button/knob on my Stream Deck.

EN: Favorite near-field studio monitors?

LP: I have become partial to PMC’s Result 6 active monitors.  I like their fancier models as well, but their entry near-field has become a familiar tool for me that is easy to place on a desk for relatable results in bedroom studios and professional rooms alike.  I am impressed by how well they reproduce clean bass despite their size, apparently thanks to their ‘Advanced Transmission Line’.

EN: You also worked with Dr. Henry Nicholas (Broadcom founder) for a period of time. At SSC we think of him as Dr. Ethernet! What kinds of research and application development did you do for Henry Nicholas’ studio?

LP: I signed an NDA for Dr. Nick’s company when I left (it’s been over 10 years now)… But one feature of working at his facilities that I am allowed to talk about is that I had access to a fully loaded Capybara 320, installed to support a 7.1 surround mix room – that setup really got me hooked on working with Kyma and multi-speaker setups.

EN: What TV shows did you work on with composer David Vanacore?

LP: There were so many! But I’d say his work with Mark Burnett felt most exciting to engage with – I regularly re-arranged his themes to fit weekly segments of ‘Survivor’, and it was fun engaging with his massive collection of sampled instruments to accent spicy moments during the ‘Tribal Council’ scenes.

EN: Were you working with Howard Shore when he composed the score for Lord of the Rings? (That score is so beautifully melancholy — did it make you feel sad to be hearing that music all day at work?)

LP: I worked with Howard Shore not long after I graduated from CalArts – he was already working on the second movie in the series at that time, The Two Towers.  And yes, I think the melancholy nature of the music may have set some moods… especially as the seasons turned to fall and winter at his upstate New York facility.  Recently, I rewatched the trilogy with Lawrence; it had been over 2 decades since I’d experienced LOTR, and it felt as epic and exciting to me as it did originally!

EN: Can you describe the project you did with Allison Goodman involving an EEG headset and memory-evocative soundscapes? (I witnessed someone literally moved to tears while experiencing the sound installation you guys created for KISS 2013 in Brussels. Did you ever consider using the combination of sounds and EEG as a form of music therapy?)

Allison Goodman @ KISS 2012
Allison Goodman wearing an Emotiv EEG headset at the Kyma International Sound Symposium (KISS2012)

LP: This was our first time collaborating!  The concept was straightforward – Allison found or captured sound that appealed to her and related to personal memories – we had a small PCM recorder that she used around our neighborhood in LA to collect audio that she felt spoke to our life there.  These sounds were sequentially presented over a looping background arrangement of cello recordings I’d provided.  She was mixing the performance live in her head!  The idea was that sounds she could personally relate to could inspire an emotional response that was picked up on the EEG Headset – and then mapped to control a combination of volume, radius, reverb mix and rate of spin of various sound elements that whirled around the theater in Quad.

Allison is able to connect with people on an emotional level very quickly – and this headset seemed to amplify her powers! Working with EEG was an interesting experience, but after seeing more than one emotional response such as you described, we felt that there was something powerful going on that we needed to be careful with and understand a bit better.

EN: When your son Lawrence was born, did it change the way you look at life and the world? Or was it a more incremental change when he joined you on the journey?

LP: I’d say it was a massive change – and it really made me look at the life that my wife Allison and I had enjoyed as a couple in Los Angeles. We asked ourselves if that world was what we wanted young Lawrence to grow up in – and in the end we couldn’t convince ourselves that it was… so a year after he was born, we concluded that chapter of our story in LA.

EN: California, New York, or Texas? Which do you call home? (Or somewhere else not on this list?)

LP: I’ve spent more than half my life in California now – but not all in the same region.  I’d need to use your tachyon field to revisit the New York I was familiar with, yet still I’m way too much of a Yankee to spend too much time in Texas (they all can tell!).  California is home today!

EN: If a 19-year-old approached you today, seeking advice on the best route to working in the audio recording industry, would you discourage or encourage them? What route would you advise them to take?

LP: I would never intentionally discourage them – but I would be honest in relating my experience, whatever impact that might have. A lot has changed since I got started – there are now audio positions that didn’t exist when I was younger in industries that I hadn’t considered.  I would suggest that they cast a wider net and look beyond the audio recording industry.  There are amazing opportunities in technology and media development that are outside of the traditional entertainment industry where the culture can be different.  I will always be connected to the music biz, but it can feel like a service industry; that’s something young people should consider.

EN: Do you think your son, Lawrence, will be a musician? Or is he clearly headed down a different path?

LP: Music appreciation is in his bones – and it makes him sing and dance all day.  He’s not so interested in musical instruments or taking lessons, but his life seems to be scored to an internal soundtrack that he moves along to.  A joyous path to somewhere – maybe a career in music, maybe not – but along the path itself, there will be music and dancing!

EN: Do you believe that you will ever visit another planet? Will Lawrence?

LP: I will not be visiting any planets – but it’s not the planets’ fault, more the method of transport needs improvement.  I dread long flights – so spacecraft would have to advance to become much more enjoyable than commercial air travel, or even ocean cruise-lines.  It doesn’t look like the International Space Station or Dragon capsules are all that comfy today –  and we aren’t able to project ourselves instantaneously via some cosmic beam.  Lawrence on the other hand may be visiting planets already, behind my back without me knowing.

EN: What area outside of audio would you most like to learn about over the next 5 years?

LP: I’ve been living in the SF bay area for over 8 years now – I cross the bay almost every day and I am frequently along its perimeter.  I’m tired of being a spectator and I think I’d like to learn how to sail and operate a boat.

EN: What are you most looking forward to in the immediate future?

LP: I’m excited for these bots [Zoox autonomous taxis] to make their way in the world and contribute to our urban soundscapes – and I’m excited for people to enjoy the unique in-cabin sound experience on board as well, as we prepare to start welcoming our first customers!

EN: Were you guys at the Consumer Electronics Show in Las Vegas?

LP: Zoox did indeed have a booth at CES this year, and we’ll be welcoming our first public riders later this year in Las Vegas and San Francisco.

EN: I’m eager to give it a try! I think of you guys every time I look out over a vast parking lot filled with cars that go virtually unused for 8 hours a day. An on-call robo-taxi makes so much sense (and whoever designed your vehicle made it look almost cuddly and adorable, like a puppy!)

Generative sound design in Kyma

By augmenting traditional sample-based sound design with generative models, you gain additional parameters that you can perform to picture or control with a data stream from a model world — like a game engine. Once you have a parameterized model, you can generate completely imaginary spaces populated by hallucinatory sound-generating objects and creatures.
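As an illustration of that control path: Kyma can receive OSC messages over the network, so a game loop can stream model-world parameters directly to a Sound. Here is a minimal Python sketch using the python-osc package; the IP address, port, and OSC widget address are placeholders to replace with the values your own system exposes.

```python
import math
import time

from pythonosc.udp_client import SimpleUDPClient

# Placeholders: use your Paca(mara)'s actual IP, OSC port, and the
# widget address exposed by your Kyma Sound's VCS.
client = SimpleUDPClient("192.168.0.100", 8000)

for frame in range(600):                      # ~10 seconds at 60 fps
    t = frame / 60
    distance = 0.5 + 0.5 * math.sin(t)        # stand-in for game-engine data
    client.send_message("/vcs/Distance/1", distance)
    time.sleep(1 / 60)
```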

Thanks to Mark Ali and Matt Jefferies for capturing and editing the lecture on Generative Sound Design in Kyma presented last November by Carla Scaletti for Charlie Norton’s students at the University of West London.

Sound design drives the narrative

Phaze UK sound designers — Rob Prynne, Matt Collinge & Alyn Sclosa along with Kyma consultant Alan M. Jackson — have been getting some well-deserved attention recently for their work on Ridley Scott’s film Gladiator II, work that has been shortlisted for an Academy Award nomination and a BAFTA.

In the 6 January 2025 ToneBenders sound design podcast, Timothy Muirhead and the sound team delve into the details of how sound design guides audience sentiment during each of the gladiatorial contests — with the “crowd sound” taking on a role typically served by the musical score.

Director Scott’s challenge to the sound team was that the crowd itself should function as a major character in the film — telling the story of the rise and fall of each of the other characters — and symbolizing the fall of Rome as the populace loses faith in the emperors.

“So much detail and effort went into making the crowd reactions in the Roman Colosseum…narrate the emotions of the battles”

During the podcast, the sound team — Paul Massey (Dialog/Music Re-Recording Mixer), Danny Sheehan (Supervising Sound Editor), Matt Collinge (Supervising Sound Editor & SFX/Foley Re-Recording Mixer) and Stéphane Bucher (Production Sound Mixer) — recount their unusual approach of including the post-production team in the production phase so they could coordinate their efforts. This gave the post-production sound designers an opportunity to “direct” the crowd extras and to ensure that they could capture the raw material needed for post production crowd-enlargement.

In an IBC behind-the-scenes article, supervising sound editors Matthew Collinge and Danny Sheehan (co-founders of Phaze UK) describe how they generated the sound of 10,000 spectators by layering the sound of extras on the set with recordings of cheers and jeers from bullfights, cricket matches, rugby and baseball games and then “…transformed them into a cohesive roar using a Kyma workstation,” according to Sheehan.

During an interview with A Sound Effect, Matthew Collinge describes the sound design for the gladiatorial contests involving animals: “We manipulated actual animal sounds to highlight their aggression and power. For the baboons, we morphed chimp calls and then combined them with the screeches of other animals to create a unique and very intimidating sound.”

In that same interview, Collinge describes how Rob Prynne and the team enhanced the sounds of the arrow trajectories:

Rob Prynne used these recordings to model a patch in the Kyma where we took the amplitude variation between the L and the R in the recordings and used this to create an algorithm that we could apply to other samples. We then mixed in animal and human screams and screeches which had their pitch mapped to this algorithm and made it feel as one with the original recordings.
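As a rough illustration of that idea (this is our sketch, not Phaze UK’s actual Kyma patch), one could extract the left/right amplitude difference of a stereo fly-by as an envelope and map it to playback-rate multipliers for another sample:

```python
import numpy as np

def envelope(x, sr, win_ms=10):
    """Short-window RMS amplitude envelope of a mono signal."""
    w = max(1, int(sr * win_ms / 1000))
    return np.sqrt(np.convolve(x ** 2, np.ones(w) / w, mode="same"))

def lr_pitch_curve(stereo, sr, semitone_range=7.0):
    """Map the normalized L/R amplitude difference to playback-rate multipliers."""
    env_l = envelope(stereo[:, 0], sr)
    env_r = envelope(stereo[:, 1], sr)
    diff = (env_l - env_r) / (env_l + env_r + 1e-9)      # -1 .. 1 trajectory
    return 2 ** (diff * semitone_range / 12)             # rate = 2^(semitones/12)

# rates = lr_pitch_curve(arrow_flyby, 48000)  # then drive a scream sample's pitch
```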

GLADIATOR II: Behind The Glorious Sound – with Matthew Collinge & Danny Sheehan


When you see Gladiator II — whether in a theatre or via your favorite streaming service — you’ll be rewarded with a bit of extra information if you stick with it through the closing credits!

Sound design & Kyma Consultancy credits. Photo submitted by an anonymous movie-goer who patiently sat through the end credits.

Generative sound design at University of West London

At the invitation of UWL Lecturer Charlie Norton, Carla Scaletti presented a lecture/demonstration on Generative Sound Design in Kyma for students, faculty and guests at University of West London on 14 November 2024. As an unanticipated prelude, Pete Townshend (who, along with Joseph Townshend, works extensively with Kyma) welcomed the Symbolic Sound co-founders to his alma mater and invited attendees to tour the Townshend Studio following the lecture.

After the seminar, graduating MA students Vinayak Arora and Sabin Pavel (hat) posed with Kurt Hebel & Carla Scaletti (center) and Charlie Norton (distant upper right background)
UWL Professor of Music Production Justin Paterson and Trombonist/Composer/Kyma Sound Installation artist Robert Jarvis discuss the extensive collection of instruments in the Townshend Studio

It seems that anywhere you look in the Townshend Studio, you see another rock legend. John Paul Jones (whose most recent live Kyma collaborations include Sons of Chipotle, Minibus Pimps, and Supersilent among others) recognized an old friend from across the room: a Yamaha GX-1 (1975), otherwise known as ‘The Dream Machine’ — the same model JPJ played when touring with Led Zeppelin and when recording the 1979 album “In Through The Out Door”. The GX-1 was Yamaha’s first foray into synthesizers; only 10 were ever manufactured, and it featured a ribbon controller and a keyboard that could also move laterally for vibrato. Other early adopters included ELP, Stevie Wonder and ABBA.

JPJ recollects his days of touring with the GX-1 and the roadie who took up temporary accommodation in its huge flight case, as Alan, two students, Bruno and Robert look on.
Charlie Norton with Alan Jackson (back), JP Jones (at Yamaha GX-1), Carla Scaletti & Kurt Hebel in the Townshend Studio
Alan Jackson and Pete Johnston pondering the EMS Vocoder
Composer Bruno Liberda (the tall one) with Symbolic Sound co-founder Carla Scaletti

 

Kyma Sound design studies at CMU

Did you know that you could study for a degree in sound design and work with Kyma at Carnegie Mellon University? Joe Pino, professor of sound design in the School of Drama at Carnegie Mellon University, teaches conceptual sound design, modular synthesis, Kyma, film sound design, ear training and audio technology in the sound design program.

Sound design works in the spaces between reality and abstraction. Sounds are less interesting as a collection of triggers for giving designed worlds reality; they are more effective when they trigger emotional responses and remembered experiences.

Thinking through sound — Ben Burtt and the voice of WALL-E

Ben Burtt was recently awarded the 2024 Vision Award Ticinomoda, whose citation describes him as:

“a pioneer and visionary who has fundamentally changed the way we perceive sound in cinema.”

In this interview, Burtt shares some of his experiences as Sound Designer for several iconic films, including his discovery of “a kind of sophisticated music program called the Kyma” which he used in the creation of the voice of WALL-E.

The interviewer asked Ben about the incredible voices he created for WALL-E and EVE:

Well, Andrew Stanton was the creator of the WALL-E character in the story; apparently he jokingly referred to his movie as the R2-D2 movie. He wanted to develop a very affable robot character that didn’t speak, or had very limited speech that was mostly sound effects of its body moving and a few strange kinds of vocals, and someone (his producer, I think — Jim Morris) said, well, why don’t you just talk to Ben Burtt, the guy who did R2-D2, so they got in touch with me.

Pixar is in the Bay Area (San Francisco) so it was nearby, and I went over and looked at about 10 minutes that Andrew Stanton had already put together with just still pictures — storyboards of the beginning of the film where WALL-E’s out on his daily work activities boxing up trash and so on and singing and playing his favorite music, and of course I was inspired by it and I thought well here’s a great challenge and I took it on.

This was a few months before they had to actually greenlight the project. I didn’t find this out until later but there was some doubt at that time about whether you could make a movie in which the main characters don’t really talk in any kind of elaborate way; they don’t use a lot of words. Would it sustain the audience’s interest? The original intention in the film that I started working on was that there was no spoken language in the film that you would understand at all; that was a goal at one point…

So I took a little bit of the R2 idea to come up with a voice where human performance would be part of it but it had to have other elements to it that made it seem electronic and machine-like. But WALL-E wasn’t going to Beep and Boop and Buzz like R2; it had to be different, so I struggled along trying different things for a few months and trying different voices — a few different character actors. And I often ended up experimenting on myself because I’m always available. You know it’s like the scientist in his lab takes the potion because there’s no one else around to test it: Jekyll and Hyde, I think that’s what it is. So I took the potion and turned into Mr Hyde…

Photo from Comicon by Miguel Discart (Bruxelles, Belgique). This file is licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license.

The idea was to always give the impression of what WALL-E was thinking through sound…

But eventually it ended up that I had a program — it was a kind of sophisticated music program called the Kyma and it had one sound in it — a process where it would synthesize a voice but it [intentionally] didn’t do very well; the voice had artifacts that had funny distortions in it and extra noises. It didn’t work perfectly as a pure voice but I took advantage of the fact that the artifacts and mistakes in it were useful and interesting and could be used and I worked out a process where you could record sounds, starting with my own voice, and then process them a second time and do a re-performance where, as it plays back, you can stretch or compress or repitch the sounds in real time.

So you can take the word “Wall-E” and then you could make it have a sort of envelope of electronic noise around it; it gave it a texture that made it so it wasn’t human and that’s where it really worked. And of course it was in combination with the little motors in his arms and his head and his treads — everything was part of his expression.

The idea was to always give the impression of what WALL-E was thinking through sound — just as if you were playing a musical instrument and you wanted to make little phrases of music which indicated the feeling for what kind of communication was taking place.

Generative events

Explorations in the generative control of notes and rhythms, scales and modes — from human to algorithmic gesture

Composer Giuseppe Tamborrino uses computer tools not just for timbre, but also for the production of events in deferred time or in real time. This idea forms the basis for generative music — examples of which can be found throughout music history, for example Mozart’s Musical Dice Game.

In his research, Tamborrino carries out this process in various ways and with different software, but the goal is always the same: the generation of instructions for instruments — which he calls an “Electronic score”.

Here’s an example from one of his generative scores:

 

As part of the process, Tamborrino has always designed in a certain degree of variability, using stochastic or totally random procedures to speed up the process of abstraction and improvisation; once launched, these procedures run without intervention. Often, though, this way of working resulted in small sections that he wanted to cut, correct, or improve.

This motivated him to use Kyma to pursue a new research direction — called post-controlled generative events — with the aim of being able to correct and manage micro events.

This is a three-step process (sketched in code below):

  • A generational setting phase (pre-time)
  • A performance phase, recording all values into the Timeline (real time)
  • A post-editing phase for the automatic “generative” events (after time)
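Here is a hypothetical Python sketch of those three phases; the event fields and editing rules are illustrative, not Tamborrino’s actual implementation:

```python
import random

# 1. Generational setting phase (pre-time): algorithmically produce events.
def generate_events(n, scale=(0, 2, 4, 5, 7, 9, 11), seed=None):
    rng = random.Random(seed)
    t, events = 0.0, []
    for _ in range(n):
        events.append({
            "time": t,
            "pitch": 60 + rng.choice(scale) + 12 * rng.randint(-1, 1),
            "dur": rng.choice([0.25, 0.5, 1.0]),
        })
        t += rng.choice([0.25, 0.5])
    return events

# 2. Performance phase (real time): in Kyma the values would be recorded
#    into the Timeline; here we simply keep the generated take.
take = generate_events(16, seed=42)

# 3. Post-editing phase (after time): correct or remove micro-events,
#    e.g. drop out-of-range notes and quantize onsets to 16th notes.
edited = [
    {**e, "time": round(e["time"] * 4) / 4}
    for e in take
    if 48 <= e["pitch"] <= 84
]
```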

Tamborrino shared some of the results of his research on the Kyma Discord, and he invites others to experiment with his approach and to engage in an online discussion of these ideas.

The Seeing is Good


Rick Stevenson is a technology entrepreneur with an impressive track record: he co-founded three successful tech startups and played an instrumental role in the growth of several others. In addition to maintaining a nearly five-decade relationship with the University of Queensland’s School of Information Technology and Electrical Engineering as a student, mentor, and industry advisor, Stevenson is also an accomplished astrophotographer, and one of his images was selected as the NASA Astronomy Picture of the Day.

NASA's Astronomy Picture of the Day


Eighth Nerve [EN] Your “Rungler” patch recently made a big splash on the Kyma Discord Community. Could you give a high-level explanation for how it works?

Rick Stevenson [RS]: The Rungler is a hardware component of a couple of instruments, the Benjolin and the Blippoo Box, designed by Rob Hordijk. It’s based on an 8-bit shift register driven by two square wave oscillators. One oscillator is the clock for the shift register and the other is sampled to provide the binary data input to the shift register. When the clock signal becomes positive, the data signal is sampled to provide a new binary value. That new bit is pushed into the shift register, the rest of the bits are shuffled along and the oldest bit is discarded.

Rungler circuit

The value of the Rungler is read out of the shift register by a digital-to-analog converter. In the simplest version of this (the original Benjolin design) the oldest three bits in the shift register are interpreted as a binary number with a value between 0 and 7.

That part is fairly straightforward. The interesting wrinkle is that the frequency of each oscillator is modulated by the other oscillator and also the value of the Rungler. The result of this clever feedback architecture is that the Rungler exhibits an interesting controlled chaotic behavior. It settles into complex repeating (or almost repeating) patterns. Nudge the parameters and it will head off in a new direction before settling into a different pattern. Despite the simplicity of the design it can generate very interesting and intricate “melodies” and rhythms.
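For readers who want to experiment outside Kyma, here is a toy Python model of that architecture; the frequencies and modulation depth are arbitrary choices, and a real Benjolin’s analog behavior differs in detail:

```python
class Rungler:
    """Toy digital model of the Rungler's feedback architecture."""

    def __init__(self, f1=110.0, f2=137.0, depth=0.3, sr=48000):
        self.f1, self.f2, self.depth, self.sr = f1, f2, depth, sr
        self.p1 = self.p2 = 0.0          # oscillator phases
        self.bits = [0] * 8              # 8-bit shift register (index 0 = oldest)
        self.prev_clock = 0

    def step(self):
        # DAC: read the oldest three bits as a binary number 0..7, scaled to 0..1
        rungler = (self.bits[0] * 4 + self.bits[1] * 2 + self.bits[2]) / 7.0
        sq1 = 1 if self.p1 % 1.0 < 0.5 else 0     # clock oscillator (square)
        sq2 = 1 if self.p2 % 1.0 < 0.5 else 0     # data oscillator (square)
        # Each oscillator is frequency-modulated by the other and by the rungler
        self.p1 += self.f1 * (1 + self.depth * (sq2 + rungler)) / self.sr
        self.p2 += self.f2 * (1 + self.depth * (sq1 + rungler)) / self.sr
        if sq1 and not self.prev_clock:           # rising clock edge:
            self.bits = self.bits[1:] + [sq2]     # discard oldest bit, push new one
        self.prev_clock = sq1
        return rungler                            # stepped, chaotic control signal

r = Rungler()
cv = [r.step() for _ in range(48000)]             # one second of control values
```

Nudging `f1`, `f2`, or `depth` and re-running shows the behavior Rick describes: the output settles into almost-repeating patterns, then heads somewhere new.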

NOTE: Rick shared his Rungler Sound in the Kyma Community Library!


Artist/animator Rio Roye helped Rick test the Rungler Sound and came up with a pretty astounding range of sonic results!

 


[EN]: What is your music background? Is it something you’ve been interested in since childhood? Or a more recent development?

[RS]: I didn’t learn an instrument as a child. I’m not sure why. We had a piano in the house and my mother played. At high school I taught myself to play guitar and bass and I kept that up for a couple of years at University but eventually gave up due to lack of time and motivation. Quite a few years later when work demands became manageable and our children were semi-independent, I took up guitar again and started tinkering with valve amp building. These days I’m learning Touch Guitar and tinkering with synths and computer music. I spent some time learning Max and SuperCollider then decided to take the plunge into Kyma.

[EN]: What were the best parts of your “new user experience” with Kyma?

[RS]: I really liked the wide range of sounds in the library. It’s great to be able to find an interesting sound, play with it, and then dig inside and try to figure out how it works.

I also appreciated the power of individual sounds coupled with the expressiveness of Capytalk. I spent quite a bit of time learning Max and moving to Kyma was a bit like switching from assembler to a high-level language.


[EN]: I think you might be the first astrophotographer I’ve ever met! Could you please introduce us (as total novices) to the equipment and software that you use to capture & process images like these?

False color image of the Helix Nebula, resembling a human iris
Helix Nebula – Narrowband bi-color + RGB stars. Image provided by Rick Stevenson.

[RS]: You can do astrophotography with a digital camera and conventional lens. A tripod is sufficient for nightscapes but you need some sort of simple tracking mount to do longer exposures. At the other end of the spectrum (haha) are telescopes, cameras and mounts specifically designed for astrophotography. The telescopes range from small refractors (with lenses) to large catadioptric scopes combining a large mirror (500mm in diameter or even more) with specialized glass optics.

Cameras consist of a CCD or CMOS sensor in a chamber protected from moisture, thermally bonded to a multistage Peltier cooler. Noise is the enemy of faint signals so it is not uncommon to cool the sensor to 30 °C or more below ambient temperature. The cameras are usually monochromatic and attached to a wheel containing filters. Mounts for astrophotography are solid, high-precision devices that can throw around a heavy load and also track the movement of the sky with great accuracy. Summary: there’s lots of fancy hardware used for amateur astrophotography, ranging from relatively affordable to the price of a nice house!

On the software side, the main components are capture software which runs the mount, filter wheel and the camera(s), and processing software which turns the raw data into an image. Normal procedure is to take many exposures of a deep sky object from seconds to many minutes long and “stack” them to increase the signal to noise ratio. We also take a few different types of calibration images that are used to remove artifacts and nonlinearities from the camera sensor and optical train. Processing the raw data can be an involved process and is something of an art in itself. There are quite a few software packages and religious wars between their proponents are not uncommon.
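In outline, calibration and stacking look something like the following numpy sketch (file names are illustrative; real data arrives as FITS files, and packages like PixInsight add registration, pixel rejection, and weighting):

```python
import numpy as np

# Illustrative file names standing in for calibrated FITS data.
lights = np.stack([np.load(f"light_{i:03d}.npy") for i in range(60)])  # exposures
darks  = np.stack([np.load(f"dark_{i:03d}.npy")  for i in range(20)])  # thermal noise
flats  = np.stack([np.load(f"flat_{i:03d}.npy")  for i in range(20)])  # vignetting/dust

master_dark = darks.mean(axis=0)
master_flat = flats.mean(axis=0)
master_flat /= master_flat.mean()                # normalize flat to unity gain

calibrated = (lights - master_dark) / master_flat
stacked = np.median(calibrated, axis=0)          # median rejects satellites, cosmic rays
```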

[EN]: Speaking of software packages, what is PixInsight?

[RS]: PixInsight is the image processing and analysis software that I use. It has been developed by a small team of astronomers and software engineers based in Spain. It has a somewhat unfair reputation for being complicated and difficult to use. This is partly down to a GUI which is the same on Windows, Mac and Linux (but not native to any of them) and partly because it includes a wide range of different tools and the philosophy of the developers is to expose all of the useful parameters. When I started doing astrophotography I tried a few different software packages and PixInsight was the one that produced the best results for me. Some of the other commonly used packages hide a lot of the ugly details and offer a more guided experience which suits some types of imagers better. A more recent development is the use of machine learning in astronomical processing to do things like noise reduction and sharpening. I haven’t quite decided how I feel about that yet.

I think there are some interesting parallels between PixInsight and Kyma. Apart from the cross platform problem, both have to maintain that careful balance that offers a complex, highly technical set of features in a way that can satisfy the gamut of users from casual to expert.

[EN]: Do you have your own telescope? Do you visit telescopes in other parts of the world?

[RS]: I have a few telescopes. Unfortunately, I live in the city and the light pollution prevents me from doing much data collection from home. When I get the opportunity I have a few favorite locations not too far away where I can set up under dark skies. Apart from dark skies, a steady atmosphere is important for high resolution imaging. This is called the “seeing” and it’s usually not that great around here.

Rick Stevenson and his C300

For several years I have been a member of a few small teams with automated telescopes in remote locations. That solves a lot of problems apart from fixing things when they go wrong! My first experience was at Sierra Remote Observatory near Yosemite in California. I also shared a scope with some friends in New Mexico and now I’m using one in the Atacama Desert in Chile. The skies in the Atacama are very dark and the seeing is amazing! I haven’t ever visited any of the sites but I did catch up with some team mates on a couple of business trips.

[EN]: Does your camera give you access to data files? From some of the caption descriptions, it sounds like your camera has sensors for “non-visible” regions of the spectrum, is that true?

[RS]: The local and the remote scopes all deliver the raw image and calibration data in an image format called “FITS” which is basically a header followed by 2D arrays of data values. A single image will almost always be produced from very many individual sub-exposures. The normal process is to calibrate and stack the data for each filter before combining the stacks to produce a color image.

The camera sensors will usually detect some UV and near-infrared as well as visible light, but they aren’t commonly used in ground-based imaging. I use red, green and blue filters for true color imaging, or I image in narrowband. Narrowband uses filters that pass very narrow frequency ranges corresponding to the emissions from specific ionized elements, usually Hydrogen alpha (a reddish color), Oxygen III (greenish blue) and Sulphur II (a different reddish color). Narrowband has the advantage of working even in light-polluted skies (and to some extent rejecting moonlight) and can show details in the structure of astronomical objects not visible in RGB images. The downside is that you need very long exposure times and narrowband images are false color. Many of the Hubble images you’ve undoubtedly seen are false color images using the SHO palette (red is Sulphur II, green is Hydrogen alpha and blue is Oxygen III).

Tarantula region in SHO palette, where SHO stands for Sulphur II (mapped to red), Hydrogen alpha (mapped to green) & Oxygen III (mapped to blue); three images captured sequentially through narrowband filters are combined to create the false color image. Image provided by Rick Stevenson.
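A minimal sketch of that false-color mapping, assuming three calibrated, stacked narrowband frames as 2D numpy arrays (the stretch function is a simple stand-in for the careful nonlinear stretching real processing applies):

```python
import numpy as np

def stretch(x):
    """Normalize to 0..1 and apply a simple nonlinear stretch."""
    x = (x - x.min()) / (x.max() - x.min() + 1e-9)
    return np.sqrt(x)

def sho_composite(sii, ha, oiii):
    """Map three stacked narrowband frames to an RGB false-color image."""
    return np.dstack([stretch(sii),    # Sulphur II  -> red
                      stretch(ha),     # H-alpha     -> green
                      stretch(oiii)])  # Oxygen III  -> blue
```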

[EN]: What is your connection with the Astronomical Association of Queensland?

[RS]: I have been a member of the AAQ for well over a decade. From 2013 to 2023 I was the director of the Astrophotography section. I helped members learn how to do astrophotography, organized annual astrophotography competitions and curated images for the club calendar.


[EN]: Have you ever seen a platypus in real life?

[RS]: Several times at zoos. Only a handful of times in the wild. They are mostly nocturnal and also very shy!

[EN]: In New Mexico (where I grew up), we all learned a song in elementary school with the lyrics: “Kookaburra sits in the old gum tree, merry merry king of the bush is he, Laugh Kookaburra, laugh Kookaburra, gay your life must be”. Is it a pseudo-Australian song invented for American kids or did you learn it in Australia too?

[RS]: I don’t know the origin but we have the same song. I have a toddler grandson who sings it! We have Kookaburras in the bushland behind our house that start doing their group calls around 3 or 4am.

Speaking of the Kookaburra song, there was a story about it in the news a few years ago when the people who own the rights to the song won a plagiarism lawsuit against the band, Men at Work.

Listen for the Kookaburra song in the flute riff!

[EN]: What was the best part about growing up in Australia that those of us growing up in other parts of the world missed out on?

[RS]: Vegemite, perhaps? 🙂

The area around Brisbane in South-East Queensland has a lot going for it. The climate is pleasant and mild (except for some hot and humid days in summer). There are great beaches within an hour or so as well as forested areas for hiking. The educational and health systems are good, even the public / “free” parts (true in most of the major centres in Australia). Brisbane is large enough to have some culture and work opportunities without being too busy and fast-paced. It’s pretty good, but not perfect.


[EN]: What was your favorite course to teach at the School of Information Technology and Electrical Engineering, University of Queensland St Lucia?

[RS]: I was an Adjunct Professor at UQ ITEE for 21 years but I didn’t teach any courses. I acted as an industry advisor, was involved in bids to set up research centers in embedded systems and security, and I hired a lot of their graduates!

[EN]: Do you have a “favorite” programming language or family of languages?

[RS]: I rather liked Simula 67 though I only used it for one (grad student) project. I think it was one of the first OO languages if not the first. Most of my work programming experience was in assembler on microprocessors and C on micro and minicomputers. I was very comfortable in C but wouldn’t say I loved it. These days I quite like the Lisp family of languages.

[EN]: Are you one of the founders of OpenGear? In that role, do you do a lot of international travel to the company’s other offices?

[RS]: I was one of the founders and worked in a few roles up until Opengear was acquired in late 2019. Having stayed on after acquisitions in a couple of previous lives I was pleased that the new owners didn’t want me to hang around and I exited just in time for COVID. While at Opengear and in previous startups I made regular business trips, mostly to North America but also the UK and Europe and occasional trips to South-East Asia.

[EN]: What do you identify as the top three “open problems” or “grand challenges” in technology right now?

[RS]: In no particular order and not claiming to have any deep insights…

  • Achieving AGI [artificial general intelligence] is still an open problem and one that, in my opinion, is not going to get solved as quickly as a lot of people think. Maybe that’s a good thing?
  • I spent about a decade working on computer security products starting in the early 2000s and despite a lot of money and effort spent on band-aid solutions, things have only become worse since then. Fixing our whack-a-mole computer security model is certainly a grand challenge, not helped by the incentives that security vendors have to protect their recurring revenues. The recent outage caused by the CrowdStrike security agent crashing Windows machines demonstrates the fragility of our current approach.
  • The ability to create practical quantum computers still seems a bit of a reach though I hear that Wassenaar Arrangement countries are all quietly introducing export controls on quantum computers. Perhaps they know something I don’t.

[EN]: What’s next on the horizon for you? What Kyma project(s) are you planning to tackle next?

[RS]: I have a long list of potential projects but I think the next two I’ll tackle are some baby steps into sonification of astronomical data (thanks for the encouragement) and a Blippoo Box – another of Rob Hordijk’s chaotic generative synths.

[EN]: What are you most looking forward to learning about Kyma over the next year?

[RS]: There are many areas of Kyma that I have only dabbled in and even more that I haven’t touched at all. I’d like to get a lot more fluent with Smalltalk and Capytalk, do some projects with the Spectral and Morphing sounds and also plumb the mysteries (to me) of the Timeline and Multigrid. That should keep me busy for a while!

[EN]: Outside of Kyma, what would you most like to learn more about in the coming year?

[RS]: I have a couple of relatively new synths, a Synclavier Regen and a Buchla Easel, that I would like to spend a lot more time learning my way around. I also want to keep progressing with my Touch Guitar studies.


[EN]: Rick, thank you for the thought-provoking discussion! Can people get in touch with you on the Kyma Discord if they have questions, feedback, or proposals for collaboration?

[RS]: Yes!

Come up to the Lab

Anssi Laiho is the sound designer and performer for Laboratorio — a concept developed by choreographer Milla Virtanen and video artist Leevi Lehtinen as a collection of “experiments” that can be viewed either as modules of the same piece or as independent pieces of art, each with its own theme. The first performance of Laboratorio took place in November 2021 in Kuopio, Finland.

Laboratorio Module 24, featuring Anssi performing a musical saw with live Kyma processing, was performed in the Armastuse hall at Aparaaditehas in Tartu, Estonia, and is dedicated to the theme of identity and inspiration.

Anssi’s hardware setup, both in the studio and live on stage, consists of a Paca connected to a Metric Halo MIO2882 interface via bidirectional ADAT in a 4U mobile rack. Laiho has used this system for 10 years and finds it intuitive, because Metric Halo’s MIOconsole mixer interface gives him the opportunity to route audio between Kyma, the analog domain, and the computer in every imaginable way. When creating content as a sound designer, he often tries things out in Kyma in real-time by opening a Kyma Sound with audio input and listening to it on the spot. If it sounds good, he can route it back to his computer via MIOconsole and record it for later use.

His live setup for Laboratorio Module 24 is based on the same system setup. The aim of the hardware setup was to have as small a physical footprint as possible, because he was sharing the stage with two dancers. On stage, he had a fader-controller for the MIOconsole (to control feedback from microphones), an iPad running Kyma Control displaying performance instructions, a custom-made Raspberry Pi Wi-Fi footswitch sending OSC messages to Kyma, and a musical saw.

Kyma Control showing “Kiitos paljon” (“Thank you very much” in Finnish), Raspberry Pi foot switch electronics, rosin for the bow & foot switch controller

The instrument used in the performance is a Finnish Pikaterä Speliplari musical saw (speliplari means ‘play blade’), designed by the Finnish musician Aarto Viljamaa. The plaintive sound of the saw is routed to Kyma through two microphones and processed by a Kyma Timeline. A custom-made piezo-contact microphone and preamp is used to create percussive and noise elements for the piece, and a small-diaphragm shotgun microphone is employed for the softer harmonic material.

The way Anssi works with live electronics is by recording single notes or note patterns with multiple Kyma MemoryWriter Sounds. These sound recordings are then sampled in real-time or kept for later use in a Kyma timeline. He likes to think of this as a way of reintroducing a motive of the piece as is done in classical music composition. This also breaks the inherent tendency of adding layers when using looping samplers, which, in Anssi’s opinion, often becomes a burden for the listener at some point.

The Kyma Sounds used in the performance Timeline are focused on capturing and resampling the sound played on the saw and controlling the parameters of these Sounds live through timeline automation, presets, or algorithmic changes programmed in Capytalk.

Laiho’s starting point for the design was to create random harmonies and arpeggiations that could then be used as accompaniment for an improvised melody. For this, he used the Live Looper from the Kyma Sound Library and added a Capytalk expression to its Rate parameter that selects a new frequency from a predefined set of intervals relative to a starting note, creating modal harmony. He also created a quadraphonic version of the Looper and controlled the Angle parameter of each loop with a controlled random Capytalk expression that makes each individual note travel around the space.
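The two control ideas can be sketched like this (a Python approximation, not Anssi’s actual Capytalk; the interval set is an assumption): the rate picks a modal transposition while the angle takes a controlled random walk.

```python
import random

SEMITONES = [0, 2, 3, 5, 7, 10]        # assumed modal interval set

def next_rate(rng=random):
    """Playback-rate multiplier: 2^(n/12) transposes a loop by n semitones."""
    return 2 ** (rng.choice(SEMITONES) / 12)

def next_angle(prev, rng=random):
    """Controlled random walk so each note drifts around the quad field."""
    return (prev + rng.uniform(-0.25, 0.25)) % 1.0   # 0..1 = one full circle

angle = 0.0
for _ in range(8):
    angle = next_angle(angle)
    print(f"rate={next_rate():.3f}  angle={angle:.2f}")
```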

Another Sound used in the performance is one he created a long time ago named Retrosampler. This sound captures only a very short sample of live sound and creates 4 replicated loops, each less than 1 second long. Each replicated sample has its own parameters that he controls with presets. This, together with the sine wave quality of the saw, creates a result that resembles a beeping sine wave analog synthesizer. The sound is replicated four times so he has the possibility to play 16 samples if he presses “capture” 4 times.
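The structure can be sketched like this (hypothetical Python, just to show the 4 × 4 slot arrangement, not the Kyma implementation):

```python
from dataclasses import dataclass, field

@dataclass
class LoopSlot:
    samples: list = field(default_factory=list)   # under 1 second of audio
    rate: float = 1.0                             # per-slot playback parameters
    level: float = 1.0

banks: list = []                                  # up to 4 captures

def capture(live_input, sr=48000):
    """Grab a very short snippet and replicate it into 4 loop slots."""
    if len(banks) < 4:
        snippet = live_input[: int(0.8 * sr)]     # < 1 s of the live feed
        banks.append([LoopSlot(samples=list(snippet)) for _ in range(4)])

# After pressing "capture" 4 times: 4 banks x 4 slots = 16 playable loops.
```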

The Retrosampler sound is also quadraphonic and its parameters are controlled by presets. His favorite preset is called “Line Busy” which is exactly what it sounds like. [Editor’s note: the question is which busy signal?]

For the noise and percussion parts of the performance, he used a sound called LiveCyclicGrainSampler, which is a recreation of an example from Jeffrey Stolet’s Kyma and the SumOfSines Disco Club book. This sound consists of a live looping MemoryWriter as a source for granular reverb and 5 samples with individual angle and rate parameter settings. These parameters were then controlled with timeline automation to create variation in the patterns they create.

Anssi also used his two favorite reverbs in the live processing: the NeverEngine Labs Stereo Verb, and Johannes Regnier’s Dattorro Plate.

Kyma is also an essential part of Laiho’s sound design work in the studio. One of the tracks in the performance is called “Experiment 0420” and it is his “Laboratory experiment” of Kyma processing the sound of an aluminum heat sink from an Intel i5 3570K CPU played with a guitar pick. Another scene of the performance contains a song called “Tesseract Song” that is composed of an erratic piano chord progression and synthetic noise looped in Kyma and accompanied by Anssi singing through a Kyma harmonizer.

The sound design for the 50-minute performance consists of 11-12 minutes of live electronics, music composed in the studio, and “Spring” by Antonio Vivaldi. The overall design goal was to create a kaleidoscopic experience where the audience is taken to new places by surprising turns of events.

Kyma consultant Alan Jackson’s studio

Alan Jackson’s studio — designed for switching among a variety of sources, controllers, and ease of portability

Musician and Kyma consultant, Alan Jackson, known for his work with Phaze UK on the Witcher soundtrack, has wired his studio with an eye toward flexibility, making it easy for him to choose from among multiple sources, outputs, and controllers, and to detach a small mobile setup so he can visit clients in person and even continue working in Kyma during the train ride to and from his clients’ studios.

Jackson’s mobile studio sessions extend to the train to / from a gig

In his studio, his Pacamara Ristretto has a USB connection to the laptop and is also wired with quad analog in / out to the mixer. That way, Jackson can choose on a whim whether to route the Ristretto as an aggregate device through the DAW or do everything as analog audio through the mixer. Two additional speakers (not shown) are at the back of the studio and his studio is wired for quad by default.

The Faderfox UC4 is a cute and flexible MIDI controller plugged into the back of the Pacamara, ready at a moment’s notice to control the VCS, and a small Wacom tablet is plugged in and stashed to the left of the screen for controlling Kyma.

Jackson leaves his Pacamara on top so he can easily disconnect 5 cables from the back and run out the door with it… which leads to his “mobile” setup.

The Pacamara and its travel toiletry bag

Jackson’s travel setup is organized as a kit-within-a-kit.

The red inner kit is what he grabs if he just needs a minimal battery-powered Kyma setup (e.g., for developing stuff on the train). It includes:

  • a PD battery (good for about 3 hours when used with a MOTU Ultralite, longer with headphones)
  • a pair of tiny Sennheiser IE4 headphones
  • a couple of USB cables, and an Ethernet cable


The outer kit adds:

  • a 4 in / 4 out bus-powered Zoom interface
  • mains power for the Pacamara
  • more USB cables
  • a PD power adaptor cable, so he can run the MOTU Ultralite off the same battery as the Pacamara
  • a clip-on mic
  • the WiFi aerial

If you have any upcoming sound design projects you’d like to discuss, visit Alan’s Speakers on Strings website.

When not solving challenges for game and film sound designers, Alan performs his own music for live electronics.

Alan Jackson’s setup for a recent live improvisation for salad bowl and electronics