At the invitation of UWL Lecturer Charlie Norton, Carla Scaletti presented a lecture/demonstration on Generative Sound Design in Kyma for students, faculty, and guests at the University of West London on 14 November 2024. As an unanticipated prelude, Pete Townshend (who, along with Joseph Townshend, works extensively with Kyma) welcomed the Symbolic Sound co-founders to his alma mater and invited attendees to tour the Townshend Studio following the lecture.
It seems that anywhere you look in the Townshend Studio, you see another rock legend. John Paul Jones (whose most recent live Kyma collaborations include Sons of Chipotle, Minibus Pimps, and Supersilent, among others) recognized an old friend from across the room: a Yamaha GX-1 (1975), otherwise known as ‘The Dream Machine’ — the same model JPJ played when touring with Led Zeppelin and when recording the 1979 album “In Through The Out Door”. The GX-1 was Yamaha’s first foray into synthesizers; only 10 were ever manufactured, and it featured a ribbon controller and a keyboard that could also move laterally for vibrato. Other early adopters included ELP, Stevie Wonder, and Abba.
Did you know that you could study for a degree in sound design and work with Kyma at Carnegie Mellon University? Joe Pino, professor of sound design in the School of Drama, teaches conceptual sound design, modular synthesis, Kyma, film sound design, ear training, and audio technology in the sound design program.
Sound design works in the spaces between reality and abstraction. Sounds are less interesting as a collection of triggers for giving designed worlds reality; they are more effective when they trigger emotional responses and remembered experiences.
“Ben Burtt is a pioneer and visionary who has fundamentally changed the way we perceive sound in cinema.”
In this interview, Burtt shares some of his experiences as Sound Designer for several iconic films, including his discovery of “a kind of sophisticated music program called the Kyma” which he used in the creation of the voice of WALL-E.
The interviewer asked Ben about the incredible voices he created for WALL-E and EVE:
Well, Andrew Stanton was the creator of the WALL-E character and the story; apparently he jokingly referred to his movie as “the R2-D2 movie”. He wanted to develop a very affable robot character that didn’t speak, or had very limited speech that was mostly sound effects of its body moving and a few strange kinds of vocals, and someone (his producer, I think — Jim Morris) said, well, why don’t you just talk to Ben Burtt, the guy who did R2-D2, so they got in touch with me.
Pixar is in the Bay Area (San Francisco), so it was nearby, and I went over and looked at about 10 minutes that Andrew Stanton had already put together with just still pictures — storyboards of the beginning of the film where WALL-E’s out on his daily work activities, boxing up trash and so on, and singing and playing his favorite music. Of course I was inspired by it, and I thought, well, here’s a great challenge, and I took it on.
This was a few months before they had to actually greenlight the project. I didn’t find this out until later but there was some doubt at that time about whether you could make a movie in which the main characters don’t really talk in any kind of elaborate way; they don’t use a lot of words. Would it sustain the audience’s interest? The original intention in the film that I started working on was that there was no spoken language in the film that you would understand at all; that was a goal at one point…
So I took a little bit of the R2 idea to come up with a voice where human performance would be part of it, but it had to have other elements to it that made it seem electronic and machine-like. But WALL-E wasn’t going to beep and boop and buzz like R2; it had to be different, so I struggled along for a few months, trying different things and different voices — a few different character actors. And I often ended up experimenting on myself because I’m always available. You know, it’s like the scientist in his lab who takes the potion because there’s no one else around to test it: Jekyll and Hyde, I think that’s what it is. So I took the potion and turned into Mr Hyde…
But eventually it ended up that I had a program — it was a kind of sophisticated music program called the Kyma — and it had one sound in it, a process where it would synthesize a voice but it [intentionally] didn’t do it very well; the voice had artifacts, funny distortions and extra noises in it. It didn’t work perfectly as a pure voice, but I took advantage of the fact that the artifacts and mistakes in it were useful and interesting, and I worked out a process where you could record sounds, starting with my own voice, and then process them a second time and do a re-performance where, as it plays back, you can stretch or compress or repitch the sounds in real time.
So you can take the word “WALL-E” and then make it have a sort of envelope of electronic noise around it; it gave it a texture that made it so it wasn’t human, and that’s where it really worked. And of course it was in combination with the little motors in his arms and his head and his treads — everything was part of his expression.
The idea was to always give the impression of what WALL-E was thinking through sound — just as if you were playing a musical instrument and you wanted to make little phrases of music which indicated the feeling for what kind of communication was taking place.
Explorations in the generative control of notes and rhythms, scales and modes — from human to algorithmic gesture
Composer Giuseppe Tamborrino uses computer tools not just for timbre, but also for the production of events in deferred time or in real time. This idea forms the basis for generative music — examples of which can be found throughout music history, such as Mozart’s Musical Dice Game.
In his research, Tamborrino carries out this process in various ways and with different software, but the goal is always the same: the generation of instructions for instruments — which he calls an “Electronic score”.
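As a concrete illustration of the dice-game idea behind such “electronic scores”, here is a toy Python generator in the spirit of Mozart’s Musikalisches Würfelspiel; the measure table below is a stand-in for illustration, not Mozart’s (or Tamborrino’s) actual material:

```python
# Toy dice-game generator: two dice pick one precomposed measure per slot
# from a lookup table. The table here is a stand-in, not Mozart's numbers.
import random

# 16 slots, each with 11 candidate measures (one per dice sum 2..12)
measure_table = {slot: list(range(slot * 11, slot * 11 + 11)) for slot in range(16)}

def roll_minuet():
    # dice sum 2..12, shifted to index 0..10, selects a measure for each slot
    return [measure_table[slot][random.randint(1, 6) + random.randint(1, 6) - 2]
            for slot in range(16)]

print(roll_minuet())  # one "electronic score": a sequence of measure numbers
```

Each run yields a different instruction list, which is the essence of generating a score rather than composing it note by note.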
Here’s an example from one of his generative scores:
As part of the process, Tamborrino has always designed in a certain degree of variability, using stochastic or totally random procedures to speed up the process of abstraction and improvisation — procedures that, once launched, are invariant. Often, though, this way of working produced small sections that he wanted to cut, correct, or improve.
This motivated him to use Kyma to pursue a new research direction — called post-controlled generative events — with the aim of being able to correct and manage micro events.
This is a three-step process:
A generational setting phase (pre-time)
A performance phase, recording all values in the Timeline (real time)
A post-editing phase of automatic “generative” events (after time)
Tamborrino shared some of the results of his research on the Kyma Discord, and he invites others to experiment with his approach and to engage in an online discussion of these ideas.
Rick Stevenson is a technology entrepreneur with an impressive track record: he co-founded three successful tech startups and played an instrumental role in the growth of several others. In addition to maintaining a nearly five-decade relationship with the University of Queensland’s School of Information Technology and Electrical Engineering as a student, mentor, and industry advisor, Stevenson is also an accomplished astrophotographer, and one of his images was selected as the NASA Astronomy Picture of the Day.
Eighth Nerve [EN]: Your “Rungler” patch recently made a big splash on the Kyma Discord Community. Could you give a high-level explanation of how it works?
Rick Stevenson [RS]: The Rungler is a hardware component of a couple of instruments, the Benjolin and the Blippoo box, designed by Rob Hordijk. It’s based on an 8-bit shift register driven by two square wave oscillators. One oscillator is the clock for the shift register and the other is sampled to provide the binary data input to the shift register. When the clock signal becomes positive, the data signal is sampled to provide a new binary value. That new bit is pushed into the shift register, the rest of the bits are shuffled along and the oldest bit is discarded.
The value of the Rungler is read out of the shift register by a digital to analog converter. In the simplest version of this (the original Benjolin design) the oldest three bits in the shift register are interpreted as a binary number with a value between 0 and 7.
That part is fairly straightforward. The interesting wrinkle is that the frequency of each oscillator is modulated by the other oscillator and also the value of the Rungler. The result of this clever feedback architecture is that the Rungler exhibits an interesting controlled chaotic behavior. It settles into complex repeating (or almost repeating) patterns. Nudge the parameters and it will head off in a new direction before settling into a different pattern. Despite the simplicity of the design it can generate very interesting and intricate “melodies” and rhythms.
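For readers who think in code, here is a minimal Python sketch of the behavior Rick describes: two cross-modulated square-wave oscillators clock and feed an 8-bit shift register, and the oldest three bits are read out as a value between 0 and 7. The constants are illustrative guesses, not values from Hordijk’s designs (and certainly not from Rick’s Kyma Sound):

```python
# Minimal Rungler sketch: two cross-modulated square-wave oscillators drive
# an 8-bit shift register; the oldest 3 bits form a 0..7 DAC value.
# All constants are illustrative, not taken from Hordijk's hardware.

def rungler(steps=2000, f1=0.013, f2=0.0047, mod=0.004, sr_bits=8):
    phase1 = phase2 = 0.0
    reg = [0] * sr_bits      # shift register; newest bit at index 0
    prev_clock = 0
    out = []
    for _ in range(steps):
        # 3-bit DAC: the three oldest bits read as a binary number 0..7
        dac = reg[-3] * 4 + reg[-2] * 2 + reg[-1]
        data = 1 if phase1 < 0.5 else 0   # data oscillator (square wave)
        clock = 1 if phase2 < 0.5 else 0  # clock oscillator (square wave)
        # each oscillator's frequency is nudged by the other and by the DAC
        phase1 = (phase1 + f1 + mod * (clock + dac / 7.0)) % 1.0
        phase2 = (phase2 + f2 + mod * (data + dac / 7.0)) % 1.0
        # on the clock's rising edge, sample the data signal into the register
        if clock and not prev_clock:
            reg = [data] + reg[:-1]       # push new bit, discard the oldest
        prev_clock = clock
        out.append(dac)
    return out

print(rungler(steps=64))  # quasi-repeating stepped pattern between 0 and 7
```

Running it shows the signature behavior: stepped patterns that almost repeat, then wander off to a new pattern when a frequency parameter is nudged.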
NOTE: Rick shared his Rungler Sound in the Kyma Community Library!
Artist/animator Rio Roye helped Rick test the Rungler Sound and came up with a pretty astounding range of sonic results!
[EN]: What is your music background? Is it something you’ve been interested in since childhood? Or a more recent development?
[RS]: I didn’t learn an instrument as a child. I’m not sure why. We had a piano in the house and my mother played. At high school I taught myself to play guitar and bass and I kept that up for a couple of years at University but eventually gave up due to lack of time and motivation. Quite a few years later when work demands became manageable and our children were semi-independent, I took up guitar again and started tinkering with valve amp building. These days I’m learning Touch Guitar and tinkering with synths and computer music. I spent some time learning Max and Supercollider then decided to take the plunge into Kyma.
[EN]: What were the best parts of your “new user experience” with Kyma?
[RS]: I really liked the wide range of sounds in the library. It’s great to be able to find an interesting sound, play with it, and then dig inside and try to figure out how it works.
I also appreciated the power of individual sounds coupled with the expressiveness of Capytalk. I spent quite a bit of time learning Max and moving to Kyma was a bit like switching from assembler to a high level language.
[EN]: I think you might be the first astrophotographer I’ve ever met! Could you please introduce us (as total novices) to the equipment and software that you use to capture & process images like these…
[RS]: You can do astrophotography with a digital camera and conventional lens. A tripod is sufficient for nightscapes but you need some sort of simple tracking mount to do longer exposures. At the other end of the spectrum (haha) are telescopes, cameras and mounts specifically designed for astrophotography. The telescopes range from small refractors (with lenses) to large catadioptric scopes combining a large mirror (500mm in diameter or even more) with specialized glass optics.
Cameras consist of a CCD or CMOS sensor in a chamber protected from moisture, thermally bonded to a multistage Peltier cooler. Noise is the enemy of faint signals, so it is not uncommon to cool the sensor to 30 °C or more below ambient temperature. The cameras are usually monochromatic and attached to a wheel containing filters. Mounts for astrophotography are solid, high-precision devices that can throw around a heavy load and also track the movement of the sky with great accuracy. Summary: there’s lots of fancy hardware used for amateur astrophotography, ranging from relatively affordable to the price of a nice house!
On the software side, the main components are capture software, which runs the mount, filter wheel, and camera(s), and processing software, which turns the raw data into an image. The normal procedure is to take many exposures of a deep-sky object, from seconds to many minutes long, and “stack” them to increase the signal-to-noise ratio. We also take a few different types of calibration images that are used to remove artifacts and nonlinearities from the camera sensor and optical train. Processing the raw data can be an involved process and is something of an art in itself. There are quite a few software packages, and religious wars between their proponents are not uncommon.
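As a rough sketch of what “calibrate and stack” means in practice, here is a schematic NumPy version, assuming a stack of light frames plus master dark and flat calibration frames; all array names are hypothetical, and real packages do considerably more:

```python
# Schematic calibrate-and-stack, assuming NumPy arrays for the frames.
# Frame names (lights, master_dark, master_flat) are illustrative only.
import numpy as np

def calibrate_and_stack(lights, master_dark, master_flat):
    """lights: (N, H, W) raw exposures; returns one stacked (H, W) image."""
    flat = master_flat / master_flat.mean()      # normalize the flat field
    calibrated = (lights - master_dark) / flat   # remove thermal signal + vignetting
    # sigma-clipped mean: reject outlier pixels (satellite trails, cosmic rays)
    med = np.median(calibrated, axis=0)
    std = calibrated.std(axis=0) + 1e-9
    mask = np.abs(calibrated - med) < 3 * std
    stacked = (calibrated * mask).sum(axis=0) / mask.sum(axis=0).clip(min=1)
    return stacked  # SNR grows roughly with sqrt(N) for photon-limited noise
```

The payoff is in the last comment: stacking N sub-exposures improves the signal-to-noise ratio roughly by the square root of N, which is why imagers accumulate hours of data on a single target.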
[EN]: Speaking of software packages, what is PixInsight?
[RS]: PixInsight is the image processing and analysis software that I use. It has been developed by a small team of astronomers and software engineers based in Spain. It has a somewhat unfair reputation for being complicated and difficult to use. This is partly down to a GUI which is the same on Windows, Mac and Linux (but not native to any of them) and partly because it includes a wide range of different tools and the philosophy of the developers is to expose all of the useful parameters. When I started doing astrophotography I tried a few different software packages and PixInsight was the one that produced the best results for me. Some of the other commonly used packages hide a lot of the ugly details and offer a more guided experience which suits some types of imagers better. A more recent development is the use of machine learning in astronomical processing to do things like noise reduction and sharpening. I haven’t quite decided how I feel about that yet.
I think there are some interesting parallels between PixInsight and Kyma. Apart from the cross-platform challenge, both have to strike a careful balance: offering a complex, highly technical set of features in a way that can satisfy the gamut of users from casual to expert.
[EN]: Do you have your own telescope? Do you visit telescopes in other parts of the world?
[RS]: I have a few telescopes. Unfortunately, I live in the city and the light pollution prevents me from doing much data collection from home. When I get the opportunity, I have a few favorite locations not too far away where I can set up under dark skies. Apart from dark skies, a steady atmosphere is important for high resolution imaging. This is called the “seeing” and it’s usually not that great around here.
For several years I have been a member of a few small teams with automated telescopes in remote locations. That solves a lot of problems apart from fixing things when they go wrong! My first experience was at Sierra Remote Observatory near Yosemite in California. I also shared a scope with some friends in New Mexico and now I’m using one in the Atacama Desert in Chile. The skies in the Atacama are very dark and the seeing is amazing! I haven’t ever visited any of the sites but I did catch up with some team mates on a couple of business trips.
[EN]: Does your camera give you access to data files? From some of the caption descriptions, it sounds like your camera has sensors for “non-visible” regions of the spectrum, is that true?
[RS]: The local and the remote scopes all deliver the raw image and calibration data in an image format called “FITS” which is basically a header followed by 2D arrays of data values. A single image will almost always be produced from very many individual sub-exposures. The normal process is to calibrate and stack the data for each filter before combining the stacks to produce a color image.
The camera sensors will usually detect some UV and near-infrared as well as visible light, but they aren’t commonly used in ground-based imaging. I use red, green and blue filters for true color imaging, or I image in narrowband. Narrowband uses filters that pass very narrow frequency ranges corresponding to the emissions from specific ionized elements, usually Hydrogen alpha (a reddish color), Oxygen III (greenish blue) and Sulphur II (a different reddish color). Narrowband has the advantage of working even in light-polluted skies (and to some extent rejecting moonlight) and can show details in the structure of astronomical objects not visible in RGB images. The downside is that you need very long exposure times and narrowband images are false color. Many of the Hubble images you’ve undoubtedly seen are false color images using the SHO palette (red is Sulphur II, green is Hydrogen alpha and blue is Oxygen III).
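To make the FITS-plus-palette workflow concrete, here is a hedged Python sketch using astropy to load three stacked narrowband channels and map them to the SHO palette; the file names and the simple stretch are invented for illustration:

```python
# Load stacked narrowband FITS files and map them to the Hubble SHO palette
# (R = Sulphur II, G = Hydrogen alpha, B = Oxygen III).
# File names and the stretch function are hypothetical.
import numpy as np
from astropy.io import fits

def load(path):
    with fits.open(path) as hdul:
        return hdul[0].data.astype(np.float64)  # FITS: header + 2D data array

def stretch(x):
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)
    return np.sqrt(x)  # simple nonlinear stretch to lift faint detail

sii, ha, oiii = load("SII.fits"), load("Ha.fits"), load("OIII.fits")
rgb = np.dstack([stretch(sii), stretch(ha), stretch(oiii)])  # (H, W, 3) false color
```

This is the sense in which narrowband images are “false color”: each filter’s data is simply assigned to a display channel, and the palette is a convention, not physics.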
[EN]: What is your connection with the Astronomical Association of Queensland?
[RS]: I have been a member of the AAQ for well over a decade. From 2013 to 2023 I was the director of the Astrophotography section. I helped members learn how to do astrophotography, organized annual astrophotography competitions and curated images for the club calendar.
[EN]: Have you ever seen a platypus in real life?
[RS]: Several times at zoos. Only a handful of times in the wild. They are mostly nocturnal and also very shy!
[EN]: In New Mexico (where I grew up), we all learned a song in elementary school with the lyrics: “Kookaburra sits in the old gum tree, merry merry king of the bush is he, Laugh Kookaburra, laugh Kookaburra, gay your life must be”. Is it a pseudo-Australian song invented for American kids or did you learn it in Australia too?
[RS]: I don’t know the origin but we have the same song. I have a toddler grandson who sings it! We have Kookaburras in the bushland behind our house that start doing their group calls around 3 or 4am.
Speaking of the Kookaburra song, there was a story about it in the news a few years ago when the people who own the rights to the song won a plagiarism lawsuit against the band Men at Work.
Listen for the Kookaburra song in the flute riff!
[EN]: What was the best part about growing up in Australia that those of us growing up in other parts of the world missed out on?
[RS]: Vegemite, perhaps? 🙂
The area around Brisbane in South-East Queensland has a lot going for it. The climate is pleasant and mild (except for some hot and humid days in summer). There are great beaches within an hour or so as well as forested areas for hiking. The educational and health systems are good, even the public / “free” parts (true in most of the major centres in Australia). Brisbane is large enough to have some culture and work opportunities without being too busy and fast-paced. It’s pretty good, but not perfect.
[EN]: What was your favorite course to teach at the School of Information Technology and Electrical Engineering, University of Queensland St Lucia?
[RS]: I was an Adjunct Professor at UQ ITEE for 21 years but I didn’t teach any courses. I acted as an industry advisor, was involved in bids to set up research centers in embedded systems and security, and I hired a lot of their graduates!
[EN]: Do you have a “favorite” programming language or family of languages?
[RS]: I rather liked Simula 67 though I only used it for one (grad student) project. I think it was one of the first OO languages if not the first. Most of my work programming experience was in assembler on microprocessors and C on micro and minicomputers. I was very comfortable in C but wouldn’t say I loved it. These days I quite like the Lisp family of languages.
[EN]: Are you one of the founders of OpenGear? In that role, do you do a lot of international travel to the company’s other offices?
[RS]: I was one of the founders and worked in a few roles up until Opengear was acquired in late 2019. Having stayed on after acquisitions in a couple of previous lives I was pleased that the new owners didn’t want me to hang around and I exited just in time for COVID. While at Opengear and in previous startups I made regular business trips, mostly to North America but also the UK and Europe and occasional trips to South-East Asia.
[EN]: What do you identify as the top three “open problems” or “grand challenges” in technology right now?
[RS]: In no particular order and not claiming to have any deep insights…
Achieving AGI [artificial general intelligence] is still an open problem and one that, in my opinion, is not going to get solved as quickly as a lot of people think. Maybe that’s a good thing?
I spent about a decade working on computer security products starting in the early 2000s and despite a lot of money and effort spent on band-aid solutions, things have only become worse since then. Fixing our whack-a-mole computer security model is certainly a grand challenge, not helped by the incentives that security vendors have to protect their recurring revenues. The recent outage caused by the CrowdStrike security agent crashing Microsoft computers demonstrates the fragility of our current approach.
The ability to create practical quantum computers still seems a bit of a reach though I hear that Wassenaar Arrangement countries are all quietly introducing export controls on quantum computers. Perhaps they know something I don’t.
[EN]: What’s next on the horizon for you? What Kyma project(s) are you planning to tackle next?
[RS]: I have a long list of potential projects but I think the next two I’ll tackle are some baby steps into sonification of astronomical data (thanks for the encouragement) and a Blippoo Box – another of Rob Hordijk’s chaotic generative synths.
[EN]: What are you most looking forward to learning about Kyma over the next year?
[RS]: There are many areas of Kyma that I have only dabbled in and even more that I haven’t touched at all. I’d like to get a lot more fluent with Smalltalk and Capytalk, do some projects with the Spectral and Morphing sounds and also plumb the mysteries (to me) of the Timeline and Multigrid. That should keep me busy for a while!
[EN]: Outside of Kyma, what would you most like to learn more about in the coming year?
[RS]: I have a couple of relatively new synths, a Synclavier Regen and a Buchla Easel, that I would like to spend a lot more time learning my way around. I also want to keep progressing with my Touch Guitar studies.
[EN]: Rick, thank you for the thought-provoking discussion! Can people get in touch with you on the Kyma Discord if they have questions, feedback, or proposals for collaboration?
Anssi Laiho is the sound designer and performer for Laboratorio — a concept developed by choreographer Milla Virtanen and video artist Leevi Lehtinen as a collection of “experiments” that can be viewed either as modules of the same piece or as independent pieces of art, each with its own theme. The first performance of Laboratorio took place in November 2021 in Kuopio, Finland.
Laboratorio Module 24, featuring Anssi performing on a musical saw with live Kyma processing, was performed in the Armastuse hall at Aparaaditehas in Tartu, Estonia, and is dedicated to the theme of identity and inspiration.
Anssi’s hardware setup, both in the studio and live on stage, consists of a Paca connected to a Metric Halo MIO2882 interface via bidirectional ADAT in a 4U mobile rack. Laiho has used this system for 10 years and finds it intuitive, because Metric Halo’s MIOconsole mixer interface gives him the opportunity to route audio between Kyma, the analog domain, and the computer in every imaginable way. When creating content as a sound designer, he often tries things out in Kyma in real-time by opening a Kyma Sound with audio input and listening to it on the spot. If it sounds good, he can route it back to his computer via MIOconsole and record it for later use.
His live setup for Laboratorio Module 24 is based on the same system setup. The aim of the hardware setup was to have as small a physical footprint as possible, because he was sharing the stage with two dancers. On stage, he had a fader-controller for the MIOconsole (to control feedback from microphones), an iPad running Kyma Control displaying performance instructions, a custom-made Raspberry Pi Wi-Fi footswitch sending OSC messages to Kyma, and a musical saw.
The instrument used in the performance is a Finnish Pikaterä Speliplari musical saw (speliplari means ‘play blade’), designed by the Finnish musician Aarto Viljamaa. The plaintive sound of the saw is routed to Kyma through two microphones and processed by a Kyma Timeline. A custom-made piezo contact microphone and preamp are used to create percussive and noise elements for the piece, and a small-diaphragm shotgun microphone is employed for the softer harmonic material.
The way Anssi works with live electronics is to record single notes or note patterns with multiple Kyma MemoryWriter Sounds. These recordings are then resampled in real time or kept for later use in a Kyma Timeline. He likes to think of this as a way of reintroducing a motif of the piece, as is done in classical music composition. It also breaks the inherent tendency to pile up layers when using looping samplers, which, in Anssi’s opinion, often becomes a burden for the listener at some point.
The Kyma Sounds used in the performance Timeline focus on capturing and resampling the sound played on the saw, with the parameters of these Sounds controlled live, through Timeline automation, via presets, or through algorithmic changes programmed in Capytalk.
Laiho’s starting point for the design was to create random harmonies and arpeggiations that could then be used as accompaniment for an improvised melody. For this, he used the Live Looper from the Kyma Sound Library and added a Capytalk expression to its Rate parameter that selects a new frequency from a predefined set of frequencies (intervals relative to a predefined starting note) to create modal harmony. He also created a quadraphonic version of the Looper and controlled the Angle parameter of each loop with a controlled-random Capytalk expression that makes each individual note travel around the space.
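Anssi’s actual Capytalk expressions aren’t shown, but the idea can be sketched in a few lines of Python: each time a loop restarts, a playback rate is chosen from a predefined set of scale intervals, and the pan angle drifts in a controlled-random walk. The interval set below is a guess, not his:

```python
# Rough illustration of the Rate-parameter idea: pick each loop's playback
# rate from a predefined set of intervals relative to a starting note.
# The Dorian-flavored interval set is a guess, not Anssi's actual choice.
import random

SEMITONE = 2 ** (1 / 12)
intervals = [0, 2, 3, 5, 7, 9, 10, 12]   # scale degrees, in semitones

def next_rate():
    # a rate of 2.0 doubles playback speed and raises the pitch an octave
    return SEMITONE ** random.choice(intervals)

def next_angle(prev, step=0.1):
    # controlled random panning: drift the loop's angle around the quad space
    return (prev + random.uniform(-step, step)) % 1.0
```

Because every restarted loop lands on a scale degree from the same set, the texture stays inside one mode, which is how random choices can still produce coherent harmony.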
Another Sound used in the performance is one he created long ago, named Retrosampler. This Sound captures only a very short sample of live sound and creates 4 replicated loops, each less than 1 second long. Each replicated sample has its own parameters that he controls with presets. This, together with the sine-wave quality of the saw, creates a result that resembles a beeping analog synthesizer. The Sound is replicated four times, so he can play 16 samples if he presses “capture” 4 times.
The Retrosampler Sound is also quadraphonic, and its parameters are controlled by presets. His favorite preset is called “Line Busy”, which is exactly what it sounds like. [Editor’s note: the question is which busy signal?]
For the noise and percussion parts of the performance, he used a Sound called LiveCyclicGrainSampler, a recreation of an example from Jeffrey Stolet’s book Kyma and the SumOfSines Disco Club. This Sound consists of a live-looping MemoryWriter feeding a granular reverb, plus 5 samples with individual angle and rate parameter settings. These parameters were then controlled with Timeline automation to vary the patterns they generate.
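For the curious, a toy granulator along these lines can be sketched in Python: grains are read from a captured buffer at randomized positions, per-grain rates, and pan angles. All parameter ranges here are invented; the Kyma Sound it imitates is far more refined:

```python
# Toy granulator: grains read from a captured buffer at randomized positions,
# rates, and pan angles. Assumes buf is a mono NumPy array a few seconds long.
import numpy as np

def granulate(buf, sr=44100, n_grains=200, grain_ms=80, out_s=4.0):
    out = np.zeros((int(out_s * sr), 2))          # stereo output buffer
    glen = int(grain_ms / 1000 * sr)
    env = np.hanning(glen)                         # smooth grain envelope
    rng = np.random.default_rng(0)
    for _ in range(n_grains):
        src = rng.integers(0, len(buf) - 2 * glen)  # grain source position
        rate = rng.choice([0.5, 1.0, 1.5, 2.0])     # per-grain playback rate
        idx = src + np.arange(glen) * rate           # resampled read positions
        grain = np.interp(idx, np.arange(len(buf)), buf) * env
        dst = rng.integers(0, len(out) - glen)       # grain onset in the output
        pan = rng.random()                           # crude pan "angle"
        out[dst:dst + glen, 0] += grain * (1 - pan)
        out[dst:dst + glen, 1] += grain * pan
    return out / np.abs(out).max()

# usage: granulate(np.random.randn(44100)) yields 4 s of scattered grains
```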
Anssi also used his two favorite reverbs in the live processing: the NeverEngine Labs Stereo Verb, and Johannes Regnier’s Dattorro Plate.
Kyma is also an essential part of Laiho’s sound design work in the studio. One of the tracks in the performance, “Experiment 0420”, is his “laboratory experiment” of Kyma processing the sound of an aluminum heat sink from an Intel i5 3570K CPU played with a guitar pick. Another scene of the performance contains a song called “Tesseract Song”, composed of an erratic piano chord progression and synthetic noise looped in Kyma, accompanied by Anssi singing through a Kyma harmonizer.
The sound design for the 50-minute performance consists of 11-12 minutes of live electronics, music composed in the studio, and “Spring” by Antonio Vivaldi. The overall design goal was to create a kaleidoscopic experience where the audience is taken to new places by surprising turns of events.
Musician and Kyma consultant Alan Jackson, known for his work with Phaze UK on the Witcher soundtrack, has wired his studio with an eye toward flexibility, making it easy for him to choose from among multiple sources, outputs, and controllers, and to detach a small mobile setup so he can visit clients in person, and even continue working in Kyma during the train ride to and from his clients’ studios.
In his studio, his Pacamara Ristretto has a USB connection to the laptop and is also wired with quad analog in / out to the mixer. That way, Jackson can choose on a whim whether to route the Ristretto as an aggregate device through the DAW or do everything as analog audio through the mixer. Two additional speakers (not shown) are at the back of the studio and his studio is wired for quad by default.
The Faderfox UC4 is a cute and flexible MIDI controller plugged into the back of the Pacamara, ready at a moment’s notice to control the VCS, and a small Wacom tablet is plugged in and stashed to the left of the screen for controlling Kyma.
Jackson leaves his Pacamara on top so he can easily disconnect 5 cables from the back and run out the door with it… which leads to his “mobile” setup.
Jackson’s travel setup is organized as a kit-within-a-kit.
The red inner kit is what he grabs if he just needs a minimal battery-powered Kyma setup, e.g., for developing stuff on the train. It includes:
a PD battery (good for about 3 hours when used with a MOTU Ultralite, longer with headphones)
a pair of tiny Sennheiser IE4 headphones
a couple of USB cables, and an Ethernet cable
The outer kit adds:
a 4 in / 4 out bus-powered Zoom interface
mains power for the Pacamara
more USB cables
a PD power adaptor cable, so he can run the MOTU Ultralite off the same battery as the Pacamara
a clip-on mic
the WiFi aerial
If you have any upcoming sound design projects you’d like to discuss, visit Alan’s Speakers on Strings website.
When not solving challenges for game and film sound designers, Alan performs his own music for live electronics.
Matteo Milani’s (Unidentified Sound Object) “COSMIC CHARGES” is a library of tonal blasts and impacts for video games, films, TV series, and more.
Each impact is split into layers, allowing a sound designer to decompose and reconstitute sub-timbres to create unique sounds.
Milani was inspired by innovative sound designers Ben Burtt, Randy Thom, and Gary Rydstrom to continue to push the boundaries of his own work. It was through Ben Burtt’s book, Galactic Phrase Book & Travel Guide, that Milani first discovered Kyma, which he used in this project to add musical layers to each impact — thus imparting a unique sonic identity and intensifying the emotional resonance and dynamics.
In addition to their SFX libraries, Milani’s Unidentified Sound Object offers a range of services including complex sound design, composition, multilingual localization, games and VR development, location recording, and more.
It’s not every day that a consultant/teacher gets a closing screen credit on a Netflix hit series, but check out this screenshot from the end of Witcher E7: Out of the Fire, into the Frying Pan:
Alan and his small band of sound designers meet regularly to share screens and help each other solve sound design challenges during hands-on sessions that last from 1-2 hours. Not only are these sessions an opportunity to deepen Kyma knowledge; they’re also an excuse to bring the sound design team together and share ideas. Alan views his role as a mentor or moderator, insisting that the sound designers do the hands-on Kyma work themselves.
There’s no syllabus; the sound designers set the agenda for each class, sometimes aiming to solve a specific problem, but more often aiming to come up with a general solution to a situation that comes up repeatedly. They’ve taken to calling these solutions “little Kyma machines” that will generate an effect for them, not just for the current project but for future projects as well. Over time, they’ve created several reusable custom “machines”, among them: ‘warping performer’, ‘doppler flanger’, ‘density stomp generator’, and ‘whisper wind’.
Rob Prynne is particularly enamored of a custom patch based on the “CrossFilter” module in Kyma (based on an original design by Pete Johnston), sometimes enhanced by one of Cristian Vogel’s NeverEngine formant-shifting patches. In an interview with A Sound Effect, Rob explains, “It’s like a convolution reverb but with random changes to the length and pitch of the impulse response computed in real-time. It gives a really interesting movement to the sound.” According to the interview, it was the tripping sequence in episode 7 where the CrossFilter really came into play.
When asked if there are any unique challenges posed by sound design, Alan observes, “The big things to note about sound design, particularly for TV / Film, is that it’s usually done quickly, there’s lots of it, you’re doing it at the instruction of a director; the director wants ‘something’ but isn’t necessarily nuanced in being able to talk about sound.”
A further challenge is that the sounds must “simultaneously sound ‘right’ for the audience while also being new or exciting. And the things that sound ‘right’, like the grammar of film editing, are based on a historically constructed sense of rightness, not physics.”
Alan summarizes the main advantages of Kyma for sound designers as: sound quality, workflow, originality, and suitability for working in teams. In what way does Kyma support “teams”?
With other sound design software and plugins you can get into all sorts of versioning and compatibility issues. A setup that’s working fine on my computer might not work on a teammate’s. Kyma is a much more reliable environment. A Sound that runs on my Paca[(rana | mara)] is going to work on yours.
Even more important for professional sound design is “originality”:
Kyma sounds different from “all the same VST plugins everyone else is using”. This is an important factor for film / TV sound designers: they need to produce sounds that meet the brief of the director but also sound fresh, exciting, new and different from everyone else. Using Kyma is their secret advantage. They are not using the same Creature Voice plugins everyone else is using; instead, creating a Sound from scratch yields results unique to their studio.
For Witcher fans, the sound designers have provided a short cue list of examples for your listening pleasure:
Kyma Sound | Heard | Episode(s)
Warping performer | on every creature | in every episode
Warping performer | on portal sounds and magic | S3E03
Warping performer | on the wild hunt vapour | S3E03
Doppler flanger spinning thing | on the ring of fire | S3E05 / S3E06
Density stomp generator | for group dancing | S3E05
Whisper wind convolved vocal | for wild hunt | S3E03
Visit Alan Jackson’s SpeakersOnStrings to request an invitation to the Kyma Kata — a (free) weekly Zoom-based peer-learning study group. Alan also offers individual and group Kyma coaching, teaching and mentoring.
Audiences will experience Simon Hutchinson’s music projected from a linear array of 15 giant speakers positioned underneath the 420-ft Boggy Creek overpass in Rosewood Park, east of Austin, Texas, on November 19, 2022, from 6:30–9:30 pm.
Hutchinson is one of six composers commissioned to create works inspired by the past, present and (imagined) future sounds of transportation, utilizing the dramatic sonic movement capabilities of a new sound system designed and built by the Rolling Ryot arts collective to create an immersive audio experience.
“It’s been very interesting to think about what ‘multichannel’ means when channels are spread out across 420 ft, and especially fun problem-solving this in Kyma,” Simon explains. “Using Kyma, I’ve set up some sounds to translate across the system very quickly, but in a unified motion. At other times, the panning happens very slowly, so slowly that someone walking could outpace the sound as they walk across the space.”
In other sections, Hutchinson treated the individual channels as the individual keys of a giant instrument, and the “melody” traverses the broad, 420-ft space, with each speaker assigned only a single note. More details on this process are revealed in a video on Simon’s YouTube channel.
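The speaker-per-note idea is easy to sketch: assign each of the 15 channels a single pitch, and a melody becomes a path through physical space. Here is a hypothetical mapping in Python; the pitches and the melody are invented for illustration:

```python
# Sketch of the "giant instrument" idea: each of 15 speakers under the
# overpass is assigned a single pitch, so a melody becomes a spatial path.
# The pitch assignment and the melody are invented for illustration.

channel_pitch = {ch: 48 + ch for ch in range(15)}  # MIDI notes C3..D4, one per speaker

def melody_to_channels(melody_midi):
    # invert the mapping: to play a note, fire the speaker that owns its pitch
    pitch_channel = {p: ch for ch, p in channel_pitch.items()}
    return [pitch_channel[note] for note in melody_midi]

print(melody_to_channels([48, 52, 55, 60, 55, 52]))  # -> [0, 4, 7, 12, 7, 4]
```

With one pitch per speaker, every melodic interval is also a physical distance along the 420-ft array, which is exactly the effect Hutchinson describes.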
The sound materials for the piece come from field recordings and analog synthesis samples processed through Kyma. Simon “unfolded” ambisonic field recordings across the 15-channel space, worked with mid-side transformations of stereo recordings, and leveraged the multichannel panning of the Kyma Timeline.
The Ghost Line X event is free and open to the public. All ages are welcome, and the audience is encouraged to bring lawn chairs and blankets and to bike or ride-share to Rosewood Park.