Kyma Sound design studies at CMU

Did you know that you could study for a degree in sound design and work with Kyma at Carnegie Mellon University? Joe Pino, professor of sound design in the School of Drama at Carnegie Mellon University, teaches conceptual sound design, modular synthesis, Kyma, film sound design, ear training and audio technology in the sound design program.

Sound design works in the spaces between reality and abstraction. Sounds are less interesting as a collection of triggers for giving designed worlds reality; they are more effective when they trigger emotional responses and remembered experiences.

Thinking through sound — Ben Burtt and the voice of WALL-E

Ben Burtt was recently awarded the 2024 Vision Award Ticinomoda, whose citation reads:

“Ben Burtt is a pioneer and visionary who has fundamentally changed the way we perceive sound in cinema.”

In this interview, Burtt shares some of his experiences as Sound Designer for several iconic films, including his discovery of “a kind of sophisticated music program called the Kyma” which he used in the creation of the voice of WALL-E.

The interviewer asked Ben about the incredible voices he created for WALL-E and EVE:

Well, Andrew Stanton who was the creator of the WALL-E character in the story; apparently he jokingly referred to his movie as the R2-D2 movie. He wanted to develop a very affable robot character that didn’t speak or had very limited speech that was mostly sound effects of its body moving and a few strange kind of vocals, and someone (his producer, I think — Jim Morris) said well why don’t you just talk to Ben Burtt, the guy who did R2-D2, so they got in touch with me.

Pixar is in the Bay Area (San Francisco) so it was nearby, and I went over and looked at about 10 minutes that Andrew Stanton had already put together with just still pictures — storyboards of the beginning of the film where WALL-E’s out on his daily work activities boxing up trash and so on and singing and playing his favorite music, and of course I was inspired by it and I thought well here’s a great challenge and I took it on.

This was a few months before they had to actually greenlight the project. I didn’t find this out until later but there was some doubt at that time about whether you could make a movie in which the main characters don’t really talk in any kind of elaborate way; they don’t use a lot of words. Would it sustain the audience’s interest? The original intention in the film that I started working on was that there was no spoken language in the film that you would understand at all; that was a goal at one point…

So I took a little bit of the R2 idea to come up with a voice where human performance would be part of it but it had to have other elements to it that made it seem electronic and machine-like. But WALL-E wasn’t going to Beep and Boop and Buzz like R2; it had to be different, so I struggled along trying different things for a few months and trying different voices — a few different character actors. And I often ended up experimenting on myself because I’m always available. You know it’s like the scientist in his lab takes the potion because there’s no one else around to test it: Jekyll and Hyde, I think that’s what it is. So I took the potion and turned into Mr Hyde…

Photo from Comicon by Miguel Discart, Bruxelles, Belgique. This file is licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license.

The idea was to always give the impression of what WALL-E was thinking through sound…

But eventually it ended up that I had a program — it was a kind of sophisticated music program called the Kyma and it had one sound in it — a process where it would synthesize a voice but it [intentionally] didn’t do very well; the voice had artifacts that had funny distortions in it and extra noises. It didn’t work perfectly as a pure voice but I took advantage of the fact that the artifacts and mistakes in it were useful and interesting and could be used and I worked out a process where you could record sounds, starting with my own voice, and then process them a second time and do a re-performance where, as it plays back, you can stretch or compress or repitch the sounds in real time.

So you can take the word “Wall-E” and then you could make it have a sort of envelope of electronic noise around it; it gave it a texture that made it so it wasn’t human and that’s where it really worked. And of course it was in combination with the little motors in his arms and his head and his treads — everything was part of his expression.

The idea was to always give the impression of what WALL-E was thinking through sound — just as if you were playing a musical instrument and you wanted to make little phrases of music which indicated the feeling for what kind of communication was taking place.

Generative events

Explorations in the generative control of notes and rhythms, scales and modes — from human to algorithmic gesture

Composer Giuseppe Tamborrino uses computer tools not just for timbre, but also for the production of events in deferred time or in real time. This idea forms the basis for generative music — examples of which can be found throughout music history, for example Mozart’s Musical Dice Game.

In his research, Tamborrino carries out this process in various ways and with different software, but the goal is always the same: the generation of instructions for instruments — which he calls an “Electronic score”.

Here’s an example from one of his generative scores:

 

As part of the process, Tamborrino has always designed in a certain degree of variability, using stochastic or totally random procedures — which, once launched, are invariant — to speed up the process of abstraction and improvisation. Often, though, this way of working resulted in small sections that he wanted to cut, correct, or improve.

This motivated him to use Kyma to pursue a new research direction — called post-controlled generative events — with the aim of being able to correct and manage micro events.

This is a three-step process:

  • A generational setting phase (pre-time)
  • A performance phase, recording all values in the Timeline (real time)
  • A post-editing phase of automatic generative events (after time)

Tamborrino shared some of the results of his research on the Kyma Discord, and he invites others to experiment with his approach and to engage in an online discussion of these ideas.

The Seeing is Good


Rick Stevenson is a technology entrepreneur with an impressive track record: he co-founded three successful tech startups and played an instrumental role in the growth of several others. In addition to maintaining a nearly five-decade relationship with the University of Queensland’s School of Information Technology and Electrical Engineering as a student, mentor, and industry advisor, Stevenson is also an accomplished astrophotographer, and one of his images was selected as the NASA Astronomy Picture of the Day.

NASA's Astronomy Picture of the Day


Eighth Nerve [EN] Your “Rungler” patch recently made a big splash on the Kyma Discord Community. Could you give a high-level explanation for how it works?

Rick Stevenson [RS]: The Rungler is a hardware component of a couple of instruments, the Benjolin and the Blippoo box, designed by Rob Hordijk. It’s based on an 8-bit shift register driven by two square wave oscillators. One oscillator is the clock for the shift register and the other is sampled to provide the binary data input to the shift register. When the clock signal becomes positive, the data signal is sampled to provide a new binary value. That new bit is pushed into the shift register, the rest of the bits are shuffled along and the oldest bit is discarded.

Rungler circuit

The value of the Rungler is read out of the shift register by a digital to analog converter. In the simplest version of this (the original Benjolin design) the oldest three bits in the shift register are interpreted as a binary number with a value between 0 and 7.

That part is fairly straightforward. The interesting wrinkle is that the frequency of each oscillator is modulated by the other oscillator and also the value of the Rungler. The result of this clever feedback architecture is that the Rungler exhibits an interesting controlled chaotic behavior. It settles into complex repeating (or almost repeating) patterns. Nudge the parameters and it will head off in a new direction before settling into a different pattern. Despite the simplicity of the design it can generate very interesting and intricate “melodies” and rhythms.
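The shift-register logic and feedback loop described above can be sketched in code. The following Python simulation is illustrative only: the class and function names, oscillator base frequencies, and modulation depths are all invented for the sketch and are not taken from Hordijk's circuit or from Rick's Kyma Sound.

```python
class Rungler:
    """Toy discrete-time sketch of a Rungler-style shift register."""

    def __init__(self, bits=8):
        self.register = [0] * bits   # index 0 = newest bit, index -1 = oldest
        self.prev_clock = -1.0

    def tick(self, clock, data):
        # On the clock's rising zero-crossing, sample the data oscillator...
        if clock > 0 and self.prev_clock <= 0:
            new_bit = 1 if data > 0 else 0
            # ...push the new bit in; the oldest bit falls off the end
            self.register = [new_bit] + self.register[:-1]
        self.prev_clock = clock
        # "DAC": read the three oldest bits as a binary number 0..7
        b = self.register[-3:]
        return b[0] * 4 + b[1] * 2 + b[2]


def square(phase):
    """Naive square wave on a 0..1 phase ramp."""
    return 1.0 if (phase % 1.0) < 0.5 else -1.0


def run(n_samples=2000, sr=48000.0):
    """Two cross-modulated square oscillators feed the Rungler, and the
    Rungler value feeds back into both oscillator frequencies."""
    r = Rungler()
    p1 = p2 = 0.0
    out1 = out2 = -1.0
    values = []
    for _ in range(n_samples):
        rung = r.tick(out1, out2) / 7.0          # normalize to 0..1
        f1 = 110.0 + 40.0 * out2 + 60.0 * rung   # osc 1: modulated by osc 2 + Rungler
        f2 = 61.0 + 30.0 * out1 + 45.0 * rung    # osc 2: modulated by osc 1 + Rungler
        p1 += f1 / sr
        p2 += f2 / sr
        out1, out2 = square(p1), square(p2)
        values.append(rung)
    return values
```

Running `run()` and plotting the returned values shows the stepped, almost-repeating control patterns the interview describes; nudging the base frequencies sends the sequence off in a new direction before it settles again.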

NOTE: Rick shared his Rungler Sound in the Kyma Community Library!


Artist/animator Rio Roye helped Rick test the Rungler Sound and came up with a pretty astounding range of sonic results!

 


[EN]: What is your music background? Is it something you’ve been interested in since childhood? Or a more recent development?

[RS]: I didn’t learn an instrument as a child. I’m not sure why. We had a piano in the house and my mother played. At high school I taught myself to play guitar and bass and I kept that up for a couple of years at University but eventually gave up due to lack of time and motivation. Quite a few years later when work demands became manageable and our children were semi-independent, I took up guitar again and started tinkering with valve amp building. These days I’m learning Touch Guitar and tinkering with synths and computer music. I spent some time learning Max and Supercollider then decided to take the plunge into Kyma.

[EN]: What were the best parts of your “new user experience” with Kyma?

[RS]: I really liked the wide range of sounds in the library. It’s great to be able to find an interesting sound, play with it, and then dig inside and try to figure out how it works.

I also appreciated the power of individual sounds coupled with the expressiveness of Capytalk. I spent quite a bit of time learning Max and moving to Kyma was a bit like switching from assembler to a high level language.


[EN]: I think you might be the first astrophotographer I’ve ever met! Could you please introduce us (as total novices) to the equipment and software that you use to capture and process images like these?

False color image of the Helix Nebula, resembling a human iris
Helix Nebula – Narrowband bi-color + RGB stars. Image provided by Rick Stevenson.

[RS]: You can do astrophotography with a digital camera and conventional lens. A tripod is sufficient for nightscapes but you need some sort of simple tracking mount to do longer exposures. At the other end of the spectrum (haha) are telescopes, cameras and mounts specifically designed for astrophotography. The telescopes range from small refractors (with lenses) to large catadioptric scopes combining a large mirror (500mm in diameter or even more) with specialized glass optics.

Cameras consist of a CCD or CMOS sensor in a chamber protected from moisture, thermally bonded to a multistage Peltier cooler. Noise is the enemy of faint signals so it is not uncommon to cool the sensor to 30 °C or more below ambient temperature. The cameras are usually monochromatic and attached to a wheel containing filters. Mounts for astrophotography are solid, high precision devices that can throw around a heavy load and also track the movement of the sky with great accuracy. Summary: there’s lots of fancy hardware used for amateur astrophotography, ranging from relatively affordable to the price of a nice house!

On the software side, the main components are capture software which runs the mount, filterwheel and the camera(s), and processing software which turns the raw data into an image. Normal procedure is to take many exposures of a deep sky object from seconds to many minutes long and “stack” them to increase the signal to noise ratio. We also take a few different types of calibration images that are used to remove artifacts and nonlinearities from the camera sensor and optical train. Processing the raw data can be an involved process and is something of an art in itself. There are quite a few software packages and religious wars between their proponents are not uncommon.
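The payoff from stacking can be shown with a toy simulation: averaging N frames of uncorrelated noise reduces the noise by roughly a factor of sqrt(N). This sketch is mine, not part of any real capture pipeline; the signal level and noise sigma are made-up numbers.

```python
import random
import statistics

def make_frame(signal=10.0, noise_sigma=5.0, n_pixels=2000, rng=random):
    """One simulated sub-exposure: a constant 'signal' plus Gaussian noise."""
    return [signal + rng.gauss(0.0, noise_sigma) for _ in range(n_pixels)]

def stack(frames):
    """Pixel-wise mean of all frames (the simplest form of stacking)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

rng = random.Random(42)
single = make_frame(rng=rng)
stacked = stack([make_frame(rng=rng) for _ in range(100)])

# Noise in one frame is ~5; after stacking 100 frames it drops to ~0.5,
# while the underlying signal level (10.0) is preserved.
print(round(statistics.stdev(single), 2))
print(round(statistics.stdev(stacked), 2))
```

Real processing adds calibration frames (darks, flats, bias) and outlier rejection on top of this basic averaging, but the sqrt(N) improvement is the core reason for taking many sub-exposures.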

[EN]: Speaking of software packages, what is PixInsight?

[RS]: PixInsight is the image processing and analysis software that I use. It has been developed by a small team of astronomers and software engineers based in Spain. It has a somewhat unfair reputation for being complicated and difficult to use. This is partly down to a GUI which is the same on Windows, Mac and Linux (but not native to any of them) and partly because it includes a wide range of different tools and the philosophy of the developers is to expose all of the useful parameters. When I started doing astrophotography I tried a few different software packages and PixInsight was the one that produced the best results for me. Some of the other commonly used packages hide a lot of the ugly details and offer a more guided experience which suits some types of imagers better. A more recent development is the use of machine learning in astronomical processing to do things like noise reduction and sharpening. I haven’t quite decided how I feel about that yet.

I think there are some interesting parallels between PixInsight and Kyma. Apart from the cross platform problem, both have to maintain that careful balance that offers a complex, highly technical set of features in a way that can satisfy the gamut of users from casual to expert.

[EN]: Do you have your own telescope? Do you visit telescopes in other parts of the world?

[RS]: I have a few telescopes. Unfortunately, I live in the city and the light pollution prevents me from doing much data collection from home. When I get the opportunity I have a few favorite locations not too far away where I can set up under dark skies. Apart from dark skies, a steady atmosphere is important for high resolution imaging. This is called the “seeing” and it’s usually not that great around here.

Rick Stevenson and his C300

For several years I have been a member of a few small teams with automated telescopes in remote locations. That solves a lot of problems apart from fixing things when they go wrong! My first experience was at Sierra Remote Observatory near Yosemite in California. I also shared a scope with some friends in New Mexico and now I’m using one in the Atacama Desert in Chile. The skies in the Atacama are very dark and the seeing is amazing! I haven’t ever visited any of the sites but I did catch up with some team mates on a couple of business trips.

[EN]: Does your camera give you access to data files? From some of the caption descriptions, it sounds like your camera has sensors for “non-visible” regions of the spectrum, is that true?

[RS]: The local and the remote scopes all deliver the raw image and calibration data in an image format called “FITS” which is basically a header followed by 2D arrays of data values. A single image will almost always be produced from very many individual sub-exposures. The normal process is to calibrate and stack the data for each filter before combining the stacks to produce a color image.

The camera sensors will usually detect some UV and near-infrared as well as visible light, but they aren’t commonly used in ground-based imaging. I use red, green and blue filters for true color imaging, or I image in narrowband. Narrowband uses filters that pass very narrow frequency ranges corresponding to the emissions from specific ionized elements, usually Hydrogen alpha (a reddish color), Oxygen III (greenish blue) and Sulphur II (a different reddish color). Narrowband has the advantage of working even in light-polluted skies (and to some extent rejecting moonlight) and can show details in the structure of astronomical objects not visible in RGB images. The downside is that you need very long exposure times and narrowband images are false color. Many of the Hubble images you’ve undoubtedly seen are false color images using the SHO palette (red is Sulphur II, green is Hydrogen alpha and blue is Oxygen III).

Tarantula region in SHO palette, where SHO is an acronym for Sulphur II (mapped to red), Hydrogen alpha (mapped to green) & Oxygen III (mapped to blue). Three images are captured sequentially using three narrowband filters and then combined to create a false color image.
Tarantula region in SHO palette. Image provided by Rick Stevenson
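The SHO channel assignment described above amounts to treating each filter's stacked image as one color plane. Here is a deliberately minimal sketch of that mapping; the function names and the linear 0..1 stretch are my simplifications (real tools apply calibrated, nonlinear stretches), and the pixel values are made up.

```python
def normalize(channel):
    """Linear stretch of one filter's stacked intensities to 0..1."""
    lo, hi = min(channel), max(channel)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in channel]

def sho_to_rgb(sii, ha, oiii):
    """False-color SHO mapping: Sulphur II -> R, H-alpha -> G, Oxygen III -> B."""
    return list(zip(normalize(sii), normalize(ha), normalize(oiii)))

# Three tiny 'stacks' of pixel intensities, one per narrowband filter
pixels = sho_to_rgb([1, 2, 3], [0, 5, 10], [2, 2, 4])
```

Each output tuple is one pixel's (R, G, B) triple, so the brightest Sulphur II regions come out red, H-alpha green, and Oxygen III blue, exactly as in the Hubble-palette images mentioned in the interview.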

[EN]: What is your connection with the Astronomical Association of Queensland?

[RS]: I have been a member of the AAQ for well over a decade. From 2013 to 2023 I was the director of the Astrophotography section. I helped members learn how to do astrophotography, organized annual astrophotography competitions and curated images for the club calendar.


[EN]: Have you ever seen a platypus in real life?

[RS]: Several times at zoos. Only a handful of times in the wild. They are mostly nocturnal and also very shy!

[EN]: In New Mexico (where I grew up), we all learned a song in elementary school with the lyrics: “Kookaburra sits in the old gum tree, merry merry king of the bush is he, Laugh Kookaburra, laugh Kookaburra, gay your life must be”. Is it a pseudo-Australian song invented for American kids or did you learn it in Australia too?

[RS]: I don’t know the origin but we have the same song. I have a toddler grandson who sings it! We have Kookaburras in the bushland behind our house that start doing their group calls around 3 or 4am.

Speaking of the Kookaburra song, there was a story about it in the news a few years ago, when the people who own the rights to the song won a plagiarism lawsuit against the band Men at Work.

Listen for the Kookaburra song in the flute riff!

[EN]: What was the best part about growing up in Australia that those of us growing up in other parts of the world missed out on?

[RS]: Vegemite, perhaps? 🙂

The area around Brisbane in South-East Queensland has a lot going for it. The climate is pleasant and mild (except for some hot and humid days in summer). There are great beaches within an hour or so as well as forested areas for hiking. The educational and health systems are good, even the public / “free” parts (true in most of the major centres in Australia). Brisbane is large enough to have some culture and work opportunities without being too busy and fast-paced. It’s pretty good, but not perfect.


[EN]: What was your favorite course to teach at the School of Information Technology and Electrical Engineering, University of Queensland St Lucia?

[RS]: I was an Adjunct Professor at UQ ITEE for 21 years but I didn’t teach any courses. I acted as an industry advisor, was involved in bids to set up research centers in embedded systems and security, and I hired a lot of their graduates!

[EN]: Do you have a “favorite” programming language or family of languages?

[RS]: I rather liked Simula 67 though I only used it for one (grad student) project. I think it was one of the first OO languages if not the first. Most of my work programming experience was in assembler on microprocessors and C on micro and minicomputers. I was very comfortable in C but wouldn’t say I loved it. These days I quite like the Lisp family of languages.

[EN]: Are you one of the founders of OpenGear? In that role, do you do a lot of international travel to the company’s other offices?

[RS]: I was one of the founders and worked in a few roles up until Opengear was acquired in late 2019. Having stayed on after acquisitions in a couple of previous lives I was pleased that the new owners didn’t want me to hang around and I exited just in time for COVID. While at Opengear and in previous startups I made regular business trips, mostly to North America but also the UK and Europe and occasional trips to South-East Asia.

[EN]: What do you identify as the top three “open problems” or “grand challenges” in technology right now?

[RS]: In no particular order and not claiming to have any deep insights…

  • Achieving AGI [artificial general intelligence] is still an open problem and one that, in my opinion, is not going to get solved as quickly as a lot of people think. Maybe that’s a good thing?
  • I spent about a decade working on computer security products starting in the early 2000s and despite a lot of money and effort spent on band-aid solutions, things have only become worse since then. Fixing our whack-a-mole computer security model is certainly a grand challenge, not helped by the incentives that security vendors have to protect their recurring revenues. The recent outage caused by the CrowdStrike security agent crashing Microsoft computers demonstrates the fragility of our current approach.
  • The ability to create practical quantum computers still seems a bit of a reach though I hear that Wassenaar Arrangement countries are all quietly introducing export controls on quantum computers. Perhaps they know something I don’t.

[EN]: What’s next on the horizon for you? What Kyma project(s) are you planning to tackle next?

[RS]: I have a long list of potential projects but I think the next two I’ll tackle are some baby steps into sonification of astronomical data (thanks for the encouragement) and a Blippoo Box – another of Rob Hordijk’s chaotic generative synths.

[EN]: What are you most looking forward to learning about Kyma over the next year?

[RS]: There are many areas of Kyma that I have only dabbled in and even more that I haven’t touched at all. I’d like to get a lot more fluent with Smalltalk and Capytalk, do some projects with the Spectral and Morphing sounds and also plumb the mysteries (to me) of the Timeline and Multigrid. That should keep me busy for a while!

[EN]: Outside of Kyma, what would you most like to learn more about in the coming year?

[RS]: I have a couple of relatively new synths, a Synclavier Regen and a Buchla Easel, that I would like to spend a lot more time learning my way around. I also want to keep progressing with my Touch Guitar studies.


[EN]: Rick, thank you for the thought-provoking discussion! Can people get in touch with you on the Kyma Discord if they have questions, feedback, or proposals for collaboration?

[RS]: Yes!

Come up to the Lab

Anssi Laiho is the sound designer and performer for Laboratorio — a concept developed by choreographer Milla Virtanen and video artist Leevi Lehtinen as a collection of “experiments” that can be viewed either as modules of the same piece or as independent pieces of art, each with its own theme. The first performance of Laboratorio took place in November 2021 in Kuopio, Finland.

Laboratorio Module 24, featuring Anssi performing on a musical saw with live Kyma processing, was performed in the Armastuse hall at Aparaaditehas in Tartu, Estonia, and is dedicated to the theme of identity and inspiration.

Anssi’s hardware setup, both in the studio and live on stage, consists of a Paca connected to a Metric Halo MIO2882 interface via bidirectional ADAT in a 4U mobile rack. Laiho has used this system for 10 years and finds it intuitive, because Metric Halo’s MIOconsole mixer interface gives him the opportunity to route audio between Kyma, the analog domain, and the computer in every imaginable way. When creating content as a sound designer, he often tries things out in Kyma in real-time by opening a Kyma Sound with audio input and listening to it on the spot. If it sounds good, he can route it back to his computer via MIOconsole and record it for later use.

His live setup for Laboratorio Module 24 is based on the same system setup. The aim of the hardware setup was to have as small a physical footprint as possible, because he was sharing the stage with two dancers. On stage, he had a fader-controller for the MIOconsole (to control feedback from microphones), an iPad running Kyma Control displaying performance instructions, a custom-made Raspberry Pi Wi-Fi footswitch sending OSC messages to Kyma, and a musical saw.

Kyma Control showing “Kiitos paljon” (“Thank you” in Finnish), Raspberry Pi foot switch electronics, rosin for the bow & foot switch controller

The instrument used in the performance is a Finnish Pikaterä Speliplari musical saw (speliplari means ‘play blade’), designed by the Finnish musician Aarto Viljamaa. The plaintive sound of the saw is routed to Kyma through two microphones and processed by a Kyma Timeline. A custom-made piezo contact microphone and preamp are used to create percussive and noise elements for the piece, and a small-diaphragm shotgun microphone is employed for the softer harmonic material.

The way Anssi works with live electronics is by recording single notes or note patterns with multiple Kyma MemoryWriter Sounds. These recordings are then resampled in real time or kept for later use in a Kyma Timeline. He likes to think of this as a way of reintroducing a motif of the piece, as is done in classical composition. It also breaks the inherent tendency to pile up layers when using looping samplers, which, in Anssi’s opinion, often becomes a burden for the listener at some point.

The Kyma Sounds used in the performance Timeline focus on capturing and resampling the sound played on the saw, with the parameters of these Sounds controlled live via timeline automation, presets, or algorithmic changes programmed in Capytalk.

Laiho’s starting point for the design was to create random harmonies and arpeggiations that could then be used as accompaniment for an improvised melody. For this, he used the Live Looper from the Kyma Sound Library and added a Capytalk expression to its Rate parameter that selects a new frequency from a predefined selection of frequencies (intervals relative to a predefined starting note) to create modal harmony. He also created a quadrophonic version of the Looper and controlled the Angle parameter of each loop with a controlled random Capytalk expression that makes each individual note travel around the space.
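The two Capytalk ideas in the paragraph above — constraining random pitch choices to a mode, and letting each loop's spatial angle take a controlled random walk — can be sketched outside Kyma. This Python sketch is my own paraphrase of the idea, not Laiho's actual Capytalk; the interval set is an assumed Dorian-like selection, since the article doesn't specify which intervals he used.

```python
import random

# Hypothetical interval set, in semitones above the starting note
# (the article does not specify Laiho's actual selection)
INTERVALS = [0, 2, 3, 5, 7, 9, 10]

def next_rate(rng=random):
    """Pick a playback-rate ratio so a loop sounds a note of the mode:
    transposing by k semitones means playing back at 2**(k/12) speed."""
    semitones = rng.choice(INTERVALS)
    return 2.0 ** (semitones / 12.0)

def random_angle(prev, step=0.1, rng=random):
    """Controlled random walk for one loop's pan angle (0..1 around the
    quad array), so each note drifts through the space rather than jumping."""
    return (prev + rng.uniform(-step, step)) % 1.0
```

Triggering `next_rate()` on each new loop yields a random but strictly modal harmony, and calling `random_angle()` per loop per update moves each note smoothly around the room — the same two behaviors described for the quadraphonic Live Looper.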

Another Sound used in the performance is one he created a long time ago, named Retrosampler. This Sound captures only a very short sample of live sound and creates 4 replicated loops, each less than 1 second long. Each replicated sample has its own parameters that he controls with presets. This, together with the sine-wave quality of the saw, creates a result that resembles a beeping analog sine-wave synthesizer. The Sound is replicated four times, so he can play 16 samples if he presses “capture” 4 times.

The Retrosampler sound is also quadraphonic and its parameters are controlled by presets. His favorite preset is called “Line Busy” which is exactly what it sounds like. [Editor’s note: the question is which busy signal?]

For the noise and percussion parts of the performance, he used a Sound called LiveCyclicGrainSampler, which is a recreation of an example from Jeffrey Stolet’s Kyma and the SumOfSines Disco Club book. This Sound consists of a live looping MemoryWriter as a source for granular reverb and 5 samples with individual angle and rate parameter settings. These parameters were then controlled with timeline automation to create variation in the resulting patterns.

Anssi also used his two favorite reverbs in the live processing: the NeverEngine Labs Stereo Verb, and Johannes Regnier’s Dattorro Plate.

Kyma is also an essential part of Laiho’s sound design work in the studio. One of the tracks in the performance is called “Experiment 0420” and it is his “Laboratory experiment” of Kyma processing the sound of an aluminum heat sink from an Intel i5 3570K CPU played with a guitar pick. Another scene of the performance contains a song called “Tesseract Song” that is composed of an erratic piano chord progression and synthetic noise looped in Kyma and accompanied by Anssi singing through a Kyma harmonizer.

The sound design for the 50-minute performance consists of 11-12 minutes of live electronics, music composed in the studio, and “Spring” by Antonio Vivaldi. The overall design goal was to create a kaleidoscopic experience where the audience is taken to new places by surprising turns of events.

Kyma consultant Alan Jackson’s studio

Alan Jackson’s studio — designed for switching among a variety of sources, controllers, and ease of portability

Musician and Kyma consultant Alan Jackson, known for his work with Phaze UK on the Witcher soundtrack, has wired his studio with an eye toward flexibility, making it easy for him to choose from among multiple sources, outputs, and controllers, and to detach a small mobile setup so he can visit clients in person and even continue working in Kyma during the train ride to and from his clients’ studios.

Jackson’s mobile studio sessions extend to the train to / from a gig

In his studio, his Pacamara Ristretto has a USB connection to the laptop and is also wired with quad analog in / out to the mixer. That way, Jackson can choose on a whim whether to route the Ristretto as an aggregate device through the DAW or do everything as analog audio through the mixer. Two additional speakers (not shown) are at the back of the studio and his studio is wired for quad by default.

The Faderfox UC4 is a cute and flexible MIDI controller plugged into the back of the Pacamara, ready at a moment’s notice to control the VCS, and a small Wacom tablet is plugged in and stashed to the left of the screen for controlling Kyma.

Jackson leaves his Pacamara on top so he can easily disconnect 5 cables from the back and run out the door with it… which leads to his “mobile” setup.

The Pacamara and its travel toiletry bag

Jackson’s travel setup is organized as a kit-within-a-kit.

The red inner kit is what he grabs if he just needs a minimal battery-powered Kyma setup, e.g., for developing on the train. It includes:

  • a PD battery (good for about 3 hours when used with a MOTU Ultralite, longer with headphones)
  • a pair of tiny Sennheiser IE4 headphones
  • a couple of USB cables, and an Ethernet cable


The outer kit adds:

  • a 4 in / 4 out bus-powered Zoom interface
  • mains power for the Pacamara
  • more USB cables
  • a PD power adaptor cable, so he can run the MOTU Ultralite off the same battery as the Pacamara
  • a clip-on mic
  • the WiFi aerial

If you have any upcoming sound design projects you’d like to discuss, visit Alan’s Speakers on Strings website.

When not solving challenges for game and film sound designers, Alan performs his own music for live electronics.

Alan Jackson’s setup for a recent live improvisation for salad bowl and electronics

Milani launches Cosmic Charges

Matteo Milani’s (Unidentified Sound Object) “COSMIC CHARGES” is a library of tonal blasts and impacts for video games, films, TV series, and more.

Each impact is split into layers, allowing a sound designer to decompose and reconstitute sub-timbres to create unique sounds.

Milani was inspired by innovative sound designers Ben Burtt, Randy Thom, and Gary Rydstrom to continue to push the boundaries of his own work. It was through Ben Burtt’s book, Galactic Phrase Book & Travel Guide, that Milani first discovered Kyma, which he used in this project to add musical layers to each impact — thus imparting a unique sonic identity and intensifying the emotional resonance and dynamics.

In addition to their SFX libraries, Milani’s Unidentified Sound Object offers a range of services including complex sound design, composition, multilingual localization, games and VR development, location recording, and more.

Out of the fire, into the Kyma consultancy

It’s not every day that a consultant/teacher gets a closing screen credit on a Netflix hit series, but check out this screenshot from the end of Witcher E7: Out of the Fire, into the Frying Pan:

Screenshot of the closing credits for Witcher E7 crediting sound designers from Phaze UK and Alan Jackson, Kyma consultancy

Kyma consultant Alan M. Jackson has been working with sound designers Matt Collinge, Rob Prynne and Alyn Sclosa from Phaze UK for a while now, and the students surprised their sensei with an end credit on The Witcher.

Kyma Consultancy – Alan M Jackson

Alan and his small band of sound designers meet regularly to share screens and help each other solve sound design challenges during hands-on sessions that last from 1-2 hours. Not only are these sessions an opportunity to deepen Kyma knowledge; they’re also an excuse to bring the sound design team together and share ideas. Alan views his role as a mentor or moderator, insisting that the sound designers do the hands-on Kyma work themselves.

There’s no syllabus; it’s the sound designers who set the agenda for each class, sometimes aiming to solve a specific problem, but more often, they aim to come up with a general solution to a situation that comes up repeatedly. They’ve taken to calling these solutions “little Kyma machines” that will generate an effect for them, not just for this project but for future projects as well. Over time, they’ve created several reusable custom “machines”, among them: ‘warping performer’, ‘doppler flanger’, ‘density stomp generator’, and ‘whisper wind’.

Rob Prynne is particularly enamored of a custom patch based on the “CrossFilter” module in Kyma (based on an original design by Pete Johnston), sometimes enhanced by one of Cristian Vogel’s NeverEngine formant-shifting patches. In an interview with A Sound Effect, Rob explains, “It’s like a convolution reverb but with random changes to the length and pitch of the impulse response computed in real-time. It gives a really interesting movement to the sound.” According to the interview, it was the tripping sequence in episode 7 where the CrossFilter really came into play.
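Rob’s description suggests a block-wise convolution whose impulse response is continuously re-warped in length and pitch. Purely as an illustration of that idea (this is not Phaze UK’s actual patch or Kyma code; `warped_ir`, `cross_filter`, and all parameter values are hypothetical), a minimal offline sketch in Python:

```python
import numpy as np

rng = np.random.default_rng(0)

def warped_ir(ir, stretch):
    """Resample an impulse response by `stretch`, changing both its
    length and its apparent pitch (like varispeed on a tape of the IR)."""
    n_out = max(2, int(len(ir) * stretch))
    x_old = np.linspace(0.0, 1.0, len(ir))
    x_new = np.linspace(0.0, 1.0, n_out)
    return np.interp(x_new, x_old, ir)

def cross_filter(signal, ir, block=1024, depth=0.3):
    """Block-wise convolution where each block is convolved with a
    randomly re-stretched copy of the impulse response."""
    # Output is long enough for the longest possible warped tail.
    out = np.zeros(len(signal) + len(ir) * 2)
    for start in range(0, len(signal), block):
        seg = signal[start:start + block]
        stretch = 1.0 + rng.uniform(-depth, depth)  # new warp per block
        conv = np.convolve(seg, warped_ir(ir, stretch))
        out[start:start + len(conv)] += conv
    return out[:len(signal) + len(ir)]
```

The per-block randomization is what gives the “movement” Rob describes: each stretch factor shifts the resonances of the filter, so the coloration of the sound drifts continuously rather than staying static the way an ordinary convolution reverb would.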

When asked if there are any unique challenges posed by sound design, Alan observes, “The big things to note about sound design, particularly for TV / Film, is that it’s usually done quickly, there’s lots of it, you’re doing it at the instruction of a director; the director wants ‘something’ but isn’t necessarily nuanced in being able to talk about sound.”

A further challenge is that the sounds must “simultaneously sound ‘right’ for the audience while also being new or exciting. And the things that sound ‘right’, like the grammar of film editing, are based on a historically constructed sense of rightness, not physics.”

Alan summarizes the main advantages of Kyma for sound designers as: sound quality, workflow, originality, and suitability for working in teams. In what way does Kyma support “teams”?

With other sound design software and plugins you can get into all sorts of versioning and compatibility issues. A setup that’s working fine on my computer might not work on a teammate’s. Kyma is a much more reliable environment. A Sound that runs on my Paca[(rana | mara)] is going to work on yours.

Even more important for professional sound design is “originality”:

Kyma sounds different from “all the same VST plugins everyone else is using”. This is an important factor for film / TV sound designers. They need to produce sounds that meet the brief of the director but also sound fresh, exciting, new and different from everyone else. Using Kyma is their secret advantage. They are not using the same Creature Voice plugins everyone else is using; instead, creating a Sound from scratch yields results unique to their studio.

For Witcher fans, the sound designers have provided a short cue list of examples for your listening pleasure:

Kyma Sound | Heard | Episode
Warping performer | every creature | every episode
Warping performer | portal sounds and magic | S3E03
Warping performer | the wild hunt vapour | S3E03
Doppler flanger | spinning thing on the ring of fire | S3E05 / S3E06
Density stomp generator | group dancing | S3E05
Whisper wind | convolved vocal for wild hunt | S3E03


Visit Alan Jackson’s SpeakersOnStrings to request an invitation to the Kyma Kata — a (free) weekly Zoom-based peer-learning study group. Alan also offers individual and group Kyma coaching, teaching and mentoring.

Sounds of transportation past, present, and imagined future

Audiences will experience Simon Hutchinson’s music projected from a linear array of 15 giant speakers positioned underneath the 420 ft Boggy Creek overpass in Rosewood Park, east of Austin, Texas, on November 19th, 2022, from 6:30-9:30 pm.

Hutchinson is one of six composers commissioned to create works inspired by the past, present and (imagined) future sounds of transportation, utilizing the dramatic sonic movement capabilities of a new sound system designed and built by the Rolling Ryot arts collective to create an immersive audio experience.

“It’s been very interesting to think about what ‘multichannel’ means when channels are spread out across 420 ft, and especially fun problem-solving this in Kyma,” Simon explains, “Using Kyma I’ve set up some sounds to translate across the system very quickly, but in a unified motion. At other times, the panning happens very slowly, so slow that someone walking could outpace the sound as they walk across the space.”
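For a sense of scale (our back-of-envelope numbers, not Simon’s): 420 ft is about 128 m, and a typical walking pace is roughly 1.4 m/s, so it takes a listener about a minute and a half to walk the length of the array. Any pan traversal longer than that really is outpaced by a walker. A quick sketch:

```python
# Back-of-envelope timing for a linear pan across a 15-speaker, 420 ft array.
# All figures here are illustrative assumptions, not Simon's actual values.
SPAN_FT = 420
SPAN_M = SPAN_FT * 0.3048           # ~128 m
WALK_SPEED = 1.4                    # typical walking pace, m/s
N_SPEAKERS = 15

walk_time = SPAN_M / WALK_SPEED     # ~91 s to walk the whole array

def dwell_times(traversal_s, n=N_SPEAKERS):
    """Seconds the sound spends 'at' each speaker for an even linear pan."""
    return [traversal_s / n] * n

fast = dwell_times(2.0)    # whole array in 2 s: a unified, rapid sweep
slow = dwell_times(120.0)  # 120 s traversal: slower than a walking listener
```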

In other sections, Hutchinson treated the individual channels as the individual keys of a giant instrument, and the “melody” traverses the broad, 420-ft space, with each speaker assigned only a single note. More details on this process are revealed in a video on Simon’s YouTube channel.
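One way to picture the “giant instrument” idea (a toy sketch with an invented 15-note scale; Simon’s actual pitch-to-speaker assignment isn’t specified):

```python
# Each of the 15 channels "owns" exactly one pitch, so a note's position
# in the phrase is also its physical position along the 420 ft array.
NOTES = ["C4", "D4", "E4", "F4", "G4", "A4", "B4",
         "C5", "D5", "E5", "F5", "G5", "A5", "B5", "C6"]  # one per speaker

def melody_to_channels(melody):
    """Route each melody note to the single channel assigned that pitch."""
    return [NOTES.index(note) for note in melody]

channels = melody_to_channels(["E4", "G4", "C5", "G4", "E4"])
# The phrase rises toward the middle of the array and falls back,
# physically travelling along the overpass as it unfolds.
```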

The sound materials for the piece are from field recordings and analog synthesis samples processed through Kyma. Simon “unfolded” ambisonic field recordings across the 15-channel space, worked with mid-side transformations of stereo recordings, and leveraged the multichannel panning of the Kyma Timeline.

The Ghost Line X event is free and open to the public. All ages are welcome, and the audience is encouraged to bring lawn chairs and blankets and to bike or ride-share to Rosewood park.

Emerging from the Shatter Zone

As part of an on-going series of interviews with artists adapting to and emerging from the disruptions of 2020-22, we had a chance to speak with independent sound artist and photographer Will Klingenmeier to ask him about how he continued his creative work in spite of (or because of?) the restrictions last year, what he is up to now, and some of his hopes and inspirations for the future.

Will is a self-described omnivore of sound and noise, living as a borderline hermit and wanderer in order to focus on developing his unique artistic voice.


Thanks for taking the time to speak with us, Will!

Thank you so much for having me. I’m honored and happy to have the opportunity to talk about my work.

The colors and arrangement of your studio photo seem to capture just how central your studio is in your life (you’ve said that you spend more time there than in any other place, and I think a lot of us can identify with that). Can you give us a brief description of how you’ve set up your studio workflow and say a bit about how you feel when you’re in the studio and in the flow?

My studio is really sacred ground for me. It’s a special place that I have spent thousands of hours in, as well as thousands of hours making. All of the acoustic treatment, gear racks, speaker stands and shelves I made with my dad over the course of many years. When I’m in the flow it’s like I’m on a tiny planet in my own little world, and it’s immensely satisfying. It’s a source of constant joy and a never-ending project.

At the moment, a lot of what I do is based in Kyma so it is the heart of my studio. I try to perform everything in real-time without editing afterwards; even if it takes longer to get there, it’s more satisfying. I have a few different set-ups including laptops and an old Mac Pro. I have mostly the same content on multiple computers not only for back-ups, but also to make it easier for traveling with a mobile rig. When I can’t avoid using a DAW, I have an old version of Pro Tools on my Mac Pro. Outside of Kyma, I have a handful of microphones, instruments, synthesizers, outboard gear and tape machines that I use. There’s a ton of cabling to the closet turned machine room which looks confusing, but my set-up is actually quite straightforward. Also, I leave all the knick-knacks out so my studio looks like a bricolage, but that way I know where everything is when I need it, and it’s only an arm’s reach away.

During the lockdowns and travel restrictions of spring 2020, you were caught about 7000 miles away from your studio. How did that come about?

I don’t go looking for tough situations, but I always seem to find them. On February 1st, 2020, I left Colorado for Armenia where I was scheduled to volunteer teaching the next generation of noisemakers; two-week intensives at four locations around the country, including Artsakh, for a total of two months. I booked a one-way flight so I would be able to travel around Armenia and the region, and I had another flight afterward to visit a friend in India, but needless to say that’s not what happened.

When I left the States COVID-19 was around but every day things seemed to stay about the same. For the first month in Armenia things were normal; there wasn’t even a documented case of COVID until March. I made it through two schools and I was at the third when I got a call that the World Health Organization had declared the epidemic a pandemic. That’s when things changed. A driver came for me and took me back to Yerevan where I was told things were going to change and ultimately go into lockdown. The school said I could stay in their apartment or do whatever I felt I needed to, including going home. I didn’t really know what to do; who did? Throughout my travels I’ve learned not to be reactionary and instead to concentrate on options and work toward a solution. As such, I figured if the virus was going to spread it would be mostly in Yerevan, the biggest city, so I left Yerevan on my own and went to the Little Switzerland of Armenia, Dilijan. It’s an incredibly beautiful place in the mountains and I thought “I’ll lay low here for a few days and plan my next move.”

After five days I decided to go back to Yerevan, and found out that I could have a three-bedroom apartment all to myself indefinitely. I thought, “that sounds a lot better than scrambling and traveling 7,000 miles across the world right now”, so I talked with my family and I decided to stay put. I was in that apartment by myself for six months and it was an incredible opportunity to turn inwards. As it turned out, I stayed until one day before my passport stamp expired, so I took the experience as far as I possibly could. In hindsight, I don’t think it was ever impossible for me to leave, but I never actually pursued it until I absolutely had to. And I wouldn’t change any of it.

Can you describe how your sound work was modified by your “quarantine” in Armenia? How is it different from the way you normally work in Colorado? Has it changed the way you work now, even after you’ve returned home?

It turns out that an ironing board makes an excellent, height-adjustable desk for a sound artist in exile.

Basically everything I’m doing now has come out of exceedingly difficult situations that I persevered through. Several things happened that I’m aware of and probably even more that I’m unaware of and still digesting. I definitely breached a new threshold of understanding how to use my gear. And I absolutely learned to make the most of what’s on hand—to repurpose things, up-cycle and reimagine. I remember taking inventory of everything I had, laying it all out in front of me and considering what it was capable of doing, the obvious things to start like the various connection ports and so on, then I moved to the more subtle. By doing this I was able to accomplish several things that I didn’t think I could previously based solely on my short-sighted view.

This same awareness and curiosity has stayed with me and I’m really grateful for it. Essentially, it was realizing there’s a lot to be gained in working through discomfort. Moreover, I’ve heard that creativity can really blossom when you’re alone, so maybe that is some of what happened. Not that I held back much before this time, but now I really swing for the fences—I love surreal, far-out, subjective, ambiguous sounds. Creatively speaking, I’m most interested in and focused on doing something that’s meaningful to me, and then I might share it.

Once I returned to Colorado a different level of comfort and convenience came back into my life. Some things were left in Armenia and some things were gained in Colorado. I’ve made it a point to make the most of wherever I am with whatever I have. That’s something I’ve lived by, and now it is a part of me. It is nice to be back in my studio though. I’ve spent more time in this room than any other so I know it intimately. To be sure, I’ve always enjoyed my space and I thrive in solitude so having the situation I did in Armenia really brought out the best in me. Back in the states there are definitely more distractions, so since returning I’ve become even more of a night owl to help mitigate them.

Do you have Armenian roots? Can you explain for your readers what’s going on there right now?

I don’t have Armenian roots, at least none that I know of. That said, I have been told by my Armenian friends that I am Armenian by choice, and I will agree with that. They are a wonderful people and it breaks my heart to see what is happening there now. A lot is going on and I don’t claim to understand it all, but there is definitely a humanitarian crisis—thousands dead, thousands of refugees and displaced peoples, and many severely injured people from a 44-day war launched by Azerbaijan. As a result, there are both internal and external problems including on-going hate from the two enemy nations Armenia is sandwiched between. It is a tough time and when I ask my Armenian friends about it they don’t even know how to put it into words. I can see the sadness and worry in their eyes though.

Throughout the pandemic, you’ve maintained connections and collaborations with multiple artists. Please talk about some of those connections, how you established them, how you maintained them, how you continued collaborating (both in terms of technological and human connections).

The first of these collaborations began when I received a ping from one of my good friends and fellow Kyma user Dr. Simon Hutchinson. He said he would like to talk about YouTube. I’d recently started ramping-up my channel (under the name Spectral Evolver) and he was looking to do the same. We discussed the potential for collaboration and cross-promoting our channels. At the time, I was making walking videos around Armenia and we eventually decided to create a glitch art series which used these videos as source material for Simon’s datamoshing. Additionally, we decided to encode the audio for binaural and to use Kyma as a part of the process. We started with short videos up to a minute long with varying content to run some tests, and then started doing longer videos around ten minutes once we figured out the process. We used Google Drive to share the files. It was immensely satisfying and a totally new avenue for me; he is incredibly creative and talented and I was thrilled to be taken along for the datamoshed ride.

Another collaboration during this period was with my long-time friend from college Tim Dickson Jr. He is one of my favorite pianists. Everything he plays is very thoughtful and he has developed this wonderfully minimalist approach. He was going through tough times and wanted to share as many of his creations as possible, so I asked him if he wouldn’t mind sending me some of his compositions for me to reimagine. He uploaded about 15 pieces and I used only them and Kyma in creating our album ‘abstractions from underground’.

The Unpronounceables as a trio was yet another collaboration that started during lockdown. Somehow I finagled my way into the group comprised of İlker Işıkyakar and Robert Efroymson, both friends of mine from the Kyma community. At that point, there was still hope for KISS 2020 in Illinois and for us performing as a trio, therefore we needed to start practicing. The only way that was possible at the time was online. As I remember we had a few chats to check in and say ‘hi’, then we jumped in and started feeling our way forward. The only guideline we ever had was to split the sonic spectrum in thirds: one of us would take the lows, one of us the mids, one of us the highs—that was İlker’s idea I think. However, that quickly disappeared and we all just did what we could. I remember using a crinkled piece of paper as an instrument and opening my window to let the sound of birds and children playing in the courtyard come into the jam. I didn’t have my ganjo yet so I used whatever I could to be expressive; it was really beneficial to work through that. To jam, we used a free application called ‘Jamulus’. Robert made a dedicated server and the three of us would join in once a week and play for 30 minutes or so. Surprisingly the latency wasn’t unbearable. We were doing mostly atonal, nonrhythmic sounds though and I suspect if we needed tight sync it would have been painful.

Finally, despite the lockdown, I was able to continue volunteering with the school once they made the transition to online. While brainstorming with the school on how it would be possible to offer workshops for kids in lockdown at home with seemingly limited resources, I realized I could immediately share what I had just learned: to use whatever you have where you are! The results were spectacular: the kids created a wonderful variety of sound collages, recorded and edited mostly on cell phones. I assigned daily exercises to record objects of a certain material, for example, metal, glass, wood etc. I asked them to consider the way the object was brought into vibration by another object. Don’t just bang on it! We also discussed the opportunity to use sound to tell a story even if it’s a highly bizarre, surreal, subjective story. In the end, I learned far more from them than they did from me. I was asking them how they made sounds and I was taking notes. For these classes, we met through Google Hangouts and shared everything on Google Drive.

Your recent release, “the front lines of the war,” is the result of another collaboration, this time with Pacific rim poet/multi-disciplinary artist Scott Ezell. Visually, tactilely, sonically and verbally, you and Scott employ multiple senses to express the disintegration and despair of the “shatter zone”. What exactly is a shatter zone?

Shatter Zone has at least a double meaning. It started off in geology, describing randomly cracked and fissured rock, and then came to describe borderlands, usually with displaced peoples; so for me, it carries both of those resonances.

The sounds you created for this project are also intentionally distressed and distorted through techniques like bit-crushing, digital encoding artifacts and vocal modulations. Particularly striking is the section that begins “In the shatter zone too long…” whose cracking and dripping convey a miasma of putrescence and disease carried along on a dry wind. Can you describe how you did the vocal processing in this section (and any other techniques you employed in this section that you’d like to share)?

True, there’s a lot of different audio processes happening to make the album sound the way it does including all of the things you’ve mentioned. The piece you’re talking about is called “Shatter Zones” and it came to sound the way it does through Kyma granulation. Scott’s reading of the poem is one take of about 8 minutes and I used a SampleCloud to read through the recording in a few seconds longer than that and used a short grain duration of less than 1 second. What results is a very slightly slower reading, but the pitch of his voice stays the same. It’s just slow enough and granulated enough that we can feel uneasy but not so much that the words aren’t intelligible. It’s super important that we understand what he’s saying so I had to find the line between making an evocative sound and leaving his voice intelligible. I thought there was an obvious correlation to Shatter Zones and granulation which is why I decided to use that technique. The sound around his voice is granular synthesis as well but with long grains of 5 or more seconds. I had this field recording I did on my cell phone of a dwindling campfire, down to the embers, and it always sounded really interesting to me. I brought it into Kyma, replicated it, added some frequency and pan jitter so each iteration wouldn’t sound the same, and used the long grain technique.
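Kyma’s SampleCloud performs this granulation live. Purely as an illustration of the underlying idea Will describes — a slight, pitch-preserving slowdown built from short overlapping grains — here is a minimal offline sketch in Python (parameter values are hypothetical, not Will’s actual settings):

```python
import numpy as np

def granular_stretch(x, stretch=1.02, grain=2048, hop_ratio=0.5):
    """Pitch-preserving time stretch: play short grains back at their
    original rate, but advance the *read* position slightly more slowly
    than the *write* position, so the output lasts `stretch` times longer
    while each grain (and hence the pitch) is unchanged."""
    hop_out = int(grain * hop_ratio)      # spacing of grains in the output
    hop_in = int(hop_out / stretch)       # read head advances a bit slower
    out = np.zeros(int(len(x) * stretch) + grain)
    win = np.hanning(grain)               # crossfade window between grains
    read, write = 0, 0
    while read + grain <= len(x):
        out[write:write + grain] += x[read:read + grain] * win
        read += hop_in
        write += hop_out
    return out[:int(len(x) * stretch)]
```

With a stretch factor barely above 1.0 and sub-second grains, the result is exactly the effect Will describes: the reading is almost imperceptibly slower, the pitch of the voice is untouched, and the grain boundaries add just enough unease without sacrificing intelligibility.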

What other techniques did you use to “degrade” and “distort” the sound? Did you intentionally choose to distribute the sound component on cassette tape for that reason?

We arrived at putting the sound on a cassette for several reasons. I think a physical element is a really meaningful necessity for music and with the chapbook there was already going to be a physical element so we knew we wanted physical music. These days there’s basically three options: CD, vinyl, cassette — all of which will have a noticeably different sound. We thought about this from the beginning instead of it being an afterthought. Initially, we were thinking vinyl but because of the cost and the length of the sides we realized it probably wasn’t the best solution, if it was even possible. Between a CD and a cassette, this content lends itself to the cassette format more than a CD, not that CD would have been a bad choice. We welcome what the cassette tape might do to the sound—a further opportunity for the medium to influence the art. Besides, over the last year or so I’ve been buying music on cassette—they are really coming back—and they don’t sound nearly as undesirably bad as you might think. In fact, some music sounds best on cassette. I think it’s really cool if an artist considers all the different formats and decides which one is best for the art—like a painter choosing what they will paint on, it matters, it changes the look. When listening to this project on cassette there is a noticeably different experience than listening digitally, and with the exception of one track I think it is all more desirable on the cassette version. As for other techniques for degradation and distortion, I used a lot of cell phone field recordings, there’s a few databent audio files, and I recorded out of Kyma into another recorder and during that process a bunch of noise came along. I also have a 1960’s style tone-bender pedal I made a long time ago that’s super special to me, and all of the guitar tones went through that.
Then, of course, there’s the stuff inside Kyma which you’ve already hinted at: Bitwise operators, granulation, cross-synthesis and frequency domain haze.

Do you think it’s ironic to use Kyma, a system designed for generating high-quality sound, to intentionally degrade and distort the audio?

Kyma is definitely known for high quality, meticulous sound, but that’s not actually why I got into it, or why I continue to use it. Over the years, I’ve developed and found a way of working that’s meaningful to me and it’s centered around Kyma: real-time creation, no editing, and performing the sound. Kyma thrives in that situation and is extremely stable in both studio and live use, so it’s where I feel really creative and capable of expressing the sound in my head.

One way I think about Kyma is like a musical instrument, say a guitar. What does a guitar sound like? What could it sound like? When does it stop being a guitar sound? Seems to me there’s no end to that and definitely no right or wrong. Kyma is the same way for me. In fact, it never occurred to me that I was making lo-fi sounds in a hi-fi system. I think there’s a sadness when data compression is used for convenience and to miniaturize art, but when it’s used from the onset for a unique sound or look, like we’ve done here, that’s different.

Is this piece about Myanmar? Or is it about “every war” in any place?

Both. The spoken words are specifically based on Scott’s first-person experience of a Myanmar Army offensive against the Shan State Army-North and ethnic minority civilians in Shan State, Myanmar in 2015. Scott was smuggled into Shan State under a tarp in the back of an SUV. The Shan State Suite (side one of the cassette) tells this story. At the same time, the project is expressing a bigger realization as it explores the ways that global systems implicate us all in vectors of destruction and conflict in which “everyone is on the front lines of the war.” (This last sentence is Scott’s words, he said it perfectly for our liner notes, so I’ve just repeated it here.)

Can you offer your listeners any suggestions for remediating action?

It is incredibly difficult not to be lost in despair and discouraged by the things that we, especially Scott, have seen and experienced, but I think the work of art itself is actually meant to be a positive thing. It’s once we fully ignore and disregard things that hope is lost, or rather that we are taking an impossible chance hoping that it “works out for the best.”

I believe we have to make an effort and that effort can come in many forms. This project cracks open the door and offers a start to a conversation, a very difficult, long conversation, but a necessary one, I think. Scott and I have had success bringing this kind of content into different university environments, including an ethics of engineering course at the University of Virginia. We shared a piece of art we made and shared our personal experiences and relationships to contested landscapes and marginalized peoples. The response we got from the professors and students suggested there would be on-going consideration for the topic.

Finally, there are lots of good people in the world and lots of human rights organizations seeking to stop violence both before and after it has started, so that’s something encouraging. There is hope and we can start to do better immediately even if only on a very small, personal level. In fact, that’s actually where it all needs to start, I think.

In an ideal world, what would you love to work on for your next project?

An installation of some kind. I really want to do something on a bigger, more immersive, more tactile scale using Kyma and sound as a medium of expression along with some of the visual forms I’ve been getting into. In the meantime I’ll continue to do what I’m doing!

What do you see as the future direction(s) for digital media art and artists? For example, you’ve gone all-in on developing your YouTube channel for both educational and artistic purposes, video, and live streaming. How does YouTube (and more) figure into your own future plans?

I think we are going to continue to see extremes and new forms of art. Artists, as a whole, always challenge and question, which leads to new horizons. Digital media has certainly created many incredible and wonderful on-going opportunities for artists. I see VR/AR getting more and more capable as well as popular. Also, with the advent of everything becoming ‘smart’, new needs have arisen for artists. Like Kyma being used in the design of the Jaguar I-PACE, that kind of stuff. As for my YouTube channel, I have every intention of continuing it. I don’t know exactly what all the content is going to be, but that’s why I put ‘Evolver’ in the name.


As a coda, we strongly recommend that everyone check out the detailed description of ‘the front lines of the war’ on Will Klingenmeier’s website, where you can also place an order for a copy of this beautifully produced, limited-edition chapbook and cassette for yourself or as a gift for a friend.

It’s not an easy work to categorize. It’s an album, it is a video, it’s a chapbook of poetry, yet it’s so much more than that: it is a meticulously crafted artifact which, in its every detail, conveys the degradation, despoilment and degeneration of a “shatter zone”. It arrives at your mailbox in a muddied, distressed envelope with multiple mismatched stamps and a torn, grease-marked address label, like a precious letter somehow secreted out of a war zone, typed on torn, blood- and flower-stained stationery and reeking of unrelenting grief, wretchedness, and inescapable loss. It deals with many topics that are not easy or comfortable to confront, so be sure to prepare yourself before you start listening.