Listen closely to the soundtrack of Lady of the Dunes, the new true crime series airing on the Oxygen Network in early 2025. The series looks at a recently solved 50-year-old cold case.
The director wanted the music to have a haunting quality to tie into the mystery and brutality of the case. That’s when they called on composer/editor John Balcom.
Balcom started out by recording plaintive, mysterious sounds, like wine glasses and a waterphone. He then took sections from those recordings and treated them using Kyma’s Multigrid. There was a performative aspect to all of this: he would manipulate the sounds going into Kyma while also adjusting sounds in real time in the Multigrid, resulting in some disturbingly haunting textures. Balcom then incorporated those sounds to build out full compositions.
Here are some audio examples of the textures he created using Kyma, along with a couple of tracks that incorporate those textures (Warning: the following clips contain haunting sounds and may be disturbing for some audiences. Listener discretion is advised! ⚠️ ):
Here’s a screen shot of one of Balcom’s Multigrids:
Balcom concludes by saying: “The music would not have been the same without Kyma!”
It’s not every day that a consultant/teacher gets a closing screen credit on a Netflix hit series, but check out this screenshot from the end of Witcher E7: Out of the Fire, into the Frying Pan:
Alan and his small band of sound designers meet regularly to share screens and help each other solve sound design challenges during hands-on sessions that last from 1-2 hours. Not only are these sessions an opportunity to deepen Kyma knowledge; they’re also an excuse to bring the sound design team together and share ideas. Alan views his role as a mentor or moderator, insisting that the sound designers do the hands-on Kyma work themselves.
There’s no syllabus; the sound designers set the agenda for each class, sometimes aiming to solve a specific problem, but more often working toward a general solution to a situation that comes up repeatedly. They’ve taken to calling these solutions “little Kyma machines” that will generate an effect for them, not just for this project but for future projects as well. Over time, they’ve created several reusable custom “machines”, among them: ‘warping performer’, ‘doppler flanger’, ‘density stomp generator’, and ‘whisper wind’.
Rob Prynne is particularly enamored of a custom patch based on the “CrossFilter” module in Kyma (based on an original design by Pete Johnston), sometimes enhanced by one of Cristian Vogel’s NeverEngine formant-shifting patches. In an interview with A Sound Effect, Rob explains, “It’s like a convolution reverb but with random changes to the length and pitch of the impulse response computed in real-time. It gives a really interesting movement to the sound.” According to the interview, it was the tripping sequence in episode 7 where the CrossFilter really came into play.
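Rob’s description suggests a simple mental model: convolve the incoming audio with an impulse response whose length and pitch are re-randomized on every processing block. Kyma’s actual CrossFilter is far more sophisticated, so treat the following as only a rough Python sketch of that idea; all the function names here are illustrative, not anything from Kyma or the original patch:

```python
import random

def convolve(signal, ir):
    """Direct-form convolution (O(n*m); fine for a sketch)."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def vary_ir(ir, length_frac, pitch_ratio):
    """Truncate the impulse response, then resample it with linear
    interpolation to shift its pitch -- a crude stand-in for the
    real-time IR variation Rob describes."""
    n = max(1, int(len(ir) * length_frac))
    trimmed = ir[:n]
    out_len = max(1, int(n / pitch_ratio))
    out = []
    for k in range(out_len):
        pos = k * pitch_ratio
        i = int(pos)
        frac = pos - i
        a = trimmed[i]
        b = trimmed[i + 1] if i + 1 < n else 0.0
        out.append(a + (b - a) * frac)
    return out

def cross_filter(blocks, ir, rng=None):
    """Process successive audio blocks, re-randomizing the IR's
    length and pitch for every block."""
    rng = rng or random.Random(0)
    for block in blocks:
        varied = vary_ir(ir, rng.uniform(0.3, 1.0), rng.uniform(0.5, 2.0))
        yield convolve(block, varied)
```

Because a fresh impulse response is drawn per block, no two blocks are colored the same way, which is one plausible reading of the “really interesting movement” Rob mentions.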
When asked if there are any unique challenges posed by sound design, Alan observes, “The big things to note about sound design, particularly for TV / Film, is that it’s usually done quickly, there’s lots of it, you’re doing it at the instruction of a director; the director wants ‘something’ but isn’t necessarily nuanced in being able to talk about sound.”
A further challenge is that the sounds must “simultaneously sound ‘right’ for the audience while also being new or exciting. And the things that sound ‘right’, like the grammar of film editing, are based on a historically constructed sense of rightness, not physics.”
Alan summarizes the main advantages of Kyma for sound designers as: sound quality, workflow, originality, and suitability for working in teams. In what way does Kyma support “teams”?
With other sound design software and plugins you can get into all sorts of versioning and compatibility issues. A setup that’s working fine on my computer might not work on a teammate’s. Kyma is a much more reliable environment. A Sound that runs on my Paca[(rana | mara)] is going to work on yours.
Even more important for professional sound design is “originality”:
Kyma sounds different from “all the same VST plugins everyone else is using”. This is an important factor for film/TV sound designers: they need to produce sounds that meet the director’s brief but also sound fresh, exciting, and different from everyone else’s. Using Kyma is their secret advantage. Rather than reaching for the same Creature Voice plugins as everyone else, they create a Sound from scratch, which yields results unique to their studio.
For Witcher fans, the sound designers have provided a short cue list of examples for your listening pleasure:
| Kyma Sound | Heard | Episode |
| --- | --- | --- |
| Warping performer | on every creature | every episode |
| Warping performer | on portal sounds and magic | S3E03 |
| Warping performer | on the wild hunt vapour | S3E03 |
| Doppler flanger spinning thing | on the ring of fire | S3E05 / S3E06 |
| Density stomp generator | for group dancing | S3E05 |
| Whisper wind convolved vocal | for wild hunt | S3E03 |
Visit Alan Jackson’s SpeakersOnStrings to request an invitation to the Kyma Kata — a (free) weekly Zoom-based peer-learning study group. Alan also offers individual and group Kyma coaching, teaching and mentoring.
Nate and his team from the award-winning “The Show About Science” podcast had a question:
“Can sound help us understand the complex patterns in our universe?”
This question led Nate to Symbolic Sound in Champaign, Illinois on a journey where sound, music, and data intertwine in captivating and thought-provoking ways…
Listen to the podcast or read the transcript at the link below:
Networked collaboration, telematic performances, and online learning have been growing in popularity for several years, but the lockdowns and social-distancing guidelines precipitated by the global COVID-19 pandemic have accelerated the adoption of these modes of interaction. This is a brief (and evolving) report on some of the solutions your fellow Kyma-nauts have found for practicing creative collaborations, live performances, private tutoring, consulting, and teaching large online courses. Thanks for sharing your input, feedback and alternative solutions for distance-collaboration with the Kyma community!
Note: For example configurations showing how to get audio onto your computer and out onto a network, read this first.
Kyma Kata
One of the earliest ongoing examples is Alan Jackson’s Kyma Kata, a regular meeting of peers who practice Kyma programming together. It had already been operating online in Google Hangouts for over a year before the crisis and recently celebrated its 100th session! (Did they know something the rest of the world didn’t?) The Kyma Kata currently meets twice a week, on Mondays and Tuesdays. Each session begins with “Prime Minister’s Question Time” (an open question-and-answer session on how to do specific tasks in Kyma), followed by an exercise that each person works on independently for 30 minutes, after which they share and discuss their results. Ostensibly a session lasts two hours, but when people really get interested in a problem, some of them stick with it for much longer (though no honor is lost if someone has to leave after two hours).
Kata participants focus on how to do something together, so screen-sharing is important and audio quality has been less important: they often play over the air, using the computer’s built-in microphone to send the audio.
For higher-quality audio, Alan uses a small USB mixer plugged into the Mac as the Hangouts audio source. With the mixer, he can combine the Paca’s output with a microphone, which gives much better quality than going over the air through the laptop’s mic, although it’s still limited by Hangouts’ audio quality, delay, and bandwidth.
What is the Kata and How do I sign up?
The term kata, which comes to us by way of karate, has been adopted by software engineers as a way to regularly practice their craft together by picking a problem and finding several different solutions. The point of a kata is not so much arriving at a correct answer as practicing the art of programming.
In the Kyma Kata, a group of aspiring Kymanistas come together regularly via teleconferencing, and the facilitator (Alan) introduces an exercise that everyone works on independently for about half an hour, after which people take turns talking about their solutions. All levels of Kyma ability are welcome, so why not join the fun?
Improvisation with the Unpronounceables
The Unpronounceables are Robert Efroymson in Santa Fe, New Mexico, Ilker Isikyakar in Albuquerque, New Mexico, and Will Klingenmeier (ordinarily based in Colorado, but due to travel restrictions, on extended lockdown in Yerevan, Armenia). To prepare for a live improvisation planned for KISS 2020, Robert proposed setting up some remote sessions using Jamulus.
Collaboration platform
Using the Jamulus software, musicians can engage in real-time improvisation sessions over the Internet. A single server running the Jamulus server software collects audio data from each Jamulus client, mixes the audio data and sends the mix back to each client. Initially, Robert set up a private server for the group, but they now use one of the public Jamulus servers as an alternative. One of the amusing side-effects of using the public server is that they are occasionally joined by uninvited random guests who start jamming with them.
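At its core, the server’s job is small: sum whatever each client just sent and send that sum back to everyone. The sketch below models only that idea; it is not Jamulus’s actual implementation, which also handles per-client gains, jitter buffers, and audio codecs (the function and client names are illustrative):

```python
def mix_and_return(client_frames):
    """One server tick: sum the latest audio frame from every
    connected client into a single mix, then hand that same mix
    back to each of them (everyone hears everyone, themselves
    included)."""
    if not client_frames:
        return {}
    n = len(next(iter(client_frames.values())))
    mix = [0.0] * n
    for frame in client_frames.values():
        for i, sample in enumerate(frame):
            mix[i] += sample
    # Every client receives a copy of the same full mix.
    return {client: list(mix) for client in client_frames}
```

Since the mix is computed at one central point, every player hears the same time-aligned result, at the cost of one round trip to the server; this is also why an uninvited guest on a public server simply shows up in everyone’s mix.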
During a session, the Unpronounceables use a Slack channel to communicate with each other by text and Jamulus to time-align and mix the three audio sources and for sending the mix to each of the three locations.
Audio from Kyma
Each Unpronounceable uses a second interface to get audio from Kyma to the host computer. Robert uses a Behringer to come out of Kyma, and an IO/2 to get to his Mac. Ilker sends his MOTU Track 16 audio outputs to a Behringer; then selects the Behringer as an I/O in the Jamulus preference tab. Will uses a ZOOM H4n as his Kyma interface and sends the audio to an M-Audio Fast Track Pro which acts as the interface for Jamulus.
Ecosystemic audio in virtual rooms
Scott Miller and Pat O’Keefe’s HDPHN project has always been an exploration of what it means to be alone together — it’s a live public concert where each member of the audience wears headphones, rather than listening through speakers. When stay-at-home orders made it impossible for Scott in Otsego to meet in person with Pat in St. Paul, Minnesota, they started looking into how to move the live HDPHN performance onto the Internet.
When Earth Day Art Model 2020 shifted from a live to an online festival, Scott and Pat used this as an opportunity to dive in and perform HDPHN along with one of their older pieces Zeitgeist live through the Internet.
Audio from Kyma
Scott describes the audio routing for HDPHN as follows:
Pat’s mic comes over Zoom and out of my desktop headphone audio. It also goes into Kyma input 1 on my Traveller. With Zoom, I can’t get/send stereo from a live source. With two people (did this Friday) I bring the second person in on a separate Skype/Zoom/Facetime session on another device, and into Kyma input 2. With 2 inputs, I then mathematically cross-process them in a virtual room.
I am sending Kyma’s processed/mixed output (Main 1-2) back into my desktop via Lynx E22 audio card, going into DSP-Quattro for compression and EQ, then to iShowU virtual audio interface 1) —> to Zoom for Pat’s monitoring, and 2) —>OBS and then to YouTube synced with Pat’s Zoom video. YouTube latency very bad and it wrecked chamber music with duo, but was fun for free improv with a different duo.
Live coding at a Virtual Club
On Monday, 11 May 2020, beginning at 20:30 CET, Lucretio will be live-coding in Kyma using a new Tool of his own design at The Circle UXR Zone. The Circle is a virtual club hosted on UXR.zone, a decentralized social VR communication platform offering secure communication, free from user-tracking and ads. A spinoff project of the #30daysinVR transhumanist performance by Enea Le Fons, a UXR.zone allows participants to share virtual rooms and communicate via avatars across a range of devices: from VR headsets (the main platform) to desktop and mobile phone browsers.
The avatars of people attending the UXR events are either anonymous (robots) or follow a strict dress code based on the CVdazzle research to stress the importance of cyber camouflage via aesthetics against dystopian surveillance measures happening in the real world. Click to enter the club
The belly of the BEAST
Simon Smith (who is actually quite a slender chap at the BEAST of Birmingham) has recently been tasked with researching online collaboration platforms for the BEAST, so we asked him for some tips from the front lines. He just completed a Sound and Music Workshop with Ximena Alarcon (earlier he helped Alarcon on a telematic performance using the Jacktrip software from Stanford).
In the workshop, she also mentioned:
Artsmesh is a network music and performance management tool. Content creators run the Artsmesh client, which streams live media point-to-point; audiences run a light Artsmesh client to watch the shows.
SoundJack is a realtime communication system with adjustable quality and latency parameters. Depending on the physical distance, network capacities, network conditions and routing, some degree of musical interaction is possible.
How have you been collaborating, teaching, consulting, creating during the lockdown? We’re interested in hearing your stories, solutions and experiences.
Have you used YouTube or Vimeo for live streaming with Kyma?
What’s your preferred video conferencing software for sending computer audio (Zoom, BigBlueButton, Meet)?
Have you been using remote desktop software (like Chrome Remote Desktop, Jump) to access your studio computer from home?
We welcome hearing about alternate solutions for this ongoing report.
In Steven Johnson’s upcoming book, Wonderland: How Play Made the Modern World, he includes a chapter on the connection between musical instrument design and technological innovation. In this episode of his Wonderland podcast, he asks how and why it is that some experimental sounds find their way into the musical mainstream. With special guests Brian Eno, Alex Ross, Caroline Shaw, Carla Scaletti, and Antenes.
Composer Roland Kuit was recently interviewed on the prime time news program SBS 6 Hart van Nederland to discuss his Kyma sound explorations that will be launched into space on September 8, 2016 on NASA’s OSIRIS-REx mission to the near-Earth asteroid Bennu.
While his music is being launched into space on September 8, 2016, Kuit will be at the Kyma International Sound Symposium in Leicester, UK, presenting his music and ideas along with filmmaker Karin Schomaker, so you’ll have an opportunity to meet and talk with him at KISS2016.
Extreme sound design, radical electronic music, and the impending hardware revolution — Darwin Grosse recently sat down with Symbolic Sound’s Carla Scaletti, and the resulting conversation took some unexpected turns. Listen to the full podcast on Darwin Grosse’s Art + Music + Technology podcast.
Composer/synthesis-researcher Roland Kuit will be demonstrating his work with Kyma and the Buchla 200 interacting with each other in EMS Studio 4 in a live stereo broadcast from EMS in Stockholm for the Dutch Concertzender Radio on 25 February 2015…
At the heart of these compositions are the Kyma algorithms that Kuit uses to frequency-modulate the Buchla Complex Waveform Generator Model 259, which in turn serves as a trigger function for the Sequential Voltage Source Model 243, exploring the intriguing area that lies between note triggers and wavetables. This sequential control voltage drives a second Complex Waveform Generator Model 259, whose audio is used as the carrier for the 285e Frequency Shifter / Balanced Modulator. The 266e Source of Uncertainty modulates a third Complex Waveform Generator Model 259, which supplies the modulation signal for the Balanced Modulator and is fed through the 296e Spectral Processor for filtering. Finally, the loop is closed by routing these algorithmic audio ‘sentences’ back to Kyma for ring modulation and quadraphonic placement.
Kuit is a frequent guest on the Concertzender where he’s often called in as a modular synthesis expert to explain various synthesis algorithms or to discuss music by some of the pioneers of electronic music.
EMS#1 will stream live on 25 February after which it will be available as a podcast from the Concertzender archive.
Jeffrey Agrell, educator/performer/composer and author of Improvisation Games for Classical Musicians, writes in his blog about a new experience he had recently: he performed with Mike Wittgraf who was processing his signal through Kyma and controlling the processing using a Wiimote + Nunchuck game controller.  Here’s an excerpt:
Composer John Balcom recently completed the score for a new documentary utilizing Kyma as his synthesis tool kit. BIG SHOT, part of ESPN’s award-winning series 30 FOR 30, tells the story of John Spano’s notorious purchase of the New York Islanders hockey team, which, four months after it happened, was exposed as one of the biggest scams in sports history. Directed by Kevin Connolly (E from ENTOURAGE), the film offers the first-ever interview with Spano. It’s a pretty incredible story — Newsday called it “a must-watch for anyone with an interest in the power of delusion — both of the self and of others.” The film will premiere Oct 22nd at 8pm EST on ESPN, and will eventually be available on demand as well as on Netflix.
Far more than a sports documentary, the film is, at its core, the story of how a con man pulled off an incredible scam, and much of Balcom’s music speaks to this part of the film. The main instruments used were harp, piano, percussion, and synth, with Kyma supplying most of the synth parts.
When asked why he uses Kyma, Balcom responds, “I find the sound quality second to none. It has become an invaluable tool for me and I find myself using it more and more in my projects.”