The latest Kyma 7 software update introduces new Sounds and Capytalk expressions that enhance the versatility of file-based, time-indexed sound design, delivers 130x faster app switching, and more!
Kyma 7.43f6, released 3 October 2024, delivers a 130x speed-up in inter-application switching on macOS and Windows computers — a dramatic performance boost to smooth out your workflow and enhance your productivity.
In addition to the speed-up, this update introduces two new Capytalk expressions for ramp functions with loop points, providing greater flexibility and control over live Sound parameters.
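The Capytalk expressions themselves are Kyma-specific, but the underlying idea of a ramp with loop points can be sketched in Python. This is a generic illustration only; the function name and loop-point parameters below are invented for the sketch and are not Kyma's actual API:

```python
def looping_ramp(t, dur=1.0, loop_start=0.25, loop_end=0.75):
    """Linear ramp from 0 toward 1 over `dur` seconds.

    Once the ramp reaches `loop_end`, it wraps back to `loop_start`
    and keeps cycling through the loop region.
    (Hypothetical parameters for illustration, not Kyma's Capytalk.)
    """
    x = t / dur
    if x < loop_end:
        return min(x, 1.0)      # still in the initial one-shot portion
    span = loop_end - loop_start
    return loop_start + (x - loop_start) % span   # cycle inside the loop
```

Before the loop point the ramp behaves like an ordinary linear ramp; after it, the value cycles between the two loop points for as long as the control runs.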
Two new Sounds enhance the versatility of file-based, time-indexed sound design (including samples, PSI analyses, spectral analyses, and SampleClouds): TimeIndexForSamples, which has a Frequency (rather than Rate) field to control the playback speed of a sample selected from a list of sample file names, and TimeIndexForFileNames, which can select from among multiple FileNames.
Check the Prototypes for examples using these new Sounds:
SampleWithTimeIndex !Frequency
SampleWithTimeIndex !Rate
MultiSampleWithTimeIndex !Rate
MultiSampleWithTimeIndex !KeyPitch
Non-harmonic MultiSpectrum with TimeIndex
Kyma 7.43f6 also includes a host of other improvements and feature requests, all designed to make your sound design experience ever more enjoyable.
Unlock all the speed and other enhancements by downloading the new version from the Help menu in Kyma 7 today.
Inspired by an ABCs of Physics board book he bought for his son at Powell’s in Portland, University of Oregon professor Jon Bellona became convinced that kids’ books should be for both kids and their parents.
Bellona’s new book ABCs of Audio Recording is an alphabet concept book for kids and their parents to learn more about audio production through straightforward concepts, definitions, and images. [ed., my favorite entry is “Z for impedance”]
Dedicated to his sons, Peregrine and Ellery, Jon Bellona’s ABCs of Audio Recording is ideal for the children in your life (and the child in you).
Kyma 7.43f5 is here, and it’s packed with new features for customizing your Kyma environment and enhancements to optimize your realtime sound design experience.
Highlights:
• Personalize your Workspace: Choose from a variety of default background colors and widgets for your Virtual Control Surface to create a more personalized and comfortable work environment.
• Enhanced Color Selection: Color swatches guide your selection process, making it easier to find the perfect color combinations.
• Smoother Performance: Enjoy smoother and more responsive screen updates.
New Features:
• Capytalk for Non-Linear Lookups: Perform complex non-linear lookups into arrays of changing EventValues, unlocking new possibilities for interactive control over Kyma Sound parameters.
• Improved Interpolation for Capytalk intoXArray:yArray: Continuously variable interpolation methods offer greater flexibility and control over how values are mapped between points — from immediate, to linear, to piecewise spline interpolation.
• Shared Files: Manage file references within your Kyma Sounds using SharedFileName and SharedFileNames. These new Sounds streamline your workflow by replacing all occurrences of file variables with your chosen file selections.
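The continuously variable interpolation idea — morphing from immediate (stepped) through linear toward spline-smoothed lookups — can be illustrated outside of Capytalk. The sketch below is a rough Python analogue under stated assumptions (a single shape parameter from 0 to 2, with smoothstep standing in for spline interpolation); it is not Kyma's actual algorithm:

```python
import bisect

def lookup(x, xs, ys, shape):
    """Look up x in the breakpoint lists (xs, ys).

    shape = 0: immediate (hold the left value)
    shape = 1: linear interpolation
    shape = 2: smoothstep (a stand-in for spline smoothing)
    Intermediate shape values blend continuously between these modes.
    """
    # Find the segment containing x, clamped to the valid range.
    i = max(0, min(bisect.bisect_right(xs, x) - 1, len(xs) - 2))
    f = (x - xs[i]) / (xs[i + 1] - xs[i])
    f = max(0.0, min(1.0, f))                 # fraction within the segment
    immediate, linear = 0.0, f
    smooth = f * f * (3.0 - 2.0 * f)          # smoothstep easing curve
    if shape <= 1.0:
        frac = immediate + (linear - immediate) * shape
    else:
        frac = linear + (smooth - linear) * (shape - 1.0)
    return ys[i] + (ys[i + 1] - ys[i]) * frac
```

With the same breakpoints, `shape` sweeps the result from a stairstep, through straight line segments, to an eased curve — a continuous control rather than a mode switch.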
Additional Enhancements:
• Improved InputOutputCharacteristic for Pacamara Ristretto: This Sound has been rewritten for enhanced efficiency, accuracy, and piecewise spline interpolation.
• Inspiring new Samples and Images: Spark your creativity with a fresh set of samples and images provided by composer/performer Andrea Young and astrophotographer/sound designer Rick Stevenson.
…and more!
Download the update today to unlock the full potential of your Kyma-Pacamara sound design environment! Available free from the Help menu in Kyma.
“Ben Burtt is a pioneer and visionary who has fundamentally changed the way we perceive sound in cinema.”
In this interview, Burtt shares some of his experiences as Sound Designer for several iconic films, including his discovery of “a kind of sophisticated music program called the Kyma” which he used in the creation of the voice of WALL-E.
The interviewer asked Ben about the incredible voices he created for WALL-E and EVE:
Well, Andrew Stanton was the creator of the WALL-E character in the story; apparently he jokingly referred to his movie as the R2-D2 movie. He wanted to develop a very affable robot character that didn’t speak, or had very limited speech that was mostly sound effects of its body moving and a few strange kinds of vocals, and someone (his producer, I think — Jim Morris) said, well, why don’t you just talk to Ben Burtt, the guy who did R2-D2, so they got in touch with me.
Pixar is in the Bay Area (San Francisco) so it was nearby, and I went over and looked at about 10 minutes that Andrew Stanton had already put together with just still pictures — storyboards of the beginning of the film where WALL-E’s out on his daily work activities boxing up trash and so on and singing and playing his favorite music, and of course I was inspired by it and I thought well here’s a great challenge and I took it on.
This was a few months before they had to actually greenlight the project. I didn’t find this out until later but there was some doubt at that time about whether you could make a movie in which the main characters don’t really talk in any kind of elaborate way; they don’t use a lot of words. Would it sustain the audience’s interest? The original intention in the film that I started working on was that there was no spoken language in the film that you would understand at all; that was a goal at one point…
So I took a little bit of the R2 idea to come up with a voice where human performance would be part of it but it had to have other elements to it that made it seem electronic and machine-like. But WALL-E wasn’t going to Beep and Boop and Buzz like R2; it had to be different, so I struggled along trying different things for a few months and trying different voices — a few different character actors. And I often ended up experimenting on myself because I’m always available. You know, it’s like the scientist in his lab takes the potion because there’s no one else around to test it: Jekyll and Hyde, I think that’s what it is. So I took the potion and turned into Mr Hyde…
But eventually it ended up that I had a program — it was a kind of sophisticated music program called the Kyma — and it had one sound in it: a process where it would synthesize a voice, but it [intentionally] didn’t do very well; the voice had artifacts that had funny distortions in it and extra noises. It didn’t work perfectly as a pure voice, but I took advantage of the fact that the artifacts and mistakes in it were useful and interesting, and I worked out a process where you could record sounds, starting with my own voice, and then process them a second time and do a re-performance where, as it plays back, you can stretch or compress or repitch the sounds in real time.
So you can take the word “Wall-E” and then you could make it have a sort of envelope of electronic noise around it; it gave it a texture that made it so it wasn’t human and that’s where it really worked. And of course it was in combination with the little motors in his arms and his head and his treads — everything was part of his expression.
The idea was to always give the impression of what WALL-E was thinking through sound — just as if you were playing a musical instrument and you wanted to make little phrases of music which indicated the feeling for what kind of communication was taking place.
Composer Chi Wang describes her instrumentation in Transparent Affordance as a “data-driven instrument.” Data is derived from touching, tilting, and rotating an iPad along with a “custom made box” of her own design. Near the end of the piece, Wang uses the “box” to great effect: she places the iPad in the box and taps on the outside. Data is still transmitted to Kyma even without direct interaction, though the sounds are now generated much more sparsely, as though the iPad needs to rest or the performer has decided to contain it. Finally, the top is put on the box as one last flurry of sound escapes.
Scott L. Miller describes his piece Eidolon as “…a phantom film score I thought I heard on a transatlantic flight…” Although the film may not exist to guide us through a narrative, the soundtrack does exist and takes the place of visual storytelling. Eidolon, scored for flute, violin, clarinet, and percussion with Kyma, blurs the lines between what the performers are creating and what the electronic sounds are, an aspect of how we hear the world: we can alternately isolate and focus on sounds to identify them and we can also widen our ears to absorb a sonic texture or a soundscape. Electronic sounds were synthesized in Kyma using additive synthesis, mostly partials 17 – 32 of sub-audio fundamentals.
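Miller's actual patches are built in Kyma, but the additive-synthesis recipe he names — summing high-numbered partials of a sub-audio fundamental — is easy to sketch. Here is a minimal Python illustration; the 8 Hz fundamental and the equal partial amplitudes are assumptions for the sketch, not details from the piece:

```python
import math

SR = 44100   # sample rate in Hz

def additive(f0, first=17, last=32, dur=0.5, sr=SR):
    """Sum equal-amplitude sine partials `first`..`last` of fundamental f0,
    normalized by the number of partials so the result stays in [-1, 1]."""
    n = int(sr * dur)
    count = last - first + 1
    return [
        sum(math.sin(2 * math.pi * f0 * k * i / sr)
            for k in range(first, last + 1)) / count
        for i in range(n)
    ]

# An 8 Hz fundamental is itself inaudible, but partials 17–32 land
# at 136–256 Hz, well inside the audible range.
sig = additive(8.0)
```

The interesting property is that the partials remain harmonically related to a fundamental the ear never hears, which gives the resulting tone cluster its ambiguous, chord-like quality.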
Ecosystemics — the guiding principle for much of composer Scott L. Miller’s work over the past two decades — constitutes an ecological approach to composition in which form is a dynamic process that is intimately tied to the ambience of the space in which the music occurs. In a live ecosystemic environment, Kyma Sounds are parametrically coupled with the environment via sound. As Miller explains in his two-part article for INSIGHTs magazine — Ecosystemic Programming and Composition:
In ecosystemic music, change in the sonic environment is continuously measured by Kyma with input from microphones. This change produces data that is mapped to control the production of sound. Environmental change may be instigated by performers, audience members, sound produced by the computer itself, and the ambience of the space(s) in general.
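In outline, that loop — listen, measure change, map the change onto a synthesis control — is simple to state. The toy Python sketch below is a generic illustration with invented parameter names (the filter-cutoff mapping is an assumption), not Miller's actual Kyma mapping:

```python
import math

def rms(block):
    """Root-mean-square level of one block of audio samples."""
    return math.sqrt(sum(x * x for x in block) / len(block))

def map_change(prev_level, level, lo=200.0, hi=2000.0):
    """Map the *amount of change* in level to a control value
    (here an imaginary filter cutoff in Hz, chosen for the sketch)."""
    change = min(abs(level - prev_level), 1.0)   # clamp to [0, 1]
    return lo + (hi - lo) * change

# Simulated microphone blocks: silence, a sustained loud event, then silence.
blocks = [[0.0] * 64, [0.8] * 64, [0.8] * 64, [0.0] * 64]
prev, cutoffs = 0.0, []
for b in blocks:
    level = rms(b)
    cutoffs.append(map_change(prev, level))
    prev = level
# The control responds to *change*, not to level: a sustained sound
# settles back down, so cutoffs ~ [200, 1640, 200, 1640].
```

Because the control tracks change rather than absolute level, a steady drone leaves the system at rest while onsets and decays — from performers, the audience, or the computer's own output — push it into motion, which is what closes the ecosystemic feedback loop.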
Sam Wells and Adam Vidiksis, collaborators on Miller’s new album of telematic ecosystemic music, Human Capital, describe performing with Miller’s Kyma environments as “like interacting with a living entity”.
Collaborators since 2003, the duo take their name from the fact that both of them manipulate devices — one a clarinet, one a computer — to generate music, and that, despite their best efforts, these devices are never fully under their control, at times almost seeming to have a mind of their own. Rather than bemoaning this fact, Scott and Pat welcome the potential for unimagined sonic discoveries inherent in this unpredictability.
Friday’s setlist includes:
Piano – Forte I, Piano – Forte II, and Piano – Forte III telematic collaborations
Semai Seddi-Araban by Tanburi Cemil Bey, the premiere of the duo’s take on a classic Turkish semai.
Mirror Inside from Shape Shifting (2004), for clarinet and Kyma
Fragrance of Distant Sundays, the duo’s tribute to Carei Thomas, the Minneapolis improviser/composer who passed away in 2020
Composer/performer Andrea Young has contributed several vocalises (melodies without words) to the Kyma library — perfect for experimenting with feature extraction or as spectrally-rich source material for manipulation and analysis/resynthesis.
In her artistic work, composer/performer Andrea Young explores the full range of potential interactions between voice and computer: acoustic, amplified, deconstructed/reconstructed, and extracting features from the voice to use as control signals for synthesis and processing algorithms.
Although amplification and live processing are widely used in both experimental and commercial music, the last option — feature-extraction and remapping to (potentially unrelated) sound parameter controls — is much less explored.
You can find them in your Kyma 3rd party Samples folder, in the sub-folder Andrea Young.
Dr. Young studied vocal performance and composition at The University of Victoria, electronic music at The Institute of Sonology in The Hague, and holds a Performer-Composer doctorate from The California Institute of the Arts.
When composer/performer Steve Ricks upgraded his trombone to a B.A.C. Paseo and paired it with Kyma, it opened the door to a new world of improvisation with live electronics. You can hear some of those results on his new album Solo Excursions. (Listen for Kyma on tracks 2, 6, 8, 9, and 11.)
Ricks is particularly fond of the SampleCloud, which he uses to create an overall texture in nearly every track. In track 6, “Kybone Study 16 Harrison Bergeron,” he added several ring modulation sounds with filtering and delay/reverb to create additional frequencies and a “metallic” sound, suggestive of the metallic “handicaps” forced on the protagonist of Kurt Vonnegut’s Harrison Bergeron: a short story set in a dystopian near-future where anyone who is too smart, too beautiful or too athletic is required to wear “handicaps” to make them “more equal” to everyone else.
In the music, there is a sense of resistance that the performer has to push against as the sound is coming out of the trombone, through the microphone and into Kyma.
THE YEAR WAS 2081, and everybody was finally equal. They weren’t only equal before God and the law. They were equal every which way. Nobody was smarter than anybody else. Nobody was better looking than anybody else. Nobody was stronger or quicker than anybody else. All this equality was due to the 211th, 212th, and 213th Amendments to the Constitution, and to the unceasing vigilance of agents of the United States Handicapper General.
— Kurt Vonnegut
Track 2 of Precipitations (New Focus Recordings), Steve Ricks’ May 2023 album released in conjunction with Ron Coulter, is Late Night Call, which features Ricks improvising with a single SampleCloud Sound on one of his own audio files, while Coulter improvises on percussion and lo-fi electronic devices.
According to the liner notes:
Late Night Call revels in the subtle timbral distinctions in non-pitched electronic sounds; early dial-up modems, bad telephone connections, and poor TV or radio reception come to mind as we listen to this ecology of resistors, currents, and connections. Mechanics’ Choice establishes a four note ostinato as a pad over which Coulter improvises on found objects and gongs. A reverse processing effect turns the texture inside out, distorting it as the sound envelopes fold back upon themselves.
Precipitations tracks 4 and 6, Charming Ways and I-Se3m, also feature the same Kybone (Kyma + trombone) setup.
Another Coulter/Ricks project, Beside the Avoid, was released in 2022 by Coulter under his Kreating SounD label. According to Ricks, Kyma-created Sounds can be heard throughout the album. In a departure from the way Ricks typically uses Kyma, the track Wow, Why & Wot takes a random walk with the morphing sound and Steve’s spoken recordings of the words “Wow,” “Why,” and “What”.
29 June 2024, Champaign, IL . The latest Kyma release (version 7.43) focuses on performance optimization and expanded functionality.
Performance Boost: Optimizations to the Smalltalk Interpreter and garbage collector have resulted in noticeable performance improvements for Kyma running on both macOS and Windows, especially under low-memory conditions.
Here’s a short video of the in-house tool we developed for monitoring and fine-tuning the dynamics of the garbage collector (Note: when monitoring memory usage, Kyma runs at half speed — this is just for in-house tweaking, not a run-time tool):
Expanded Functionality: Among other enhancements and additions, Kyma now allows you to bind Strings and Collections to ?greenVariables in a MapEventValues. This provides greater flexibility and modularity for “lifted” Sounds as well as signal-flow-wide references to features like file durations, names, collection sizes, and more.
The update is free: you can download it from the Help menu in Kyma. As always, a full list of changes and additions is included with the update.
Composer Tom Williams used Kyma to extensively transform the sonic materials in his 2023 acousmatic composition Piano Trace, performed at multiple festivals including Jauna Muzika 2023 in Vilnius; Noisefloor 2024 in Lisbon; the Spatial Audio Gathering at DMU, Leicester; and NYCEMF 2024 in New York. Williams, who has a doctorate in music composition from Boston University, currently heads the Music Production MA program at Coventry University.
Piano Trace is conceived as an unfolding of trace material that marks its original source: soundings made on an upright piano. Williams writes:
This is my piano. The piano that has been by my side as a tool for composition but never, until now, the actual source of my composition. A pocketful of playful recordings from the soundboard, piano keys, pedals and strings are the sonic roots. Throughout, and within the transformations and messing-up of the source sounds, there lies an inherent trace, a timbral DNA, a semblance of sonic integrity that is the ephemeral body of Piano Trace.