The latest Kyma 7 software update introduces new Sounds and Capytalk expressions that enhance the versatility of file-based, time-indexed sound design, plus 130x faster app switching, and more!
Kyma 7.43f6, released 3 October 2024, delivers a 130x speed-up in inter-application switching on macOS and Windows computers — a dramatic performance boost to smooth out your workflow and enhance your productivity.
In addition to the speed-up, this update introduces two new Capytalk expressions for ramp functions with loop points, providing greater flexibility and control over live Sound parameters.
Two new Sounds enhance the versatility of file-based, time-indexed sound design (including samples, PSI analyses, spectral analyses, and SampleClouds): TimeIndexForSamples, with a Frequency (rather than Rate) field to control the playback speed of a sample selected from a list of sample file names, and TimeIndexForFileNames, which can select from among multiple FileNames.
Check the Prototypes for examples using these new Sounds:
SampleWithTimeIndex !Frequency
SampleWithTimeIndex !Rate
MultiSampleWithTimeIndex !Rate
MultiSampleWithTimeIndex !KeyPitch
Non-harmonic MultiSpectrum with TimeIndex
Kyma 7.43f6 also includes a host of other improvements and feature requests, all designed to make your sound design experience ever more enjoyable.
Unlock all the speed and other enhancements by downloading the new version from the Help menu in Kyma 7 today.
In her artistic work, composer/performer Andrea Young explores the full range of potential interactions between voice and computer: acoustic, amplified, deconstructed/reconstructed, and extracting features from the voice to use as control signals for synthesis and processing algorithms.
Although amplification and live processing are widely used in both experimental and commercial music, the last option — feature-extraction and remapping to (potentially unrelated) sound parameter controls — is much less explored.
Andrea has contributed several vocalises (melodies without words) to the Kyma library — perfect for experimenting with feature extraction or as spectrally-rich source material for manipulation and analysis/resynthesis. Check your Kyma 3rd party Samples folder, in the sub-folder Andrea Young.
Dr. Young studied vocal performance and composition at The University of Victoria, electronic music at The Institute of Sonology in The Hague, and holds a Performer-Composer doctorate from The California Institute of the Arts.
29 June 2024, Champaign, IL. The latest Kyma release (version 7.43) focuses on performance optimization and expanded functionality.
Performance Boost: Optimizations to the Smalltalk Interpreter and garbage collector have resulted in noticeable performance improvements for Kyma running on both macOS and Windows, especially under low-memory conditions.
Here’s a short video of the in-house tool we developed for monitoring and fine-tuning the dynamics of the garbage collector (Note: when monitoring memory usage, Kyma runs at half speed — this is just for in-house tweaking, not a run-time tool):
Expanded Functionality: Among other enhancements and additions, Kyma now allows you to bind Strings and Collections to ?greenVariables in a MapEventValues. This provides greater flexibility and modularity for “lifted” Sounds as well as signal-flow-wide references to features like file durations, names, collection sizes, and more.
The update is free: you can download it from the Help menu in Kyma. As always, a full list of changes and additions is included with the update.
The newest update to Kyma (7.42f5) delivers interface improvements and several handy new features, including:
For more accurate placement of Sounds during drag-and-drop, the cursor changes to a cross-hair with a transparent center plus corners while you drag.
Enhanced syntax coloring matches the colors of corresponding pairs of open/close brackets and parentheses.
Cleaner signal flow diagrams, thanks to an option to hide Constant Zero shared Sounds in Replicators (Edit menu > Settings > Appearance).
A new option opens a Sound with a MapEventValues that maps the parameter-automation functions from the bottom of the Timeline, so you can hear the Sound on its own with the same automated parameter settings it has in the context of the Timeline.
A new keyboard shortcut, Cmd+T (mnemonic: Transform VCS, or Turn on the editor), unlocks the VCS for editing; it is equivalent to Action menu > Edit VCS layout.
As well as multiple other fixes and enhancements requested by the Kyma community (thank you for your reports and feedback!).
The new update is free, and you can download it from the Help menu in Kyma.
Networked collaboration, telematic performances, and online learning have been growing in popularity for several years, but the lockdowns and social-distancing guidelines precipitated by the global COVID-19 pandemic have accelerated the adoption of these modes of interaction. This is a brief (and evolving) report on some of the solutions your fellow Kyma-nauts have found for practicing creative collaborations, live performances, private tutoring, consulting, and teaching large online courses. Thanks for sharing your input, feedback and alternative solutions for distance-collaboration with the Kyma community!
Note: For example configurations showing how to get audio onto your computer and out onto a network, read this first.
Kyma Kata
One of the earliest ongoing examples is Alan Jackson’s Kyma Kata, a regular meeting of peers who practice Kyma programming together. It had been operating online in Google Hangouts for over a year before the crisis and recently celebrated its 100th session! (Did they know something the rest of the world didn’t?) The Kyma Kata currently meets twice a week, on Mondays and Tuesdays. They begin each session with “Prime Minister’s Question Time” (an open question-and-answer session on how to do specific tasks in Kyma), followed by an exercise that each person works on independently for 30 minutes, after which they share and discuss their results. Ostensibly the session lasts for two hours, but when people really get interested in a problem, some of them stick with it for much longer (though there is no honor lost if someone has to leave after two hours).
Kata participants focus on working out how to do something together, so screen-sharing is essential while audio quality matters less: they often play over the air, using the computer’s built-in microphone to send the audio.
For higher-quality audio, Alan uses a small USB mixer plugged into the Mac as the Hangouts audio source. Using the mixer, he can mix the Paca’s output and a microphone, which provides much better quality than over-the-air through the laptop’s mic, although it’s still limited by Hangouts’ audio quality, delay and bandwidth.
What is the Kata and how do I sign up?
The term kata, which comes to us by way of karate, has been adopted by software engineers as a way to regularly practice their craft together by picking a problem and finding several different solutions. The point of a kata is not so much to arrive at a correct answer as to practice the art of programming.
In the Kyma Kata, a group of aspiring Kymanistas come together regularly via teleconferencing, and the facilitator (Alan) introduces an exercise that everyone works on independently for about half an hour, after which people take turns talking about their solutions. All levels of Kyma ability are welcome, so why not join the fun?
Improvisation with the Unpronounceables
The Unpronounceables are Robert Efroymson in Santa Fe, New Mexico, Ilker Isikyakar in Albuquerque, New Mexico, and Will Klingenmeier (ordinarily based in Colorado but, due to travel restrictions, on extended lockdown in Yerevan, Armenia). To prepare for a live improvisation planned for KISS 2020, Robert proposed setting up some remote sessions using Jamulus.
Collaboration platform
Using the Jamulus software, musicians can engage in real-time improvisation sessions over the Internet. A single server running the Jamulus server software collects audio data from each Jamulus client, mixes the audio data and sends the mix back to each client. Initially, Robert set up a private server for the group, but they now use one of the public Jamulus servers as an alternative. One of the amusing side-effects of using the public server is that they are occasionally joined by uninvited random guests who start jamming with them.
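Conceptually, that topology is simple: every client streams its audio to one server, the server sums the streams, and the same mix is returned to everyone. Here is a minimal, hypothetical Python sketch of that mix-and-return idea (this is not Jamulus’s actual protocol or code; the client names and frame size are made up for illustration):

```python
# Conceptual sketch (not Jamulus's actual protocol): a "mix-and-return" server
# sums one audio frame from each connected client and sends the same mix back
# to everyone. Frame size and client names are invented for illustration.
import numpy as np

FRAME = 128  # samples per frame (illustrative)

def mix_and_return(client_frames):
    """Sum every client's frame and return the identical mix to each client."""
    mix = np.zeros(FRAME)
    for frame in client_frames.values():
        mix += frame
    # A real system might subtract each client's own signal or apply
    # per-client gains; here everyone simply hears the full mix.
    return {name: mix.copy() for name in client_frames}

# One "network round trip": three improvisers each send a frame of audio.
rng = np.random.default_rng(0)
incoming = {name: rng.standard_normal(FRAME) * 0.1
            for name in ("robert", "ilker", "will")}
outgoing = mix_and_return(incoming)
print({name: round(float(np.abs(buf).max()), 3) for name, buf in outgoing.items()})
```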
During a session, the Unpronounceables use a Slack channel to communicate with each other by text, and Jamulus to time-align and mix the three audio sources and send the mix to each of the three locations.
Audio from Kyma
Each Unpronounceable uses a second interface to get audio from Kyma to the host computer. Robert uses a Behringer to come out of Kyma, and an IO/2 to get to his Mac. Ilker sends his MOTU Track 16 audio outputs to a Behringer; then selects the Behringer as an I/O in the Jamulus preference tab. Will uses a ZOOM H4n as his Kyma interface and sends the audio to an M-Audio Fast Track Pro which acts as the interface for Jamulus.
Ecosystemic audio in virtual rooms
Scott Miller and Pat O’Keefe’s HDPHN project has always been an exploration of what it means to be alone together — it’s a live public concert where each member of the audience wears headphones, rather than listening through speakers. When stay-at-home orders made it impossible for Scott in Otsego to meet in person with Pat in St. Paul, Minnesota, they started looking into how to move the live HDPHN performance onto the Internet.
When Earth Day Art Model 2020 shifted from a live to an online festival, Scott and Pat used it as an opportunity to dive in and perform HDPHN, along with one of their older pieces, Zeitgeist, live over the Internet.
Audio from Kyma
Scott describes the audio routing for HDPHN as follows:
Pat’s mic comes over Zoom and out of my desktop headphone audio. It also goes into Kyma input 1 on my Traveller. With Zoom, I can’t get/send stereo from a live source. With two people (I did this Friday), I bring the second person in on a separate Skype/Zoom/Facetime session on another device, and into Kyma input 2. With two inputs, I can then mathematically cross-process them in a virtual room.
I am sending Kyma’s processed/mixed output (Main 1-2) back into my desktop via a Lynx E22 audio card, going into DSP-Quattro for compression and EQ, then to the iShowU virtual audio interface, which feeds 1) Zoom, for Pat’s monitoring, and 2) OBS and then YouTube, synced with Pat’s Zoom video. The YouTube latency was very bad and it wrecked chamber music with the duo, but it was fun for free improv with a different duo.
Live coding at a Virtual Club
On Monday, 11 May 2020, beginning at 20:30 CET, Lucretio will be live-coding in Kyma using a new Tool of his own design at The Circle UXR Zone. The Circle is a virtual club that is a UXR.zone — a decentralized social VR communication platform offering secure communication, free from user-tracking and ads. A spinoff project of the #30daysinVR transhumanist performance by Enea Le Fons, a UXR.zone allows participants to share virtual rooms and communicate via avatars across a range of devices: from VR headsets (the main platform) to desktop and mobile phone browsers.
The avatars of people attending the UXR events are either anonymous (robots) or follow a strict dress code based on the CVdazzle research, to stress the importance of cyber camouflage via aesthetics against the dystopian surveillance measures happening in the real world.
The belly of the BEAST
Simon Smith (who is actually quite a slender chap at the BEAST of Birmingham) has recently been tasked with researching online collaboration platforms for the BEAST, so we asked him for some tips from the front lines. He just completed a Sound and Music Workshop with Ximena Alarcon (earlier, he helped Alarcon with a telematic performance using the JackTrip software from Stanford).
In the workshop, she also mentioned:
Artsmesh: a network music and performance management tool. Content creators run the Artsmesh client, which streams live media point-to-point; audiences run a light Artsmesh client to watch the shows.
SoundJack: a real-time communication system with adjustable quality and latency parameters. Depending on the physical distance, network capacity, network conditions, and routing, some degree of musical interaction is possible.
How have you been collaborating, teaching, consulting, creating during the lockdown? We’re interested in hearing your stories, solutions and experiences.
Have you used YouTube or Vimeo for live streaming with Kyma?
What’s your preferred video conferencing software for sending computer audio (Zoom, BigBlueButton, Meet)?
Have you been using remote desktop software (like Chrome Remote Desktop, Jump) to access your studio computer from home?
We welcome hearing about alternate solutions for this ongoing report.
Generate sequences based on the patterns discovered in your MIDI files
Symbolic Sound today released a new module that generates endlessly evolving sequences based on the patterns it discovers in a MIDI file. HMMPatternGenerator, the latest addition to the library of the world’s most advanced sound design environment, is now available to sound designers, musicians, and researchers as part of a free software update: Kyma 7.25.
Composers and sound designers are masters of pattern generation — skilled at inventing, discovering, modifying, and combining patterns with just the right mix of repetition and variation to excite and engage the attention of a listener. HMMPatternGenerator is a tool to help you discover the previously unexplored patterns hidden within your own library of MIDI files and to generate endlessly varying event sequences, continuous controllers, and new combinations based on those patterns.
Here’s a video glimpse at some of the potential applications for the HMMPatternGenerator:
What can HMMPatternGenerator do for you?
Games, VR, AR — In an interactive game or virtual environment, there’s no such thing as a fixed-duration scene. HMMPatternGenerator can take a short segment of music and extend it for an indeterminate length of time without looping.
Live improvisation backgrounds — Improvise live over an endlessly evolving HMMPatternGenerator sequence based on the patterns found in your favorite MIDI files.
Keep your backgrounds interesting — Have you been asked to provide the music for a gallery opening, a dance class, a party, an exercise class or some other event where music is not the main focus? The next time you’re asked to provide “background” music, you won’t have to resort to loops or sample clouds; just create a short segment in the appropriate style, save it as a MIDI file, and HMMPatternGenerator will generate sequences in that style for as long as the party lasts — even after you shut down your laptop (because it’s all generated on the Paca(rana) sound engine, not on your host computer).
Inspiration under a deadline — Need to get started quickly? Provide HMMPatternGenerator with your favorite MIDI files, route its MIDI output stream to your sequencer or notation software, and listen while it generates endless recombinations and variations on the latent patterns lurking within those files. Save the best parts to use as a starting point for your new composition.
Sound for picture — When the director doubles the duration of a scene a few hours before the deadline, HMMPatternGenerator can come to the rescue by taking your existing cue and extending it for an arbitrary length of time, maintaining the original meter and the style but with continuous variations (no looping).
Structured textures — HMMPatternGenerator isn’t limited to generating discrete note events; it can also generate timeIndex functions to control other synthesis algorithms (like SumOfSine resynthesis, SampleClouds and more) or as a time-dependent control function for any other synthesis or processing parameter. That means you can use a MIDI file to control abstract sounds in a new, highly-structured way.
MIDI as code — If you encode the part of speech (verb, adjective, noun, etc.) as a MIDI pitch, you can compose a MIDI sequence that specifies a grammar for English sentences and then use HMMPatternGenerator to trigger samples of yourself speaking those words — generating an endless variety of grammatically correct sentences (or even artificial poetry). A toy illustration of this idea follows this list of applications. Imagine what other secret meanings you could encode as MIDI sequences — musical sequences that can be decrypted only when decoded by the Kyma Sound generator you’ve designed for that purpose.
Self-discovery — HMMPatternGenerator can help you tease apart what it is about your favorite music that makes it sound the way it does. By adjusting the parameters of the HMMPatternGenerator and listening to the results, you can uncover latent structures and hyper meters buried deep within the music of your favorite composers — including some patterns you hadn’t even realized were hidden within your own music.
Remixes and mashups — Use HMMPatternGenerator to generate a never-ending stream of ideas for remixes (of one MIDI file) and amusing mashups (when you provide two or more MIDI files in different styles).
Galleries of possibilities — Select a MIDI file in the Kyma 7.25 Sound Browser and, at the click of a button, generate a gallery of hundreds of pattern generators, all based on that MIDI file. At that point, it’s easy to substitute your own synthesis algorithms and design new musical instruments to be controlled by the pattern-generator. Quickly create unique, high-quality textures and sequences by combining some of the newly-developed MIDI-mining pattern generators with the massive library of unique synthesis and processing modules already included with Kyma.
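As a playful, toy illustration of the “MIDI as code” idea above, the sketch below maps a few hypothetical MIDI pitches to parts of speech and turns a pitch sequence into a randomly worded sentence (the pitch assignments and word lists are invented for this example; in Kyma, each note would instead trigger a recorded speech sample):

```python
# Toy illustration of "MIDI as code" (hypothetical pitch assignments; not a
# Kyma Sound). Each MIDI pitch stands for a part of speech, so a pitch
# sequence becomes a sentence template.
import random

PART_OF_SPEECH = {60: "article", 62: "adjective", 64: "noun", 65: "verb"}
WORD_BANK = {
    "article":   ["the", "a"],
    "adjective": ["spectral", "granular", "endless"],
    "noun":      ["oscillator", "pattern", "cloud"],
    "verb":      ["drifts", "blooms", "repeats"],
}

def speak(pitch_sequence):
    """Turn a 'grammar' encoded as MIDI pitches into a random sentence."""
    words = [random.choice(WORD_BANK[PART_OF_SPEECH[p]]) for p in pitch_sequence]
    return " ".join(words).capitalize() + "."

grammar = [60, 62, 64, 65]   # article adjective noun verb
print(speak(grammar))        # e.g. "The granular cloud drifts."
```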
How does it work?
If each event in the original MIDI file is completely unique, then there is only one path through the sequence — the generated sequence is the same as the original MIDI sequence. Things start to get interesting when some of the events are, in some way, equivalent to others (for example, when events of the same pitch and duration appear more than once in the file).
HMMPatternGenerator uses equivalent events as pivot points — decision points at which it can choose to take an alternate path through the original sequence (the proverbial “fork in the road”). No doubt you’re familiar with using a common chord to pivot to another key; now imagine using a common state to pivot to a whole new section of a MIDI file or, if you give HMMPatternGenerator several MIDI files, from one genre to another.
By live-tweaking the strengths of three equivalence tests — pitch, time-to-next, and position within a hyper-bar — you can continuously shape how closely the generated sequence follows the original sequence of events, ranging from a note-for-note reproduction to a completely random sequence based only on the frequency with which that event occurs in the original file.
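The sketch below is not Symbolic Sound’s HMMPatternGenerator; it is only a drastically simplified, first-order illustration of the pivot-point idea described above, using pitch as the sole equivalence test. With jump_probability at 0 it reproduces the original sequence note for note; at 1 it pivots at every repeated event:

```python
# Simplified "pivot point" sketch (illustration only, not HMMPatternGenerator).
# Events that test as equivalent (here: equal pitch) let the generator jump
# to another occurrence of that event and continue from there.
import random
from collections import defaultdict

def generate(events, length, jump_probability=0.5, key=lambda e: e):
    """events: original sequence; key: the equivalence test (pitch only here)."""
    occurrences = defaultdict(list)          # equivalence class -> positions
    for position, event in enumerate(events):
        occurrences[key(event)].append(position)

    position, output = 0, []
    for _ in range(length):
        event = events[position]
        output.append(event)
        peers = occurrences[key(event)]
        if len(peers) > 1 and random.random() < jump_probability:
            position = random.choice(peers)  # pivot to an equivalent event
        position = (position + 1) % len(events)
    return output

# Original "MIDI" pitches; the repeated pitches (60, 64) act as pivot points.
original = [60, 62, 64, 60, 67, 64, 65, 60]
print(generate(original, 16, jump_probability=0.7))
```

Weighting several equivalence tests at once (pitch, time-to-next, position within a hyper-bar), as the description above suggests the real module does, would simply change how the equivalence classes are computed in such a sketch.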
Other new features in Kyma 7.25 include:
▪ Optimizations to the Spherical Panner for 3d positioning and panning (elevation and azimuth) for motion-tracking VR or mixed reality applications and enhanced binaural mixes — providing up to a 4x speed increase (meaning you can track four times as many 3d sound sources in real time).
▪ Multichannel interleaved file support in the Wave editor
▪ New granular reverberation and 3d spatializing examples in the Kyma Sound Library
and more…
Availability
Kyma 7.25 is available as a free update starting today and can be downloaded from the Help menu in Kyma 7. For more information, please visit: symbolicsound.com.
Summary
The new features in the Kyma 7.25 sound design environment are designed to help you stay in the creative flow by adding automatic Gallery generation from MIDI files, and the HMMPatternGenerator module which can be combined with the extensive library of sound synthesis, pattern-generation, and processing algorithms already available in Kyma.
Background
Symbolic Sound revolutionized the live sound synthesis and processing industry with the introduction of Kyma in 1990. Today, Kyma continues to set new standards for sound quality, innovative synthesis and processing algorithms, rock-solid live performance hardware, and a supportive, professional Kyma community both online and through the annual Kyma International Sound Symposium (KISS).
Kyma, Pacarana, Paca, and their respective logos are trademarks of Symbolic Sound Corporation. Other company and product names may be trademarks of their respective owners.
New Sounds, help for learning Capytalk, plus handy new features in the Timeline, Multigrid and Virtual Control Surface
Kyma 7.12, the latest update to the world’s most advanced sound design software, is now available as a free download for the sound designers, musicians, and researchers using Kyma 7.
Kyma 7.12
Learning Capytalk — Who doesn’t want to improve their coding skills a little bit? Kyma 7.12 makes it easy to learn the Capytalk parameter-control language with the new Capytalk of the Day (CoD) feature. Each day, CoD introduces a single Capytalk message, along with documentation, coding examples, and links to Sounds that show how to use the message in parameter fields. The CoD provides gentle reminders of all those Capytalk features you may have forgotten about (along with some you may have missed along the way). If you’re new to Kyma, it’s a great way to learn a little bit of Capytalk each day; just think, by this time next year, you’ll know 365 Capytalk messages! CoD: it’s the perfect companion to the Sons du Jour!
New Sounds — Kyma is all about the sounds, and this update includes new Sounds, new Prototypes, and several improvements to existing Sounds, including a tuned van der Pol oscillator, a Tilt EQ, sample-cloud morphing on live-captured audio, and an easier-to-read, easier-to-modify vocal synthesis modal filter (see the Highlights below for additional details).
Live Performance — Several new features support live interactive sound design and musical performance, including: a Timeline option that lets you substitute a recording for the live input (when you’re developing a piece that includes live acoustic performers); a Multigrid option to display subsets of controls on a secondary Virtual Control Surface (so you can split the controls between the laptop screen and your iPad); and a Tool for displaying your secondary VCS or the movie associated with a Timeline at full size on a second display.
Highlights of Kyma 7.12 include:
• The USO Tilt EQ is useful for gently tipping the spectral balance of the Input to emphasize the lows and de-emphasize the highs (or vice versa);
• TunedVanDerPolOscillator: The van der Pol oscillator is a model of a nonlinear vacuum-tube oscillator developed by Dutch physicist Balthasar van der Pol at Philips in the 1920s. This version of the van der Pol oscillator is tuned and driven by an internal oscillator. When the Ratio of the driver frequency to the van der Pol oscillator’s natural frequency is set to 1, the output frequency is stable; but you can also experiment with Ratios other than 1 and Driver waveforms other than Sine to get unusual distortion and modulation effects (a rough numerical sketch of the underlying equation follows this list);
• MultiSampleCloud and Morph1DSampleCloud now have a fromMemoryWriter checkbox so you can use MemoryWriters to capture multiple streams of live audio input and live-switch or morph between them as sample clouds;
• ModalFilter Vocal Formants BPF and ModalFilter Vocal Formants KBD prototypes have had their parameter values reorganized in order to make it clearer how to change the formant frequencies and amplitudes;
• A new Timeline option to select a samples file as the input to a Track (which you can elect to read from RAM or from disk). This can be useful when developing a Timeline for live acoustic performers (since during development, you can use a recording of the live input rather than always having to generate the input live);
• A Timeline option to compress or expand durations of all selected Time bars and their associated automation;
• A new sub-layout in the Timeline Virtual Control Surface: “All Global Controls,” where you can access all the Master controls in a single layout;
• Multigrid options to immediately set all Tracks to Inactive and to define an Inactive slot as either Silent or Pass-through;
• A Multigrid option to display a track layout, the all-seeing-eye, the mixer, the grid layout, or the shared controls layout on the secondary Virtual Control Surface (VCS). This can be useful when you’d like to display one of the layouts on an iPad using Kyma Control and another on the laptop screen, or when you’d like to display one set of controls for a performer or the audience and a different set of controls for yourself. There’s also a new Tools menu option to display the secondary VCS full-sized on a second display;
…and more. For more details, go to the Help menu in Kyma and check for software updates.
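For the curious, the TunedVanDerPolOscillator item above refers to the classic van der Pol equation, x'' - mu*(1 - x^2)*x' + w0^2*x = 0, with a drive term added. Here is a rough numerical Python sketch of a driven van der Pol oscillator; it illustrates the underlying equation only and is not Kyma’s implementation (parameter names and scaling are invented for this example):

```python
# Illustrative driven van der Pol oscillator (not Kyma's implementation):
#   x'' - mu*(1 - x^2)*x' + w0^2*x = drive * w0^2 * sin(wd*t)
# "ratio" plays the role of the Ratio parameter described above: the driver
# frequency as a multiple of the oscillator's natural frequency.
import numpy as np

def driven_van_der_pol(natural_hz=220.0, ratio=1.0, mu=1.0, drive=0.2,
                       sample_rate=48_000, seconds=0.05):
    w0 = 2 * np.pi * natural_hz          # natural angular frequency
    wd = w0 * ratio                      # driver angular frequency
    dt = 1.0 / sample_rate
    x, v = 1e-3, 0.0                     # tiny initial displacement
    out = np.empty(int(sample_rate * seconds))
    for n in range(out.size):
        t = n * dt
        a = mu * (1.0 - x * x) * v - w0 * w0 * x + drive * w0 * w0 * np.sin(wd * t)
        v += a * dt                      # semi-implicit Euler integration
        x += v * dt                      # (crude, but fine for a quick sketch)
        out[n] = x
    return out

signal = driven_van_der_pol(ratio=1.0)   # try ratio=1.5 for rougher timbres
print(np.round(signal[:5], 6))
```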
Availability
Kyma 7.12 is available as a free update, downloadable from the Help menu in Kyma 7.
Summary
The new features in the Kyma 7.12 sound design environment are designed to help you expand your mastery over live parameter control through Capytalk, keep your sound interactions lively and dynamic with new features in the Virtual Control Surface, Timeline and Multigrid, and to expand your sonic universe with newly developed synthesis and control algorithms that can be combined with the extensive library of algorithms already in Kyma.
Background
Symbolic Sound revolutionized the live sound synthesis and processing industry with the introduction of Kyma in 1990. Today, Kyma continues to set new standards for sound quality, innovative synthesis and processing algorithms, rock-solid live performance hardware, and a supportive, professional Kyma community both online and through the annual Kyma International Sound Symposium (KISS).
In 1999, astrophysicist/musician David McClain spent an intense three-month period working on The Northern Sky Survey, mapping the sky in the near infrared while getting by on an hour of sleep per night. When he finished the survey, he was suddenly struck by a viral infection that nearly killed him; his doctors were never able to determine the cause and, after three months, the infection dissipated almost as quickly as it had appeared. But afterward David noticed that he could no longer understand his wife when she was speaking to him. He went to an audiologist and discovered he had a sensorineural hearing loss of 60-70 dB in the high frequency range. Hearing aids helped him understand speech, but he was devastated to discover that music never sounded right through the hearing aids. But as a physicist, he was determined to solve the problem.
Motivated by his love of music and informed by his scientific training, McClain has spent the last 16 years developing equations to describe the entire hearing experience (from the cochlea, to the afferent 8th nerve, to processing in the central nervous system, to efferent 8th nerve interactions) and developing signal processing algorithms to adapt to and compensate for his hearing loss in a way that preserves the audio experience of music. The result is a collection of signal processing algorithms he calls Crescendo. Kyma is one of the tools David uses for testing out new ideas and prototyping them for Crescendo.
Now he’s blogging about his findings on his web site http://refined-audiometrics.com. In keeping with his motto “Keeping music enjoyable for all!” David hopes that his experiences, research findings, and extensive set of algorithms can benefit others.
Kyma 7.1 is now available as a free update for sound designers, musicians, and researchers currently using Kyma 7. New features in the Kyma 7.1 sound design environment help you stay in the creative flow by extending automatic Gallery generation to analysis files and Sounds, keeping your sound interactions lively and dynamic with support for additional multidimensional physical controllers, while expanding your sonic universe to include newly developed synthesis and control algorithms that can be combined with the extensive library of algorithms already in Kyma.
Sonic AI (Artistic Inspiration) — Need to get started quickly? Kyma 7.1 provides Galleries everywhere! Select any Sound (signal-flow patch); click Gallery to automatically generate an extensive library of examples, all based on your initial idea. Or start with a sample file, spectral analysis, PSI analysis or Sound in your library, and click the Grid button to create a live matrix of sound sources and processing that you can rapidly explore and tweak to zero-in on exactly the sound you need. Hear something you like? A single click opens a signal flow editor on the current path through the Grid so you can start tweaking and expanding on your initial design.
Responsive Control — Last year, Symbolic Sound introduced support for Multidimensional Polyphonic Expression (MPE) MIDI, which they demonstrated with Roger Linn Design’s LinnStrument. Now, Kyma 7.1 extends that support to the ROLI Seaboard RISE; just plug the RISE into the USB port on the back of the Paca(rana) and play. Kyma 7.1 also maintains Kyma’s longstanding support for the original multidimensional polyphonic controller: the Haken Audio Continuum fingerboard. Also new with Kyma 7.1 is plug-and-play support for the Wacom Intuos Pro tablet, combining a three-dimensional multitouch surface with the precision and refined motor control afforded by the Wacom pen.
▪ The new Spherical Panner uses perceptual cues to give you 3d positioning and panning (elevation and azimuth) for motion-tracking VR or mixed reality applications and enhanced binaural mixes.
▪ A new 3d controller in the Virtual Control Surface provides three dimensions of mappable controls in a single aggregate fader. Also new in Kyma 7.1: three-dimensional and two-dimensional faders can optionally leave a trace or a history so you can visualize the trajectory of algorithmically generated controls.
▪ Enhanced spectral analysis tools in Kyma 7.1 provide narrower analysis bands, additional resynthesis partials, and more accurate time-stretching.
▪ The new batch spectral analysis tool for non-harmonic source material is perfect for creating vast quantities of audio assets from non-harmonic samples like textures, backgrounds, and ambiences. Once you have those analysis files, you can instantly generate a library of highly malleable additive and aggregate resynthesis examples by simply clicking the Gallery button.
▪ Nudging the dice — Once you have an interesting preset, nudging the dice can be a highly effective way to discover previously unimagined sounds by taking a random walk in the general vicinity of the parameter space. Shift-click the dice icon (or press Shift+R) to nudge the controller values by randomly generating values within 10% of the current settings (a minimal sketch of this idea follows the feature list).
▪ Generate dynamic, evolving timbres by smoothly morphing from one waveshape to another in oscillators, wave shapers, and grain clouds using new sound synthesis and processing modules: MultiOscillator, Morph3dOscillator, Interpolate3D, MultiGrainCloud, Morph3dGrainCloud, MultiWaveshaper, Morph3dWaveshaper and others.
▪ An optional second Virtual Control Surface (VCS) can display one set of images and controls for the audience or performers while you control another set of sound parameters using the primary Virtual Control Surface on your laptop or iPad.
▪ A new version of Symbolic Sound’s Kyma Control app for the Apple iPad includes a tab for activating Sounds in the Multigrid using multi-touch, plus support for 128-bit IPv6 addressing (2^128, or roughly 3.4 × 10^38, possible addresses).
▪ Kyma 7.1 provides enhanced support for physical and external software control sources in the form of incoming message logs for MIDI and OSC as well as an OSC Tool for communicating with devices that have not yet implemented Kyma’s open protocol for bi-directional OSC communication.
▪ New functionality in Kyma’s real-time parameter control language, Capytalk, includes messages for auto-tuned voicing and harmonizing within live-selectable musical scales along with numerous other new messages. (For full details open the Capytalk Reference from the Kyma Help menu).
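A minimal sketch of the dice-nudging behavior described in the feature list above, under one plausible reading (each value is perturbed by up to ±10% of its current setting); the parameter names are invented, and this is not Kyma’s internal code:

```python
# Sketch of the "nudge" idea: perturb each fader value by a random amount
# within +/-10% of its current setting (one plausible reading of the feature;
# parameter names are made up, and this is not Kyma's internal code).
import random

def nudge(preset, amount=0.10):
    """Take one step of a random walk in the neighborhood of the preset."""
    return {name: value * (1 + random.uniform(-amount, amount))
            for name, value in preset.items()}

print(nudge({"Frequency": 440.0, "Amplitude": 0.5, "Pan": 0.25}))
```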
NeverEngine Labs (Cristian Vogel and Gustav Scholda) have announced a new set of classes and tools for creating and manipulating multicycle wavetables and audio files with embedded markers in Kyma 7. The new ROM tools can be used to create multisample players and to prepare audio files for morphing oscillators, grain envelopes, and wavetable libraries.