Kyma (7.43) Delivers Performance Enhancements and Expanded Functionality

29 June 2024, Champaign, IL. The latest Kyma release (version 7.43) focuses on performance optimization and expanded functionality.

Performance Boost: Optimizations to the Smalltalk interpreter and garbage collector have resulted in noticeable performance improvements for Kyma running on both macOS and Windows, especially under low-memory conditions.

Here’s a short video of the in-house tool we developed for monitoring and fine-tuning the dynamics of the garbage collector (Note: when monitoring memory usage, Kyma runs at half speed — this is just for in-house tweaking, not a run-time tool):

Expanded Functionality: Among other enhancements and additions, Kyma now allows you to bind Strings and Collections to ?greenVariables in a MapEventValues. This provides greater flexibility and modularity for “lifted” Sounds as well as signal-flow-wide references to features like file durations, names, collection sizes, and more.

Setting ?firstFileName to a String in all Sounds to the left of the MapEventValues
Binding a Collection of Strings to ?displayNames

The update is free: you can download it from the Help menu in Kyma. As always, a full list of changes and additions is included with the update.

Kyma update features enhancements to the interface

The newest update to Kyma (7.42f5) features improvements to the interface and several handy new features including:

    • For more accurate placement of Sounds during drag-and-drop, the cursor changes to cross-hairs with a transparent center and corner markers while dragging.
    • Enhanced syntax coloring matches the colors of corresponding pairs of open/close brackets and parentheses.

    • Cleaner signal flow diagrams, thanks to an option to hide Constant Zero shared Sounds in Replicators (Edit menu > Settings > Appearance)
    • New option to open a Sound with a MapEventValues that maps the parameter automation functions from the bottom of the Timeline, so you can hear the Sound on its own with the same automated parameter settings it has in the context of the Timeline.

    • There’s a new keyboard shortcut for unlocking the VCS for editing: Cmd+T (mnemonic: Transform VCS, or Turn on the editor), equivalent to Action menu > Edit VCS layout.

The update also includes multiple other fixes and enhancements requested by the Kyma community (thank you for your reports and feedback!)

The new update is free, and you can download it from the Help menu in Kyma.

Remote collaboration, telematic performances, and online learning

Networked collaboration, telematic performances, and online learning have been growing in popularity for several years, but the lockdowns and social-distancing guidelines precipitated by the global COVID-19 pandemic have accelerated the adoption of these modes of interaction. This is a brief (and evolving) report on some of the solutions your fellow Kyma-nauts have found for practicing creative collaborations, live performances, private tutoring, consulting, and teaching large online courses. Thanks for sharing your input, feedback and alternative solutions for distance-collaboration with the Kyma community!

Note: For example configurations showing how to get audio onto your computer and out onto a network, read this first.

Kyma Kata

One of the earliest ongoing examples is Alan Jackson’s Kyma Kata, a regular meeting of peers who practice Kyma programming together. The group had been meeting online in Google Hangouts for over a year before the crisis and recently celebrated its 100th session! (Did they know something the rest of the world didn’t?) The Kyma Kata currently meets twice a week, on Mondays and Tuesdays. They begin each session with “Prime Minister’s Question Time” (an open question-and-answer session on how to do specific tasks in Kyma), followed by an exercise that each person works on independently for 30 minutes, after which they share and discuss their results. Ostensibly the session lasts for two hours, but when people really get interested in a problem, some of them stick with it for much longer (though there is no honor lost if someone has to leave after two hours).

Alan Jackson and the Tuesday Night Kyma Kata
Collaboration platform

The Kata uses Google Meet (née Hangouts) for its meetings, primarily because it seems to work on everyone’s computer and it integrates well with Slack. To start a Hangout, they just type /hangout into the Slack channel.

Audio from Kyma

Kata participants focus on how to do something together, so screen-sharing is important and audio quality has been less important: they often play over the air, using the computer’s built-in microphone to send the audio.

Tuesday Kyma Kata: Andreas, Ben, Jason, Opal, Pete, Domenico, Simon, & Charlie

For higher quality audio, Alan uses a small USB mixer plugged into the Mac as the Hangouts audio source. Using the mixer, he can mix the Paca’s output with a microphone, which provides much better quality than going over the air through the laptop’s mic, although it’s still limited by Hangouts’ audio quality, delay, and bandwidth.

What is the Kata and How do I sign up?

The term kata, which comes to us by way of karate, has been adopted by software engineers as a way to regularly practice their craft together by picking a problem and finding several different solutions. The point of a kata is not so much arriving at a correct answer as practicing the art of programming.

In the Kyma Kata, a group of aspiring Kymanistas comes together regularly via teleconferencing, and the facilitator (Alan) introduces an exercise that everyone works on independently for about half an hour, after which people take turns talking about their solutions. All levels of Kyma ability are welcome, so why not join the fun?

Improvisation with the Unpronounceables

The Unpronounceables are Robert Efroymson in Santa Fe, New Mexico; Ilker Isikyakar in Albuquerque, New Mexico; and Will Klingenmeier (ordinarily based in Colorado but, due to travel restrictions, on extended lockdown in Yerevan, Armenia). To prepare for a live improvisation planned for KISS 2020, Robert proposed setting up some remote sessions using Jamulus.

The Unpronounceables
Collaboration platform

Using the Jamulus software, musicians can engage in real-time improvisation sessions over the Internet. A single server running the Jamulus server software collects audio data from each Jamulus client, mixes the audio data and sends the mix back to each client. Initially, Robert set up a private server for the group, but they now use one of the public Jamulus servers as an alternative. One of the amusing side-effects of using the public server is that they are occasionally joined by uninvited random guests who start jamming with them.
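The hub-and-spoke model Jamulus uses (every client streams audio to one server, which sums the streams and returns the mix to everyone) can be sketched as a simple mixing function. This is an illustration of the model, not Jamulus code, and the function name is ours:

```python
def mix_frames(frames):
    """Sum one block of 16-bit samples from each client, clipping to the
    16-bit range, the way a hub server mixes before sending the mix back."""
    length = max(len(f) for f in frames)
    mix = [0] * length
    for frame in frames:
        for i, sample in enumerate(frame):
            mix[i] += sample
    # Hard-clip so the sum stays a legal 16-bit sample.
    return [max(-32768, min(32767, s)) for s in mix]

# Three clients each send a short frame; everyone receives the same sum.
print(mix_frames([[100, 200], [300, -50], [25, 25]]))
```

The interesting engineering in Jamulus is around timing (jitter buffers and low-latency audio coding); the mixing itself is essentially this simple.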

During a session, the Unpronounceables use a Slack channel to communicate with each other by text and Jamulus to time-align and mix the three audio sources and for sending the mix to each of the three locations.

Audio from Kyma

Each Unpronounceable uses a second interface to get audio from Kyma to the host computer. Robert uses a Behringer to come out of Kyma, and an IO/2 to get to his Mac. Ilker sends his MOTU Track 16 audio outputs to a Behringer; then selects the Behringer as an I/O in the Jamulus preference tab. Will uses a ZOOM H4n as his Kyma interface and sends the audio to an M-Audio Fast Track Pro which acts as the interface for Jamulus.

Ecosystemic audio in virtual rooms

Scott Miller and Pat O’Keefe’s HDPHN project has always been an exploration of what it means to be alone together — it’s a live public concert where each member of the audience wears headphones, rather than listening through speakers. When stay-at-home orders made it impossible for Scott in Otsego to meet in person with Pat in St. Paul, Minnesota, they started looking into how to move the live HDPHN performance onto the Internet.

 

When Earth Day Art Model 2020 shifted from a live to an online festival, Scott and Pat used this as an opportunity to dive in and perform HDPHN along with one of their older pieces Zeitgeist live through the Internet.

 

 

Audio from Kyma

Scott describes the audio routing for HDPHN as follows:

Pat’s mic comes over Zoom and out of my desktop headphone audio. It also goes into Kyma input 1 on my Traveller. With Zoom, I can’t get/send stereo from a live source. With two people (we did this Friday) I bring the second person in on a separate Skype/Zoom/Facetime session on another device, and into Kyma input 2. With 2 inputs, I then mathematically cross-process them in a virtual room.

I am sending Kyma’s processed/mixed output (Main 1-2) back into my desktop via a Lynx E22 audio card, going into DSP-Quattro for compression and EQ, then to the iShowU virtual audio interface: 1) to Zoom for Pat’s monitoring, and 2) to OBS and then to YouTube, synced with Pat’s Zoom video. YouTube latency was very bad and it wrecked chamber music with the duo, but it was fun for free improv with a different duo.

Live coding at a Virtual Club

On Monday, 11 May 2020, beginning at 20.30 CET, Lucretio will be live-coding in Kyma using a new Tool of his own design at The Circle UXR Zone. The Circle is a virtual club on UXR.zone — a decentralized social VR communication platform offering secure communication, free from user-tracking and ads. A spinoff project of the #30daysinVR transhumanist performance by Enea Le Fons, a UXR.zone allows participants to share virtual rooms and communicate via avatars across a range of devices: from VR headsets (the main platform) to desktop and mobile phone browsers.

The avatars of people attending the UXR events are either anonymous (robots) or follow a strict dress code based on the CVdazzle research to stress the importance of cyber camouflage via aesthetics against dystopian surveillance measures happening in the real world. Click to enter the club

The belly of the BEAST

Simon Smith (who is actually quite a slender chap at the BEAST of Birmingham) has recently been tasked with researching online collaboration platforms for the BEAST, so we asked him for some tips from the front lines. He just completed a Sound and Music Workshop with Ximena Alarcon (earlier, he helped Alarcon on a telematic performance using the JackTrip software from Stanford).

In the workshop, she also mentioned:

  • Artsmesh, a network music and performance management tool. Content creators run the Artsmesh client, which streams live media point-to-point; audiences run a light Artsmesh client to watch the shows.
  • SoundJack, a realtime communication system with adjustable quality and latency parameters. Depending on the physical distance, network capacities, network conditions, and routing, some degree of musical interaction is possible.
  • Jitsi, an open-source teleconferencing platform

Other options?

How have you been collaborating, teaching, consulting, creating during the lockdown? We’re interested in hearing your stories, solutions and experiences.

Have you used YouTube or Vimeo for live streaming with Kyma?

What’s your preferred video conferencing software for sending computer audio (Zoom, BigBlueButton, Meet)?

Have you been using remote desktop software (like Chrome Remote Desktop, Jump) to access your studio computer from home?

We welcome hearing about alternate solutions for this ongoing report.

New Pattern Generator for Kyma 7.25

Generate sequences based on the patterns discovered in your MIDI files

Symbolic Sound today released a new module that generates endlessly evolving sequences based on the patterns it discovers in a MIDI file. HMMPatternGenerator, the latest addition to the library of the world’s most advanced sound design environment, is now available to sound designers, musicians, and researchers as part of a free software update: Kyma 7.25.

Composers and sound designers are masters of pattern generation — skilled at inventing, discovering, modifying, and combining patterns with just the right mix of repetition and variation to excite and engage the attention of a listener. HMMPatternGenerator is a tool to help you discover the previously unexplored patterns hidden within your own library of MIDI files and to generate endlessly varying event sequences, continuous controllers, and new combinations based on those patterns.

Here’s a video glimpse at some of the potential applications for the HMMPatternGenerator:

 

What can HMMPatternGenerator do for you?

Games, VR, AR — In an interactive game or virtual environment, there’s no such thing as a fixed-duration scene. HMMPatternGenerator can take a short segment of music and extend it for an indeterminate length of time without looping.

Live improvisation backgrounds — Improvise live over an endlessly evolving HMMPatternGenerator sequence based on the patterns found in your favorite MIDI files.

Keep your backgrounds interesting — Have you been asked to provide the music for a gallery opening, a dance class, a party, an exercise class or some other event where music is not the main focus? The next time you’re asked to provide “background” music, you won’t have to resort to loops or sample clouds; just create a short segment in the appropriate style, save it as a MIDI file, and HMMPatternGenerator will generate sequences in that style for as long as the party lasts — even after you shut down your laptop (because it’s all generated on the Paca(rana) sound engine, not on your host computer).

Inspiration under a deadline — Need to get started quickly? Provide HMMPatternGenerator with your favorite MIDI files, route its MIDI output stream to your sequencer or notation software, and listen while it generates endless recombinations and variations on the latent patterns lurking within those files. Save the best parts to use as a starting point for your new composition.

Sound for picture — When the director doubles the duration of a scene a few hours before the deadline, HMMPatternGenerator can come to the rescue by taking your existing cue and extending it for an arbitrary length of time, maintaining the original meter and the style but with continuous variations (no looping).

Structured textures — HMMPatternGenerator isn’t limited to generating discrete note events; it can also generate timeIndex functions to control other synthesis algorithms (like SumOfSine resynthesis, SampleClouds and more) or as a time-dependent control function for any other synthesis or processing parameter. That means you can use a MIDI file to control abstract sounds in a new, highly-structured way.

MIDI as code — If you encode the part-of-speech (like verb, adjective, noun, etc) as a MIDI pitch, you can compose a MIDI sequence that specifies a grammar for English sentences and then use HMMPatternGenerator to trigger samples of yourself speaking those words — generating an endless variety of grammatically correct sentences (or even artificial poetry). Imagine what other secret meanings you could encode as MIDI sequences — musical sequences that can be decrypted only when decoded by the Kyma Sound generator you’ve designed for that purpose.

Self-discovery — HMMPatternGenerator can help you tease apart what it is about your favorite music that makes it sound the way it does. By adjusting the parameters of the HMMPatternGenerator and listening to the results, you can uncover latent structures and hyper meters buried deep within the music of your favorite composers — including some patterns you hadn’t even realized were hidden within your own music.

Remixes and mashups — Use HMMPatternGenerator to generate a never-ending stream of ideas for remixes (of one MIDI file) and amusing mashups (when you provide two or more MIDI files in different styles).

Galleries of possibilities — Select a MIDI file in the Kyma 7.25 Sound Browser and, at the click of a button, generate a gallery of hundreds of pattern generators, all based on that MIDI file. At that point, it’s easy to substitute your own synthesis algorithms and design new musical instruments to be controlled by the pattern-generator. Quickly create unique, high-quality textures and sequences by combining some of the newly-developed MIDI-mining pattern generators with the massive library of unique synthesis and processing modules already included with Kyma.

How does it work?

If each event in the original MIDI file is completely unique, then there is only one path through the sequence — the generated sequence is the same as the original MIDI sequence. Things start to get interesting when some of the events are, in some way, equivalent to others (for example, when events of the same pitch and duration appear more than once in the file).

HMMPatternGenerator uses equivalent events as pivot points — decision points at which it can choose to take an alternate path through the original sequence (the proverbial “fork in the road”). No doubt you’re familiar with using a common chord to pivot to another key; now imagine using a common state to pivot to a whole new section of a MIDI file or, if you give HMMPatternGenerator several MIDI files, from one genre to another.

By live-tweaking the strengths of three equivalence tests — pitch, time-to-next, and position within a hyper-bar — you can continuously shape how closely the generated sequence follows the original sequence of events, ranging from a note-for-note reproduction to a completely random sequence based only on the frequency with which that event occurs in the original file.
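The pivot idea can be sketched in a few lines of Python. This is not Symbolic Sound’s implementation (the module’s name suggests a Hidden Markov Model with more machinery, including the weighted equivalence tests described above); it is a minimal illustration of generating a new sequence by jumping between equivalent events. The key function stands in for the equivalence tests; here events are hypothetical (pitch, duration) pairs, and two events are equivalent when both components match:

```python
import random

def generate(events, key, length, seed=None):
    """Walk the original event list, pivoting at equivalent events
    (events with the same key) to choose alternate continuations."""
    rng = random.Random(seed)
    # Index every position in the original sequence by its equivalence key.
    index = {}
    for i, event in enumerate(events):
        index.setdefault(key(event), []).append(i)
    pos = rng.randrange(len(events))
    out = []
    for _ in range(length):
        out.append(events[pos])
        # Pivot: jump to any position whose event is equivalent to this one,
        pivots = index[key(events[pos])]
        # then continue from the event that follows it (wrapping around).
        pos = (rng.choice(pivots) + 1) % len(events)
    return out

# Events as (pitch, duration). Repeated events create forks in the road.
melody = [(60, 1), (62, 1), (64, 2), (60, 1), (67, 1), (64, 2), (62, 1)]
print(generate(melody, key=lambda e: e, length=8, seed=42))
```

Note the degenerate case described above: if every event is unique, each pivot list has exactly one entry, so the walk simply replays the original sequence (from wherever it happened to start).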

Other new features in Kyma 7.25 include:

▪ Optimizations to the Spherical Panner for 3d positioning and panning (elevation and azimuth) for motion-tracking VR or mixed reality applications and enhanced binaural mixes — providing up to 4 times speed increases (meaning you can track 4 times as many 3d sound sources in real time).

▪ Multichannel interleaved file support in the Wave editor

▪ New granular reverberation and 3d spatializing examples in the Kyma Sound Library

and more…

Availability

Kyma 7.25 is available as a free update starting today and can be downloaded from the Help menu in Kyma 7. For more information, please visit: symbolicsound.com.

Summary

The new features in the Kyma 7.25 sound design environment are designed to help you stay in the creative flow by adding automatic Gallery generation from MIDI files, and the HMMPatternGenerator module which can be combined with the extensive library of sound synthesis, pattern-generation, and processing algorithms already available in Kyma.

Background

Symbolic Sound revolutionized the live sound synthesis and processing industry with the introduction of Kyma in 1990. Today, Kyma continues to set new standards for sound quality, innovative synthesis and processing algorithms, rock-solid live performance hardware, and a supportive, professional Kyma community both online and through the annual Kyma International Sound Symposium (KISS).

For more information:

Website
Email
@SymbolicSound
Facebook


Kyma, Pacarana, Paca, and their respective logos are trademarks of Symbolic Sound Corporation. Other company and product names may be trademarks of their respective owners.

Kyma 7.12 Sound Design Software Update

New Sounds, help for learning Capytalk, plus handy new features in the Timeline, Multigrid and Virtual Control Surface

Kyma 7.12, the latest update to the world’s most advanced sound design software, is now available as a free download for the sound designers, musicians, and researchers using Kyma 7.

Kyma 7.12

Learning Capytalk — Who doesn’t want to improve their coding skills a little bit? Kyma 7.12 makes it easy to learn the Capytalk parameter-control language with the new Capytalk of the Day (CoD) feature. Each day, CoD introduces a single Capytalk message, along with documentation, coding examples, and links to Sounds that show how to use the message in parameter fields. The CoD provides gentle reminders of all those Capytalk features you may have forgotten about (along with some you may have missed along the way). If you’re new to Kyma, it’s a great way to learn a little bit of Capytalk each day; just think, by this time next year, you’ll know 365 Capytalk messages! CoD: it’s the perfect companion to the Sons du Jour!

New Sounds — Kyma is all about the sounds, and this update includes new Sounds, new Prototypes, and several improvements to existing Sounds, including a tuned van der Pol oscillator, a Tilt EQ, sample-cloud morphing on live-captured audio, and an easier-to-read, easier-to-modify vocal synthesis modal filter (see the Highlights below for additional details).

Live Performance — Several new features support live interactive sound design and musical performance including: a Timeline option that lets you substitute a recording for the live input (when you’re developing a piece that includes live acoustic performers); a Multigrid option to display subsets of controls on a secondary Virtual Control Surface (so you can split the controls among the laptop screen and your iPad), and a Tool for displaying your secondary VCS or the movie associated with a Timeline at full size on a second display.

 

Highlights of Kyma 7.12 include:

• The USO Tilt EQ is useful for gently tipping the spectral balance of the Input to emphasize the lows and de-emphasize the highs (or vice versa);

• TunedVanDerPolOscillator: The van der Pol oscillator is a model of a nonlinear vacuum tube oscillator developed by Dutch physicist Balthasar van der Pol at Philips in the 1920s. This version of the van der Pol oscillator is tuned and driven by an internal oscillator. When the Ratio of driver to the van der Pol’s natural frequency is set to 1, you can get frequencies that are stable. But you can also experiment with Ratios other than 1 and Driver waveforms other than Sine to get unusual distortion and modulation effects;

• MultiSampleCloud and Morph1DSampleCloud now have a fromMemoryWriter checkbox so you can use MemoryWriters to capture multiple streams of live audio input and live-switch or morph between them as sample clouds;

• ModalFilter Vocal Formants BPF and ModalFilter Vocal Formants KBD prototypes have had their parameter values reorganized in order to make it clearer how to change the formant frequencies and amplitudes;

• A new Timeline option to select a samples file as the input to a Track (which you can elect to read from RAM or from disk). This can be useful when developing a Timeline for live acoustic performers (since during development, you can use a recording of the live input rather than always having to generate the input live);

• A Timeline option to compress or expand durations of all selected Time bars and their associated automation;

• A new sub-layout in the Timeline Virtual Control Surface: “All Global Controls” where you can access all the Master controls in a single layout;

• Multigrid options to immediately set all Tracks to Inactive and to define an Inactive slot as either Silent or Pass-through;

• A Multigrid option to display a track layout, the all-seeing-eye, the mixer, the grid layout, or the shared controls layout on the secondary Virtual Control Surface (VCS). This can be useful when you’d like to display one of the layouts on an iPad using Kyma Control and another on the laptop screen, or when you’d like to display one set of controls for a performer or the audience and a different set of controls for yourself. There’s also a new Tools menu option to display the secondary VCS full-sized on a second display;

…and more. For more details, go to the Help menu in Kyma and check for software updates.

Availability

Kyma 7.12 is available as a free update, downloadable from the Help menu in Kyma 7.

Summary

The new features in the Kyma 7.12 sound design environment are designed to help you expand your mastery over live parameter control through Capytalk, keep your sound interactions lively and dynamic with new features in the Virtual Control Surface, Timeline and Multigrid, and to expand your sonic universe with newly developed synthesis and control algorithms that can be combined with the extensive library of algorithms already in Kyma.

Background

Symbolic Sound revolutionized the live sound synthesis and processing industry with the introduction of Kyma in 1990. Today, Kyma continues to set new standards for sound quality, innovative synthesis and processing algorithms, rock-solid live performance hardware, and a supportive, professional Kyma community both online and through the annual Kyma International Sound Symposium (KISS).

For more information:

Website
Email
Twitter
Facebook

The rocket scientist of human hearing

In 1999, astrophysicist/musician David McClain spent an intense three-month period working on The Northern Sky Survey, mapping the sky in the near infrared while getting by on an hour of sleep per night. When he finished the survey, he was suddenly struck by a viral infection that nearly killed him; his doctors were never able to determine the cause and, after three months, the infection dissipated almost as quickly as it had appeared. But afterward David noticed that he could no longer understand his wife when she was speaking to him. He went to an audiologist and discovered he had a sensorineural hearing loss of 60-70 dB in the high frequency range. Hearing aids helped him understand speech, but he was devastated to discover that music never sounded right through the hearing aids. But as a physicist, he was determined to solve the problem.

Motivated by his love of music and informed by his scientific training, McClain has spent the last 16 years developing equations to describe the entire hearing experience – from the cochlea, to the afferent 8th nerve, to processing in the central nervous system, efferent 8th nerve interactions — and developing signal processing algorithms to adapt to and compensate for his hearing loss in a way that preserves the audio experience of music. The result is a collection of signal processing algorithms he calls Crescendo. Kyma is one of the tools David uses for testing out new ideas and prototyping them for Crescendo.

Now he’s blogging about his findings on his web site http://refined-audiometrics.com. In keeping with his motto “Keeping music enjoyable for all!” David hopes that his experiences, research findings, and extensive set of algorithms can benefit others.

 

Kyma 7.1 Sound Design Software — more inspiration, more live interaction, more sounds

Kyma 7.1 is now available as a free update for sound designers, musicians, and researchers currently using Kyma 7. New features in the Kyma 7.1 sound design environment help you stay in the creative flow by extending automatic Gallery generation to analysis files and Sounds, keeping your sound interactions lively and dynamic with support for additional multidimensional physical controllers, while expanding your sonic universe to include newly developed synthesis and control algorithms that can be combined with the extensive library of algorithms already in Kyma.

Kyma 7.1 — Sound Design Inspiration

Sonic AI (Artistic Inspiration) — Need to get started quickly? Kyma 7.1 provides Galleries everywhere! Select any Sound (signal-flow patch); click Gallery to automatically generate an extensive library of examples, all based on your initial idea. Or start with a sample file, spectral analysis, PSI analysis or Sound in your library, and click the Grid button to create a live matrix of sound sources and processing that you can rapidly explore and tweak to zero-in on exactly the sound you need. Hear something you like? A single click opens a signal flow editor on the current path through the Grid so you can start tweaking and expanding on your initial design.

Responsive Control — Last year, Symbolic Sound introduced support for Multidimensional Polyphonic Expression (MPE) MIDI, which they demonstrated with Roger Linn Designs’ LinnStrument. Now, Kyma 7.1 extends that support to the ROLI Seaboard RISE; just plug the RISE into the USB port on the back of the Paca(rana) and play. Kyma 7.1 also maintains Kyma’s longstanding support for the original multidimensional polyphonic controller: the Haken Audio Continuum fingerboard. Also new with Kyma 7.1 is plug-and-play support for the Wacom Intuos Pro tablet, combining a three dimensional multitouch surface with the precision and refined motor-control afforded by the Wacom pen.

Recombinant Sound — Now you can gain entrée into the world of nonlinear acoustics, biological oscillators, chaos and more with the new, audio-rate Dynamical Systems modules introduced in Kyma 7.1. New modules include a van der Pol oscillator, Lorenz system, and Double-well potential, each of which can generate audio signals or control signals as well as being driven by other audio inputs to create delightfully unpredictable chaotic behavior.

Other new features in Kyma 7.1 include:

▪ The new Spherical Panner uses perceptual cues to give you 3d positioning and panning (elevation and azimuth) for motion-tracking VR or mixed reality applications and enhanced binaural mixes.

▪ A new 3d controller in the Virtual Control Surface provides three dimensions of mappable controls in a single aggregate fader. Also new in Kyma 7.1: three-dimensional and two-dimensional faders can optionally leave a trace or a history so you can visualize the trajectory of algorithmically generated controls.

▪ Enhanced spectral analysis tools in Kyma 7.1 provide narrower analysis bands, additional resynthesis partials, and more accurate time-stretching.

▪ The new batch spectral analysis tool for non-harmonic source material is perfect for creating vast quantities of audio assets from non-harmonic samples like textures, backgrounds, and ambiences. Once you have those analysis files, you can instantly generate a library of highly malleable additive and aggregate resynthesis examples by simply clicking the Gallery button.

▪ Nudging the dice — Once you have an interesting preset, nudging the dice can be a highly effective way to discover previously unimagined sounds by taking a random walk in the general vicinity of the parameter space. Shift-click on the dice icon or Shift+R to nudge the controller values by randomly generating values within 10% of the current settings.

▪ Generate dynamic, evolving timbres by smoothly morphing from one waveshape to another in oscillators, wave shapers, and grain clouds using new sound synthesis and processing modules: MultiOscillator, Morph3dOscillator, Interpolate3D, MultiGrainCloud, Morph3dGrainCloud, MultiWaveshaper, Morph3dWaveshaper and others.

▪ An optional second Virtual Control Surface (VCS) can display one set of images and controls for the audience or performers while you control another set of sound parameters using the primary Virtual Control Surface on your laptop or iPad.

▪ A new version of Symbolic Sound’s Kyma Control app for the Apple iPad includes a tab for activating Sounds in the Multigrid using multi-touch, plus support for 128-bit IPv6 addressing (roughly 3.4 × 10^38 possible addresses).

▪ Kyma 7.1 provides enhanced support for physical and external software control sources in the form of incoming message logs for MIDI and OSC as well as an OSC Tool for communicating with devices that have not yet implemented Kyma’s open protocol for bi-directional OSC communication.

▪ New functionality in Kyma’s real-time parameter control language, Capytalk, includes messages for auto-tuned voicing and harmonizing within live-selectable musical scales along with numerous other new messages. (For full details open the Capytalk Reference from the Kyma Help menu).
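The “nudging the dice” feature listed above (randomly generating values within 10% of the current settings) amounts to a bounded random walk over a preset. Here is a minimal sketch, assuming “within 10%” means relative to each current value; the parameter names are made up for illustration:

```python
import random

def nudge(preset, fraction=0.1, seed=None):
    """Return a copy of a preset with each controller value perturbed
    by a uniform random amount within +/- fraction of its current value."""
    rng = random.Random(seed)
    return {name: value + value * rng.uniform(-fraction, fraction)
            for name, value in preset.items()}

# Hypothetical controller values; each nudge stays near the original preset.
preset = {'Frequency': 440.0, 'Amplitude': 0.8, 'Ratio': 1.5}
print(nudge(preset))
```

Repeatedly nudging from each result (rather than from the original preset) turns this into the random walk through parameter space described above.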

Today, Kyma continues to set new standards for sound quality, innovative synthesis and processing algorithms, rock-solid live performance hardware, and a supportive, professional Kyma community both online and through the annual Kyma International Sound Symposium (KISS).

For more information:

“What’s new in Kyma 7.1” presentation at KISS2016
Website
Email
@SymbolicSound
Facebook

NeverEngine Labs announces multicycle wavetable tools for Kyma 7

NeverEngine Labs (Cristian Vogel and Gustav Scholda) have announced a new set of classes and tools for creating and manipulating multicycle wavetables and audio files with embedded markers in Kyma 7. The new ROM Tools can be used to create multisample players and to prepare audio files for morphing oscillators, grain envelopes, and wavetable libraries.

Available from: http://www.cristianvogel.com/neverenginelabs/product/rom-tools

Kyma 7 support for LinnStrument and MPE

Kyma 7 now offers plug-and-play support for Roger Linn Design’s LinnStrument and other MPE-enabled MIDI instruments. Kyma automatically puts the LinnStrument into MPE mode when you connect it via USB-MIDI or 5-pin DIN MIDI (or through your computer, using Delora Software’s Kyma Connect). Once connected, any keyboard-controlled Sound in Kyma automatically sets its polyphony and responds to the LinnStrument. No extra controllers are needed, and you don’t have to select a special mode on the LinnStrument: you just plug it in and play.

What is MPE?

Traditional MIDI note events have two dimensions — pitch and velocity — neither of which can be altered directly with the fingers once the key has gone down. But musicians performing with live electronics are driving the demand for new electronic instruments — instruments whose touch, reliability, sensitivity, and responsiveness can begin to approach those of traditional acoustic instruments.

Over the last 10-15 years, more and more instrument makers have sought to incorporate continuous control over pitch and velocity and to add a third dimension of continuous control: timbre. One of the earliest entries in this new category was the Continuum fingerboard from Haken Audio (which has had plug-and-play support in Kyma since 2001). More recently, Madrona Labs (Soundplane), Eigenlabs (Eigenharp), ROLI (Seaboard), and Roger Linn Design (LinnStrument) have been offering “keyboard-like” instruments that provide three dimensions of expressive, continuous control per finger.

But how is it possible to send these three-dimensional, continuous, polyphonic notes to a sound engine? Haken Audio first used a FireWire protocol before switching over to a proprietary, optimized MIDI protocol. Symbolic Sound and Madrona Labs used Open Sound Control (OSC) for Kyma Control and the Soundplane, respectively. But the growing proliferation of new instruments and proprietary protocols was threatening to become a nightmare for software and hardware synthesizer makers to support.

Enter software developer Geert Bevin, who in January of this year began working with key industry professionals on a new, more expressive MIDI specification called MPE: Multidimensional Polyphonic Expression. MPE has already been implemented on Roger Linn Design’s LinnStrument, the Madrona Labs Soundplane, and the ROLI Seaboard RISE, and several other instrument makers are currently adding an MPE mode to their instruments.

With MPE, the music industry now has a standard protocol for communicating between expressive controllers and the sound hardware and software capable of sonically expressing the subtlety, responsiveness, and live interaction offered by these controllers.
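The core idea MPE standardizes can be sketched in a few lines: each sounding note is assigned its own MIDI channel, so ordinary per-channel messages (pitch bend, channel pressure, CC74) become per-note controls. The byte layout below is standard MIDI and the channel numbering follows the MPE convention of member channels 2–16, but the allocator itself is a simplified illustration, not any particular instrument's implementation:

```python
# Minimal sketch of the MPE idea: round-robin channel-per-note allocation,
# so bending one note leaves the others untouched.

def note_on(channel, note, velocity):
    # channel is 1-based; MIDI note-on status byte is 0x90 | (channel - 1)
    return bytes([0x90 | (channel - 1), note, velocity])

def pitch_bend(channel, bend_14bit):
    # 14-bit bend value split into two 7-bit bytes, LSB first; 8192 = center
    return bytes([0xE0 | (channel - 1), bend_14bit & 0x7F, (bend_14bit >> 7) & 0x7F])

class MPEAllocator:
    """Round-robin allocator over MPE member channels 2-16."""
    def __init__(self, first=2, last=16):
        self.channels = list(range(first, last + 1))
        self.next = 0
        self.active = {}  # note number -> channel carrying it

    def note_on(self, note, velocity):
        ch = self.channels[self.next % len(self.channels)]
        self.next += 1
        self.active[note] = ch
        return note_on(ch, note, velocity)

    def bend(self, note, bend_14bit):
        # per-note expression: only this note's channel receives the bend
        return pitch_bend(self.active[note], bend_14bit)

alloc = MPEAllocator()
msg1 = alloc.note_on(60, 100)       # middle C lands on member channel 2
msg2 = alloc.note_on(64, 90)        # E lands on member channel 3
bend = alloc.bend(60, 8192 + 1024)  # bends only the C
```

A real MPE implementation also reclaims channels on note-off and reserves a master channel for zone-wide messages; the sketch omits both for brevity.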

Kyma — Interactive, responsive, and live

Kyma, with its legendary audio quality, vast synthesis codebase, and deep access to detailed parameter control, is the ideal sound engine to pair with these new, more responsive controller interfaces for live expressive performance. Symbolic Sound has a long history of working with instrument makers to provide tight, seamless integration and bi-directional communication between these new instruments and Kyma.

In addition to its graphical signal flow editor, file editors, and Sound Library, Kyma 7 also provides several environments in which you can create an instrument where the synthesis, processing, parameter-mapping, and even the mode of interaction can evolve over time during a performance:

  • In the Multigrid (displayed on the iPad during the video), you can switch instantly between sources, effects, and combinations of the two with no interruption in the audio signal. Perform live, inspired in the moment, with infinite combinatorial possibilities.
  • In the Kyma 7 Timeline you can slow down or stop the progression of time to synchronize your performance with other performers, with key events, or with features extracted from an audio signal during your performance.
  • Using a Tool, you can create a state machine in which input conditions trigger the evaluation of blocks of code (for example, the Game of Life displayed on the LinnStrument during the closing credits of the video is controlled by a Tool).
  • Kyma also provides a real-time parameter language called Capytalk, with which you can make parameters depend on one another or control subsets of parameters algorithmically.
  • It’s easy to add a new parameter control: simply type the desired controller name preceded by an exclamation point, and a control is automatically created for you. It even generates its own widget in the Virtual Control Surface, which can be remapped to external controllers (through MIDI, 14-bit MIDI, or OSC). This makes it easy to augment your live MPE controllers with other MIDI and OSC controllers or with tablet controller apps.
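For a sense of what an OSC remapping involves at the wire level, here is a dependency-free sketch that builds a single-float OSC message by hand, following the OSC 1.0 framing rules. The address `/vcs/Frequency/1` and the port are hypothetical placeholders for illustration, not Kyma's documented paths; the incoming-message log is the place to see the actual addresses in use:

```python
import struct

# Build a raw OSC message: padded address pattern, padded type-tag
# string ",f", then one big-endian 32-bit float argument (OSC 1.0).

def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated, then padded to a 4-byte boundary
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    return osc_pad(address.encode("ascii")) + osc_pad(b",f") + struct.pack(">f", value)

# '/vcs/Frequency/1' is a made-up widget address for this sketch
packet = osc_message("/vcs/Frequency/1", 0.5)
# To send over UDP:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, (host, port))
```

In practice a library such as python-osc handles this framing for you; the point of the sketch is that an OSC control message is just an address, a type tag, and a value, which is what makes remapping widgets to arbitrary external controllers straightforward.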

More information

Multidimensional Polyphonic Expression (MPE)
expressiveness.org

LinnStrument
rogerlinndesign.com

Kyma 7
symbolicsound.com

Kyma at ICMC2015

Kyma had a strong presence at the 2015 International Computer Music Conference in Denton, Texas, September 25 – October 1, including live performances by Jeffrey Stolet, Wang Chi, Jon Bellona, and Sun Hua; a keynote lecture by Symbolic Sound president Carla Scaletti; a one-hour Kyma workshop, also presented by Scaletti, with new music pioneer Larry Austin in the audience; and fixed media pieces by Fred Szymanski and Jinshuo Feng. (If we’ve left anyone out, please let us know!)

Thanks to the ICMC 2015 organizers, presenters, and composers!

Special thanks to the ICMC organizers, Wang Chi, Sun Hua, and Jon Bellona for the photos and Iacopo Sinigaglia for the video excerpt.