On November 28, 2022, at the Prater in Vienna, the audience will board a Luftwaggon (gondola) of the Wiener Riesenrad ferris wheel to experience Bruno Liberda’s new composition still-kreisen-drehen-stehn / frieren die glockentöne am eingebildeten eis (still-circling-turning-standing / bell tones freeze on imaginary ice) for double carillon and Kyma.
Two carillons are played live and fed through Kyma — repeating, turning, or standing still through various granulations, feedbacks, ring modulations, pitch deteriorations, moving reverbs and more — creating a frosty new soundscape, while the public has a moving view over Vienna.
The instruments are artworks in themselves: fully functional carillons created by the composer, Bruno Liberda.
28.11.2022, Wiener Riesenrad, Riesenradplatz 1, 1020 Wien, Austria
1st performance: 18:00
2nd performance: 19:00
Four ferris wheel wagons serve as floating, circling stages for works by:
Bruno Liberda, Masao Ono, Anita Steinwidder, Christine Schörkhuber, Verena Dürr, Sophie Eidenberger, Stefanie Prenn.
And he lets it go on,
everything as it will;
he turns, and his hurdy-gurdy
never stands still.
(Wilhelm Müller, 1824)
I am Violet the Organ Grinder
And I grind all the live long day
I live for the organ, that I am grinding
I'll die, but I won't go away
(Prince, 1991)
Kyma 7 now offers plug-and-play support for Roger Linn Design’s LinnStrument and other MPE-enabled MIDI instruments. Kyma automatically puts the LinnStrument into MPE mode when you connect it via USB-MIDI or MIDI 5-pin DIN (or via your computer, using Delora Software’s Kyma Connect). Once connected, any keyboard-controlled Sound in Kyma automatically sets the polyphony and responds to the LinnStrument — no extra controllers are needed, and you don’t have to select a special mode on the LinnStrument — so you just plug it in, and play.
What is MPE?
Traditional MIDI note events have two dimensions — pitch and velocity — neither of which can be altered directly with the fingers once the key has gone down. But musicians performing with live electronics are driving the demand for new electronic instruments — instruments whose touch, reliability, sensitivity, and responsiveness can begin to approach those of traditional acoustic instruments.
Over the last 10-15 years, more and more instrument makers have sought to incorporate continuous control over pitch and velocity and to add a third dimension of continuous control: timbre. One of the earliest entries in this new category was the Continuum fingerboard from Haken Audio (which has had plug-and-play support in Kyma since 2001). More recently, Madrona Labs (Soundplane), Eigenlabs (Eigenharp), ROLI (Seaboard), and Roger Linn Design (LinnStrument) have been offering “keyboard-like” instruments that provide three dimensions of expressive, continuous control per finger.
But how is it possible to send these three-dimensional, continuous, polyphonic MIDI notes to a sound engine? Haken Audio first used a FireWire protocol before switching over to a proprietary, optimized MIDI protocol. Symbolic Sound and Madrona Labs used Open Sound Control (OSC) for Kyma Control and the Soundplane, respectively. But the growing proliferation of new instruments and proprietary protocols was threatening to become a support nightmare for software and hardware synthesizer makers.
Enter software developer Geert Bevin, who, in January of this year, started working with key industry professionals on a new, more expressive MIDI specification called MPE: Multidimensional Polyphonic Expression. The new MPE standard has already been implemented on Roger Linn Design’s LinnStrument, the Madrona Labs Soundplane, and the ROLI Seaboard Rise, and several other instrument makers are currently in the process of adding an MPE mode to their instruments.
With MPE, the music industry now has a standard protocol for communicating between expressive controllers and the sound hardware and software capable of sonically expressing the subtlety, responsiveness, and live interaction offered by these controllers.
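To make the “three dimensions per finger” idea concrete, here is a minimal Python sketch of what an MPE sender puts on the wire. It assumes the common lower-zone layout (master channel 1, member channels 2-16, shown 0-indexed) and is an illustration of the protocol idea only, not Kyma’s or any instrument maker’s actual implementation.

```python
# Sketch of MPE on the wire: each sounding note gets its own MIDI "member"
# channel, so channel-wide messages (pitch bend, channel pressure, CC 74)
# become per-note controls.

def note_on(channel, note, velocity):
    """Start a note on its own member channel."""
    return bytes([0x90 | channel, note, velocity])

def per_note_pitch(channel, bend):
    """14-bit pitch bend on the note's channel; 8192 means no bend."""
    return bytes([0xE0 | channel, bend & 0x7F, (bend >> 7) & 0x7F])

def per_note_pressure(channel, pressure):
    """Channel pressure on the note's channel: continuous 'velocity'."""
    return bytes([0xD0 | channel, pressure])

def per_note_timbre(channel, value):
    """MPE's third dimension: CC 74 on the note's channel."""
    return bytes([0xB0 | channel, 74, value])

# One finger sliding and pressing on a pad affects only its own note:
ch = 1  # member channel 2 in 1-indexed MIDI terms
gesture = [
    note_on(ch, 60, 100),            # middle C
    per_note_pitch(ch, 8192 + 600),  # slide slightly sharp
    per_note_pressure(ch, 90),       # press harder
    per_note_timbre(ch, 40),         # move the finger forward/back
]
```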
Kyma — Interactive, responsive, and live
Kyma, with its legendary audio quality, vast synthesis codebase, and deep access to detailed parameter control, is the ideal sound engine to pair with these new, more responsive controller interfaces for live expressive performance. Symbolic Sound has a long history of working with instrument makers to provide tight, seamless integration and bi-directional communication between these new instruments and Kyma.
In addition to its graphical signal flow editor, file editors, and Sound Library, Kyma 7 also provides several environments in which you can create an instrument where the synthesis, processing, parameter-mapping, and even the mode of interaction can evolve over time during a performance:
In the Multigrid (displayed on the iPad during the video), you can switch instantly between sources, effects, and combinations of the two with no interruption in the audio signal. Perform live, inspired in the moment, with infinite combinatorial possibilities.
In the Kyma 7 Timeline, you can slow down or stop the progression of time to synchronize your performance with other performers, with key events, or with features extracted from an audio signal during the performance.
Using the Tool, you can create a state machine in which input conditions trigger the evaluation of blocks of code (for example, the game-of-life displayed on the LinnStrument during the closing credits of the video is being controlled by a Tool).
Kyma also provides a realtime parameter language called Capytalk, in which you can make parameters depend on one another or control subsets of parameters algorithmically.
It’s easy to add a new parameter control: simply type the desired controller name preceded by an exclamation point, and a control is created for you automatically. It even generates its own widget in a Virtual Control Surface, which can be remapped to external controllers (through MIDI, 14-bit MIDI, or OSC). This makes it easy to augment your live MPE controllers with other MIDI and OSC controllers or with tablet controller apps.
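As a hedged illustration of the OSC remapping just described, the Python sketch below (using the python-osc package) pushes a value to a named control from an external sender. The IP address, port, and OSC address pattern are placeholder assumptions for the example, not Kyma’s documented defaults; substitute whatever your own setup uses.

```python
# Hypothetical example: sending a value to a control created by typing
# !Brightness into a parameter field. Host, port, and address pattern
# are assumptions for illustration only.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.50", 8000)   # assumed sound-engine address
client.send_message("/Brightness", 0.75)         # assumed address for !Brightness
```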
Jeffrey Agrell, educator/performer/composer and author of Improvisation Games for Classical Musicians, writes in his blog about a new experience he had recently: he performed with Mike Wittgraf, who was processing his signal through Kyma and controlling the processing with a Wiimote + Nunchuk game controller. Here’s an excerpt:
Tobias Enhus’ Santa Monica, California-based film-scoring studio is featured in the November 2013 issue of STUDIO magazine. You can get a preview of the article through this video in which Enhus gives a demo tour of his unique collection of gear (including a rack with three Pacaranas) presented in Swedish and the universal language of audio gear, all to the soft accompaniment of the glassy, metallic, vocal, analog electronics that have become his signature sound. Near the end of the video, Enhus does an impromptu performance with Max Mathews’ Radio Baton controlling vocal resynthesis in Kyma!
When not composing for film, television, games & advertising, Tobias Enhus enjoys a bit of cave diving & sleep walking.
The article describes how Tobias was born in Sweden and began by following in his father’s footsteps as a construction engineer before changing course to pursue his true passion: music and sound design. Now a successful film composer and sound designer in Hollywood, he has what he describes as a real monster in his sound design studio: “This is my audio playground,” Tobias says, referring to his Kyma system, a sound-design programming language considered by some to be the most powerful sound design tool available. His Kyma system (the 3-Pacarana rack is among the world’s largest sound computing clusters), along with his Synclavier and analog synthesizer modules, has laid the technical foundation for his success in Los Angeles; his composing credits include the films Narc and the soon-to-be-released feature film Sisterhood of Night, the television series Top Gear, and the video game Spider-Man 3, as well as sound design and music composition for numerous ads for companies like Mercedes and Coca-Cola.
The article is full of photos, anecdotes, advice, and insights on the life of a professional composer and sound designer in LA. And it’s an inspiring story for anyone who feels they are expected to take one path in life and is seeking the courage to risk it all in order to follow their dreams.
Chi Wang at opening ceremony of Musicacoustica-Beijing (Xinhua/Luo Xiaoguang)
Two live electronic pieces for Kyma and game controllers were selected to be part of the opening ceremonies for this year’s Musicacoustica-Beijing festival on October 22, 2012: Chi Wang’s Sound Motion for Kyma and Kinect and Jeffrey Stolet’s Lariat Rituals for Kyma and Gametrak controller.
Chi Wang’s Sound Motion is a multichannel interactive composition that uses Processing to analyze data captured from the performer’s movement in space; that data stream is then used to control recorded, synthesized, and modified sound in Kyma.
In Jeffrey Stolet’s Lariat Rituals, fine positioning of the Gametrak in 3-space controls formants and other parameters of a synthesized male voice (as seen in this video).
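Purely as an illustration of this kind of gestural mapping (not the composers’ actual patches), a controller’s normalized 3-D position might be scaled into formant parameters and forwarded to a sound engine over OSC. Every address, range, and mapping in the sketch below is an assumption made for the example.

```python
# Illustrative mapping sketch: normalized controller position (x, y, z in 0..1)
# scaled into formant frequencies and an amplitude, then sent over OSC.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)  # assumed sound-engine address

def send_position(x, y, z):
    f1 = 250 + x * 750      # first formant, roughly 250-1000 Hz
    f2 = 800 + y * 1700     # second formant, roughly 800-2500 Hz
    amp = z                 # depth dimension mapped to loudness
    client.send_message("/formant1", f1)
    client.send_message("/formant2", f2)
    client.send_message("/amplitude", amp)

send_position(0.3, 0.6, 0.9)
```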
Following the festival, Stolet and Wang spent two weeks presenting seminars and lectures on Kyma at conservatories throughout China.
Jeffrey Stolet performs Lariat Rituals at KISS2012 in St. Cloud, MN
Sarth Calhoun’s introspective piece for a rainy Brooklyn evening (in a similar vein to the Metal Machine Trio tour he did with Lou Reed and Ulrich Krieger). Performed entirely on the Continuum fingerboard using Kyma and Ableton Live amp modelers, it evokes the sense of conflict that can arise when you feel you need to walk a road that only you can see. Recorded as a single take, with no edits.
The Kyma International Sound Symposium is four inspiring days and nights filled with sound design, ideas, discussions, and music, and it offers a wide range of opportunities to increase your Kyma mastery: from introductory master classes to hands-on question-and-answer sessions; from thought-provoking presentations to inspiring concerts and after-hours discussions with new-found friends and colleagues.
This year’s symposium, KISS2012, will be held on the banks of the mighty Mississippi River, September 13-16, organized by the St. Cloud State University School of the Arts and Symbolic Sound. The KISS2012 theme, reel time || real time, puts the spotlight on reel time (sound for picture), real time (live performance), and every timescale in between, including sound design for games, live cinema, live improvisation ensembles, live performances from a score, sound design for live theatre, live signal generation for speech and hearing research, interactive data sonification, interactive sound art, and more!
The culmination of two years of research and development, Kyma X.82, a new software update for the Kyma X/Pacarana sound synthesis engine, is specifically designed to take advantage of the expressive capabilities and extended control offered by today’s new crop of alternative controllers and cutting edge musical interface designs.
The recent explosion of interest in new musical interfaces and alternative controllers for sound design and music has created a need for sound synthesis and processing engines that can take full advantage of the increased bandwidths, higher resolution, lower latencies, continuous pitch and velocity values, and subtle expressive capabilities of these new controllers. Symbolic Sound has a long history of supporting alternative and extended controllers in Kyma X, and its newest release, Kyma X.82, introduces several additional features to support these innovative musical interfaces.
Features in Kyma X.82 include over 20 new morphing sound synthesis algorithms, support for 14-bit MIDI controllers, and the publication of Kyma’s OSC protocol to support and inspire future developments of new instruments and controllers that can exploit Kyma’s responsive, high-resolution sound synthesis and processing algorithms in a seamless, plug-and-play manner.
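For readers unfamiliar with 14-bit MIDI controllers, the sketch below shows the standard coarse/fine pairing: the most significant byte travels on CC n (0-31) and the least significant byte on CC n+32, giving 16,384 steps instead of 128. The channel and CC number chosen here are arbitrary examples, not values specific to Kyma.

```python
# Standard 14-bit MIDI controller pair: MSB on CC n, LSB on CC n+32.

def cc_14bit(channel, cc_msb, value14):
    """Encode a 0..16383 value as a coarse/fine controller message pair."""
    msb = (value14 >> 7) & 0x7F
    lsb = value14 & 0x7F
    return [
        bytes([0xB0 | channel, cc_msb, msb]),       # coarse (MSB)
        bytes([0xB0 | channel, cc_msb + 32, lsb]),  # fine (LSB)
    ]

# Example: a fader on CC 1 at about three-quarters of its travel:
messages = cc_14bit(0, 1, 12288)
```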
Whether you are a sound designer performing expressive creature voices to picture, an electronic musician performing live on stage with alternative controllers, or a composer using physical controllers to create dense multi-layered textures of sound in the studio, you will be able to take advantage of Kyma X.82’s ease of parameter-mapping, low latency, high-resolution parameters, and legendary sound quality. Additional features of the new release include enhanced multichannel panning and effects, higher quality spectral analysis, and a 40% speedup in the software executing on the host computer.
Sound and Video Examples
3d Morph on iPad
One of the new Morph3d objects morphing among a re-synthesized Tuvan singer, bongo, flute, angry cat, female voice, violin, cat meow, and shakuhachi, controlled from Kyma Control on an iPad.
Morphadasical
The foreground ‘melody’ is performed live on a Continuum Fingerboard, using KeyTimbre (near/far) and KeyVelocity (pressure) to morph between re-synthesized violin, trombone and flute. In the background, Kyma is generating the Sax/Flute morph pattern.
Medieval Miasma
The key-mapped spectrum of an organ is re-synthesized through a FormantBank with a slowly changing formant. The voice is a key-mapped spectral analysis/resynthesis using sine wave oscillators.
Peace Flute
A key-mapped flute spectrum is re-synthesized with a time-stretched attack and played on the Tonnetz in Kyma Control.
Spectres
A re-synthesized voice morphing to re-synthesized bowed glass performed on the Kyma Control keyboard. In the background, a key-mapped piano spectrum performed on a standard MIDI keyboard is re-synthesized through a FilterBank with vinyl clicks as the input to the filter.
Cloud Cadence
A key-mapped CloudBank on a set of piano samples, performed on a standard MIDI keyboard.
PNO Squeal
Key-mapped piano spectra re-synthesized by a FormantBank, played on a standard MIDI keyboard with the ModWheel controlling the formant to create the ‘squeals’.