Kyma Sound design studies at CMU

Did you know that you could study for a degree in sound design and work with Kyma at Carnegie Mellon University? Joe Pino, professor of sound design in the School of Drama at Carnegie Mellon University, teaches conceptual sound design, modular synthesis, Kyma, film sound design, ear training and audio technology in the sound design program.

Sound design works in the spaces between reality and abstraction. Sounds are less interesting as a collection of triggers for giving designed worlds reality; they are more effective when they trigger emotional responses and remembered experiences.

Thinking through sound — Ben Burtt and the voice of WALL-E

Ben Burtt was recently awarded the 2024 Vision Award Ticinomoda; the award citation describes him as:

“Ben Burtt is a pioneer and visionary who has fundamentally changed the way we perceive sound in cinema.”

In this interview, Burtt shares some of his experiences as Sound Designer for several iconic films, including his discovery of “a kind of sophisticated music program called the Kyma” which he used in the creation of the voice of WALL-E.

The interviewer asked Ben about the incredible voices he created for WALL-E and EVE:

Well, Andrew Stanton, who was the creator of the WALL-E character in the story, apparently jokingly referred to his movie as the R2-D2 movie. He wanted to develop a very affable robot character that didn’t speak, or had very limited speech that was mostly sound effects of its body moving and a few strange kinds of vocals, and someone (his producer, I think — Jim Morris) said, well, why don’t you just talk to Ben Burtt, the guy who did R2-D2, so they got in touch with me.

Pixar is in the Bay Area (San Francisco) so it was nearby, and I went over and looked at about 10 minutes that Andrew Stanton had already put together with just still pictures — storyboards of the beginning of the film where WALL-E’s out on his daily work activities boxing up trash and so on and singing and playing his favorite music, and of course I was inspired by it and I thought well here’s a great challenge and I took it on.

This was a few months before they had to actually greenlight the project. I didn’t find this out until later but there was some doubt at that time about whether you could make a movie in which the main characters don’t really talk in any kind of elaborate way; they don’t use a lot of words. Would it sustain the audience’s interest? The original intention in the film that I started working on was that there was no spoken language in the film that you would understand at all; that was a goal at one point…

So I took a little bit of the R2 idea to come up with a voice where human performance would be part of it but it had to have other elements to it that made it seem electronic and machine-like. But WALL-E wasn’t going to Beep and Boop and Buzz like R2; it had to be different, so I struggled along trying different things for a few months and trying different voices — a few different character actors. And I often ended up experimenting on myself because I’m always available. You know it’s like the scientist in his lab takes the potion because there’s no one else around to test it: Jekyll and Hyde, I think that’s what it is. So I took the potion and turned into Mr Hyde…

Photo from Comicon by Miguel Discart (Bruxelles, Belgique). This file is licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license.

The idea was to always give the impression of what WALL-E was thinking through sound…

But eventually it ended up that I had a program — it was a kind of sophisticated music program called the Kyma and it had one sound in it — a process where it would synthesize a voice but it [intentionally] didn’t do very well; the voice had artifacts that had funny distortions in it and extra noises. It didn’t work perfectly as a pure voice but I took advantage of the fact that the artifacts and mistakes in it were useful and interesting and could be used, and I worked out a process where you could record sounds, starting with my own voice, and then process them a second time and do a re-performance where, as it plays back, you can stretch or compress or repitch the sounds in real time.

So you can take the word “Wall-E” and then you could make it have a sort of envelope of electronic noise around it; it gave it a texture that made it so it wasn’t human and that’s where it really worked. And of course it was in combination with the little motors in his arms and his head and his treads — everything was part of his expression.

The idea was to always give the impression of what WALL-E was thinking through sound — just as if you were playing a musical instrument and you wanted to make little phrases of music which indicated the feeling for what kind of communication was taking place.

True Crime

In early 2025, listen closely to the soundtrack of Lady of the Dunes, the new true crime series airing on the Oxygen Network. The series looks at a recently solved 50-year-old cold case.

The director wanted the music to have a haunting quality to tie into the mystery and brutality of the case. That’s when they called on composer/editor John Balcom.

Balcom started out by recording plaintive, mysterious sounds, like wine glasses and a waterphone. Then he took sections from those recordings and treated them using Kyma’s Multigrid. There was a performative aspect to all of this; he would manipulate the sounds going into Kyma as well as adjust them in real time in the Multigrid, resulting in some disturbingly haunting textures. Balcom then incorporated those textures into full compositions.

Here are some audio examples of the textures he created using Kyma, along with a couple of tracks that incorporate those textures (Warning: the following clips contain haunting sounds and may be disturbing for all audiences. Listener discretion is advised! ⚠️ ):

 

Here’s a screen shot of one of Balcom’s Multigrids:

Balcom concludes by saying: “The music would not have been the same without Kyma!”

Vision of things unseen

Motion Parallels, a collaboration between composer James Rouvelle and visual artist Lili Maya, was developed earlier this year in connection with a New York Arts Program artist’s residency in NYC.

For the live performance, Rouvelle compressed and processed the output of a Buchla Easel through a Kyma Timeline; on a second layer, the Easel was sampled, granulated and spectralized, then played via a MIDI keyboard. The third layer of Sounds was entirely synthesized in Kyma and played via Kyma Control on the iPad.

For their performative works, Rouvelle and Maya develop video and audio structures that they improvise through. Both the imagery and sound incorporate generative/interactive systems, so the generative elements can respond to the performers, and vice versa.

Rouvelle adds, “I work with Kyma every day, and I love it!”

Soundtrack for a Lost World

Performing with Kyma and a huge (but very soft) pipe organ, Franz Danksagmüller generated a live soundtrack for the silent film The Lost World (think silent-film Jurassic Park from the 1920s) on 25 May 2024 at the National Radio Symphony Concert Hall in Katowice, Poland for an audience of 1600 people.

Franz Danksagmüller monitoring the VCS, the film, the Multigrid (on the iPad) and his pencil notes

The Polish National Radio Symphony Orchestra’s concert hall in Katowice seats an audience of 1800 people

In a gravity-defying stunt, Danksagmüller placed two mics within the swell boxes of both Manual V and Manual III, at the apex of the organ pipes, so he could process the pipe organ through Kyma without danger of feedback. He and his sound engineer worked until 2:30 am fine-tuning the setup.

 

Poster for the film “The Lost World” (1925)*, First National Pictures. Public Domain, https://commons.wikimedia.org/w/index.php?curid=12690702

* Note that, according to Wikipedia, “This historical image is not a factually accurate dinosaur restoration. Reason: Pronated hands, real T. rex did not have more than two fingers (unlike in this image), outdated posture, tail dragging, lack of possible feathers”

Kyma consultant Alan Jackson’s studio

Alan Jackson’s studio — designed for switching among a variety of sources, controllers, and ease of portability

Musician and Kyma consultant Alan Jackson, known for his work with Phaze UK on the Witcher soundtrack, has wired his studio with an eye toward flexibility, making it easy for him to choose among multiple sources, outputs, and controllers, and to detach a small mobile setup so he can visit clients in person and even continue working in Kyma during the train ride to and from his clients’ studios.

Jackson’s mobile studio sessions extend to the train to / from a gig

In his studio, his Pacamara Ristretto has a USB connection to the laptop and is also wired with quad analog in / out to the mixer. That way, Jackson can choose on a whim whether to route the Ristretto as an aggregate device through the DAW or do everything as analog audio through the mixer. Two additional speakers (not shown) are at the back of the studio and his studio is wired for quad by default.

The Faderfox UC4 is a cute and flexible MIDI controller plugged into the back of the Pacamara, ready at a moment’s notice to control the VCS, and a small Wacom tablet is plugged in and stashed to the left of the screen for controlling Kyma.

Jackson leaves his Pacamara on top so he can easily disconnect 5 cables from the back and run out the door with it… which leads to his “mobile” setup.

The Pacamara and its travel toiletry bag

Jackson’s travel setup is organized as a kit-within-a-kit.

The red inner kit is what he grabs when he just needs a minimal battery-powered Kyma setup (e.g., for developing stuff on the train). It includes:

  • a PD battery (good for about 3 hours when used with a MOTU Ultralite, longer with headphones)
  • a pair of tiny Sennheiser IE4 headphones
  • a couple of USB cables, and an Ethernet cable


The outer kit adds:

  • a 4 in / 4 out bus-powered Zoom interface
  • mains power for the Pacamara
  • more USB cables
  • a PD power adaptor cable, so he can run the MOTU Ultralite off the same battery as the Pacamara
  • a clip-on mic
  • the WiFi aerial

If you have any upcoming sound design projects you’d like to discuss, visit Alan’s Speakers on Strings website.

When not solving challenges for game and film sound designers, Alan performs his own music for live electronics.

Alan Jackson’s setup for a recent live improvisation for salad bowl and electronics

Out of the fire, into the Kyma consultancy

It’s not every day that a consultant/teacher gets a closing screen credit on a Netflix hit series, but check out this screenshot from the end of Witcher E7: Out of the Fire, into the Frying Pan:

Screenshot of the closing credits for Witcher E7 crediting sound designers from Phaze UK and Alan Jackson, Kyma consultancy

Kyma consultant Alan M. Jackson has been working with sound designers Matt Collinge, Rob Prynne and Alyn Sclosa from Phaze UK for a while now, and the students surprised their sensei with an end credit on The Witcher.

Kyma Consultancy – Alan M Jackson

Alan and his small band of sound designers meet regularly to share screens and help each other solve sound design challenges during hands-on sessions that last one to two hours. Not only are these sessions an opportunity to deepen Kyma knowledge; they’re also an excuse to bring the sound design team together and share ideas. Alan views his role as a mentor or moderator, insisting that the sound designers do the hands-on Kyma work themselves.

There’s no syllabus; it’s the sound designers who set the agenda for each class, sometimes aiming to solve a specific problem, but more often aiming to come up with a general solution to a situation that comes up repeatedly. They’ve taken to calling these solutions “little Kyma machines” that will generate an effect for them, not just for this project but for future projects as well. Over time, they’ve created several reusable, custom “machines”, among them: ‘warping performer’, ‘doppler flanger’, ‘density stomp generator’, and ‘whisper wind’.

Rob Prynne is particularly enamored of a custom patch based on the “CrossFilter” module in Kyma (based on an original design by Pete Johnston), sometimes enhanced by one of Cristian Vogel’s NeverEngine formant-shifting patches. In an interview with A Sound Effect, Rob explains, “It’s like a convolution reverb but with random changes to the length and pitch of the impulse response computed in real-time. It gives a really interesting movement to the sound.” According to the interview, it was the tripping sequence in episode 7 where the CrossFilter really came into play.
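To make that description a little more concrete, here is a rough Python/NumPy sketch of the general recipe Rob describes. It is not Pete Johnston’s CrossFilter or the Phaze UK patch; it is only an illustration, under the assumption that “random changes to the length and pitch of the impulse response” can be crudely mimicked offline by randomly resampling the impulse response for each block of input before convolving.

```python
# Minimal sketch (not the Kyma CrossFilter): re-convolve successive blocks of
# the input with a randomly resampled impulse response, so the "reverb" keeps
# changing length and pitch as the sound plays.
import numpy as np

def random_crossfilter(dry, ir, sr=48000, block_s=0.25, seed=0):
    rng = np.random.default_rng(seed)
    block = int(block_s * sr)
    out = np.zeros(len(dry) + len(ir) * 2)
    for start in range(0, len(dry), block):
        seg = dry[start:start + block]
        # Random "length/pitch" change: resample the IR by a factor in [0.5, 2.0].
        factor = 2.0 ** rng.uniform(-1.0, 1.0)
        idx = np.arange(0, len(ir) - 1, factor)
        ir_warped = np.interp(idx, np.arange(len(ir)), ir)
        wet = np.convolve(seg, ir_warped)
        out[start:start + len(wet)] += wet
    return out[:len(dry) + len(ir)] / (np.max(np.abs(out)) + 1e-12)

# Example with synthetic stand-ins for a recorded source and an impulse response:
sr = 48000
dry = np.random.default_rng(1).standard_normal(sr)                      # 1 s of noise
ir = np.exp(-np.linspace(0, 8, sr // 4)) * np.sin(np.linspace(0, 400, sr // 4))
wet = random_crossfilter(dry, ir, sr)
```

Even this crude offline version hints at the kind of unpredictable movement Prynne describes; the actual patch does its randomization in real time inside Kyma.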

When asked if there are any unique challenges posed by sound design, Alan observes, “The big things to note about sound design, particularly for TV / Film, are that it’s usually done quickly, there’s lots of it, and you’re doing it at the instruction of a director; the director wants ‘something’ but isn’t necessarily nuanced in being able to talk about sound.”

A further challenge is that the sounds must “simultaneously sound ‘right’ for the audience while also being new or exciting. And the things that sound ‘right’, like the grammar of film editing, are based on a historically constructed sense of rightness, not physics.”

Alan summarizes the main advantages of Kyma for sound designers as: sound quality, workflow, originality, and suitability for working in teams. In what way does Kyma support “teams”?

With other sound design software and plugins you can get into all sorts of versioning and compatibility issues. A setup that’s working fine on my computer might not work on a teammate’s. Kyma is a much more reliable environment. A Sound that runs on my Paca[(rana | mara)] is going to work on yours.

Even more important for professional sound design is “originality”:

Kyma sounds different from “all the same VST plugins everyone else is using”. This is an important factor for film / TV sound designers. They need to produce sounds that meet the brief of the director but also sound fresh, exciting, new and different from everyone else. Using Kyma is their secret advantage. They are not using the same Creature Voice plugins everyone else is using; instead, creating a Sound from scratch yields results unique to their studio.

For Witcher fans, the sound designers have provided a short cue list of examples for your listening pleasure:

  • Warping performer: every creature (every episode)
  • Warping performer: portal sounds and magic (S3E03)
  • Warping performer: the wild hunt vapour (S3E03)
  • Doppler flanger: spinning thing on the ring of fire (S3E05 / S3E06)
  • Density stomp generator: group dancing (S3E05)
  • Whisper wind: convolved vocal for wild hunt (S3E03)

 

Visit Alan Jackson’s SpeakersOnStrings to request an invitation to the Kyma Kata — a (free) weekly Zoom-based peer-learning study group. Alan also offers individual and group Kyma coaching, teaching and mentoring.

A Sound Apart—interview with sound designer Marco Lopez

Marco Lopez talks about how sound supports the story in “A Class Apart”

Sound helps create a miasma of privilege, power and tension for a drama set in an exclusive boarding school

A Class Apart (Zebrarummet), a new eight-part mystery drama that will premiere on Viaplay on August 22, 2021, is set in the hidden world of privilege and power that is an exclusive boarding school in Sweden. View the teaser on IMDB.

Marco Lopez, sound designer

We had a chance to speak with lead sound-effects editor Marco Lopez to find out more about how he used sound to enhance the narrative. Born in Leipzig, Germany, to Cypriot and Chilean parents, Lopez seemed destined to become a multilingual citizen of the world. A solid background of 7 years of classical piano lessons and music theory led to sound engineering studies in Santiago, but it was almost by chance, during a short course entitled ‘Sound Design for Film’ added in his final semester, that he discovered his true passion for sound design, launching him on what he describes as “an endless search for knowledge and techniques”.

You come from a Cypriot, Chilean, and German background, but what about the Swedish connection? How did that come about?

In 2013, I attended Randy Thom’s sound design masterclass in Hamburg. Prior to the masterclass, each of the participants received a 5-minute sequence from the film “How To Train Your Dragon” and was given the assignment of adding sound design to that sequence. By the end of the masterclass and after listening to my sound design, the Europa Studio (Filmlance International) sound design team invited me to visit them at their studio in Stockholm the next time I was in town. Eventually I decided to take the next step in my professional growth and move to Sweden, and I was fortunate enough to start working right away with the Europa Sound Studio/Filmlance team.

When Filmlance International mixer and sound designer Erik Guldager, who was doing sound design for the first two episodes of “A Class Apart”, invited me to join the team, I immediately agreed! It’s always great working with them. Due to the pandemic, the communication was done mainly by email or Zoom. It was very effective, as if we were in the same place.

Is the dialog in Swedish? How does language influence sound design?

The dialog is indeed in Swedish. For the last five years, I have been speaking exclusively in Swedish with my girlfriend, which has helped me a lot to learn the language. I think that it is important to understand the dialog and the underlying codes that might sometimes be carried along in this way. It becomes easier to support the story with the proper sound effects and build a better sound around them.

From the title and the synopsis, it sounds like class differences and privilege are a central “character” in this story. Did you try to create a “sound” or ambience that would convey privilege, exclusivity and power for some of the scenes? How did you go about doing that?

Yes, that is correct and that becomes even more prominent due to the mysterious death of one of the students of the boarding school: Tuna Kvarn. The central character is very well described both with the picture and with the dialog, so we began by highlighting those moments and, once we were happy with our first attempt, we then started adding details around those moments and enhancing them.

As part of this process, the director requested that we use “unnatural sounds”, sounds that would not normally be present in a certain room or an exterior space. This request made the whole project even more exciting for me, because it allowed us to open an extra door of creativity and gave us the opportunity to experiment and create elements (which I unofficially referred to as “non-musical drones”) that functioned well in the overall context.

One of the guidelines from the sound supervisor of the project, Boris Laible, was that we were after a feeling. That is an inspiring place for me to be in, because sometimes it takes several attempts to nail it, and it’s interesting to be able to witness the different versions that can be created with different sound effects. Eventually we selected a few of those non-musical drones because they blended well with the rest of the sounds and supported the scenes properly but, most importantly, did not distract the viewer from the storytelling. We kept tweaking and readjusting the sound design the whole time, until the very end.

How did you use Kyma on this project?

I used Kyma both as an external FX processor, receiving a signal from and sending it back to the DAW, and for offline processing (for example, to generate the non-musical drones).

One interesting sound design challenge was to create the sound of a grandfather clock ticking that, during some scenes, would slow down or accelerate to imply that something being said, or some behavior, was off. For that, I imported the sound effect into the Tau Editor and, after creating a Gallery folder, found a Sound where I could shift the tempo without affecting the pitch.
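For readers curious about the general technique Marco mentions (shifting tempo without shifting pitch), the sketch below is a crude overlap-add granular time-stretch in Python/NumPy. It is only an illustration of the concept, not the Kyma Sound he used; grain size, hop and windowing are arbitrary choices.

```python
# Minimal sketch of tempo-shifting without changing pitch: grains are read from
# the input at hop/stretch and written to the output at hop (overlap-add).
import numpy as np

def ola_stretch(x, stretch=1.5, grain=2048, hop=512):
    window = np.hanning(grain)
    n_out = int(len(x) * stretch) + grain
    out = np.zeros(n_out)
    norm = np.zeros(n_out)
    out_pos = 0
    while True:
        in_pos = int(out_pos / stretch)
        if in_pos + grain > len(x) or out_pos + grain > n_out:
            break
        out[out_pos:out_pos + grain] += x[in_pos:in_pos + grain] * window
        norm[out_pos:out_pos + grain] += window
        out_pos += hop
    return out / np.maximum(norm, 1e-6)

# Example: a 1 s, 440 Hz decaying "tick" stretched to ~1.5 s at the same pitch.
sr = 48000
t = np.arange(sr) / sr
tick = np.sin(2 * np.pi * 440 * t) * np.exp(-6 * t)
slower = ola_stretch(tick, stretch=1.5)
```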

Then I thought of adding a clock bell and stretching its ringing, similar to the scene in “Barton Fink” by the Coen brothers, where Barton taps the bell to register his arrival at the hotel. For that I used the Sum Of Sines Sound, modulating its pitch to give the sound some sort of movement.

I even used Kyma to add an extra element to a CCTV electrical interference noise. By combining an FFT analysis with Cristian Vogel’s ZDFResonatorBank prototype from his ZDF Filters Kyma Capsules, I was able to create some variations that blended very well with other sound effects recordings that I already had in my SFX library.

For the non-musical drones, I would create Galleries and go through all the options given; if a Sound sounded interesting to me, I would spend more time experimenting with creating presets. This procedure was the most time-consuming, but it definitely gave fantastic results! By the end of the project, I realized I had used Kyma to create 96 non-musical drones, along with a few extra sound effects.

Every space had its own defined character and within a certain situation we would introduce the non-musical drones and blend them with the rest of the sounds.

Are there any things that are easier (or faster or more natural) to do in Kyma than in other environments?

Just importing a sound into Kyma and creating Gallery folders of Kyma Sounds is luxurious, because you can choose which Sound best suits your idea. Also, the fact that I can control a Kyma Sound with my Wacom tablet, a microphone or my keyboard gives me the freedom to perform the sound however I want to, or according to what is happening in the picture.

Could you describe your sound design studio setup?

I work on a 5.1 system, in both Pro Tools and Nuendo, with the RME UCX audio interface, and I use a MOTU Traveler mk3 connected to Kyma. I recently started using Dante, which allows me to share the interface that the DAW is connected to and gives me a stereo feed at 48 kHz. Otherwise, I just connect the Kyma and DAW interfaces via ADAT.

Do you usually work to picture? Do you use any live controllers or MIDI keyboards?

I always work to picture. I sometimes use a keyboard but for Kyma, I use the Wacom tablet more often.

How do you build your sound library?

If there’s a sound effect that I don’t have in my library, I’ll go out and record it, or I’ll use Kyma to create what I am after.

Any advice for Kyma sound designers? Any resources you recommend?

The fastest way to get into Kyma is to open a sound effect in Kyma and create a Gallery folder based on the options you choose. Then go through each folder and see the different Sounds that Kyma has created for you.

Personally, I think of Kyma as an instrument, in that the more you practice, the more you will start seeing results. At the same time, you also need the theory, so you understand the powerful possibilities and philosophy behind Kyma. That is why I would strongly recommend reading the manual. Once you begin to understand how it works, you will be able to start building your own Sounds based on what you envisioned in the first place.

Having Kyma lessons is also a big plus. There’s, for example, Cristian Vogel, Alan Jackson and Will Klingenmeier. All three of them are very helpful!

Check the Kyma Q&A periodically and also ask questions there. You should also feel free to join the Kyma Kata Group! There are a lot of great people there who practice and share their knowledge of Kyma. I’d like to thank Charlie Norton, Andreas Frostholm, Alan Jackson and Pete Johnston of the Kyma Kata group, who generously offered valuable suggestions and helped me out when it was needed.

What is the function of sound for picture?

Sound helps define the picture and brings up emotions that support the storytelling. In “A Class Apart” there were scenes where sound was underlining what was going on visually, but in other moments we would create something a bit different from what was going on in the picture. I would say that in the last episode, sound helped build up the tension gradually, right from the beginning until the very last scene.

Any tips for someone just starting out in sound design?

Give the best you can to the project you are working on, because your performance will open the door to the next project. Allow yourself to make mistakes and learn from them; especially in the beginning, nobody expects you to know everything. Later on, something we might consider a mistake might trigger an idea to create something new and exciting. In every moment, experience the world around you through your ears and hold those experiences in your memory. You never know when they will be a source of inspiration for you. Study as much as you can about sound design and meet other sound designers. Watch films. A lot! Watch the same film two or three times. Study them and listen to what the sound is doing in relation to the picture.

Tell us what you’re most proud of in “A Class Apart” (don’t be shy!!)

I am proud because we delivered an exciting sound. The overall process was creative and fun. There were moments when it seemed overwhelming, like there was too much to do, but I trusted the creative process and decided to enjoy it.

What kind of project would you most love to do the sound for next?

I would like to have the chance to work on an animation, a sci-fi, or a thriller.

Finally, the most important question, where can we all binge watch “A Class Apart (Zebrarummet)”? It sounds really intriguing!!

A Class Apart (Zebrarummet) premieres on Viaplay on the 22nd of August!

A taste of some of Marco Lopez’ “non-musical drones” from A Class Apart (Zebrarummet)

Marco credits his first piano teacher, Josefina Beltren, with teaching him various ways to “perform the silence” in a piece of music. Clearly that early training has translated to his talent for creating different forms of meaningful “silence” to advance the story and lend character to rooms and spaces:


New Pattern Generator for Kyma 7.25

Generate sequences based on the patterns discovered in your MIDI files

Symbolic Sound today released a new module that generates endlessly evolving sequences based on the patterns it discovers in a MIDI file. HMMPatternGenerator, the latest addition to the library of the world’s most advanced sound design environment, is now available to sound designers, musicians, and researchers as part of a free software update: Kyma 7.25.

Composers and sound designers are masters of pattern generation — skilled at inventing, discovering, modifying, and combining patterns with just the right mix of repetition and variation to excite and engage the attention of a listener. HMMPatternGenerator is a tool to help you discover the previously unexplored patterns hidden within your own library of MIDI files and to generate endlessly varying event sequences, continuous controllers, and new combinations based on those patterns.

Here’s a video glimpse at some of the potential applications for the HMMPatternGenerator:

 

What can HMMPatternGenerator do for you?

Games, VR, AR — In an interactive game or virtual environment, there’s no such thing as a fixed-duration scene. HMMPatternGenerator can take a short segment of music and extend it for an indeterminate length of time without looping.

Live improvisation backgrounds — Improvise live over an endlessly evolving HMMPatternGenerator sequence based on the patterns found in your favorite MIDI files.

Keep your backgrounds interesting — Have you been asked to provide the music for a gallery opening, a dance class, a party, an exercise class or some other event where music is not the main focus? The next time you’re asked to provide “background” music, you won’t have to resort to loops or sample clouds; just create a short segment in the appropriate style, save it as a MIDI file, and HMMPatternGenerator will generate sequences in that style for as long as the party lasts — even after you shut down your laptop (because it’s all generated on the Paca(rana) sound engine, not on your host computer).

Inspiration under a deadline — Need to get started quickly? Provide HMMPatternGenerator with your favorite MIDI files, route its MIDI output stream to your sequencer or notation software, and listen while it generates endless recombinations and variations on the latent patterns lurking within those files. Save the best parts to use as a starting point for your new composition.

Sound for picture — When the director doubles the duration of a scene a few hours before the deadline, HMMPatternGenerator can come to the rescue by taking your existing cue and extending it for an arbitrary length of time, maintaining the original meter and the style but with continuous variations (no looping).

Structured textures — HMMPatternGenerator isn’t limited to generating discrete note events; it can also generate timeIndex functions to control other synthesis algorithms (like SumOfSines resynthesis, SampleClouds and more) or to act as a time-dependent control function for any other synthesis or processing parameter. That means you can use a MIDI file to control abstract sounds in a new, highly-structured way.

MIDI as code — If you encode the part-of-speech (like verb, adjective, noun, etc) as a MIDI pitch, you can compose a MIDI sequence that specifies a grammar for English sentences and then use HMMPatternGenerator to trigger samples of yourself speaking those words — generating an endless variety of grammatically correct sentences (or even artificial poetry). Imagine what other secret meanings you could encode as MIDI sequences — musical sequences that can be decrypted only when decoded by the Kyma Sound generator you’ve designed for that purpose.
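As a toy illustration of this “MIDI as code” idea, the Python sketch below treats MIDI pitches as hypothetical part-of-speech tags and renders a pitch sequence as a randomly worded sentence. The pitch-to-category mapping and word lists are invented for the example; in Kyma the equivalent step would trigger recorded word samples rather than print text.

```python
# Minimal sketch of encoding a grammar as MIDI pitches (illustration only,
# not Kyma's HMMPatternGenerator): each pitch selects a word category, and
# each event "triggers" a random word of that category.
import random

POS_FOR_PITCH = {60: "determiner", 62: "adjective", 64: "noun", 65: "verb"}
WORDS = {
    "determiner": ["the", "a"],
    "adjective": ["silent", "evolving", "granular"],
    "noun": ["drone", "pattern", "machine"],
    "verb": ["hums", "unfolds", "listens"],
}

def render(pitch_sequence, seed=None):
    rng = random.Random(seed)
    return " ".join(rng.choice(WORDS[POS_FOR_PITCH[p]]) for p in pitch_sequence)

# A "MIDI file" acting as a sentence template:
# determiner adjective noun verb determiner noun
template = [60, 62, 64, 65, 60, 64]
for i in range(3):
    print(render(template, seed=i))
```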

Self-discovery — HMMPatternGenerator can help you tease apart what it is about your favorite music that makes it sound the way it does. By adjusting the parameters of the HMMPatternGenerator and listening to the results, you can uncover latent structures and hyper meters buried deep within the music of your favorite composers — including some patterns you hadn’t even realized were hidden within your own music.

Remixes and mashups — Use HMMPatternGenerator to generate a never-ending stream of ideas for remixes (of one MIDI file) and amusing mashups (when you provide two or more MIDI files in different styles).

Galleries of possibilities — Select a MIDI file in the Kyma 7.25 Sound Browser and, at the click of a button, generate a gallery of hundreds of pattern generators, all based on that MIDI file. At that point, it’s easy to substitute your own synthesis algorithms and design new musical instruments to be controlled by the pattern-generator. Quickly create unique, high-quality textures and sequences by combining some of the newly-developed MIDI-mining pattern generators with the massive library of unique synthesis and processing modules already included with Kyma.

How does it work?

If each event in the original MIDI file is completely unique, then there is only one path through the sequence — the generated sequence is the same as the original MIDI sequence. Things start to get interesting when some of the events are, in some way, equivalent to others (for example, when events of the same pitch and duration appear more than once in the file).

HMMPatternGenerator uses equivalent events as pivot points — decision points at which it can choose to take an alternate path through the original sequence (the proverbial “fork in the road”). No doubt you’re familiar with using a common chord to pivot to another key; now imagine using a common state to pivot to a whole new section of a MIDI file or, if you give HMMPatternGenerator several MIDI files, from one genre to another.

By live-tweaking the strengths of three equivalence tests — pitch, time-to-next, and position within a hyper-bar — you can continuously shape how closely the generated sequence follows the original sequence of events, ranging from a note-for-note reproduction to a completely random sequence based only on the frequency with which that event occurs in the original file.
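For readers who like to see the mechanism spelled out, here is a minimal Python sketch of the pivot-point idea described above. It is not Symbolic Sound’s implementation, and it ignores the weighted equivalence tests; it simply treats events with identical pitch and duration as equivalent and jumps among their occurrences.

```python
# Minimal sketch of the pivot-point idea: events that test as equivalent become
# forks where the generated sequence may jump to another occurrence and
# continue from there (illustration only, not the HMMPatternGenerator).
import random

# A toy "MIDI file": (pitch, duration) events.
events = [(60, 1), (62, 1), (64, 2), (60, 1), (67, 1), (64, 2), (60, 1), (62, 1)]

def equivalence_key(event):
    # Here two events are equivalent when pitch and duration match; a fuller
    # model would weight several tests (pitch, time-to-next, hyper-bar position).
    return event

# Index every position at which each equivalence class occurs.
positions = {}
for i, e in enumerate(events):
    positions.setdefault(equivalence_key(e), []).append(i)

def generate(n, seed=None):
    rng = random.Random(seed)
    i = 0
    out = []
    for _ in range(n):
        e = events[i]
        out.append(e)
        # Pivot: continue from any occurrence of an equivalent event.
        i = (rng.choice(positions[equivalence_key(e)]) + 1) % len(events)
    return out

print(generate(16, seed=42))
```

Tightening the equivalence test (fewer events match) keeps the output close to a note-for-note reproduction; loosening it moves the output toward a free recombination of the original material, which is the behavior the three weighted tests let you shape continuously.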

Other new features in Kyma 7.25 include:

▪ Optimizations to the Spherical Panner for 3d positioning and panning (elevation and azimuth) for motion-tracking VR or mixed reality applications and enhanced binaural mixes — providing up to 4 times speed increases (meaning you can track 4 times as many 3d sound sources in real time).

▪ Multichannel interleaved file support in the Wave editor

▪ New granular reverberation and 3d spatializing examples in the Kyma Sound Library

and more…

Availability

Kyma 7.25 is available as a free update starting today and can be downloaded from the Help menu in Kyma 7. For more information, please visit: symbolicsound.com.

Summary

The new features in the Kyma 7.25 sound design environment are designed to help you stay in the creative flow by adding automatic Gallery generation from MIDI files and the HMMPatternGenerator module, which can be combined with the extensive library of sound synthesis, pattern-generation, and processing algorithms already available in Kyma.

Background

Symbolic Sound revolutionized the live sound synthesis and processing industry with the introduction of Kyma in 1990. Today, Kyma continues to set new standards for sound quality, innovative synthesis and processing algorithms, rock-solid live performance hardware, and a supportive, professional Kyma community both online and through the annual Kyma International Sound Symposium (KISS).

For more information:

Website
Email
@SymbolicSound
Facebook


Kyma, Pacarana, Paca, and their respective logos are trademarks of Symbolic Sound Corporation. Other company and product names may be trademarks of their respective owners.

Course on Design Patterns for Sound and Music

Music 507: Design Patterns for Sound and Music. The art of sound design is vast, but there are certain patterns that seem to come up over and over again. Once you start to recognize those patterns, you can immediately apply them in new situations and start combining them to create your own sound design and live performance environments. We will look at design patterns that show up in audio synthesis and processing, parameter control logic, and live performance environments, all in the context of the Kyma sound design environment.

Registration open starting November 2017 (course starts in January 2018)
Music 507 Spring 2018
Mondays, 7:00 – 8:50 pm
Music Building 4th floor, Experimental Music Studios
University of Illinois Urbana-Champaign