A Sound Apart—interview with sound designer Marco Lopez

Marco Lopez talks about how sound supports the story in “A Class Apart”

Sound helps create a miasma of privilege, power and tension for a drama set in an exclusive boarding school

A Class Apart (Zebrarummet), a new eight-part mystery drama that will premiere on Viaplay on August 22, 2021, is set in the hidden world of privilege and power that is an exclusive boarding school in Sweden. View the teaser on IMDb.

Marco Lopez, sound designer

We had a chance to speak with lead sound-effects editor Marco Lopez to find out more about how he used sound to enhance the narrative. Born in Leipzig, Germany, to Cypriot and Chilean parents, Lopez seemed destined to become a multilingual citizen of the world. A solid background of seven years of classical piano lessons and music theory led to sound engineering studies in Santiago, but it was almost by chance, during a short course entitled ‘Sound Design for Film’ added in his final semester, that he discovered his true passion for sound design, launching him on what he describes as “an endless search for knowledge and techniques”.

You come from a Cypriot, Chilean, and German background, but what about the Swedish connection? How did that come about?

In 2013, I attended Randy Thom’s sound design masterclass in Hamburg. Prior to the masterclass, each of the participants received a five-minute sequence from the film “How To Train Your Dragon” and was given the assignment of adding sound design to that sequence. At the end of the masterclass, after listening to my sound design, the Europa Studio (Filmlance International) sound design team invited me to visit them at their studio in Stockholm the next time I was in town. Eventually I decided to take the next step in my professional growth and move to Sweden, and I was fortunate enough to start working right away with the Europa Sound Studio/Filmlance team.

When Filmlance International mixer and sound designer Erik Guldager, who was doing sound design for the first two episodes of “A Class Apart”, invited me to join the team, I immediately agreed! It’s always great working with them. Due to the pandemic, communication was handled mainly by email or Zoom. It was very effective; it felt as if we were in the same place.

Is the dialog in Swedish? How does language influence sound design?

The dialog is indeed in Swedish. For the last five years, I have been speaking exclusively in Swedish with my girlfriend, which has helped me a lot in learning the language. I think it is important to understand the dialog and the underlying codes it sometimes carries; it becomes easier to support the story with the proper sound effects and build a better sound around them.

From the title and the synopsis, it sounds like class differences and privilege are a central “character” in this story. Did you try to create a “sound” or ambience that would convey privilege, exclusivity and power for some of the scenes? How did you go about doing that?

Yes, that is correct, and it becomes even more prominent due to the mysterious death of one of the students at the boarding school, Tuna Kvarn. The central character is very well described both in the picture and in the dialog, so we began by highlighting those moments and, once we were happy with our first attempt, we started adding details around those moments and enhancing them.

As part of this process, the director requested that we use “unnatural sounds”: sounds that would not normally be present in a certain room or exterior space. This request made the whole project even more exciting for me, because it allowed us to open an extra door of creativity and gave us the opportunity to experiment and create elements (which I unofficially referred to as “non-musical drones”) that functioned well in the overall context.

One of the guidelines from the project’s sound supervisor, Boris Laible, was that we were after a feeling. That is an inspiring place for me to be, because sometimes it takes several attempts to nail it, and it’s interesting to witness the different versions that can be created with different sound effects. Eventually we selected a few of those non-musical drones because they blended well with the rest of the sounds and supported the scenes properly but, most importantly, did not distract the viewer from the storytelling. We kept tweaking and readjusting the sound design until the very end.

How did you use Kyma on this project?

I used Kyma both as an external FX processor, receiving a signal from and sending a signal back to the DAW, and for offline processing (for example, to generate the non-musical drones).

One interesting sound design challenge was to create the sound of a grandfather clock ticking that, during some scenes, would slow down or accelerate to imply that something being said, or some behavior, was off. For that, I imported the sound effect into the Tau Editor and, after creating a Gallery folder, found a Sound where I could shift the tempo without affecting the pitch.
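
For readers who want to experiment with the same idea outside of Kyma, shifting tempo without changing pitch is essentially a phase-vocoder time stretch. Here is a minimal Python sketch of that concept (not Lopez’s Tau Editor workflow; the file name clock_tick.wav and the use of the librosa library are illustrative assumptions):

```python
# Minimal sketch (not the Kyma Gallery workflow): slowing down or speeding up
# a clock-tick recording without changing its pitch, using a phase-vocoder
# time stretch from the librosa library. "clock_tick.wav" is a hypothetical file.
import librosa
import soundfile as sf

ticks, sr = librosa.load("clock_tick.wav", sr=None)

# rate > 1.0 plays faster (an accelerating clock), rate < 1.0 plays slower
for rate in (0.8, 1.0, 1.25):
    stretched = librosa.effects.time_stretch(ticks, rate=rate)
    sf.write(f"clock_tick_x{rate}.wav", stretched, sr)
```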

Then I thought of adding a clock bell and stretching its ringing, similar to the scene in the Coen brothers’ “Barton Fink” where Barton taps the bell to register his arrival at the hotel. For that I used the Sum Of Sines Sound, modulating its pitch to give the sound some movement.

I even used Kyma to add an extra element to a CCTV electrical-interference noise. By combining an FFT analysis with Cristian Vogel’s ZDFResonatorBank prototype from his ZDF Filters Kyma Capsules, I was able to create some variations that blended very well with other sound effects recordings I already had in my SFX library.
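
The resonator-bank treatment can also be approximated in a few lines of ordinary DSP code. The sketch below is a generic two-pole resonator bank excited by noise, an illustration of the concept only, not Cristian Vogel’s ZDF filters or the actual Kyma Capsule:

```python
# Generic sketch of a resonator bank excited by noise (an approximation of the
# idea, not the ZDF filters used in Kyma). Each resonator is a two-pole
# bandpass filter tuned to one frequency.
import numpy as np
from scipy.signal import lfilter

sr = 48000
noise = np.random.randn(sr // 2) * 0.1        # half a second of excitation noise
freqs = [613.0, 1187.0, 2341.0, 4790.0]       # arbitrary resonant frequencies (Hz)
r = 0.999                                     # pole radius: closer to 1 rings longer

out = np.zeros_like(noise)
for f in freqs:
    w0 = 2 * np.pi * f / sr
    b = [1.0 - r]                             # rough gain normalization
    a = [1.0, -2.0 * r * np.cos(w0), r * r]
    out += lfilter(b, a, noise)

out /= np.max(np.abs(out))                    # normalize before mixing or writing
```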

For the non-musical drones, I would create Galleries and go through all the options given, and if a Sound sounded interesting to me, I would spend more time experimenting and creating presets. This procedure was the most time-consuming, but it definitely gave fantastic results! By the end of the project, I realized I had used Kyma to create 96 non-musical drones, along with a few extra sound effects.

Every space had its own defined character, and within certain situations we would introduce the non-musical drones and blend them with the rest of the sounds.

Are there any things that are easier (or faster or more natural) to do in Kyma than in other environments?

Just importing a sound into Kyma and creating Gallery folders of Kyma Sounds feels luxurious, because you can choose which one best suits your idea. Also, the fact that I can control a Kyma Sound with my Wacom tablet, a microphone, or my keyboard gives me the freedom to perform the sound however I want, or according to what is happening in the picture.

Could you describe your sound design studio setup?

I work on a 5.1 system, in both Pro Tools and Nuendo, with an RME UCX audio interface. I use a MOTU Traveler mk3 connected to Kyma. I recently started using Dante, which allows me to share the interface that the DAW is connected to and gives me a stereo feed at 48 kHz. Otherwise, I just connect Kyma’s interface and the DAW’s interface via ADAT.

Do you usually work to picture? Do you use any live controllers or MIDI keyboards?

I always work to picture. I sometimes use a keyboard but for Kyma, I use the Wacom tablet more often.

How do you build your sound library?

If there’s a sound effect that I don’t have in my library, I’ll go out and record it, or I’ll use Kyma to create what I am after.

Any advice for Kyma sound designers? Any resources you recommend?

The fastest way to get into Kyma is to open a sound effect in Kyma and create a Gallery folder based on the options you choose. Then go through each folder and see the different Sounds that Kyma has created for you.

Personally, I think of Kyma as an instrument: the more you practice, the more you will start seeing results. At the same time, you also need the theory, so you understand the powerful possibilities and the philosophy behind Kyma. That is why I strongly recommend reading the manual. Once you begin to understand how it works, you will be able to start building your own Sounds based on what you envisioned in the first place.

Taking Kyma lessons is also a big plus. There are, for example, Cristian Vogel, Alan Jackson, and Will Klingenmeier; all three of them are very helpful!

Check the Kyma Q&A periodically and ask questions there. You should also feel free to join the Kyma Kata group! There are a lot of great people there who practice and share their knowledge of Kyma. I’d like to thank Charlie Norton, Andreas Frostholm, Alan Jackson, and Pete Johnston of the Kyma Kata group, who generously offered valuable suggestions and helped me out when it was needed.

What is the function of sound for picture?

Sound helps define the picture and brings up emotions that support the storytelling. In “A Class Apart” there were scenes where the sound underlined what was going on visually, but in other moments we would create something a bit different from what was happening in the picture. I would say that in the last episode, sound helped build the tension gradually, right from the beginning until the very last scene.

Any tips for someone just starting out in sound design?

Give the best you can to the project you are working on, because your performance will open the door to the next project. Allow yourself to make mistakes and learn from them; especially in the beginning, nobody expects you to know everything. Later on, something we might consider a mistake can trigger an idea for something new and exciting. In every moment, experience the world around you through your ears and hold those experiences in your memory; you never know when they will be a source of inspiration. Study as much as you can about sound design and meet other sound designers. Watch films. A lot! Watch the same film two or three times; study it and listen to what the sound is doing in relation to the picture.

Tell us what you’re the most proud of in “A Class Apart” (don’t be shy!!)

I am proud because we delivered an exciting sound, and the overall process was creative and fun. There were moments when it seemed overwhelming, like there was too much to do, but I trusted the creative process and decided to enjoy it.

What kind of project would you most love to do the sound for next?

I would like to have the chance to work on an animation, a sci-fi, or a thriller.

Finally, the most important question, where can we all binge watch “A Class Apart (Zebrarummet)”? It sounds really intriguing!!

A Class Apart (Zebrarummet) premieres on Viaplay on the 22nd of August!

A taste of some of Marco Lopez’s “non-musical drones” from A Class Apart (Zebrarummet)

Marco credits his first piano teacher, Josefina Beltren, with teaching him various ways to “perform the silence” in a piece of music. Clearly that early training has translated into his talent for creating different forms of meaningful “silence” to advance the story and lend character to rooms and spaces.

BTF: Looking back to move forward

Over the course of 2020/21, composers and performers have been adapting to the constraints of the pandemic in various ways and inventing new paradigms for live musical performance. Composer Vincenzo Gualtieri has chosen to focus on one of the essential aspects of making music together: attentive and reciprocal listening.

Gualtieri’s work can now be heard on a new album: the (BTF) project, released on EMA Vinci Records: https://www.emavinci.it/lec/archives/1482.

“BTF” stands for feed-back to feed-forward, because of the extensive use of feedback principles in this work. In a deeper sense, though, BTF also stands for looking back to move forward.

Based on the premise of attentive and reciprocal listening, there is, in (BTF), an encounter between “electronic sounds” (processed in real time) and “acoustic sounds”. This interaction takes place within an environment of self-regulated feedback. The DSP system responsible for the digital treatment of sound metaphorically “listens” not only to the external acoustic energy state but also to its own internal states. The same is required of the instrumental performer.

As Gualtieri puts it, “Sound-producing systems – human and machine – create networks of circularly causal relationships and chains of self-generating (self-poietic) processes, mutually co-determining sound events.”

Composing in this way requires an adaptation, a paradigm shift. If the digital processing of sound is linked to the acoustic properties of the environment, the quantity (and quality) of the generated sound events is no longer entirely predictable. The musical score therefore asks the performer both to follow the performance instructions and, at the same time, to continuously listen to the product of the electronic processing and be freely influenced by it.
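
As a thought experiment, the kind of self-regulated feedback Gualtieri describes can be sketched in a few lines of code: a delayed feedback path whose gain is reduced as the running output energy grows, so the system continuously “listens” to its own state. This is an illustrative toy only, not Gualtieri’s Kyma patch:

```python
# Toy sketch of a self-regulated feedback loop (illustrative only): a delayed
# feedback path whose gain is turned down as the running output energy rises,
# so the system monitors its own state to stay stable.
import numpy as np

sr = 48000
delay = int(0.125 * sr)                 # 125 ms feedback delay
buf = np.zeros(delay)
energy = 0.0
out = np.empty(sr * 2)                  # two seconds of output

rng = np.random.default_rng(0)
excitation = rng.standard_normal(out.size) * 0.01   # stand-in for the acoustic input

for n in range(out.size):
    fed_back = buf[n % delay]                        # value written `delay` samples ago
    gain = 0.98 / (1.0 + 10.0 * energy)              # self-regulation: more energy, less gain
    y = excitation[n] + gain * fed_back
    energy = 0.999 * energy + 0.001 * y * y          # running estimate of output energy
    buf[n % delay] = y
    out[n] = y
```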

For (BTF) 1-4, Gualtieri worked in tandem with other musicians, with the composer managing the live electronics. Starting with (BTF) 5, he decided to perform the acoustic instruments himself. So, from (BTF) 5 onward, Gualtieri has automated the entire DSP process, resulting in two types of sound events: one managed by the digital system responsible for electronic processing (Kyma) and the other guided by music notation or word processing.

Capturing the spirit of invention inspired (and compelled) by worldwide lockdowns and isolation, Gualtieri writes:

La temporanea rinuncia a collaborare con altri musicisti mi ha permesso di concentrarmi diversamente
sull’esplorazione sonora e le sue forme organizzative.

– V. Gualtieri

(In English: “Temporarily giving up on collaborating with other musicians allowed me to focus differently
on sound exploration and its organizational forms.”)

An inspiring manifesto for what has been, and continues to be, a challenging time for composers and performers worldwide.

New Pattern Generator for Kyma 7.25

Generate sequences based on the patterns discovered in your MIDI files

Symbolic Sound today released a new module that generates endlessly evolving sequences based on the patterns it discovers in a MIDI file. HMMPatternGenerator, the latest addition to the library of the world’s most advanced sound design environment, is now available to sound designers, musicians, and researchers as part of a free software update: Kyma 7.25.

Composers and sound designers are masters of pattern generation — skilled at inventing, discovering, modifying, and combining patterns with just the right mix of repetition and variation to excite and engage the attention of a listener. HMMPatternGenerator is a tool to help you discover the previously unexplored patterns hidden within your own library of MIDI files and to generate endlessly varying event sequences, continuous controllers, and new combinations based on those patterns.

Here’s a video glimpse at some of the potential applications for the HMMPatternGenerator:


What can HMMPatternGenerator do for you?

Games, VR, AR — In an interactive game or virtual environment, there’s no such thing as a fixed-duration scene. HMMPatternGenerator can take a short segment of music and extend it for an indeterminate length of time without looping.

Live improvisation backgrounds — Improvise live over an endlessly evolving HMMPatternGenerator sequence based on the patterns found in your favorite MIDI files.

Keep your backgrounds interesting — Have you been asked to provide the music for a gallery opening, a dance class, a party, an exercise class or some other event where music is not the main focus? The next time you’re asked to provide “background” music, you won’t have to resort to loops or sample clouds; just create a short segment in the appropriate style, save it as a MIDI file, and HMMPatternGenerator will generate sequences in that style for as long as the party lasts — even after you shut down your laptop (because it’s all generated on the Paca(rana) sound engine, not on your host computer).

Inspiration under a deadline — Need to get started quickly? Provide HMMPatternGenerator with your favorite MIDI files, route its MIDI output stream to your sequencer or notation software, and listen while it generates endless recombinations and variations on the latent patterns lurking within those files. Save the best parts to use as a starting point for your new composition.

Sound for picture — When the director doubles the duration of a scene a few hours before the deadline, HMMPatternGenerator can come to the rescue by taking your existing cue and extending it for an arbitrary length of time, maintaining the original meter and the style but with continuous variations (no looping).

Structured textures — HMMPatternGenerator isn’t limited to generating discrete note events; it can also generate timeIndex functions to control other synthesis algorithms (like SumOfSine resynthesis, SampleClouds and more) or to serve as a time-dependent control function for any other synthesis or processing parameter. That means you can use a MIDI file to control abstract sounds in a new, highly-structured way.

MIDI as code — If you encode the part of speech (verb, adjective, noun, etc.) as a MIDI pitch, you can compose a MIDI sequence that specifies a grammar for English sentences and then use HMMPatternGenerator to trigger samples of yourself speaking those words — generating an endless variety of grammatically correct sentences (or even artificial poetry); a toy sketch of this encoding idea appears after this list. Imagine what other secret meanings you could encode as MIDI sequences — musical sequences that can be decrypted only when decoded by the Kyma Sound generator you’ve designed for that purpose.

Self-discovery — HMMPatternGenerator can help you tease apart what it is about your favorite music that makes it sound the way it does. By adjusting the parameters of the HMMPatternGenerator and listening to the results, you can uncover latent structures and hyper meters buried deep within the music of your favorite composers — including some patterns you hadn’t even realized were hidden within your own music.

Remixes and mashups — Use HMMPatternGenerator to generate a never-ending stream of ideas for remixes (of one MIDI file) and amusing mashups (when you provide two or more MIDI files in different styles).

Galleries of possibilities — Select a MIDI file in the Kyma 7.25 Sound Browser and, at the click of a button, generate a gallery of hundreds of pattern generators, all based on that MIDI file. At that point, it’s easy to substitute your own synthesis algorithms and design new musical instruments to be controlled by the pattern-generator. Quickly create unique, high-quality textures and sequences by combining some of the newly-developed MIDI-mining pattern generators with the massive library of unique synthesis and processing modules already included with Kyma.
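
To make the “MIDI as code” idea above concrete, here is a toy sketch (plain Python, not Kyma code) in which MIDI pitches stand for parts of speech and a generated pitch sequence is decoded into English sentences; the pitch assignments and word lists are invented for the example:

```python
# Illustrative sketch of the "MIDI as code" idea: each part of speech is assigned
# a MIDI pitch, a pitch sequence acts as a sentence template, and decoding replaces
# each pitch with a random word from that category.
import random

PART_OF_SPEECH = {60: "determiner", 62: "adjective", 64: "noun", 65: "verb", 67: "adverb"}
WORDS = {
    "determiner": ["the", "a", "every"],
    "adjective": ["silent", "restless", "gilded"],
    "noun": ["clock", "corridor", "student"],
    "verb": ["whispers", "waits", "unravels"],
    "adverb": ["slowly", "somewhere", "tonight"],
}

def decode(pitch_sequence):
    """Turn a 'grammar' encoded as MIDI pitches into an English sentence."""
    words = [random.choice(WORDS[PART_OF_SPEECH[p]]) for p in pitch_sequence]
    return " ".join(words).capitalize() + "."

# determiner adjective noun verb adverb
template = [60, 62, 64, 65, 67]
for _ in range(3):
    print(decode(template))
```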

How does it work?

If each event in the original MIDI file is completely unique, then there is only one path through the sequence — the generated sequence is the same as the original MIDI sequence. Things start to get interesting when some of the events are, in some way, equivalent to others (for example, when events of the same pitch and duration appear more than once in the file).

HMMPatternGenerator uses equivalent events as pivot points — decision points at which it can choose to take an alternate path through the original sequence (the proverbial “fork in the road”). No doubt you’re familiar with using a common chord to pivot to another key; now imagine using a common state to pivot to a whole new section of a MIDI file or, if you give HMMPatternGenerator several MIDI files, from one genre to another.

By live-tweaking the strengths of three equivalence tests — pitch, time-to-next, and position within a hyper-bar — you can continuously shape how closely the generated sequence follows the original sequence of events, ranging from a note-for-note reproduction to a completely random sequence based only on the frequency with which that event occurs in the original file.
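
In spirit, the pivot mechanism resembles building a transition table keyed on each event’s equivalence class and then walking it. The toy Python sketch below illustrates that idea only; it is not the actual HMMPatternGenerator algorithm, and the event list and equivalence test are invented for the example:

```python
# Toy sketch of the pivot idea: events that fall into the same equivalence class
# become interchangeable "forks in the road", and generation may jump to any
# other occurrence of the current class.
import random
from collections import defaultdict

# A sequence of (pitch, time_to_next_in_beats) events; repeated pairs create pivots.
events = [(60, 0.5), (62, 0.5), (64, 1.0), (60, 0.5), (67, 1.0), (64, 1.0), (62, 0.5)]

def equivalence_class(event):
    pitch, time_to_next = event
    return (pitch, time_to_next)            # the test could be weakened, e.g. pitch only

# Map each class to the events that can follow any of its occurrences.
continuations = defaultdict(list)
for current, following in zip(events, events[1:]):
    continuations[equivalence_class(current)].append(following)

def generate(start, length):
    out = [start]
    for _ in range(length - 1):
        options = continuations.get(equivalence_class(out[-1]))
        if not options:                      # dead end: restart from the beginning
            options = [events[0]]
        out.append(random.choice(options))
    return out

print(generate(events[0], 16))
```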

Other new features in Kyma 7.25 include:

▪ Optimizations to the Spherical Panner for 3d positioning and panning (elevation and azimuth) for motion-tracking VR or mixed reality applications and enhanced binaural mixes — providing up to 4 times speed increases (meaning you can track 4 times as many 3d sound sources in real time).

▪ Multichannel interleaved file support in the Wave editor

▪ New granular reverberation and 3d spatializing examples in the Kyma Sound Library

and more…

Availability

Kyma 7.25 is available as a free update starting today and can be downloaded from the Help menu in Kyma 7. For more information, please visit: symbolicsound.com.

Summary

The new features in the Kyma 7.25 sound design environment are designed to help you stay in the creative flow by adding automatic Gallery generation from MIDI files and the HMMPatternGenerator module, which can be combined with the extensive library of sound synthesis, pattern-generation, and processing algorithms already available in Kyma.

Background

Symbolic Sound revolutionized the live sound synthesis and processing industry with the introduction of Kyma in 1990. Today, Kyma continues to set new standards for sound quality, innovative synthesis and processing algorithms, rock-solid live performance hardware, and a supportive, professional Kyma community both online and through the annual Kyma International Sound Symposium (KISS).

For more information:

Website
Email
@SymbolicSound
Facebook


Kyma, Pacarana, Paca, and their respective logos are trademarks of Symbolic Sound Corporation. Other company and product names may be trademarks of their respective owners.

Tactile Utterances

Composer/sonologist Roland Kuit encountered the paintings of Tomas Rajlich in 1992. ‘Fundamental Painting’ is a minimalist strategy that explores the post-existential nature of the painting itself – its color, structure, and surface; it is simply the painting as a painting. Tomas opened Kuit’s eyes to a kind of minimalism that Kuit recognized in his own music at that time, when he was working with semi-predictable chaotic systems. Kuit began creating works for Tomas Rajlich in 1993, and last year Kuit released a new piece for Kyma-extended string quartet: Tactile Utterance – for Tomas Rajlich.

The world premiere of Tactile Utterance took place on 23 June 2017 at the Kampa Museum – The Jan and Meda Mládek Foundation in Prague (CZ), at the opening of a special Tomas Rajlich retrospective: Zcela abstraktní retrospektiva (A Completely Abstract Retrospective). Composed especially for the occasion, Kuit’s three-part work Tactile Utterance expresses 50 years of painting by Tomas Rajlich.

Kuit’s recent research into new compositional methods, algorithms, and spectral music came together in this work. His aim was to capture the process of painting: how can we relate acrylate polymers on canvas to sound? Bowing without ‘tone’ becomes a metaphor for brushing on a tangible thickness of color; very short percussive sounds on the string instruments pick out the secants of the grid; and dense multiphonics act as palette knives, broadened textures smeared out and dissolving into light.

The premiere, performed by the FAMA Quartet with Roland Kuit on Kyma, was very well received.

The Prague recordings

For the recording, made 15-20 February 2018, Roland decided to record the string quartet alone and unprocessed so he could do the post-processing and balancing in the studio. Recording engineer Milan Cimfe of SONO Recording Studios in Prague used three sets of microphones: one to create a very ‘close to the skin’ recording of all the string instruments, a second set overhead, and a third set as a ‘room’ recording. Kuit took the recordings to Sweden to finish the mix and the Kyma processing.

Album art © Tomas Rajlich, Acrylic on Linen, 1990-1991 c/o Pictoright Amsterdam 2018

Tactile Utterance – Roland Emile Kuit
For Tomas Rajlich

1/ BRUSH 00:14:42

From pianissimo-bowed wood sounds, to noise, to an elaborate crescendo ending in a broad fortissimo textural cluster: Kyma extends the string sounds with spectral holds.

2/ MAZE 00:12:06

When walking by a grid, we see it first condensed – then open – then condensed again in both horizontal and vertical directions. The string quartet interprets ‘intersections’ by means of percussive sounds like pizzicato, spiccato, martelé, col legno etc. These sounds are treated as particles copied 100 times with the Kyma system, resulting in a noise wall. A ritardando to the center of the piece allows these particles to be distinguished as single sounds. With these single sounds, Roland made “spectral pictures” that could be smeared to complement the grid lines, followed by an accelerando back to prestissimo particles again.

3/ SURFACE 00:14:59

Multiphonics morphing to airy flageolets and the Kyma system processing the string quartet in algorithmic multiplexed resynthesized sounds, dissolving them into a muffled softness.

Roland Emile Kuit – Kyma

FAMA Quartet:
David Danel – violin
Roman Hranička – violin
Ondřej Martinovský – viola
Balázs Adorján – violoncello

Recorded by Milan Cimfe at the Sono Recording Studios Prague

DONEMUS
Composers Voice: CV 229

Youtube:
https://www.youtube.com/watch?v=vg-X2nwMpJk&list=PLxo15Dm-AsbxvJxoEJi8m1sseQ_TLgy31

VR_I: social, free roaming virtual reality


Gilles Jobin’s VR_I — an immersive virtual reality contemporary dance experience with a 3D sound track created entirely in Kyma 7 — has its world premiere from 6 to 10 October 2017 at the Festival du Nouveau Cinéma in Montreal. Unfolding on multiple, parallel space and time scales, VR_I immerses you in a wordless experience of the continuum from infinite to infinitesimal, leaving you with a new sense of perspective on your place in the universe.

In partnership with Artanim Foundation and utilizing their motion-capture and VR technology, VR_I is a pioneering work in social, free-roaming virtual reality. As many as five people can enter the experience together and see their own and each other’s bodies as avatars sharing the same virtual world as the characters (the dancers).

In VR_I, music emerges from the environment: wind in the desert transitions to a humming chorus sung by giants; wind chimes in the art-filled loft organize themselves into 5/8 rhythms as columns rise up from the floor, only to dissolve back into wind chimes again as the columns recede; in the city park, bird songs are echoed in flute melodies, and cicadas transform themselves into rhythmic patterns over tambura-like drones.

Each spectator hears an individualized soundscape, and there is no way to really know what everyone else is experiencing (just like in real life). Sounds and musical elements are positioned in space and attached to objects, giving each spectator a unique mix as they move through the space, culminating in upwardly spiraling Shepard-tones that swirl around and lift up the listeners as they contemplate their own place in the continuum from infinite to infinitesimal.

In beauty I walk
With beauty before me I walk
With beauty behind me I walk
With beauty above me I walk
With beauty around me I walk

— from the Native American Diné Blessing Way

Choreography: Gilles Jobin
Dancers: Susana Panadés Díaz, Victoria Chiu, Tidiani N’Diaye, Diya Naidu, Gilles Jobin
3D Music & Sound Design: Carla Scaletti
Costumes: Jean-Paul Lespagnard
3d modeling: Tristan Siodlak
Animation: Camilo de Martino
3D Scans & Motion Capture: Artanim
VR Platform: Artanim

For tour dates and booking information, visit: vr-i.space

Architecture of Sound

RIETVELD PAVILION — Roland Emile Kuit’s new album published by Donemus — is now available on iTunes. The album was released in conjunction with the 9 July 2017 World Premiere at the sculpture park of the Kröller-Müller Museum in Otterlo in The Netherlands. With this work, Kuit makes a connection between sound and De Stijl’s ideas and architecture, using pure tones as spectral building blocks, stacking energies to build harmonic sound planes, and placing them in space by dividing the spectrum and distributing it across a maze of speakers.

Photography: Henk Porck

Sonologist-composer Roland Emile Kuit balances on the interface between research, music, and sound art, at a point he called “the new listening”. Using Kyma, Kuit warps time — influencing the present with events that will happen in the future, and vice versa. He uses real-time analysis of the sound of acoustic instruments to create spectral compositions.

Anna Martinova releases Dusha II

Anna Martinova
DJ, producer, and graphic designer Anna Martinova has just signed with a new publisher, who will be releasing her new album Dusha II in April 2017 and following up with videos and other artist collaborations. Watch Anna’s new web site for details on upcoming releases and live shows. Here’s a teaser for Dusha II, drenched in mesmerizing and mysterious Kyma sounds:

Martinova works by generating WAV files in Kyma, arranging them in Logic, adding melodic lines created with Alchemy, and layering in recorded vocals in Logic; she has also linked her DAW with RolandCloud.

…Kyma (Pacarana) is a very special tool; I am happy to have been introduced to this system. It boosts my creativity, inspiring me in every sound atom I generate with it, and the machine is limitless. The quality I get on the output is so powerful, clean, and unique. I grow with it.

Known as Tulpa for her dark progressive DJ sets and Dusha for her chill out / ambient music, Anna got her start at age 17 as a vocalist in a rock band. After shifting to the psy-trance scene, she now lives in Amsterdam where she continues live DJing and producing.

Here’s a taste of her alter-ego, Tulpa:

The rocket scientist of human hearing

In 1999, astrophysicist/musician David McClain spent an intense three-month period working on The Northern Sky Survey, mapping the sky in the near infrared while getting by on an hour of sleep per night. When he finished the survey, he was suddenly struck by a viral infection that nearly killed him; his doctors were never able to determine the cause and, after three months, the infection dissipated almost as quickly as it had appeared. But afterward, David noticed that he could no longer understand his wife when she was speaking to him. He went to an audiologist and discovered he had a sensorineural hearing loss of 60-70 dB in the high-frequency range. Hearing aids helped him understand speech, but he was devastated to discover that music never sounded right through them. As a physicist, though, he was determined to solve the problem.

Motivated by his love of music and informed by his scientific training, McClain has spent the last 16 years developing equations to describe the entire hearing experience – from the cochlea, to the afferent 8th nerve, to processing in the central nervous system and efferent 8th-nerve interactions – and developing signal processing algorithms to adapt to and compensate for his hearing loss in a way that preserves the audio experience of music. The result is a collection of signal processing algorithms he calls Crescendo. Kyma is one of the tools David uses for testing out new ideas and prototyping them for Crescendo.

Now he’s blogging about his findings on his web site http://refined-audiometrics.com. In keeping with his motto “Keeping music enjoyable for all!” David hopes that his experiences, research findings, and extensive set of algorithms can benefit others.


Kyma 7.1 Sound Design Software — more inspiration, more live interaction, more sounds

Kyma 7.1 is now available as a free update for sound designers, musicians, and researchers currently using Kyma 7. New features in the Kyma 7.1 sound design environment help you stay in the creative flow by extending automatic Gallery generation to analysis files and Sounds, keeping your sound interactions lively and dynamic with support for additional multidimensional physical controllers, while expanding your sonic universe to include newly developed synthesis and control algorithms that can be combined with the extensive library of algorithms already in Kyma.

Kyma 7.1 — Sound Design Inspiration

Sonic AI (Artistic Inspiration) — Need to get started quickly? Kyma 7.1 provides Galleries everywhere! Select any Sound (signal-flow patch); click Gallery to automatically generate an extensive library of examples, all based on your initial idea. Or start with a sample file, spectral analysis, PSI analysis or Sound in your library, and click the Grid button to create a live matrix of sound sources and processing that you can rapidly explore and tweak to zero-in on exactly the sound you need. Hear something you like? A single click opens a signal flow editor on the current path through the Grid so you can start tweaking and expanding on your initial design.

Responsive Control — Last year, Symbolic Sound introduced support for Multidimensional Polyphonic Expression (MPE) MIDI, which they demonstrated with Roger Linn Design’s LinnStrument. Now, Kyma 7.1 extends that support to the ROLI Seaboard RISE; just plug the RISE into the USB port on the back of the Paca(rana) and play. Kyma 7.1 also maintains Kyma’s longstanding support for the original multidimensional polyphonic controller: the Haken Audio Continuum fingerboard. Also new with Kyma 7.1 is plug-and-play support for the Wacom Intuos Pro tablet, combining a three-dimensional multitouch surface with the precision and refined motor control afforded by the Wacom pen.

Recombinant Sound — Now you can gain entrée into the world of nonlinear acoustics, biological oscillators, chaos and more with the new, audio-rate Dynamical Systems modules introduced in Kyma 7.1. New modules include a van der Pol oscillator, Lorenz system, and Double-well potential, each of which can generate audio signals or control signals as well as being driven by other audio inputs to create delightfully unpredictable chaotic behavior.
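
To get a feel for what these systems produce, here is a rough numerical sketch of a van der Pol oscillator integrated at audio rate (plain NumPy, not the Kyma 7.1 module); the parameter mu controls how far the waveform departs from a pure sine:

```python
# Rough audio-rate sketch of a van der Pol oscillator (illustrative only):
# dx/dt = y, dy/dt = mu * (1 - x^2) * y - x, integrated with a simple Euler
# step and time-scaled so the oscillation lands in the audio range.
import numpy as np

sr = 48000
freq = 110.0                      # rough target frequency in Hz
omega = 2 * np.pi * freq
dt = 1.0 / sr
mu = 2.0                          # nonlinearity: 0 gives a sine, larger values distort

x, y = 1.0, 0.0
samples = np.empty(sr)            # one second of audio
for n in range(sr):
    dx = y
    dy = mu * (1.0 - x * x) * y - x
    x += omega * dt * dx          # scale the system so it oscillates near `freq`
    y += omega * dt * dy
    samples[n] = x

samples /= np.max(np.abs(samples))
```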

Other new features in Kyma 7.1 include:

▪ The new Spherical Panner uses perceptual cues to give you 3d positioning and panning (elevation and azimuth) for motion-tracking VR or mixed reality applications and enhanced binaural mixes.

▪ A new 3d controller in the Virtual Control Surface provides three dimensions of mappable controls in a single aggregate fader. Also new in Kyma 7.1: three-dimensional and two-dimensional faders can optionally leave a trace or a history so you can visualize the trajectory of algorithmically generated controls.

▪ Enhanced spectral analysis tools in Kyma 7.1 provide narrower analysis bands, additional resynthesis partials, and more accurate time-stretching.

▪ The new, batch spectral analysis tool for non-harmonic source material is perfect for creating vast quantities of audio assets from non-harmonic samples like textures, backgrounds, and ambiences. Once you have those analysis files, you can instantly generate a library of highly malleable additive and aggregate resynthesis examples by simply clicking the Gallery button.

▪ Nudging the dice — Once you have an interesting preset, nudging the dice can be a highly effective way to discover previously unimagined sounds by taking a random walk in the general vicinity of the parameter space. Shift-click on the dice icon or Shift+R to nudge the controller values by randomly generating values within 10% of the current settings.

▪ Generate dynamic, evolving timbres by smoothly morphing from one waveshape to another in oscillators, wave shapers, and grain clouds using new sound synthesis and processing modules: MultiOscillator, Morph3dOscillator, Interpolate3D, MultiGrainCloud, Morph3dGrainCloud, MultiWaveshaper, Morph3dWaveshaper and others.

▪ An optional second Virtual Control Surface (VCS) can display one set of images and controls for the audience or performers while you control another set of sound parameters using the primary Virtual Control Surface on your laptop or iPad.

▪ A new version of Symbolic Sound’s Kyma Control app for the Apple iPad includes a tab for activating Sounds in the Multigrid using multi-touch plus support for 128-bit IPv6 addressing (giving you approximately as many IP addresses as there are atoms in the Earth).

▪ Kyma 7.1 provides enhanced support for physical and external software control sources in the form of incoming message logs for MIDI and OSC as well as an OSC Tool for communicating with devices that have not yet implemented Kyma’s open protocol for bi-directional OSC communication.

▪ New functionality in Kyma’s real-time parameter control language, Capytalk, includes messages for auto-tuned voicing and harmonizing within live-selectable musical scales along with numerous other new messages. (For full details open the Capytalk Reference from the Kyma Help menu).

Today, Kyma continues to set new standards for sound quality, innovative synthesis and processing algorithms, rock-solid live performance hardware, and a supportive, professional Kyma community both online and through the annual Kyma International Sound Symposium (KISS).

For more information:

“What’s new in Kyma 7.1” presentation at KISS2016
Website
Email
@SymbolicSound
Facebook

Imagining spaces

You won’t hear a single starting pistol or popped balloon in Matteo Milani’s Imagined Spaces impulse response library. Instead, the film sound designer imagined and synthesized the impulse responses of imaginary spaces using Kyma 7.

As a result, Imagined Spaces can do more than imbue your tracks with air, depth, and new perspective; it also expands and transforms the original material into something entirely new, something that’s never been heard before — like listening to your tracks in venues that exist only in the mind of the sound designer.
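
The underlying trick, convolving a dry track with a synthesized rather than measured impulse response, can be sketched very simply. The example below uses a generic exponentially decaying noise burst as an imaginary “room”; it is nothing like the carefully designed Kyma responses in Imagined Spaces, and dry.wav is a hypothetical input file:

```python
# Generic sketch of convolving a dry track with a *synthesized* impulse response
# (an exponentially decaying noise burst standing in for an imaginary room).
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("dry.wav")
if dry.ndim > 1:
    dry = dry.mean(axis=1)                           # fold to mono for simplicity

ir_seconds = 2.5
t = np.linspace(0.0, ir_seconds, int(sr * ir_seconds), endpoint=False)
ir = np.random.randn(t.size) * np.exp(-3.0 * t)      # noise with an exponential decay

wet = fftconvolve(dry, ir)
wet /= np.max(np.abs(wet))                           # normalize before writing
sf.write("imagined_space.wav", wet, sr)
```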