A Sound Apart—interview with sound designer, Marco Lopez

 Sound Design, Sound for picture, Streaming Series
Aug 03, 2021
 
Marco Lopez talks about how sound supports the story in “A Class Apart”

Sound helps create a miasma of privilege, power and tension for a drama set in an exclusive boarding school

A Class Apart (Zebrarummet), a new eight-part mystery drama that will premiere on Viaplay on August 22, 2021, is set in the hidden world of privilege and power that is an exclusive boarding school in Sweden. View the teaser on IMDB.

Marco Lopez, sound designer

We had a chance to speak with lead sound-effects editor Marco Lopez to find out more about how he used sound to enhance the narrative. Born in Leipzig, Germany, to Cypriot and Chilean parents, it seems inevitable that Lopez would become a multilingual citizen of the world. A solid background of seven years of classical piano lessons and music theory led to sound engineering studies in Santiago, but it was almost by chance, during a short course entitled ‘Sound Design for Film’ added in his final semester, that he discovered his true passion for sound design, launching him on what he describes as “an endless search for knowledge and techniques”.

You come from a Cypriot, Chilean, and German background, but what about the Swedish connection? How did that come about?

In 2013, I attended Randy Thom’s sound design masterclass in Hamburg. Prior to the masterclass, each of the participants received a five-minute sequence from the film “How To Train Your Dragon” and was given the assignment of adding sound design to it. By the end of the masterclass, after listening to my sound design, the Europa Studio (Filmlance International) sound design team invited me to visit them at their studio in Stockholm the next time I was in town. Eventually I decided to take the next step in my professional growth and move to Sweden, and I was fortunate enough to start working right away with the Europa Sound Studio/Filmlance team.

When Filmlance International mixer and sound designer Erik Guldager, who was doing sound design for the first two episodes of “A Class Apart”, invited me to join the team, I immediately agreed! It’s always great working with them. Due to the pandemic, communication was done mainly by email or Zoom. It was very effective, as if we were in the same place.

Is the dialog in Swedish? How does language influence sound design?

The dialog is indeed in Swedish. For the last five years, I have been speaking exclusively in Swedish with my girlfriend, which has helped me a lot to learn the language. I think it is important to understand the dialog and the underlying codes it sometimes carries. It becomes easier to support the story with the proper sound effects and build a better sound around them.

From the title and the synopsis, it sounds like class differences and privilege are a central “character” in this story. Did you try to create a “sound” or ambience that would convey privilege, exclusivity and power for some of the scenes? How did you go about doing that?

Yes, that is correct, and it becomes even more prominent due to the mysterious death of one of the students of the boarding school, Tuna Kvarn. The central character is very well described both in the picture and in the dialog, so we began by highlighting those moments and, once we were happy with our first attempt, started adding details around those moments and enhancing them.

As part of this process, the director requested that we use “unnatural sounds”, sounds that would not normally be present in a certain room or exterior space. This request made the whole project even more exciting for me, because it allowed us to open an extra door of creativity and gave us the opportunity to experiment and create elements (which I unofficially referred to as “non-musical drones”) that functioned well in the overall context.

One of the guidelines from the sound supervisor of the project, Boris Laible, was that we were after a feeling. That is an inspiring place for me to be in, because sometimes it takes several attempts to nail it, and it’s interesting to witness the different versions that can be created with different sound effects. Eventually we selected a few of those non-musical drones because they blended well with the rest of the sounds and supported the scenes properly but, most importantly, did not distract the viewer from the storytelling. We kept tweaking and readjusting the sound design until the very end.

How did you use Kyma on this project?

I used Kyma both as an external FX processor where it receives and sends a signal to a DAW, and for offline processing (for example, to generate the non-musical drones).

One interesting sound design challenge was to create the sound of a grandfather clock ticking that, during some scenes, would slow down or accelerate to imply that something being said, or some behavior, was off. For that, I imported the sound effect into the Tau Editor and, after creating a Gallery folder, found a Sound where I could shift the tempo without affecting the pitch.
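For readers outside Kyma, the underlying idea, changing tempo without changing pitch, can be sketched with a simple granular time-stretch in Python: grains are read from the source at a scaled rate but played back at their original speed. This is a generic illustration, not the Tau Editor's actual algorithm; the function name and parameter values are invented.

```python
import numpy as np

def granular_stretch(x, factor, grain=1024, hop=256):
    """Time-stretch x by `factor` without changing pitch: grains are
    read from the source at a scaled rate but overlap-added at their
    original playback speed (a minimal granular sketch)."""
    win = np.hanning(grain)
    n_out = int(len(x) * factor)
    out = np.zeros(n_out + grain)
    norm = np.zeros(n_out + grain)
    for out_pos in range(0, n_out, hop):
        src_pos = int(out_pos / factor)        # read position in source
        g = x[src_pos:src_pos + grain]
        if len(g) < grain:                     # zero-pad the final grain
            g = np.pad(g, (0, grain - len(g)))
        out[out_pos:out_pos + grain] += g * win
        norm[out_pos:out_pos + grain] += win
    norm[norm < 1e-8] = 1.0                    # avoid division by zero
    return out[:n_out] / norm[:n_out]
```

Stretching a recording by 2.0 doubles its duration while the dominant frequency stays put, which is the effect described for the clock tick.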

Then I thought of adding a clock bell and stretching its ringing, similar to the scene in “Barton Fink” by the Coen brothers where Barton taps the bell to register his arrival at the hotel. For that I used the Sum Of Sines Sound, modulating its pitch to give the sound some sort of movement.
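The sum-of-sines approach can be illustrated outside Kyma with a tiny additive sketch in Python: a few inharmonic, decaying partials share one slowly modulated pitch track, which gives the ringing some movement. The partial ratios, decay rates, and modulation values are invented for illustration and are not the actual Kyma Sound.

```python
import numpy as np

def bell_tone(dur=2.0, sr=8000, f0=400.0, mod_depth=0.02, mod_rate=0.5):
    """Additive 'sum of sines' bell: inharmonic decaying partials on a
    slowly drifting pitch track (all values illustrative)."""
    t = np.arange(int(dur * sr)) / sr
    drift = 1.0 + mod_depth * np.sin(2 * np.pi * mod_rate * t)
    # integrate instantaneous frequency to get a shared phase track
    phase_base = 2 * np.pi * np.cumsum(f0 * drift) / sr
    partials = [(1.0, 1.0, 1.0), (2.76, 0.6, 1.8), (5.40, 0.3, 2.5)]
    y = np.zeros_like(t)
    for ratio, amp, decay in partials:
        y += amp * np.exp(-decay * t) * np.sin(ratio * phase_base)
    return y / np.max(np.abs(y))
```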

I even used Kyma to add an extra element to a CCTV electrical interference noise. By combining an FFT analysis with Cristian Vogel’s ZDFResonatorBank prototype from his ZDF Filters Kyma Capsules, I was able to create variations that blended very well with other sound effects recordings I already had in my SFX library.

For the non-musical drones, I would create Galleries and go through all the options given, and if a Sound sounded interesting to me, I would spend more time experimenting with creating presets. This procedure was the most time-consuming, but it definitely gave fantastic results! By the end of the project, I realized I had used Kyma to create 96 non-musical drones along with a few extra sound effects.

Every space had its own defined character and within a certain situation we would introduce the non-musical drones and blend them with the rest of the sounds.

Are there any things that are easier (or faster or more natural) to do in Kyma than in other environments?

Just importing a sound into Kyma and creating Gallery folders of Kyma Sounds is luxurious, because you can choose whichever one best suits your idea. Also, the fact that I can control a Kyma Sound with my Wacom tablet, a microphone, or my keyboard gives me the freedom to perform the sound however I want, or according to what is happening in the picture.

Could you describe your sound design studio setup?

I work on a 5.1 system, in both Pro Tools and Nuendo, with the RME UCX audio interface. I use the MOTU Traveler mk3 connected to Kyma. I recently started using Dante, which allows me to share the interface the DAW is connected to and gives a stereo format at 48 kHz. Otherwise, I connect the Kyma and DAW interfaces via ADAT.

Do you usually work to picture? Do you use any live controllers or MIDI keyboards?

I always work to picture. I sometimes use a keyboard but for Kyma, I use the Wacom tablet more often.

How do you build your sound library?

If there’s a sound effect that I don’t have in my library, I’ll go out and record it, or I’ll use Kyma to create what I am after.

Any advice for Kyma sound designers? Any resources you recommend?

The fastest way to get into Kyma is to open a sound effect in Kyma and create a Gallery folder based on the options you choose. Then go through each folder and see the different Sounds that Kyma has created for you.

Personally, I think of Kyma as an instrument: the more you practice, the more you will start seeing results. At the same time, you also need the theory, so you understand the powerful possibilities and philosophy behind Kyma. That is why I would strongly recommend reading the manual. Once you begin to understand how it works, you will be able to start building your own Sounds based on what you envisioned in the first place.

Taking Kyma lessons is also a big plus. There are, for example, Cristian Vogel, Alan Jackson, and Will Klingenmeier. All three of them are very helpful!

Check the Kyma Q&A periodically and ask questions there, too. You should also feel free to join the Kyma Kata Group! There are a lot of great people there who practice and share their knowledge of Kyma. I’d like to thank Charlie Norton, Andreas Frostholm, Alan Jackson and Pete Johnston of the Kyma Kata group, who generously offered valuable suggestions and helped me out when it was needed.

What is the function of sound for picture?

Sound helps define the picture and brings up emotions that support the storytelling. In “A Class Apart” there were scenes where sound underlined what was going on visually, but in other moments we would create something a bit different from what was happening in the picture. I would say that in the last episode, sound helped build the tension gradually, right from the beginning until the very last scene.

Any tips for someone just starting out in sound design? 

Give the best you can on the project you are working on, because your performance will open the door to the next one. Allow yourself to make mistakes and learn from them; especially in the beginning, nobody expects you to know everything. Later on, something we might consider a mistake can even trigger an idea for something new and exciting. In every moment, experience the world around you through your ears and hold those experiences in your memory. You never know when they will be a source of inspiration. Study as much as you can about sound design and meet other sound designers. Watch films. A lot! Watch the same film two or three times. Study them and listen to what the sound is doing in relation to the picture.

Tell us what you’re the most proud of in “A Class Apart” (don’t be shy!!)

I am proud because we delivered an exciting sound. The overall process was creative and fun. There were moments when it seemed overwhelming, like there was too much to do, but I trusted the creative process and decided to enjoy it.

What kind of project would you most love to do the sound for next?

I would like to have the chance to work on an animation, a sci-fi film, or a thriller.

Finally, the most important question: where can we all binge-watch “A Class Apart (Zebrarummet)”? It sounds really intriguing!

A Class Apart (Zebrarummet) premieres on Viaplay on the 22nd of August!

A taste of some of Marco Lopez’ “non-musical drones” from A Class Apart (Zebrarummet)

Marco credits his first piano teacher, Josefina Beltren, with teaching him various ways to “perform the silence” in a piece of music. Clearly that early training has translated into his talent for creating different forms of meaningful “silence” to advance the story and lend character to rooms and spaces.


New Pattern Generator for Kyma 7.25

 Release, Software, Sound Design, Sound for picture
Jun 12, 2019
 

Generate sequences based on the patterns discovered in your MIDI files

Symbolic Sound today released a new module that generates endlessly evolving sequences based on the patterns it discovers in a MIDI file. HMMPatternGenerator, the latest addition to the library of the world’s most advanced sound design environment, is now available to sound designers, musicians, and researchers as part of a free software update: Kyma 7.25.

Composers and sound designers are masters of pattern generation — skilled at inventing, discovering, modifying, and combining patterns with just the right mix of repetition and variation to excite and engage the attention of a listener. HMMPatternGenerator is a tool to help you discover the previously unexplored patterns hidden within your own library of MIDI files and to generate endlessly varying event sequences, continuous controllers, and new combinations based on those patterns.

Here’s a video glimpse at some of the potential applications for the HMMPatternGenerator:

 

What can HMMPatternGenerator do for you?

Games, VR, AR — In an interactive game or virtual environment, there’s no such thing as a fixed-duration scene. HMMPatternGenerator can take a short segment of music and extend it for an indeterminate length of time without looping.

Live improvisation backgrounds — Improvise live over an endlessly evolving HMMPatternGenerator sequence based on the patterns found in your favorite MIDI files.

Keep your backgrounds interesting — Have you been asked to provide the music for a gallery opening, a dance class, a party, an exercise class or some other event where music is not the main focus? The next time you’re asked to provide “background” music, you won’t have to resort to loops or sample clouds; just create a short segment in the appropriate style, save it as a MIDI file, and HMMPatternGenerator will generate sequences in that style for as long as the party lasts — even after you shut down your laptop (because it’s all generated on the Paca(rana) sound engine, not on your host computer).

Inspiration under a deadline — Need to get started quickly? Provide HMMPatternGenerator with your favorite MIDI files, route its MIDI output stream to your sequencer or notation software, and listen while it generates endless recombinations and variations on the latent patterns lurking within those files. Save the best parts to use as a starting point for your new composition.

Sound for picture — When the director doubles the duration of a scene a few hours before the deadline, HMMPatternGenerator can come to the rescue by taking your existing cue and extending it for an arbitrary length of time, maintaining the original meter and the style but with continuous variations (no looping).

Structured textures — HMMPatternGenerator isn’t limited to generating discrete note events; it can also generate timeIndex functions to control other synthesis algorithms (like SumOfSines resynthesis, SampleClouds and more) or serve as a time-dependent control function for any other synthesis or processing parameter. That means you can use a MIDI file to control abstract sounds in a new, highly structured way.

MIDI as code — If you encode the part-of-speech (like verb, adjective, noun, etc) as a MIDI pitch, you can compose a MIDI sequence that specifies a grammar for English sentences and then use HMMPatternGenerator to trigger samples of yourself speaking those words — generating an endless variety of grammatically correct sentences (or even artificial poetry). Imagine what other secret meanings you could encode as MIDI sequences — musical sequences that can be decrypted only when decoded by the Kyma Sound generator you’ve designed for that purpose.

Self-discovery — HMMPatternGenerator can help you tease apart what it is about your favorite music that makes it sound the way it does. By adjusting the parameters of the HMMPatternGenerator and listening to the results, you can uncover latent structures and hyper meters buried deep within the music of your favorite composers — including some patterns you hadn’t even realized were hidden within your own music.

Remixes and mashups — Use HMMPatternGenerator to generate a never-ending stream of ideas for remixes (of one MIDI file) and amusing mashups (when you provide two or more MIDI files in different styles).

Galleries of possibilities — Select a MIDI file in the Kyma 7.25 Sound Browser and, at the click of a button, generate a gallery of hundreds of pattern generators, all based on that MIDI file. At that point, it’s easy to substitute your own synthesis algorithms and design new musical instruments to be controlled by the pattern-generator. Quickly create unique, high-quality textures and sequences by combining some of the newly-developed MIDI-mining pattern generators with the massive library of unique synthesis and processing modules already included with Kyma.
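The “MIDI as code” idea above can be sketched in a few lines of Python. The pitch numbers and word lists here are invented for illustration; in Kyma the pitches would trigger speech samples rather than return strings.

```python
import random

def pos_grammar_sentence(rng=None):
    """Toy version of 'MIDI as code': each pitch encodes a part of
    speech, a fixed pitch sequence acts as the grammar, and each pitch
    picks a random word of that class (all values illustrative)."""
    rng = rng or random.Random()
    words = {60: ["the"], 62: ["cat", "clock", "drone"],
             64: ["ticks", "hums", "slows"], 65: ["softly", "strangely"]}
    grammar = [60, 62, 64, 65]      # determiner, noun, verb, adverb
    return " ".join(rng.choice(words[p]) for p in grammar)
```

Every sentence it produces follows the encoded grammar while the word choices vary, which is the “endless variety of grammatically correct sentences” described above.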

How does it work?

If each event in the original MIDI file is completely unique, then there is only one path through the sequence — the generated sequence is the same as the original MIDI sequence. Things start to get interesting when some of the events are, in some way, equivalent to others (for example, when events of the same pitch and duration appear more than once in the file).

HMMPatternGenerator uses equivalent events as pivot points — decision points at which it can choose to take an alternate path through the original sequence (the proverbial “fork in the road”). No doubt you’re familiar with using a common chord to pivot to another key; now imagine using a common state to pivot to a whole new section of a MIDI file or, if you give HMMPatternGenerator several MIDI files, from one genre to another.

By live-tweaking the strengths of three equivalence tests — pitch, time-to-next, and position within a hyper-bar — you can continuously shape how closely the generated sequence follows the original sequence of events, ranging from a note-for-note reproduction to a completely random sequence based only on the frequency with which that event occurs in the original file.
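In miniature, the pivot mechanism resembles a first-order walk over equivalence classes of events. The following Python sketch is a toy reconstruction of that idea, not Symbolic Sound's algorithm: wherever the current event's class reappears in the source, the walk may continue from any of those positions.

```python
import random

def generate(events, length, key=lambda e: e, seed=0):
    """Generate a sequence by treating equivalent events as pivots:
    from the current event, jump to the successor of ANY occurrence
    of the same equivalence class (a toy sketch of the pivot idea)."""
    rng = random.Random(seed)
    # map each equivalence class to the indices that follow it
    successors = {}
    for i, e in enumerate(events[:-1]):
        successors.setdefault(key(e), []).append(i + 1)
    out, pos = [events[0]], 0
    for _ in range(length - 1):
        choices = successors.get(key(events[pos]))
        pos = rng.choice(choices) if choices else 0   # dead end: restart
        out.append(events[pos])
    return out
```

If every event is unique there is only one path, so the output reproduces the original; repeated classes open alternate paths, exactly as described above.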

Other new features in Kyma 7.25 include:

▪ Optimizations to the Spherical Panner for 3d positioning and panning (elevation and azimuth) for motion-tracking VR or mixed reality applications and enhanced binaural mixes — providing speed increases of up to 4 times (meaning you can track 4 times as many 3d sound sources in real time).

▪ Multichannel interleaved file support in the Wave editor

▪ New granular reverberation and 3d spatializing examples in the Kyma Sound Library

and more…

Availability

Kyma 7.25 is available as a free update starting today and can be downloaded from the Help menu in Kyma 7. For more information, please visit: symbolicsound.com.

Summary

The new features in the Kyma 7.25 sound design environment are designed to help you stay in the creative flow by adding automatic Gallery generation from MIDI files, and the HMMPatternGenerator module which can be combined with the extensive library of sound synthesis, pattern-generation, and processing algorithms already available in Kyma.

Background

Symbolic Sound revolutionized the live sound synthesis and processing industry with the introduction of Kyma in 1990. Today, Kyma continues to set new standards for sound quality, innovative synthesis and processing algorithms, rock-solid live performance hardware, and a supportive, professional Kyma community both online and through the annual Kyma International Sound Symposium (KISS).

For more information:

Website
Email
@SymbolicSound
Facebook


Kyma, Pacarana, Paca, and their respective logos are trademarks of Symbolic Sound Corporation. Other company and product names may be trademarks of their respective owners.

Course on Design Patterns for Sound and Music

 Data-driven sound, Seminar, Sound Design, Sound for picture
Nov 02, 2017
 

Music 507 Design Patterns for Sound and Music. The art of sound design is vast, but there are certain patterns that seem to come up over and over again. Once you start to recognize those patterns, you can immediately apply them in new situations and start combining them to create your own sound design and live performance environments. We will look at design patterns that show up in audio synthesis and processing, parameter control logic, and live performance environments, all in the context of the Kyma sound design environment.

Registration open starting November 2017 (course starts in January 2018)
Music 507 Spring 2018
Mondays, 7:00 – 8:50 pm
Music Building 4th floor, Experimental Music Studios
University of Illinois Urbana-Champaign

Oct 02, 2017
 


Gilles Jobin’s VR_I — an immersive virtual reality contemporary dance experience with a 3D soundtrack created entirely in Kyma 7 — has its world premiere from 6 to 10 October 2017 at the Festival du Nouveau Cinéma in Montreal. Unfolding on multiple, parallel space and time scales, VR_I immerses you in a wordless experience of the continuum from infinite to infinitesimal, leaving you with a new sense of perspective on your place in the universe.

In partnership with Artanim Foundation and utilizing their motion-capture and VR technology, VR_I is a pioneering work in social, free-roaming virtual reality. As many as five people can enter the experience together and see their own and each other’s bodies as avatars sharing the same virtual world as the characters (the dancers).

In VR_I, music emerges from the environment: wind in the desert transitions to a humming chorus sung by giants; wind chimes in the art-filled loft organize themselves into 5/8 rhythms as columns rise up from the floor, only to dissolve back into wind chimes again as the columns recede; in the city park, bird songs are echoed in flute melodies, and cicadas transform themselves into rhythmic patterns over tambura-like drones.

Each spectator hears an individualized soundscape, and there is no way to really know what everyone else is experiencing (just like in real life). Sounds and musical elements are positioned in space and attached to objects, giving each spectator a unique mix as they move through the space, culminating in upwardly spiraling Shepard-tones that swirl around and lift up the listeners as they contemplate their own place in the continuum from infinite to infinitesimal.
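The endlessly rising effect mentioned above is a classic construction. As a generic illustration (parameter values invented, unrelated to the actual VR_I sound design), a Shepard glide can be synthesized by letting octave-spaced partials rise together under a fixed loudness envelope that fades them in at the bottom of the register and out at the top:

```python
import numpy as np

def shepard_glide(dur=4.0, sr=8000, octaves=6, rate=0.25):
    """Upward Shepard glide: octave-spaced partials rise together while
    a fixed spectral envelope hides each wrap-around, so the rise seems
    endless (all parameter values illustrative)."""
    t = np.arange(int(dur * sr)) / sr
    pos = (rate * t) % 1.0                     # cyclic pitch position, 0..1
    y = np.zeros_like(t)
    f_low = 40.0
    for k in range(octaves):
        octave = (k + pos) % octaves           # wraps from top back to bottom
        f = f_low * 2.0 ** octave
        # raised-cosine loudness envelope, silent at both register extremes
        amp = 0.5 * (1 - np.cos(2 * np.pi * octave / octaves))
        phase = 2 * np.pi * np.cumsum(f) / sr  # integrate frequency
        y += amp * np.sin(phase)
    return y / np.max(np.abs(y))
```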

In beauty I walk
With beauty before me I walk
With beauty behind me I walk
With beauty above me I walk
With beauty around me I walk

— from the Native American Diné Blessing Way

Choreography: Gilles Jobin
Dancers: Susana Panadés Díaz, Victoria Chiu, Tidiani N’Diaye, Diya Naidu, Gilles Jobin
3D Music & Sound Design: Carla Scaletti
Costumes: Jean-Paul Lespagnard
3d modeling: Tristan Siodlak
Animation: Camilo de Martino
3D Scans & Motion Capture: Artanim
VR Platform: Artanim

For tour dates and booking information, visit: vr-i.space

Sound & Music for Augmenting Reality

 Concert, Conference, Event, Festival, Learning, Sound Design, Sound for picture
Aug 29, 2017
 

KISS2017 in Oslo Norway 12-15 October 2017 — a symposium on new opportunities for sound designers & musicians in virtual, augmented and mixed reality creation

Sound and music are the original augmented reality technology. Throughout human history, sound and music have played an essential role in transforming the mundane into the sublime, turning everyday events into memorable milestones, and enhancing the flow of experience.

Sound designers, musicians, museum curators, game developers, researchers and others interested in the power of sound to create and augment reality are invited to participate in the Kyma International Sound Symposium, KISS2017 in Oslo Norway 12-15 October 2017. Join fellow participants exploring the uses of sound in Augmenting Reality through talks, live performances, hands-on sessions, and informal conversations over meals (which are included with your conference registration).

Program
The full KISS2017 technical and creative program is available here: http://kiss2017.symbolicsound.com/complete-program/

Here are a few highlights from the international lineup:

• Tour and reception at the Norwegian Museum of Science and Technology, including a special lecture on computer music pioneer Knut Wiggen’s musical innovation during the early years of the EMS Electronic Music Studio in Stockholm (presented by NOTAM’s founding director, Jøran Rudi), followed by a science/art-themed concert at the museum.

• An evening at Norway’s premier jazz venue, Victoria Nasjonal Jazzscene, featuring Deathprod/Supersilent members Helge Sten & Arve Henriksen (Norway) performing live Kyma electronics on a program that also includes sets by SØS Gunver Rydberg (Denmark) and Michael Wittgraf (USA)
http://nasjonaljazzscene.no/arrangement/helge-sten-arve-henriksen-wittgraf-sos/

• Presentations on designing sound for planetarium presentations; listening to the past in museum exhibitions; learning how to listen with cochlear implants; sound and music for calming dysregulated children; and cooperation between musicians and machines

• Technology talks including pitch-tracking of live audio signals to control game avatars in real time, multidimensional audio, ambisonics, head-tracking, sonifying geo-spatial data, Open Sound Control, connections between Kyma 7 and the Unity3d game engine, live performance of electroacoustic music for an audience wearing VR headgear, using physical objects to interact with digital sound synthesis and processing, using wireless sensors to control and manipulate sound, and integrating live dance with generative sound and video for mixed reality performances.

• Desktop Demo Sessions where you can speak to the presenters one-on-one and ask them questions about their work

• An Open Lab where you can ask questions and consult on your Kyma projects with fellow practitioners and the creators of Kyma 7

• Live mixed reality performances including:

• A performer running through the streets of Oslo & transmitting a live video feed to the audience as his geospatial data generates and controls quadraphonic processing of a live ensemble
• A live musical performance of a computer game where acoustic audio controls real-time decisions leading to a distinctive outcome on each play-through
• Mixed reality performances where physical objects, like Tibetan bells or balloons, control digital sound and image generation
• Performances utilizing new musical inventions like the Electronic Bull Roarer and a new input device inspired by Ssireum Korean wrestling
• Citizen journalism and crowd-sourced news as an augmented reality performance
• A performance that looks at how organizations can use language to alter reality and neutralize our position as workers
• A mixed reality performance where long-distance communications augment time and space and magnify our stories
• The sounds of writing create sound fantasies in the minds of the audience & then mutate into other sounds that augment and clash with those imagined by the audience

Summary
KISS2017 is an opportunity for anyone interested in creating sound for augmenting reality to immerse themselves in new ideas and experiences and to meet and learn from like-minded colleagues.

Registration includes talks, concerts, reception, lunches & dinners (Student discounts are available): http://kiss2017.symbolicsound.com/kiss2017-registration/

For travel and lodging information: http://kiss2017.symbolicsound.com/travel-lodging/

Official KISS web site: http://kiss2017.symbolicsound.com

To follow the latest KISS news and developments:

Facebook
Twitter

Organizers and Sponsors

The Norwegian Academy of Music
University of Oslo Department of Musicology
NOTAM
Symbolic Sound Corporation
The Research Council of Norway

Contact the organizers

See you in Oslo!

La Berge Touring North America

 Concert, Event, Sound for picture
Feb 20, 2017
 

Composer/Performers Anne La Berge and David Dramm are touring North America from the end of February through March 2017. This is your chance to catch a live performance of Anne’s controversial Utter — for flute, multiple iPads and Kyma — and discover for yourself why Utter generated such intense post-concert discussion at KISS2016.

Here are the dates (in reverse chronological order so scroll down for the upcoming dates):

11 March 2017
San Francisco Center For New Music
San Francisco

10 March 2017
Indexical Concert
Radius Gallery
Tannery Arts Center
Santa Cruz
20.00

6 March 2017
Composition Colloquium with David Dramm
UC Santa Cruz
13.20 – 14.50

1 – 4 March 2017
Residency with David Dramm
Brigham Young University
3 March – La Berge solo concert
Madsen Recital Hall
19.30

28 February 2017
Solo concert
Music on Main
Vancouver
20.00

24 February 2017
INsphere
Vancouver

23 February 2017
Elastic Arts concert
with
Sam Pluta, Dana Jessen and Katherine Young
Elastic Arts, Chicago
21.00

21 February 2017
Composition Seminar
University of Chicago

Kyma 7.1 Sound Design Software — more inspiration, more live interaction, more sounds

 Release, Software, Sound Design, Sound for picture
Nov 16, 2016
 

Kyma 7.1 is now available as a free update for sound designers, musicians, and researchers currently using Kyma 7. New features in the Kyma 7.1 sound design environment help you stay in the creative flow by extending automatic Gallery generation to analysis files and Sounds, keep your sound interactions lively and dynamic with support for additional multidimensional physical controllers, and expand your sonic universe with newly developed synthesis and control algorithms that can be combined with the extensive library of algorithms already in Kyma.

Kyma 7.1 — Sound Design Inspiration

Sonic AI (Artistic Inspiration) — Need to get started quickly? Kyma 7.1 provides Galleries everywhere! Select any Sound (signal-flow patch); click Gallery to automatically generate an extensive library of examples, all based on your initial idea. Or start with a sample file, spectral analysis, PSI analysis or Sound in your library, and click the Grid button to create a live matrix of sound sources and processing that you can rapidly explore and tweak to zero in on exactly the sound you need. Hear something you like? A single click opens a signal flow editor on the current path through the Grid so you can start tweaking and expanding on your initial design.

Responsive Control — Last year, Symbolic Sound introduced support for Multidimensional Polyphonic Expression (MPE) MIDI, which they demonstrated with Roger Linn Design’s LinnStrument. Now, Kyma 7.1 extends that support to the ROLI Seaboard RISE; just plug the RISE into the USB port on the back of the Paca(rana) and play. Kyma 7.1 also maintains Kyma’s longstanding support for the original multidimensional polyphonic controller: the Haken Audio Continuum fingerboard. Also new with Kyma 7.1 is plug-and-play support for the Wacom Intuos Pro tablet, combining a three-dimensional multitouch surface with the precision and refined motor control afforded by the Wacom pen.

Recombinant Sound — Now you can gain entrée into the world of nonlinear acoustics, biological oscillators, chaos and more with the new, audio-rate Dynamical Systems modules introduced in Kyma 7.1. New modules include a van der Pol oscillator, Lorenz system, and Double-well potential, each of which can generate audio signals or control signals as well as being driven by other audio inputs to create delightfully unpredictable chaotic behavior.
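As an illustration of the kind of dynamical system involved, here is a minimal Euler-integration sketch of the van der Pol oscillator in Python. Kyma's audio-rate modules presumably use a more careful integration scheme; this only shows the equation's characteristic self-sustaining behavior.

```python
def van_der_pol(mu=1.0, dt=0.01, steps=5000, x0=0.1, v0=0.0):
    """Integrate x'' - mu*(1 - x^2)*x' + x = 0 with semi-implicit Euler
    steps. Any small initial displacement grows onto a stable limit
    cycle of amplitude ~2 (a sketch, not Kyma's implementation)."""
    x, v = x0, v0
    xs = []
    for _ in range(steps):
        a = mu * (1 - x * x) * v - x   # acceleration from the ODE
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs
```

Driving `x` or `v` with an external signal instead of leaving the system free-running is what produces the "delightfully unpredictable chaotic behavior" mentioned above.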

Other new features in Kyma 7.1 include:

▪ The new Spherical Panner uses perceptual cues to give you 3d positioning and panning (elevation and azimuth) for motion-tracking VR or mixed reality applications and enhanced binaural mixes.

▪ A new 3d controller in the Virtual Control Surface provides three dimensions of mappable controls in a single aggregate fader. Also new in Kyma 7.1: three-dimensional and two-dimensional faders can optionally leave a trace or a history so you can visualize the trajectory of algorithmically generated controls.

▪ Enhanced spectral analysis tools in Kyma 7.1 provide narrower analysis bands, additional resynthesis partials, and more accurate time-stretching.

▪ The new, batch spectral analysis tool for non-harmonic source material is perfect for creating vast quantities of audio assets from non-harmonic samples like textures, backgrounds, and ambiences. Once you have those analysis files, you can instantly generate a library of highly malleable additive and aggregate resynthesis examples by simply clicking the Gallery button.

▪ Nudging the dice — Once you have an interesting preset, nudging the dice can be a highly effective way to discover previously unimagined sounds by taking a random walk in the general vicinity of that point in the parameter space. Shift-click the dice icon (or press Shift+R) to nudge the controller values, randomly generating new values within 10% of the current settings.

▪ Generate dynamic, evolving timbres by smoothly morphing from one waveshape to another in oscillators, wave shapers, and grain clouds using new sound synthesis and processing modules: MultiOscillator, Morph3dOscillator, Interpolate3D, MultiGrainCloud, Morph3dGrainCloud, MultiWaveshaper, Morph3dWaveshaper and others.

▪ An optional second Virtual Control Surface (VCS) can display one set of images and controls for the audience or performers while you control another set of sound parameters using the primary Virtual Control Surface on your laptop or iPad.

▪ A new version of Symbolic Sound’s Kyma Control app for the Apple iPad includes a tab for activating Sounds in the Multigrid using multi-touch, plus support for 128-bit IPv6 addressing (2^128, or roughly 3.4 × 10^38, possible addresses).

▪ Kyma 7.1 provides enhanced support for physical and external software control sources in the form of incoming message logs for MIDI and OSC as well as an OSC Tool for communicating with devices that have not yet implemented Kyma’s open protocol for bi-directional OSC communication.

▪ New functionality in Kyma’s real-time parameter control language, Capytalk, includes messages for auto-tuned voicing and harmonizing within live-selectable musical scales, along with numerous other new messages. (For full details, open the Capytalk Reference from the Kyma Help menu.)
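The dice-nudging described above is, in effect, a bounded random walk through parameter space. A sketch of the same idea in Python (not Kyma code; the parameter names are invented for illustration):

```python
import random

def nudge(params, amount=0.10, rng=random):
    """Perturb each parameter by a random amount within +/- 10% of its
    current value -- a small random walk around the current preset."""
    return {name: value + rng.uniform(-amount, amount) * value
            for name, value in params.items()}

# Hypothetical preset; real Kyma parameters live in the Virtual Control Surface.
preset = {"cutoff": 800.0, "resonance": 0.4, "grain_dur": 0.05}
variant = nudge(preset)
```

Repeatedly nudging a variant you like, and keeping only the improvements, turns the dice into a simple hill-climbing search of the sound space.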
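Capytalk's auto-tuned voicing messages are specific to Kyma, but the core operation — snapping an incoming pitch to the nearest degree of a live-selectable scale — can be sketched in a few lines of Python (the scale encoding here is an assumption for illustration, not Capytalk syntax):

```python
def quantize_to_scale(note, scale=(0, 2, 4, 5, 7, 9, 11)):
    """Snap a MIDI note number to the nearest pitch in `scale`, given as
    semitone offsets within one octave (default: C major)."""
    octave = note // 12
    # Check scale degrees in the neighboring octaves as well,
    # so notes near an octave boundary snap correctly.
    candidates = [12 * (octave + o) + degree
                  for o in (-1, 0, 1) for degree in scale]
    return min(candidates, key=lambda n: abs(n - note))
```

For example, `quantize_to_scale(61)` maps C#4 to an adjacent C-major tone (C or D); swapping in a different `scale` tuple at runtime is the "live-selectable" part.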

Today, Kyma continues to set new standards for sound quality, innovative synthesis and processing algorithms, rock-solid live performance hardware, and a supportive, professional Kyma community both online and through the annual Kyma International Sound Symposium (KISS).

For more information:

“What’s new in Kyma 7.1” presentation at KISS2016
Website
Email
@SymbolicSound
Facebook

Transformation of fear

 Concert, Event, Festival, Sound for picture  Comments Off on Transformation of fear
Nov 092016
 

Kurt Schwitters' Ursonate is a testament against war, nationalism, protectionism, and the establishment. In Landscapes of a voice, Roland Kuit seeks to transform his own fears into beauty by combining Schwitters' poetry, processed through Kyma, with visuals by Karin Schomaker. In the face of his fear that progress has come to an end and that the only thing left is degradation, Kuit sets out to create disruptive art: a peregrination through the human soul, finding new values in an impellent quadraphonic terrain of vocal spectra.

Roland Kuit – Kyma, voice 
Karin Schomaker – visuals

Hear it December 3-4 2016 at the Festival Internacional de Música Experimental en Vallecas, Sonikas XIV, Centro Cultural Lope de Vega, Madrid, Spain

Oct 232016
 

You won’t hear a single starting pistol or popped balloon in Matteo Milani’s Imagined Spaces impulse response library. Instead, the film sound designer imagined and synthesized the impulse responses of imaginary spaces using Kyma 7.

As a result, Imagined Spaces can do more than imbue your tracks with air, depth, and new perspective; it also expands and transforms the original material into something entirely new, something that’s never been heard before — like listening to your tracks in venues that exist only in the mind of the sound designer.
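Milani built his library in Kyma 7, so the sketch below is not his method — just a generic, stdlib-only Python illustration of the idea: synthesize an impulse response from scratch (here, a crude exponentially decaying noise burst), then convolve a dry signal with it:

```python
import math
import random

def synth_impulse_response(length=256, decay=5.0, seed=1):
    """An exponentially decaying noise burst: a crude synthetic 'room'."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) * math.exp(-decay * i / length)
            for i in range(length)]

def convolve(dry, ir):
    """Direct-form convolution of a dry signal with an impulse response."""
    wet = [0.0] * (len(dry) + len(ir) - 1)
    for i, d in enumerate(dry):
        for j, h in enumerate(ir):
            wet[i + j] += d * h
    return wet

ir = synth_impulse_response()
wet = convolve([1.0, 0.0, 0.0], ir)  # a single click played in the "space"
```

Because the space never physically existed, parameters like `decay` (and, in a fuller model, spectral tilt and early reflections) can be shaped entirely to taste — which is the appeal of a synthesized IR library.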

One man’s junk is another man’s musical instrument

 Concert, Conference, Festival, Sound for picture  Comments Off on One man’s junk is another man’s musical instrument
Aug 232016
 

Since childhood, composer/performer Franz Danksagmüller has been fascinated with the rich, interesting sound palette one can create from broken, discarded and so-called unplayable instruments. In 2013, on a visit to a local junkyard, he noticed a strange metal object that immediately captured his attention.

Read the story of how he added contact mics and sensors and developed a bowing technique to transform this strange object (which he later discovered was part of a device for food preservation) into a new musical instrument that sends both audio and MIDI control data to Kyma.

You can hear this mysterious and beautiful instrument performed live at KISS2016, when Danksagmüller and composer/performer/computer scientist John Mantegna perform their new piece — The Artificial Brain!
