A Sound Apart—interview with sound designer, Marco Lopez

 Sound Design, Sound for picture, Streaming Series
Aug 03, 2021
 
Marco Lopez talks about how sound supports the story in “A Class Apart”

Sound helps create a miasma of privilege, power and tension for a drama set in an exclusive boarding school

A Class Apart (Zebrarummet), a new eight-part mystery drama premiering on Viaplay on August 22, 2021, is set in the hidden world of privilege and power that is an exclusive boarding school in Sweden. View the teaser on IMDb.

Marco Lopez, sound designer

We had a chance to speak with lead sound-effects editor Marco Lopez to find out more about how he used sound to enhance the narrative. Born in Leipzig, Germany, to Cypriot and Chilean parents, Lopez seems destined to have become a multilingual citizen of the world. A solid background of seven years of classical piano lessons and music theory led to sound engineering studies in Santiago, but it was almost by chance, during a short course entitled ‘Sound Design for Film’ added in his final semester, that he discovered his true passion for sound design, launching him on what he describes as “an endless search for knowledge and techniques”.

You come from a Cypriot, Chilean, and German background, but what about the Swedish connection? How did that come about?

In 2013, I attended Randy Thom’s sound design masterclass in Hamburg. Prior to the masterclass, each of the participants received a five-minute sequence from the film “How to Train Your Dragon” and was given the assignment of adding sound design to it. At the end of the masterclass, after listening to my sound design, the Europa Studio (Filmlance International) sound design team invited me to visit them at their studio in Stockholm the next time I was in town. Eventually I made the decision to take the next step in my professional growth and move to Sweden, and I was fortunate enough to start working right away with the Europa Sound Studio/Filmlance team.

When Filmlance International mixer and sound designer Erik Guldager, who was doing sound design for the first two episodes of “A Class Apart”, invited me to join the team, I immediately agreed! It’s always great working with them. Due to the pandemic, communication was done mainly by email or Zoom. It was very effective, as if we were in the same place.

Is the dialog in Swedish? How does language influence sound design?

The dialog is indeed in Swedish. For the last five years, I have been speaking exclusively in Swedish with my girlfriend, which has helped me a lot in learning the language. I think it is important to understand the dialog and the underlying codes that might be carried along in this way. It becomes easier to support the story with the proper sound effects and build a better sound around them.

From the title and the synopsis, it sounds like class differences and privilege are a central “character” in this story. Did you try to create a “sound” or ambience that would convey privilege, exclusivity and power for some of the scenes? How did you go about doing that?

Yes, that is correct, and it becomes even more prominent due to the mysterious death of one of the students of the boarding school, Tuna Kvarn. The central character is very well described both by the picture and by the dialog, so we began by highlighting those moments and, once we were happy with our first attempt, started adding details around those moments and enhancing them.

As part of this process, the director requested that we use “unnatural sounds”: sounds that would not normally be present in a certain room or exterior space. This request made the whole project even more exciting for me, because it allowed us to open an extra door of creativity and gave us the opportunity to experiment and create elements (which I unofficially referred to as “non-musical drones”) that functioned well in the overall context.

One of the guidelines from the project’s sound supervisor, Boris Laible, was that we were after a feeling. That is an inspiring place for me to be, because sometimes it takes several attempts to nail it, and it’s interesting to witness the different versions that can be created with different sound effects. Eventually we selected a few of those non-musical drones because they blended well with the rest of the sounds and supported the scenes properly but, most importantly, did not distract the viewer from the storytelling. We kept tweaking and readjusting the sound design until the very end.

How did you use Kyma on this project?

I used Kyma both as an external FX processor, receiving a signal from and sending it back to a DAW, and for offline processing (for example, to generate the non-musical drones).

One interesting sound design challenge was creating the sound of a grandfather clock whose ticking, during some scenes, would slow down or accelerate to imply that something being said, or some behavior, was off. For that, I imported the sound effect into the Tau Editor and, after creating a Gallery folder, found a Sound where I could shift the tempo without affecting the pitch.
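Outside Kyma, this tempo-without-pitch effect is commonly achieved with granular or overlap-add techniques. Here is a minimal, illustrative Python sketch of the idea (an assumption-laden toy, not Lopez’s actual Tau Editor workflow):

```python
import math

def time_stretch(samples, rate, grain=1024, hop=256):
    """Naive overlap-add time-stretch: change duration without changing
    pitch by re-spacing windowed grains of the input.
    rate > 1.0 speeds up (shorter output); rate < 1.0 slows down."""
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / grain) for i in range(grain)]
    out_len = int(len(samples) / rate)
    out = [0.0] * (out_len + grain)
    norm = [1e-9] * (out_len + grain)   # avoids division by zero at the edges
    out_pos = 0
    while out_pos < out_len:
        in_pos = int(out_pos * rate)    # read position advances at `rate`
        g = samples[in_pos:in_pos + grain]
        for i, s in enumerate(g):
            out[out_pos + i] += s * window[i]
            norm[out_pos + i] += window[i]
        out_pos += hop
    return [o / n for o, n in zip(out, norm)][:out_len]

# A 440 Hz tick stretched to half speed keeps its pitch but doubles in length.
sr = 8000
tick = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr // 4)]
slowed = time_stretch(tick, rate=0.5)
```

Because each grain is played back at its original sample rate, the pitch is untouched; only the spacing between grains (and hence the duration) changes.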

Then I thought of adding a clock bell and stretching its ringing, similar to the scene in the Coen brothers’ “Barton Fink” where Barton taps the bell to register his arrival at the hotel. For that I used the Sum Of Sines Sound, modulating its pitch to give the sound some movement.

I even used Kyma to add an extra element to a CCTV electrical interference noise. By combining an FFT analysis with Cristian Vogel’s ZDFResonatorBank prototype from his ZDF Filters Kyma Capsules, I was able to create variations that blended very well with sound effects recordings I already had in my SFX library.

For the non-musical drones, I would create Galleries and go through all the options; if a Sound sounded interesting to me, I would spend more time experimenting with creating presets. This procedure was the most time-consuming, but it definitely gave fantastic results! By the end of the project, I realized I had used Kyma to create 96 non-musical drones along with a few extra sound effects.

Every space had its own defined character and within a certain situation we would introduce the non-musical drones and blend them with the rest of the sounds.

Are there any things that are easier (or faster or more natural) to do in Kyma than in other environments?

Just importing a sound into Kyma and creating Gallery folders of Kyma Sounds feels luxurious, because you can choose whichever one best suits your idea. Also, the fact that I can control a Kyma Sound with my Wacom tablet, a microphone, or my keyboard gives me the freedom to perform the sound however I want, or according to what is happening in the picture.

Could you describe your sound design studio setup?

I work on a 5.1 system, in both Pro Tools and Nuendo, with an RME UCX audio interface. I use a MOTU Traveler mk3 connected to Kyma. I recently started using Dante, which allows me to share the interface the DAW is connected to and gives me stereo at 48 kHz. Otherwise, I just connect the Kyma and DAW interfaces via ADAT.

Do you usually work to picture? Do you use any live controllers or MIDI keyboards?

I always work to picture. I sometimes use a keyboard but for Kyma, I use the Wacom tablet more often.

How do you build your sound library?

If there’s a sound effect that I don’t have in my library, I’ll go out and record it, or I’ll use Kyma to create what I am after.

Any advice for Kyma sound designers? Any resources you recommend?

The fastest way to get into Kyma is to open a sound effect in Kyma and create a Gallery folder based on the options you choose. Then go through each folder and see the different Sounds that Kyma has created for you.

Personally, I think of Kyma as an instrument in that the more you practice, the more you will start seeing results. At the same time, you also need the theory so you understand the powerful possibilities and philosophy behind Kyma. That is why I would strongly recommend reading the manual. Once you begin to understand how it works, you will be able to start building your own Sounds based on what you envisioned in the first place.

Taking Kyma lessons is also a big plus. Cristian Vogel, Alan Jackson, and Will Klingenmeier, for example, all offer them, and all three are very helpful!

Check the Kyma Q&A periodically and ask questions there. You should also feel free to join the Kyma Kata group! There are a lot of great people there who practice and share their knowledge of Kyma. I’d like to thank Charlie Norton, Andreas Frostholm, Alan Jackson and Pete Johnston of the Kyma Kata group, who generously offered valuable suggestions and helped me out when it was needed.

What is the function of sound for picture?

Sound helps define the picture and brings up emotions that support the storytelling. In “A Class Apart” there were scenes where sound underlined what was going on visually, but in other moments we would create something a bit different from what was going on in the picture. I would say that in the last episode, sound helped build the tension gradually, right from the beginning until the very last scene.

Any tips for someone just starting out in sound design?

Give the best you can to the project you are working on, because your performance will open the door to the next project. Allow yourself to make mistakes and learn from them; especially in the beginning, nobody expects you to know everything. Later on, something we might consider a mistake can even trigger an idea for something new and exciting. In every moment, experience the world around you through your ears and hold those experiences in your memory; you never know when they will be a source of inspiration. Study as much as you can about sound design and meet other sound designers. Watch films. A lot! Watch the same film two or three times; study it and listen to what the sound is doing in relation to the picture.

Tell us what you’re the most proud of in “A Class Apart” (don’t be shy!!)

I am proud because we delivered an exciting sound, and the overall process was creative and fun. There were moments when it seemed overwhelming, like there was too much to do, but I trusted the creative process and decided to enjoy it.

What kind of project would you most love to do the sound for next?

I would like the chance to work on an animated film, a sci-fi, or a thriller.

Finally, the most important question, where can we all binge watch “A Class Apart (Zebrarummet)”? It sounds really intriguing!!

A Class Apart (Zebrarummet) premieres on Viaplay on the 22nd of August!

A taste of some of Marco Lopez’ “non-musical drones” from A Class Apart (Zebrarummet)

Marco credits his first piano teacher, Josefina Beltren, with teaching him various ways to “perform the silence” in a piece of music. Clearly that early training has translated to his talent for creating different forms of meaningful “silence” to advance the story and lend character to rooms and spaces:


eighth nerve kyma news: 04 January 2021

 Uncategorized
Jan 05, 2021
 

Opportunities for learning, listening, connecting, and inventing new ways to make music together in 2021:

Kyma software updates

If you haven’t done so in a while, please check the Help menu in Kyma for software updates to ensure that you have all the new (and free) features and enhancements we’ve been adding! Here are a few highlights:

  • A New Year’s gift from Domenico Cipriani: open Samples 3rd party > Domenico Cipriani > drumstation to discover the new samples. Thanks, Domenico, and Happy New Year to all!
  • Kyma 7+ is compatible with Apple macOS 11 Big Sur (released November 25, 2020).
  • Kyma 7++ is a universal binary and runs on the Apple Silicon M1 processor.
  • SampleTimeIndex, FunctionGenerator, and MemoryWriter can now take sample-rate triggers or gates.
  • OscillatorTimeIndex Frequency field can now update at the sample rate.
  • Instructions for how to access your work Kyma system from home (or vice versa) and hear the sound on your laptop.
  • Displays in VCS now update when the mouse is down (when moving a fader, for example).
  • Substantial speed-ups & optimizations in Oscilloscope, Spectrum Analyzer, 2d & 3d aggregate faders controlled by SoundToGlobalControllers.
  • Wormhole now works with an Ethernet connection to the host computer (substituting for the FireWire connection from the Paca(rana) to the host computer).
  • A gift to Kyma users from Stephen Taylor of violin pizzicato and col legno samples performed by Olivia Taylor.
  • New Capytalk messages tripWires: and segmentOfGrid:
  • Speed-ups in the reading & writing of all Kyma file types.

and those are just the highlights. When you download, please check the release notes for more details.

Opportunities for learning Kyma:

Personal Kyma Coaching with Alan Jackson

Alan Jackson is providing online individual coaching sessions and small group classes in Kyma. There are three slots still available on Fridays for one-hour individual Kyma coaching sessions. Sessions are personally tailored for each student and conducted online through screen-sharing and video conferencing. Jackson assigns homework or suggests a direction for self-study during the week (with help available through Slack chat if you get stuck). At the end of the week (or every two weeks) you meet online to discuss what you’ve done, then look at what you want to do next. You do everything in Kyma yourself, sharing your screen with Alan’s help and guidance. The cost is £60 (GBP) per session. More information…

Online Kyma Course led by Alan Jackson

Alan Jackson is also running a series of weekly online Kyma classes starting in early January 2021. There is a 2-hour online teaching session once a week on Friday at 16:00 UTC. Some self-study tasks are set and there is support over Slack throughout the course. The numbers are kept low (5 or 6 people) to ensure that the classes are participatory and collaborative. The provisional syllabus is:

8 January: Introduction to Capytalk
12 February: Introduction to Smalltalk in Kyma
19 March: Microsound (sound particles etc.)
Cost: £240 (GBP) for the 4-week course.

More information…

Hybrid in-person/remote courses using remote-Kyma

At Carnegie Mellon University, Professor Joe Pino plans to continue using remote-access Kyma in his Advanced Sound Skills course, so students can log in from their homes. Last semester he discovered an unexpected side-benefit to remote access: his students could also use their own laptops during in-person classes, thus saving time cleaning and sanitizing before, after and during class. In the spring semester, he will be trying to minimize the time that people have to be in a room together, although he still hasn’t figured out how to deal with analog modular synths, since half the attraction of that is having one’s hands on the devices.

At the University of Oregon, Professor Jeff Stolet has put together an online Pacarana farm — 7 Paca(rana)s running on Mac Minis — that Future Music Oregon students can access for coursework via the Splashtop remote desktop software. Students in Jeff Stolet’s Electronic Composition and Digital Audio & Sound Design courses as well as those in Jon Bellona’s Data Sonification course will use the Pacarana farm for the 2021 winter term (beginning 4 January).

At Indiana University, Professor Chi (“Iris”) Wang has two Pacaranas that she has set up for hybrid online and in-person access for her undergraduate and graduate courses in COMPUTER MUSIC: DESIGN and PERFORMANCE.

At New York University (NYU), Professor Joel Chadabe has set up two Pacas online for students in his Creating with Interactive Media: Kyma course for spring semester 2021 (classes start at the end of January). Chadabe’s course will be structured around the history, composing philosophy, and ideas presented on his site: https://joelchadabe.net/

At Dankook University in Cheonan (Seoul), Professor Kiyoung Lee is using a Kyma/Pacarana system for his courses on Advanced Sound Design-Kyma System, Electronic Music Ensemble, and Computer Music Composition. During his 2021 sabbatical, he is finishing a book on Kyma.

Wave terrain on Lines

Ben Phenix (aka wave.terrains) has a tutorial on Lines taking you step by step through how to create a granular engine from scratch in Kyma.

Summer course at SPLICE Institute

SPLICE Institute is an online, week-long, intensive summer program for performers, composers, and composer-performers interested in music that combines live performance and electronics. Applications are now being accepted for SPLICE Institute 2021, which will take place online 28 June – 2 July 2021. This year, SPLICE includes daily workshops with Carla Scaletti on the Kyma system, with remote desktop access to Kyma systems for those who do not already have access to their own systems. Registration is $200 for the week (scholarships available).

Roland Kuit book coming soon…

Roland Kuit is working on a book about Kyma to be finished in time for KISS 2021. In the meantime, there is an outline, loaded with sound examples, on his site.

SSC Vimeo/youtube channels

If you haven’t discovered them yet, check out the Kyma tutorial videos on Vimeo and tutorials by Will Klingenmeier and others on the Symbolic Sound youtube channel.

Opportunities to hear Kyma colleagues’ work:

In desperate times

Jeffrey Stolet’s new work in response to events this summer.

Jim O’Rourke live Steam(room)

Inferential: sounds produced and processed in Kyma by Jim O’Rourke, recorded at the Steamroom in Japan, November 2020, with a shout-out to the Kata: Alan, Simon, Ian, Pete, Domenico, Ben and Charlie.

Robert Jarvis: sonoraV19

In this interview, sound artist Robert Jarvis describes his evolving sound art piece based on the numbers of confirmed COVID-19 cases, using Kyma to map the numbers to the amplitudes of harmonics (one per country) to create a slowly evolving timbre. Listen to the (regularly updated) sound here.
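The general mapping idea (one harmonic per country, amplitude driven by the case count) can be sketched in a few lines of Python. This is an illustration of additive synthesis from data, not Jarvis’s actual Kyma patch; the normalisation scheme here is an assumption:

```python
import math

def sonify(case_counts, fundamental=55.0, sr=8000, seconds=1.0):
    """Additive synthesis: harmonic k's amplitude is proportional to the
    k-th data value (here, one value per country), normalised so the sum
    of all harmonic amplitudes can never exceed 1.0."""
    peak = max(case_counts) or 1
    amps = [c / peak / len(case_counts) for c in case_counts]
    n = int(sr * seconds)
    return [
        sum(a * math.sin(2 * math.pi * fundamental * (k + 1) * t / sr)
            for k, a in enumerate(amps))
        for t in range(n)
    ]

# Four hypothetical countries' case counts become four harmonic amplitudes.
timbre = sonify([1200, 300, 45, 980])
```

As the data values change over time, re-rendering with the new amplitudes produces the slowly evolving timbre described above.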

Lucretio live stream

On 29 December 2020, Domenico Cipriani (Lucretio) performed a 45-minute live stream using Kyma on Seoul Community Radio. Listen again…

Anne La Berge and the poetics of wind turbines

Site V, new sound art from data & the poetics of wind turbines by Anne La Berge (Flute & Kyma) & Phil Maguire (Doepfer Synth), inspired by Borssele Wind Farm Site V floating wind turbines stationed where water depths are too great for fixed foundations.

Ben Phenix on SoundCloud

On SoundCloud, you can listen to Ben Phenix’s extensions to physical modeling and granular synthesis: “The primary goal was to move beyond the normal per-grain controls we see (amplitude, pitch, pan, grain size, etc.) and be able to insert other processing at the individual grain level. For example, each grain could have its own filter settings, delay, or wave shaper. At that point, we may be departing from Curtis Roads’ classical definitions, but that is where the fun is!”
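The per-grain idea can be illustrated with a toy sketch: each grain below carries its own one-pole lowpass coefficient, so no two grains are filtered identically. This is a hypothetical Python illustration of the concept, not Phenix’s actual Kyma implementation:

```python
import math
import random

random.seed(0)  # deterministic for the example

def per_grain_filter_granulate(source, n_grains=8, grain=512):
    """Toy granular resynthesis in which each grain gets its OWN one-pole
    lowpass coefficient -- 'processing at the individual grain level'."""
    out = [0.0] * len(source)
    for _ in range(n_grains):
        start = random.randrange(0, len(source) - grain)
        a = random.uniform(0.1, 0.9)            # this grain's filter coefficient
        y = 0.0
        for i in range(grain):
            env = math.sin(math.pi * i / grain)        # sine grain envelope
            y = a * y + (1.0 - a) * source[start + i]  # one-pole lowpass
            out[start + i] += env * y
    return out

noise = [random.uniform(-1, 1) for _ in range(4096)]
grains = per_grain_filter_granulate(noise)
```

Any other per-grain parameter (delay time, waveshaping amount, pan position) could be drawn the same way, one random value per grain rather than one global setting.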

Looking back to move forward: Gualtieri’s BTF Project

Composers and performers have been adapting to the constraints of the pandemic in various ways while inventing new paradigms for live musical performance. Composer Vincenzo Gualtieri has chosen to focus on one of the essential aspects of making music together: attentive and reciprocal listening. Gualtieri’s work can now be heard on a new album: the (BTF) project, released on EMA Vinci Records.

Voicemail spam and Furby

Simon Hutchinson’s Messages from Rachel, for spam voicemails and Symbolic Sound’s Kyma and his Regression 2020 for Circuit-bent Furby sounds processed with Symbolic Sound’s Kyma.

Sonic Story-telling

Mei-ling Lee‘s Doctoral Recital ranges from gun violence to darkly poignant children’s stories.

Matteo Milani: Wasted Planet

Matteo Milani and Giovanni Dettori – Wasted Planet, an album created during the first lockdown, with electronic layers synthesized or processed in Kyma, released on Miraloop Records in May 2020.

Fang Wan’s live electronics performances

Listen to Fang Wan‘s portfolio of live electronic performances using Kyma on Youtube.

Spectral Evolver

Will Klingenmeier’s SpectralEvolver channel includes an ever-evolving collection of sounds-of-the-day, binaural walks, Kyma tutorials and more!

News from Kyma users:

Silence and a sense of space

Laura Tedeschini Lalli (Università di Roma Tre) presented the keynote lecture “Silence and a sense of space: Rome, Italy during complete lockdown and after” for “Sounds of the Pandemic” virtual conference 16-17 December 2020.

Delora KymaConnect 2.0

Doug Kraul’s KymaConnect 2.0.0 BETA 6 adds support for macOS 11.0 “Big Sur” and includes, for the first time, native operation on Apple Silicon Macs. This release was built with the latest releases of the macOS tools and SDK. The application binary is in Apple’s “Universal Binary” format that places native executables for both Intel and Apple Silicon processors in the same application bundle. macOS will automatically choose the Intel version if your Mac has an Intel processor, or select the Apple Silicon version if you have one of the new Macs with the M1 processor.

Amazon acquires Zoox

Amazon and Zoox announced in June 2020 that they’ve signed an agreement for Amazon to acquire Zoox — the pioneer ride-hailing company dedicated to designing autonomous technology from the ground up. Aicha Evans, Zoox CEO, and Jesse Levinson, Zoox co-founder and CTO, will continue to lead Zoox as a standalone business. Zoox has a very special sound-design team creating a sonic user experience for passengers and pedestrians, and rumor has it that a few of those sounds may have made their way into the backing music for the Zoox 14 December reveal video.

Craig Vear awarded a European Research Council Grant

Congratulations to Craig Vear on being awarded a five-year European Research Council grant for frontier research into the Digital Score. Of 2,506 applicants, 327 researchers received ERC funding to build up their teams and pursue far-reaching impact. Vear plans to continue to expand upon the foundational work begun in his book The Digital Score: Musicianship, Creativity and Innovation, which features interviews with several Kyma practitioners and a photo of Anne La Berge on the cover.

In remembrance of those we have lost in 2020…

With sadness, we remember two friends and colleagues. We feel their absence and we salute them for their contributions to sound and music and the world:

Stanley Cowell (5 May 1941 — 17 December 2020)

A brilliant composer, pianist, and friend, Stanley Cowell made masterful use of Kyma to enhance and extend the piano in ways that feel both musical and inevitable, as can be heard on his album Welcome to this New World (https://news.symbolicsound.com/2013/07/welcome-to-this-new-world/) and other recordings. Soft-spoken and deep-thinking, Cowell fervently believed that artists can have an effect on the world: “We are not just artists…we are citizens of the world. In our own personal ways, and when necessary, in unity with others, we should add our ‘fuel’ to the cleansing fire against injustice.”
https://www.wbgo.org/post/stanley-cowell-pianist-composer-and-educator-kaleidoscopic-view-jazz-dead-79#stream/0

Eugenio Giordani (21 March 1954 — 4 April 2020)

Eugenio Giordani’s reverberation algorithm (EuVerb) has touched the sounds of nearly every composer/sound designer in the Kyma community. We were saddened earlier this year to hear that Eugenio died of COVID-19 in April 2020. In September and October, there were several concerts held in honor of this polymath engineer/composer/performer https://www.lacertezza.it/commossi-omaggi-a-un-maestro/. And Kyma users will remember Eugenio each time they hear his exquisite, smooth reverberation enhancing their sounds and music.


Thanks to all of you for continuing to create and to give voice to the experiences of our times. We wish you a healthy, creative and productive 2021 filled with new ideas, new energy, and new sounds!

BTF: Looking back to move forward

 Album, Release
Dec 31, 2020
 

Over the course of 2020/21, composers and performers have been adapting to the constraints of the pandemic in various ways and inventing new paradigms for live musical performance. Composer Vincenzo Gualtieri has chosen to focus on one of the essential aspects of making music together: attentive and reciprocal listening.

Gualtieri’s work can now be heard on a new album: the (BTF) project, released on EMA Vinci Records: https://www.emavinci.it/lec/archives/1482.

“BTF” stands for feed-back to feed-forward, because of the extensive use of feedback principles in this work. In a deeper sense, though, BTF also stands for looking back to move forward.

Based on the premise of attentive and reciprocal listening, there is, in (BTF), an encounter between “electronic sounds” (processed in real time) and “acoustic sounds”. This interaction takes place within an environment of self-regulated feedback. The DSP system responsible for the digital treatment of sound metaphorically “listens” not only to the external acoustic energy state but also to its own internal states. The same is required of the instrumental performer.

As Gualtieri puts it, “Sound-producing systems – human and machine – create networks of circularly causal relationships and chains of self-generating (self-poietic) processes, mutually co-determining sound events.”

Composing in this way requires an adaptation, a paradigm shift. If the digital processing of sound is linked to the acoustic properties of the environment, the quantity (and quality) of the generated sound events is no longer entirely predictable. Therefore the musical score asks the performer both to follow the performance instructions and, at the same time, to continuously listen to the product of the electronic processing and be freely influenced by it.

For (BTF) 1-4, Gualtieri worked in tandem with other musicians, with the composer managing the live electronics. Starting with (BTF) 5, he decided to perform the acoustic instruments himself. So, from (BTF) 5 onward, Gualtieri has automated the entire DSP process, resulting in two types of sound events: one managed by the digital system responsible for electronic processing (Kyma) and the other guided by music notation or word processing.

Capturing the spirit of invention inspired (and compelled) by worldwide lockdowns and isolation, Gualtieri writes:

La temporanea rinuncia a collaborare con altri musicisti mi ha permesso di concentrarmi diversamente
sull’esplorazione sonora e le sue forme organizzative.

– V. Gualtieri

(In English: “Temporarily giving up on collaborating with other musicians allowed me to focus differently
on sound exploration and its organizational forms.”)

An inspiring manifesto for what has been, and continues to be, a challenging time for composers and performers worldwide.

Remote collaboration, telematic performances, and online learning

 Broadcast / Webcast, Concert, Event, Learning, Software, Telematic Performance, Web site
May 09, 2020
 

Networked collaboration, telematic performances, and online learning have been growing in popularity for several years, but the lockdowns and social-distancing guidelines precipitated by the global COVID-19 pandemic have accelerated the adoption of these modes of interaction. This is a brief (and evolving) report on some of the solutions your fellow Kyma-nauts have found for practicing creative collaborations, live performances, private tutoring, consulting, and teaching large online courses. Thanks for sharing your input, feedback and alternative solutions for distance-collaboration with the Kyma community!

Note: For example configurations showing how to get audio onto your computer and out onto a network, read this first.

Kyma Kata

One of the earliest ongoing examples is Alan Jackson’s Kyma Kata, a regular meeting of peers who practice Kyma programming together. It had been operating online in Google Hangouts for over a year before the crisis and recently celebrated its 100th session! (Did they know something the rest of the world didn’t?) The Kyma Kata currently meets twice a week, on Mondays and Tuesdays. Each session begins with “Prime Minister’s Question Time” (an open question-and-answer session on how to do specific tasks in Kyma), followed by an exercise that each person works on independently for 30 minutes, after which they share and discuss their results. Ostensibly a session lasts two hours, but when people really get interested in a problem, some of them stick with it much longer (though there is no honor lost if someone has to leave after two hours).

Alan Jackson and the Tuesday Night Kyma Kata

Collaboration platform

The Kata use Google Meet (née Hangouts) for their meetings, primarily because it seems to work on everyone’s computer and it integrates well with Slack. To start a Hangout, they just type /hangout into the Slack channel.

Audio from Kyma

Kata participants focus on how to do something together, so screen-sharing is important and audio quality has been less important: they often play over the air, using the computer’s built-in microphone to send the audio.

Tuesday Kyma Kata: Andreas, Ben, Jason, Opal, Pete, Domenico, Simon, & Charlie


 

For higher quality audio, Alan uses a small USB mixer plugged into the Mac as the Hangouts audio source. With the mixer, he can combine the Paca’s output and a microphone, which provides much better quality than going over the air through the laptop’s mic, although it’s still limited by Hangouts’ audio quality, delay, and bandwidth.

What is the Kata and How do I sign up?

The term kata, which comes to us by way of karate, has been adopted by software engineers as a way to regularly practice their craft together by picking a problem and finding several different solutions. The point of a kata is not so much arriving at a correct answer as practicing the art of programming.

In the Kyma Kata, a group of aspiring Kymanistas comes together regularly via teleconferencing, and the facilitator (Alan) introduces an exercise that everyone works on independently for about half an hour, after which people take turns talking about their solutions. All levels of Kyma ability are welcome, so why not join the fun?

Improvisation with the Unpronounceables

The Unpronounceables are Robert Efroymson in Santa Fe, New Mexico; Ilker Isikyakar in Albuquerque, New Mexico; and Will Klingenmeier (ordinarily based in Colorado but, due to travel restrictions, on extended lockdown in Yerevan, Armenia). To prepare for a live improvisation planned for KISS 2020, Robert proposed setting up some remote sessions using Jamulus.

The Unpronounceables

Collaboration platform

Using the Jamulus software, musicians can engage in real-time improvisation sessions over the Internet. A single server running the Jamulus server software collects audio from each Jamulus client, mixes it, and sends the mix back to every client. Initially, Robert set up a private server for the group, but they now use one of the public Jamulus servers as an alternative. One amusing side effect of using a public server is that they are occasionally joined by uninvited random guests who start jamming with them.
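In outline, one server mixing cycle works something like this (a schematic Python sketch of the client-server idea described above, not Jamulus’s actual internals; the client names are hypothetical):

```python
def mix_step(client_frames):
    """One server cycle: sum the most recent audio frame from every
    connected client, hard-clip the sum to [-1, 1] as a crude limiter,
    and return the same mix to be sent back to each client."""
    frames = list(client_frames.values())
    n = len(frames[0])
    mix = [max(-1.0, min(1.0, sum(f[i] for f in frames))) for i in range(n)]
    return {cid: mix for cid in client_frames}

# Three clients each send a 4-sample frame; everyone gets the same mix back.
returned = mix_step({
    "robert": [0.1, 0.2, 0.0, -0.1],
    "ilker":  [0.0, 0.1, 0.1,  0.0],
    "will":   [0.2, 0.0, -0.1, 0.1],
})
```

Running this loop every few milliseconds, with network transport on either side, is what lets a central server keep all three players time-aligned in one shared mix.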

During a session, the Unpronounceables use a Slack channel to communicate with each other by text and Jamulus to time-align and mix the three audio sources and for sending the mix to each of the three locations.

Audio from Kyma

Each Unpronounceable uses a second interface to get audio from Kyma to the host computer. Robert uses a Behringer to come out of Kyma, and an IO/2 to get to his Mac. Ilker sends his MOTU Track 16 audio outputs to a Behringer; then selects the Behringer as an I/O in the Jamulus preference tab. Will uses a ZOOM H4n as his Kyma interface and sends the audio to an M-Audio Fast Track Pro which acts as the interface for Jamulus.

Ecosystemic audio in virtual rooms

Scott Miller and Pat O’Keefe’s HDPHN project has always been an exploration of what it means to be alone together — it’s a live public concert where each member of the audience wears headphones, rather than listening through speakers. When stay-at-home orders made it impossible for Scott in Otsego to meet in person with Pat in St. Paul, Minnesota, they started looking into how to move the live HDPHN performance onto the Internet.

 

When Earth Day Art Model 2020 shifted from a live to an online festival, Scott and Pat used this as an opportunity to dive in and perform HDPHN along with one of their older pieces Zeitgeist live through the Internet.

 

 

Audio from Kyma

Scott describes the audio routing for HDPHN as follows:

Pat’s mic comes over Zoom and out of my desktop headphone audio. It also goes into Kyma input 1 on my Traveller. With Zoom, I can’t get/send stereo from a live source. With two people (did this Friday) I bring the second person in on a separate Skype/Zoom/Facetime session on another device, and into Kyma input 2. With 2 inputs, I then mathematically cross-processing them in a virtual room.

I am sending Kyma’s processed/mixed output (Main 1-2) back into my desktop via Lynx E22 audio card, going into DSP-Quattro for compression and EQ, then to iShowU virtual audio interface 1) —> to Zoom for Pat’s monitoring, and 2) —>OBS and then to YouTube synced with Pat’s Zoom video. YouTube latency very bad and it wrecked chamber music with duo, but was fun for free improv with a different duo.

Live coding at a Virtual Club

On Monday, 11 May 2020, beginning at 20.30 CET Time, Lucretio will be live-coding in Kyma using a new Tool of his own design at The Circle UXR Zone. The Circle is a virtual club that is a UXR.zone — a decentralized social VR communication platform offering secure communication, free from user-tracking and ads. A spinoff project of the #30daysinVR transhumanist performance by Enea Le Fons, a UXR.zone allows participants to share virtual rooms and communicate via avatars across a range of devices: from VR headsets (main platform) to desktop and mobile phone browsers.

The avatars of people attending the UXR events are either anonymous (robots) or follow a strict dress code based on the CVdazzle research to stress the importance of cyber camouflage via aesthetics against dystopian surveillance measures happening in the real world. Click to enter the club

The belly of the BEAST

Simon Smith (who is actually quite a slender chap at the BEAST of Birmingham) has recently been tasked with researching online collaboration platforms for the BEAST, so we asked him for some tips from the front lines. He just completed a Sound and Music Workshop with Ximena Alarcon (earlier he helped Alarcon on a telematic performance using the Jacktrip software from Stanford).

In the workshop, she also mentioned:

  • Artsmesh A network music and performance management tool. Content creators run the Artsmesh client which streams live media point-to-point; audiences run a light Artsmesh client to watch the shows.
  • SoundJack is a realtime communication system with adjustable quality and latency parameters. Depending on the physical distance, network capacities, network conditions and routing, some degree of musical interaction is possible.
  • Jitsi,  an open source teleconferencing platform

Other options?

How have you been collaborating, teaching, consulting, creating during the lockdown? We’re interested in hearing your stories, solutions and experiences.

Have you used YouTube or Vimeo for live streaming with Kyma?

What’s your preferred video conferencing software for sending computer audio (Zoom, BigBlueButton, Meet)?

Have you been using remote desktop software (like Chrome Remote Desktop, Jump) to access your studio computer from home?

We welcome hearing about alternate solutions for this ongoing report.

New Pattern Generator for Kyma 7.25

 Release, Software, Sound Design, Sound for picture  Comments Off on New Pattern Generator for Kyma 7.25
Jun 122019
 

Generate sequences based on the patterns discovered in your MIDI files

Symbolic Sound today released a new module that generates endlessly evolving sequences based on the patterns it discovers in a MIDI file. HMMPatternGenerator, the latest addition to the library of the world’s most advanced sound design environment, is now available to sound designers, musicians, and researchers as part of a free software update: Kyma 7.25.

Composers and sound designers are masters of pattern generation — skilled at inventing, discovering, modifying, and combining patterns with just the right mix of repetition and variation to excite and engage the attention of a listener. HMMPatternGenerator is a tool to help you discover the previously unexplored patterns hidden within your own library of MIDI files and to generate endlessly varying event sequences, continuous controllers, and new combinations based on those patterns.

Here’s a video glimpse at some of the potential applications for the HMMPatternGenerator:

 

What can HMMPatternGenerator do for you?

Games, VR, AR — In an interactive game or virtual environment, there’s no such thing as a fixed-duration scene. HMMPatternGenerator can take a short segment of music and extend it for an indeterminate length of time without looping.

Live improvisation backgrounds — Improvise live over an endlessly evolving HMMPatternGenerator sequence based on the patterns found in your favorite MIDI files.

Keep your backgrounds interesting — Have you been asked to provide the music for a gallery opening, a dance class, a party, an exercise class or some other event where music is not the main focus? The next time you’re asked to provide “background” music, you won’t have to resort to loops or sample clouds; just create a short segment in the appropriate style, save it as a MIDI file, and HMMPatternGenerator will generate sequences in that style for as long as the party lasts — even after you shut down your laptop (because it’s all generated on the Paca(rana) sound engine, not on your host computer).

Inspiration under a deadline — Need to get started quickly? Provide HMMPatternGenerator with your favorite MIDI files, route its MIDI output stream to your sequencer or notation software, and listen while it generates endless recombinations and variations on the latent patterns lurking within those files. Save the best parts to use as a starting point for your new composition.

Sound for picture — When the director doubles the duration of a scene a few hours before the deadline, HMMPatternGenerator can come to the rescue by taking your existing cue and extending it for an arbitrary length of time, maintaining the original meter and the style but with continuous variations (no looping).

Structured textures — HMMPatternGenerator isn’t limited to generating discrete note events; it can also generate timeIndex functions to control other synthesis algorithms (like SumOfSine resynthesis, SampleClouds and more) or as a time-dependent control function for any other synthesis or processing parameter. That means you can use a MIDI file to control abstract sounds in a new, highly-structured way.

MIDI as code — If you encode the part-of-speech (like verb, adjective, noun, etc) as a MIDI pitch, you can compose a MIDI sequence that specifies a grammar for English sentences and then use HMMPatternGenerator to trigger samples of yourself speaking those words — generating an endless variety of grammatically correct sentences (or even artificial poetry). Imagine what other secret meanings you could encode as MIDI sequences — musical sequences that can be decrypted only when decoded by the Kyma Sound generator you’ve designed for that purpose.

Self-discovery — HMMPatternGenerator can help you tease apart what it is about your favorite music that makes it sound the way it does. By adjusting the parameters of the HMMPatternGenerator and listening to the results, you can uncover latent structures and hyper meters buried deep within the music of your favorite composers — including some patterns you hadn’t  even realized were hidden within your own music.

Remixes and mashups — Use HMMPatternGenerator to generate an never-ending stream of ideas for remixes (of one MIDI file) and amusing mashups (when you provide two or more MIDI files in different styles).

Galleries of possibilities — Select a MIDI file in the Kyma 7.25 Sound Browser and, at the click of a button, generate a gallery of hundreds of pattern generators, all based on that MIDI file. At that point, it’s easy to substitute your own synthesis algorithms and design new musical instruments to be controlled by the pattern-generator. Quickly create unique, high-quality textures and sequences by combining some of the newly-developed MIDI-mining pattern generators with the massive library of unique synthesis and processing modules already included with Kyma.

How does it work?

If each event in the original MIDI file is completely unique, then there is only one path through the sequence — the generated sequence is the same as the original MIDI sequence. Things start to get interesting when some of the events are, in some way, equivalent to others (for example, when events of the same pitch and duration appear more than once in the file).

HMMPatternGenerator uses equivalent events as pivot points — decision points at which it can choose to take an alternate path through the original sequence (the proverbial “fork in the road”). No doubt you’re familiar with using a common chord to pivot to another key; now imagine using a common state to pivot to a whole new section of a MIDI file or, if you give HMMPatternGenerator several MIDI files, from one genre to another.

By live-tweaking the strengths of three equivalence tests — pitch, time-to-next, and position within a hyper-bar — you can continuously shape how closely the generated sequence follows the original sequence of events, ranging from a note-for-note reproduction to a completely random sequence based only on the frequency with which that event occurs in the original file.

Other new features in Kyma 7.25 include:

▪ Optimizations to the Spherical Panner for 3d positioning and panning (elevation and azimuth) for motion-tracking VR or mixed reality applications and enhanced binaural mixes — providing up to 4 times speed increases (meaning you can track 4 times as many 3d sound sources in real time).

▪ Multichannel interleaved file support in the Wave editor

• New granular reverberation and 3d spatializing examples in the Kyma Sound Library

and more…

Availability

Kyma 7.25 is available as a free update starting today and can be downloaded from the Help menu in Kyma 7. For more information, please visit: symbolicsound.com.

Summary

The new features in the Kyma 7.25 sound design environment are designed to help you stay in the creative flow by adding automatic Gallery generation from MIDI files, and the HMMPatternGenerator module which can be combined with the extensive library of sound synthesis, pattern-generation, and processing algorithms already available in Kyma.

Background

Symbolic Sound revolutionized the live sound synthesis and processing industry with the introduction of Kyma in 1990. Today, Kyma continues to set new standards for sound quality, innovative synthesis and processing algorithms, rock-solid live performance hardware, and a supportive, professional Kyma community both online and through the annual Kyma International Sound Symposium (KISS).

For more information:

Website
Email
@SymbolicSound
Facebook


Kyma, Pacarana, Paca, and their respective logos are trademarks of Symbolic Sound Corporation. Other company and product names may be trademarks of their respective owners.

Jun 042019
 

Sound designers, electronic/computer musicians and researchers are invited to join us in Busan South Korea 29 August through 1 September 2019 for the 11th annual Kyma International Sound Symposium (KISS2019) — four days and nights of hands-on workshops, live electronic music performances, and research presentations on the theme: Resonance (공명).

Link where you can download the Korean, Japanese, or Chinese version of the poster.

“Resonance”, from the Latin words resonare (re-sound) and resonantia (echo), can be the result of an actual physical reflection, of an electronic feedback loop (as in an analog filter), or even the result of “bouncing” ideas off each other during a collaboration. When we say that an idea “resonates”, it suggests that we may even think of our minds as physical systems that can vibrate in sympathy to familiar concepts or ideas.

Photo by Belinda J Carr

At KISS2019, the concept of resonance will be explored through an opening concert dedicated to “ecosystemic” electronics (live performances in which all sounds are derived from the natural resonances of the concert hall interacting with the electronic resonances of speaker-microphone loops), through paper sessions dedicated to modal synthesis and the implementation of virtual analog filters in Kyma, through live music performances based on gravity waves, sympathetic brain waves, the resonances of found objects, the resonance of the Earth excited by an earthquake, and in a final rooftop concert for massive corrugaphone orchestra processed through Kyma, where the entire audience will get to perform together by swinging resonant tubes around their heads to experience collective resonance.

Sounds of Busan — two hands-on workshops open to all participants — focus on the sounds and datasets of the host city: Busan, South Korea. In part one, participants will take time series data from Busan Metropolitan City (for example, barometric pressure and sea level changes) and map those data into sound in order to answer the question: can we use our ears (as well as our eyes) to help discover patterns in data? In part two, participants will learn how to record, process, and manipulate 3d audio field recordings of Busan for virtual and augmented reality applications.

Several live performances also focus on the host city: a piece celebrating the impact of shipping containers on the international economy and on the port city of Busan; a piece inspired by Samul nori, traditional Korean folk music, in which four performers will play a large gong fitted with contact mics to create feedback loops; and a live performance of variations on the Korean folk song: Milyang Arirang, using hidden Markov models.

Hands-on Practice-based Workshops
In addition to a daily program of technical presentations and nightly concerts (https://kiss2019.symbolicsound.com/program-overview/), afternoons at KISS2019 are devoted to palindromic concerts (where composer/performers share technical tips immediately following the performance) and hands-on workshops open to all participants, including:

• Sounds of Busan I: DATA SONIFICATION
What do the past 10 years of meteorological data sound like? In this hands-on session, we will take time series data related to the city of Busan and map the data to sound. Can we hear patterns in data that we might not otherwise detect?

Photo by Belinda J Carr

• The Shape Atlas: MATHS FOR CONTROLLING SOUND
How can you control the way sound parameters evolve over time? Participants will work together to compile a dictionary associating control signal shapes with mathematical functions of time for controlling sound parameters.

• Sounds of Busan II: 3D SOUND TECHNIQUES
Starting with a collection of 3D ambisonic recordings from various locations in and around Busan, we will learn how to process, spatialize, mix down for interactive binaural presentation for games and VR.

Photo by Belinda J Carr

Networking Opportunities
Participants can engage with presenters and fellow symposiasts during informal discussions after presentations, workshops, and concerts over coffee, tea, lunches and dinners (all included with registration). After the symposium, participants can join their new-found professional contacts and friends on a tour of Busan (as a special benefit for people who register before July 1).

 

Sponsors and Organizers
Daedong College Department of New Music (http://eng.daedong.ac.kr/main.do)
Dankook University Department of New Music (http://www.dankook.ac.kr/en/web/international)
Symbolic Sound Corporation (https://kyma.symbolicsound.com/)
Busan Metropolitan City (http://english.busan.go.kr/index)

For more information
Questions
Website
Facebook
Twitter:

Registration
Student and early registration discounts are available for those registering prior to 1 July 2019

Photo by Belinda J Carr

Ghosts in the Uncanny Valley

 Concert, Event  Comments Off on Ghosts in the Uncanny Valley
Jan 292019
 

Composer/saxophonist Andrew Raffo Dewar will be at Mills College Center for Contemporary Music on Monday, February 4th, 2019 to perform his work, Ghosts in the Uncanny Valley II — a 35-minute composition for acoustic quartet improvising with live electronics programmed in the Kyma sound design environment. Kyma analyzes, manipulates, and expands upon the sounds of the acoustic instruments in real time, creating an electronically “extended” (and altered) quartet. The quartet features the composer on sax, Gino Robair (prepared piano), Kyle Bruckmann (oboe/English horn), and John Shiurba (acoustic guitar).

On January 31st 2019, Dewar will be at the Santa Monica Library to perform the premiere of his new piece for soprano sax and Kyma as part of their Sound Waves series. This performance starts at 7:30 pm and is free and open to the public.

 

Interview with Madison Heying

 Concert, Conference, Event, Festival, Interview  Comments Off on Interview with Madison Heying
Aug 162018
 

Madison Heying shows us the view from the Music Center at UC Santa Cruz

Madison Heying is a PhD candidate in cultural musicology at the University of California Santa Cruz where she focuses on experimental, electronic, and computer music. On any given day, you’re as likely to find Madison on a stage performing DYI analog electronic circuits with her partner David Kant as you are to find her holed up in the experimental music archives at the UCSC library. In between publishing scholarly articles and presenting papers at international musicology conferences, she also hosts a podcast and curates experimental music events around the Monterey Bay area as a member of Indexical, a composer-run artist collective that focuses on new chamber and experimental music, and especially music that lies outside of the aesthetic boundaries of major musical institutions.

Somehow Madison has also found time in her schedule to co-organize the Kyma International Sound Symposium this year in Santa Cruz on the themes: Altered States and Ecosystems. She sat down with us recently to talk a little about Santa Cruz, experimental music, and banana slugs…

Experimental, electronic, and computer music

Hi Madison. Could you please tell us what a cultural musicologist is (as distinct from historical musicology, etc)? What do you study and how?

A cultural musicologist is a music historian that pays particular attention to the people groups behind a given musical phenomenon. I think the attention given to cultural context has been a trend in musicology for a while now, but my PhD program makes it a priority. Many of us study living or recent composers and music-making communities and borrow a lot of our methodology and theory from ethnomusicology. My work broadly focuses on experimental, electronic, and computer music.

At UC Santa Cruz, it appears that experimental music is still very much ongoing and supported. Can you talk a little bit about what “Experimental Music” is and why UC Santa Cruz was and continues to be a strong center for this aesthetic or this mindset?

There is a really strong history of musical experimentation in the Bay Area in general, dating back to composers like Henry Cowell and Lou Harrison to the San Francisco Tape Music Center in the 1960s, and later programs at Mills College, CCRMA, and UCSC. James Tenney taught at UCSC for a year in the 70s. Gordon Mumma started the Electronic Music studio here, David Cope ran the Algorithmic Composition program for years. Along with the Cabrillo Music Festival (which used to be VERY experimental), Santa Cruz was something of a hub for weird music in the 70s and 80s. There’s a really strong tradition here of incorporating elements of non-Western music into a more experimental compositional practice, of developing hand-made electronics, and also big developments in DSP and computer music.

At UCSC there are currently some really exciting people on the faculty including composers Larry Polansky, David Dunn, and musicologist Amy C. Beal in the Music Department, sound artist Anna Friz and Yolande Harris in the Arts Division, and Kristin Erickson Galvin, who is also co-organising KISS2018, on the staff of the Digital Arts and New Media Program.

You’ve been learning Kyma and building analog circuits as part of your research. Does having hands-on experience with the tools change the way you view, understand, and report on the cultural implications and impact of technology?

Absolutely! Taking a hands on approach has given me significant insight not only into how a given technology works, but how it might have been used historically, and some of the reasons why a composer or musician employed the technology in a particular way.

The thing with Kyma in particular is that it’s such a rich, deep language, so I think even if I spent 20 years using it, I’d still learn new things. Having the hands-on experience has been a total necessity to just scratching the surface of understanding of how Kyma works and why it’s so unique. It’s also made a big difference to collaborate or work with people that know a lot more about electronics or programming; I’m able to learn so much by seeing how they tackle/think through problems and find solutions.

Kyma International Sound Symposium (KISS)

Kristin Erickson Galvin and Madison Heying at UCSC talking about their implementation of cellular automata in Kyma

What motivated you to co-host KISS2018 in Santa Cruz? What would you like to show people about Santa Cruz, your university, your home state? What are you hoping people will come away with after participating in this conference?

My first impulse was that co-hosting KISS2018 would be a very tangible way to give back to the Kyma community, who have given me so much! I also thought UCSC would be the perfect place to host KISS and I knew that this would be my last year here, so I figured, why not do it now?!

I think the first KISS you attended was KISS2015 in Bozeman Montana. What struck you about KISS that made it different from other conferences that you regularly attend?

I was particularly struck by how nice everyone is. At academic conferences people can be really cruel during the Q & A after a presentation or in down time. A good number of people are jockeying to make a good impression on senior scholars or prove their intelligence by making someone else look bad, there is definitely more of a hostile competitive atmosphere. It just takes time to find your people and to be comfortable being yourself in that kind of environment.

But at KISS, it’s different. Everyone is there to learn and share their work, so there is a much greater sense of camaraderie. If there is competition, it seems like it’s mostly self-imposed, that people just want to get better at using Kyma or their compositional or performative practice.

Madison in front of the Music Center Recital Hall at UCSC

Was KISS2016 in Leicester UK different from the experience you had in Montana? How was it different and how was it similar in terms of the people, the atmosphere, the content, the music? Has your picture of the Kyma community evolved over time and with more experience?

Yes, I think each KISS has its own flavor based on the host institution and the people that end up coming. On a personal level they were also different because in Bozeman I didn’t really know anyone except the people I came with. So I felt a bit more like a newbie outsider. But in Leicester, I felt like I was already part of the group and it was great to see so many familiar faces and reconnect with people I met in Bozeman (and of course to meet new people as well).

Are there some things that you’re particularly looking forward to for KISS2018?

For me it’s been really fascinating to see how people interpret the theme. I love the variety of approaches Kyma users take to composition and performance, it makes for really dynamic concerts. Each time I attend KISS there’s usually a few pieces that totally shock me and blow me away and leave me wondering how they did it or just in awe of someone’s prowess as a performer/composer. I’m looking forward to seeing the thing that’s just under everyone’s radar, but that’s going to be the really memorable piece.

Santa Cruz and the spirit of place

Do you believe there is such a thing as “spirit of place”? If so, then how does the natural, cultural, political environment of Santa Cruz affect you and your colleagues?

Yes, I do. I think the biggest thing I notice is that life moves at a slower pace in Santa Cruz than other places, people are rarely in a rush to do things. As an impatient person this is probably the best and most frustrating aspect of living here, it’s difficult to get other people to feel the same sense of urgency about something, but at the same time it also helps me slow down and “stop and smell the roses” as they say.

Madison at Seabright Beach

How is the atmosphere influenced by, yet distinct from, the culture of “The Valley”? Since it’s so close by, does Silicon Valley ever act as a magnet, draining people and activities away from Santa Cruz? Do people ever “escape” from the Valley and seek refuge in Santa Cruz?

Yes, it’s becoming more and more common for techies from “over the hill” to live in Santa Cruz and commute into Silicon Valley. They realized that the commute is the same as it is from San Francisco, with slightly cheaper rents and better beach access! In general I love being so close to Silicon Valley. Many of my close friends work for tech companies like Google, Facebook, or Uber. Some of the excitement and energy of their fast-paced lifestyles oozes into Santa Cruz and sends a jolt of fresh possibilities into this sleepy beach town. I also love to think about the history of the place, how since the 60s there’s a real convergence of counter-cultural values with the most cutting-edge, high-tech and commercial innovations. It makes for some interesting paradoxes, like the wealthy aging-hippy beach bum software developer 🙂

For those of us who are planning to come to KISS2018, what’s the one thing that every visitor to Santa Cruz absolutely, unequivocally, cannot miss seeing or experiencing on their first visit there?

Well, the best thing about Santa Cruz is that it has the beach and redwood forests, so I’d say they have to visit both things. To go for a hike in the redwoods, maybe on Pogonip trail near campus, or Nisene Marks, about 5 miles south. And then visit the beach. Seabright beach, near where I live, is great, because the tourists don’t know about it, so it’s not usually too crowded. If you don’t want to go in the water, a walk along West Cliff Drive will also blow you away, I think it’s probably one of the most beautiful beach walks in California! And of course you should probably take a ride on the Giant Dipper at the boardwalk!

Madison enjoys a Penny ice cream at the beach

Guilty pleasures?

Penny ice cream at the beach! (Sadly it does cost more than a penny but is worth it — some of the best ice cream I’ve ever had!) Also my favorite bakery/coffee shop is Companion Bakers. Both Companion and Penny have vegan/gf options, and REALLY good regular stuff too!

Should people bring their Zoom recorders to Santa Cruz? What is the must-record sound they have to capture while they are there?

Yes! The seals of the wharf are really fun to record. If you have a hydrophone there are also a lot of interesting sounds under the water, including snapping shrimp!

 
 

 

Banana slugs. Why or why not?

I am very pro-banana slugs! You really have to see one in person to appreciate them and what a ridiculous creature they are. I can’t imagine a better mascot to capture the spirit of this place.

How hearing can change the world

Thanks for taking time out to talk with us, Madison! To conclude, if there were one thing you could change that you think would be of most help to other people or to society as a whole, what would it be?

To be able to listen to someone that is different than you and have understanding and compassion, and to let that act of hearing change how you operate in the world. For everyone to have more empathy, to really understand that everyone has a singular view of the world, based on so many factors like where and how they were raised, race, gender, etc. and that everyone else’s experience is valid.

Thank you so much for taking the time to talk with us, Madison! We’re looking forward to having more discussions with you about life, empathy, experimental music, Kyma, and banana slugs at KISS2018: Altered States (6-9 September 2018 in Santa Cruz, California).

KISS2018: Altered States

 Concert, Conference, Event, Festival, Learning  Comments Off on KISS2018: Altered States
Jun 252018
 
A global community of sound designers & musicians meet to explore ways in which sound, music, and technology can alter state…

Sound designers, musicians, and sound-afficionados are invited to participate in the tenth annual Kyma International Sound Symposium (KISS2018) in Santa Cruz California from 6-9 September 2018 when Kyma practitioners at every level of experience — ranging from beginners to experts who make their living teaching, performing, and designing sounds with Kyma — will convene to present their most recent creative and technical work related to the conference theme, “Altered States” and sub-theme, “Ecosystems”.

Whether they interpret “Altered States” in terms of state machines for cryptography, shamanic trance states, stable/unstable states in a dynamical system, states of consciousness along the path to enlightenment, hidden states of a Markov model, or the ways in which active-listening can inspire changes to the state of the ecosystem, there is one point on which all the symposiasts agree: Sound and music can alter states.

KISS2018 Program Highlights

KISS2018 will feature over 25 hours of technical sessions, discussions, and live electronic music performances showcasing some of the most thought-provoking work created with the Kyma sound design environment this year. The full KISS2018 schedule is available online.

Here are a few highlights:

Gabriel Montufar (DJ Monti) is collaborating with the University of California, Santa Cruz (UCSC) Fencing Club to present En Garde, a unique live performance in which the movements and breath of fencers engaged in a live duel are transformed into intricate sounds intended to alter the state of the fencers and the outcome of the match.

The Tower of Voices is a ninety-three foot tall musical instrument containing forty wind chimes to represent the forty passengers and crew members of United Flight 93. Artist Ben Salzman (Hamilton College) and composer Jon Bellona (University of Oregon) will reflect on the states of existence between life and death as they reconstruct the compositional processes of their late friend and mentor Sam Pellman who composed the music for this installation. The formal dedication of the Tower of Voices will be held on 9 September, 2018 in Pennsylvania as part of this year’s 9/11 observances.

Kristin Erickson (aka Kevin Blechdom), Technical Coordinator for Digital Arts and New Media at UCSC, will present the premiere of her new operetta The Dolphinarium in collaboration with film and television producer, Matthew Galvin. Based on the groundbreaking research of physician, neuroscientist, psychoanalyst, psychonaut, philosopher, writer and inventor John C. Lilly, the operetta explores aspects of Lilly’s 1965 Dolphin Cohabitation experiments and his lifelong research into altered states.

Carla Scaletti, president of Symbolic Sound Corporation and co-creator of the Kyma language for sound design, will welcome symposium delegates with a keynote lecture on the conference theme of Altered States in relation to sound, programming languages, memory, and learning.

Italian DJ/producer Domenico Cipriani (Lucretio) is performing Predator/Prey, a living sonic ecosystem in which sounds are born, move, hunt, reproduce, and die within a quadraphonic listening space, inspired by John Holland’s Adaptation in Natural and Artificial Systems and Daniel Shiffman’s The Nature of Code. Cipriani, whose degree is in linguistics from the University of Padua, studies the relationship between functionalism and social semiotics. Inspired by Cristian Vogel’s 2016 performance at the Decipher Language party in Berlin, Cipriani’s recent focus has been digital audio programming and performing with the Symbolic Sound Kyma system.
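The agent-based idea behind a piece like Predator/Prey can be sketched in a few lines of code. The following is a purely illustrative sketch in the spirit of Shiffman’s The Nature of Code — every class name, parameter, and the position-to-sound mapping are our own assumptions, not Cipriani’s actual Kyma patch:

```python
import random

class Agent:
    """A sound-producing agent living in a 2-D listening space."""
    def __init__(self, kind, x, y, energy):
        self.kind, self.x, self.y, self.energy = kind, x, y, energy

    def voice(self):
        # Illustrative mapping: x -> quadraphonic pan, y -> pitch in Hz.
        return {"pan": self.x, "pitch": 100 + 900 * self.y}

def step(agents, rng):
    """One generation: predators hunt the nearest prey; prey may reproduce."""
    prey = [a for a in agents if a.kind == "prey"]
    predators = [a for a in agents if a.kind == "predator"]
    for p in predators:
        p.energy -= 1                          # hunting costs energy
        if prey:
            target = min(prey, key=lambda q: (q.x - p.x)**2 + (q.y - p.y)**2)
            if (target.x - p.x)**2 + (target.y - p.y)**2 < 0.01:
                p.energy += 5                  # successful hunt: the prey dies
                prey.remove(target)
            else:                              # otherwise move toward the target
                p.x += (target.x - p.x) * 0.5
                p.y += (target.y - p.y) * 0.5
    offspring = [Agent("prey", rng.random(), rng.random(), 3)
                 for _q in prey if rng.random() < 0.2]   # new sounds are born
    return prey + offspring + [p for p in predators if p.energy > 0]

rng = random.Random(42)
agents = [Agent("prey", rng.random(), rng.random(), 3) for _ in range(10)] + \
         [Agent("predator", rng.random(), rng.random(), 10) for _ in range(2)]
for _ in range(20):
    agents = step(agents, rng)
voices = [a.voice() for a in agents]   # parameters a synth engine could render
```

In a real-time realization each surviving agent would drive one synthesis voice, so births, hunts, and deaths are heard directly as voices entering, moving, and vanishing from the space.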

Korean composer Kiyoung Lee and pianist/improviser Ha-Young Park from Dankook University will present Turritopsis dohrnii, a live performance based on the process of transdifferentiation performed by the “immortal jellyfish”, a biologically immortal species that can literally alter the state of its own cells.

Franz Danksagmüller, professor at the Musikhochschule Lübeck and the Royal Academy of Music in London and creator/performer of live electronics and sound design for John Malkovich’s “Just Call Me God”, will be performing emotional states — Lieder ohne Worte (“songs without words”), a song cycle based on the utterances people make when they can’t find the right word or expression during a conversation.

Robert Efroymson, software developer and CEO of the high-speed optical communications firm Dynamic Photonics, will describe and demonstrate his new Cryptographic Music Sequencer modeled after the M-209 — a WWII era mechanical encryption device.
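The M-209’s pin-wheel mechanism lends itself naturally to sequencing. As a rough illustration of the principle only (not Efroymson’s actual design), here is a minimal sketch: six wheels of pairwise-coprime lengths, as on the real machine, each advance one position per step, and the number of engaged pins selects a scale degree. The pin-to-pitch mapping is an assumption made for this example:

```python
# Six key wheels with pairwise-coprime lengths, as on the real M-209;
# the combined pin pattern repeats only after their product (~101 million steps).
WHEEL_LENGTHS = [26, 25, 23, 21, 19, 17]

def make_wheels(seed_bits):
    """Each wheel is a list of 0/1 'pins', filled here from a 64-bit seed."""
    wheels, offset = [], 0
    for n in WHEEL_LENGTHS:
        wheels.append([(seed_bits >> ((offset + k) % 64)) & 1 for k in range(n)])
        offset += n
    return wheels

def sequence(wheels, steps, scale=(0, 2, 4, 5, 7, 9, 11)):
    """Advance every wheel one position per step, count the engaged
    pins (0-6), and map that count onto a degree of a major scale."""
    notes = []
    for t in range(steps):
        engaged = sum(w[t % len(w)] for w in wheels)
        notes.append(60 + scale[engaged % len(scale)])  # MIDI, around middle C
    return notes

notes = sequence(make_wheels(0x9E3779B97F4A7C15), 16)
```

Because the wheel lengths share no common factor, the resulting note stream is fully deterministic yet effectively non-repeating on musical timescales — the same property that made the mechanism useful for encryption.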

Garth Paine, Senior Sustainability Scientist and composer at Arizona State University, will present a keynote lecture on the Listen(n) project with a focus on the ways in which active-listening can inspire meaningful action toward changing the state of the environment.

and many others… (Click for the full schedule of concerts and talks)

Who should attend KISS2018?

For anyone who is obsessed with sound — whether a novice seeking to kickstart their career, an expert looking to take their mastery to the next level, or someone who’s simply curious about how sound and music can alter states — KISS2018 is an opportunity to be immersed in sound and ideas, surrounded by fellow sound enthusiasts, for four days and nights of intensive discussion, learning, and music, and a chance to forge new professional connections and lifelong friendships.

Registration for KISS2018 is open to all and includes access to the lectures, hands-on labs, lunches, dinners, coffee breaks, an opening reception, and seven live performances at the UCSC Recital Hall and Digital Arts Research Center (DARC), including a special outdoor concert among the redwoods at the Stanley Sinsheimer Glen.

Organizers

KISS2018 is being co-organized by the University of California, Santa Cruz (UCSC) Arts Division, the Digital Arts and New Media Research Center, and Symbolic Sound Corporation.

Contact information and details

For information on registration, travel/lodging information, and programming, please visit: http://kiss2018.symbolicsound.com

To follow the latest KISS2018 news and developments:
Facebook: http://on.fb.me/nI9ATE
Twitter: https://twitter.com/KymaSymposium

The KISS2018 Organizers would be happy to answer your questions via email.

Tactile Utterances

 Album, Concert, Event, Installation, Release
Jun 18, 2018
 

Composer/sonologist Roland Kuit encountered the paintings of Tomas Rajlich in 1992. Rajlich’s work exemplifies ‘Fundamental Painting’, a minimalist strategy that explores the post-existential nature of the painting itself: its color, structure, and surface; it is simply the painting as a painting. Rajlich opened Kuit’s eyes to a kind of minimalism that Kuit recognized in his own music of that time, when he was working with semi-predictable chaotic systems. Kuit began creating works for Rajlich in 1993, and last year he released a new piece for Kyma-extended string quartet: Tactile Utterance – for Tomas Rajlich.

The world premiere of Tactile Utterance took place on 23 June 2017 at the Kampa Museum – The Jan and Meda Mládek Foundation in Prague (CZ) for the opening of a special Tomas Rajlich retrospective: Zcela abstraktní retrospektiva (“A Completely Abstract Retrospective”). Composed especially for the occasion, Kuit’s three-part work expresses 50 years of painting by Tomas Rajlich.

Kuit’s recent research into new compositional methods, algorithms, and spectral music came together in this work. His aim was to capture the process of painting: how can acrylate polymers on canvas be related to sound? Bowing without ‘tone’ becomes a metaphor for brushing on a tangible thickness of color; very short percussive sounds on the string instruments point out the secants of a grid; dense multiphonics act as palette knives, broadened textures smeared out and dissolving into light.

The premiere, performed by the FAMA Quartet with Roland Kuit on Kyma, was very well received.

The Prague recordings

For the recording, made 15–20 February 2018, Roland decided to record the string quartet alone and unprocessed so that he could do the post-processing and balancing in the studio. Recording engineer Milan Cimfe of the SONO Recording Studios in Prague used three sets of microphones: one to create a very ‘close to the skin’ recording of all the string instruments; the second set overhead; and the third set as a ‘room’ recording. Kuit then took the recordings to Sweden to finish the mix and the Kyma processing.

Album art © Tomas Rajlich, Acrylic on Linen, 1990-1991 c/o Pictoright Amsterdam 2018

Tactile Utterance – Roland Emile Kuit
For Tomas Rajlich

1/ BRUSH 00:14:42

From pianissimo bowed-wood sounds to noise, to an elaborate crescendo ending in a broad fortissimo textural cluster: Kyma extends the string sounds with spectral holds.

2/ MAZE 00:12:06

When walking past a grid, we see it first condensed, then open, then condensed again, in both the horizontal and vertical directions. The string quartet interprets the ‘intersections’ by means of percussive sounds: pizzicato, spiccato, martelé, col legno, etc. These sounds are treated as particles, copied 100 times with the Kyma system, resulting in a wall of noise. A ritardando toward the center of the piece allows the particles to be distinguished as single sounds. From these single sounds, Roland made “spectral pictures” that could be smeared to complement the grid lines, followed by an accelerando back to prestissimo particles.
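The “particles copied 100 times” idea can be approximated with simple buffer arithmetic. This is a rough, illustrative sketch only — the piece itself was realized in the Kyma system, not with code like this: a short decaying grain is layered at random offsets, densely for the noise wall and sparsely for the passages where single sounds emerge.

```python
import math
import random

SR = 48_000  # sample rate in Hz

# A 10 ms decaying sine burst standing in for a short percussive grain.
grain = [math.sin(2 * math.pi * 880 * t / SR) * math.exp(-40 * t / SR)
         for t in range(SR // 100)]

def particle_wall(n_copies, span_seconds, rng):
    """Mix n_copies of the grain at random positions inside the span."""
    out = [0.0] * (int(span_seconds * SR) + len(grain))
    for _ in range(n_copies):
        start = rng.randrange(int(span_seconds * SR))
        for i, s in enumerate(grain):
            out[start + i] += s
    return out

rng = random.Random(0)
wall = particle_wall(100, 1.0, rng)    # dense layering: a wall of noise
sparse = particle_wall(8, 1.0, rng)    # low density: single events emerge
```

Sweeping the copy density down and back up over time would give the ritardando-to-accelerando shape described above.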

3/ SURFACE 00:14:59

Multiphonics morph into airy flageolets while the Kyma system processes the string quartet into algorithmically multiplexed, resynthesized sounds, dissolving them into a muffled softness.

Roland Emile Kuit – Kyma

FAMA Quartet:
David Danel – violin
Roman Hranička – violin
Ondřej Martinovský – viola
Balázs Adorján – violoncello

Recorded by Milan Cimfe at the Sono Recording Studios Prague

DONEMUS
Composers Voice: CV 229
