At the IRCAM Forum Workshops @Seoul 6-8 November 2024, composer Steve Everett presented a talk on the compositional processes he used to create FIRST LIFE: a 75-minute mixed media performance for string quartet, live audio and motion capture video, and audience participation.
FIRST LIFE is based on work that Everett carried out at the Center for Chemical Evolution, an NSF/NASA-funded project spanning multiple universities that examines whether the building blocks of life could have formed in early Earth environments. He worked with stochastic data generated by Georgia Tech biochemical engineer Martha Grover, mapping them to standard compositional structures (not as a scientific sonification, but to help educate the public about the Center's work through a musical performance).
Data from IRCAM software and PyMOL were mapped to parameters of physical models of instrumental sounds in Kyma. For example, up to ten data streams generated by the formation of monomers and polymers in Grover’s lab were used to control parameters of the “Somewhat stringish” model in Kyma (such as delay rate, BowRate, position, decay, etc). Everett presented a poster about this work at the 2013 NIME Conference in Seoul, and has uploaded some videos from the premiere of First Life at Emory University.
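The core idea of driving a physical model from lab data can be sketched as a simple rescaling step. This is a hypothetical illustration, not Everett's actual mapping: the parameter name, data values, and output range are all assumptions.

```python
# Minimal sketch: linearly rescale a raw data stream into a normalized
# control range suitable for driving a sound parameter such as a bow rate.
# Data values and parameter ranges here are hypothetical.

def rescale(values, out_lo=0.0, out_hi=1.0):
    """Map a list of raw data values onto [out_lo, out_hi]."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant stream: park it mid-range
        return [(out_lo + out_hi) / 2.0 for _ in values]
    return [out_lo + (v - lo) / (hi - lo) * (out_hi - out_lo) for v in values]

# e.g. monomer-concentration samples from a simulation run (made up)
concentration = [0.12, 0.19, 0.31, 0.28, 0.44]
bow_rate = rescale(concentration, 0.1, 0.9)
```

The same rescaling would be repeated per stream, so that up to ten independent data streams each land in the range a given model parameter expects.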
Currently on the music composition faculty of the City University of New York (CUNY), Professor Everett is teaching a doctoral seminar on timbre in the spring (2025) semester and next fall he will co-teach a course on music and the brain with Patrizia Casaccia, director of the Neuroscience Initiative at the CUNY Advanced Science Research Center.
A highlight of this year’s Cortona Sessions for New Music will be Special Guest Artist, Anne La Berge. Known for her work blending composed and improvised music, sound art, and storytelling, Anne will be working closely with composers and will be coaching performers on improvisation with live Kyma electronics!
The Cortona Sessions for New Music is scheduled for 20 July – 1 August 2025 in Ede, Netherlands, and includes twelve days of intensive exploration of contemporary music, collaboration, and discussions on what it takes to make a career as a 21st-century musician.
Anne is eager to work with instrumentalists and composers looking to expand their solo or ensemble performances through live electronics, so if you or someone you know is interested in working with Anne this summer, consider applying for the 2025 Cortona Sessions!
Applications are open now (Deadline: 1 February 2025). You can apply as a Composer, a Performer, or as a Groupie (auditor). A full-tuition audio/visual fellowship is available for applicants who can provide audio/visual documentation services and/or other technological support.
At the invitation of UWL Lecturer Charlie Norton, Carla Scaletti presented a lecture/demonstration on Generative Sound Design in Kyma for students, faculty, and guests at the University of West London on 14 November 2024. As an unanticipated prelude, Pete Townshend (who, along with Joseph Townshend, works extensively with Kyma) welcomed the Symbolic Sound co-founders to his alma mater and invited attendees to tour the Townshend Studio following the lecture.

It seems that anywhere you look in the Townshend Studio, you see another rock legend. John Paul Jones (whose most recent live Kyma collaborations include Sons of Chipotle, Minibus Pimps, and Supersilent among others) recognized an old friend from across the room: a Yamaha GX-1 (1975), otherwise known as ‘The Dream Machine’ — the same model JPJ played when touring with Led Zeppelin and when recording the 1979 album “In Through The Out Door”. The GX-1 was Yamaha’s first foray into synthesizers; only ten were ever manufactured. It featured a ribbon controller and a keyboard that could also move laterally for vibrato. Other early adopters included ELP, Stevie Wonder, and ABBA.
Did you know that you could study for a degree in sound design and work with Kyma at Carnegie Mellon University? Joe Pino, professor of sound design in the School of Drama at Carnegie Mellon University, teaches conceptual sound design, modular synthesis, Kyma, film sound design, ear training and audio technology in the sound design program.
Sound design works in the spaces between reality and abstraction. Sounds are less interesting as a collection of triggers for lending designed worlds reality; they are more effective when they trigger emotional responses and remembered experiences.
There are so many ways to learn Kyma (the online documentation, asking questions in the Kyma Discord, working with a private coach or group of friends…). This semester there are also two new university courses where, not only can you learn Kyma, you’ll also have a chance to work on creative projects with a composer/mentor and interact with fellow Kyma sound designers in the studio while also earning credit toward a degree.
In “Sonic Narratives” you’ll learn to combine traditional instruments and electronic music technologies to explore storytelling through sound. Treating the language of sound as a potent narrative tool, the course covers advanced sound synthesis techniques such as Additive, Subtractive, FM, Granular, and Wavetable Synthesis using state-of-the-art tools like Kyma and Logic Pro. Beyond technical proficiency, students will explore how these synthesis techniques contribute to diverse fields, from cinematic soundtracks to social media engagement.
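Of the synthesis techniques the course lists, FM is compact enough to sketch in a few lines. This is a generic two-operator FM tone, not course material; the default frequencies, modulation index, and sample rate are arbitrary choices for illustration.

```python
import math

def fm_tone(fc=440.0, fm=110.0, index=2.0, dur=0.1, sr=44100):
    """Render a simple two-operator FM tone:
    y(t) = sin(2*pi*fc*t + index * sin(2*pi*fm*t))."""
    n = int(dur * sr)
    return [math.sin(2 * math.pi * fc * (i / sr)
                     + index * math.sin(2 * math.pi * fm * (i / sr)))
            for i in range(n)]

samples = fm_tone()  # 0.1 s of audio at 44.1 kHz
```

Raising the modulation index adds sidebands and brightens the timbre, which is why FM is so often paired with the granular and wavetable techniques the course also covers.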
From the course catalog: The Kyma System is an advanced real-time sound synthesis, electro-acoustic music composition, and performance software/hardware instrument. It is widely used in major film sound design studios, by composers across the globe, and in scientific data sonification. The Kyma system is a patcher-like environment that can also be scripted and driven externally via OSC and MIDI. Algorithms can be placed in timelines for dynamic instantiation based on musical events, in grids, or as fixed patches. The system has several very powerful FFT and spectral processing approaches, which can also be used live. In this class, learn about the potential of the system and several of the ways in which it can be used to create innovative sound design and live electronics with instruments. The class is aimed at students who are interested in electroacoustic music composition and real-time performance, and more broadly in sound design.
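Since the catalog mentions driving Kyma externally via OSC, here is a minimal sketch of what an OSC message looks like on the wire, built with the standard library only. The address, host, and port are hypothetical; a real setup would use whatever addresses the Kyma patch exposes.

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_float_message(address: str, value: float) -> bytes:
    """Encode an OSC message carrying a single 32-bit big-endian float."""
    return (osc_pad(address.encode("ascii"))   # address pattern
            + osc_pad(b",f")                   # type-tag string: one float
            + struct.pack(">f", value))        # the argument itself

msg = osc_float_message("/vcs/BowRate", 0.5)   # hypothetical address

# To actually send it over UDP (host and port are placeholders):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(msg, ("192.168.1.100", 8000))
```

Libraries such as python-osc wrap this encoding, but seeing the raw layout makes it clear why every OSC field is padded to four bytes.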
These are not the only institutions of higher learning where Kyma knowledge is on offer this fall. Here’s a sampling of some other schools offering courses where you’ll learn to apply Kyma skills to sound design, composition, and data sonification:
University of Oregon (Jeffrey Stolet, Jon Bellona, Zachary Boyt)
University of New Haven (Simon Hutchinson)
Indiana University (Chi Wang)
Zhejiang Conservatory of Music (Fang Wang)
Sichuan Conservatory of Music (Iris Lu)
Wuhan Conservatory of Music (Sunhuimei Xia)
Dankook University, Seoul (Kiyoung Lee)
Musikhochschule Lübeck (Franz Danksagmüller)
If you are teaching a Kyma course this year, and don’t see yourself on the list, please let us know.
Franz Danksagmüller was in Waltershausen 25-28 August 2024 presenting workshops on how to integrate live electronics with the pipe organ. Here’s a photo of “the hand” controller designed by Franz, alongside an Emotiv EEG headband and Kyma Control.
Although from this vantage point, one might think that the organ loft is nearly paradisiacal…
…the organ builder took pains to remind the organist of the alternative (or at the very least, to have a laugh).
Music professor Mei-ling Lee was recently featured in the Haverford College blog highlighting her new course offering: “Electronic Music Evolution: From Foundational Basics to Sonic Horizons”, a course that provides students with an in-depth introduction to the history, theory, and practical application of electronic music from the telharmonium to present-day interactive live performances driven by cutting-edge technologies. Along the way, her students also cultivate essential critical listening skills, vital for both music creation and analysis.
In addition to introducing new courses this year, Dr. Lee also presented her paper “Exploring Data-Driven Instruments in Contemporary Music Composition” at the 2024 Society for Electro-Acoustic Music in the United States (SEAMUS) National Conference, held at the Louisiana State University Digital Media Center on 5 April 2024, and published as a digital proceeding through the LSU Scholarly Repository. This paper explores connections between data-driven instruments and traditional musical instruments and was also presented at the Workshop on Computer Music and Audio Technology (WOCMAT) National Conference in Taiwan in December 2023.
Lee’s electronic music composition “Summoner” was selected for performance at the MOXSonic conference in Missouri on 16 March 2024 and the New York City Electronic Music Conference (NYCEMF) in June 2024. Created using the Kyma sound synthesis language, Max software, and the Leap Motion Controller, it explores the concept of storytelling through the sounds of animals in nature.
Virtuoso organist / composer / live electronics performer, Franz Danksagmüller is presenting an afternoon workshop followed by an evening concert of new works for “hyper-organ” and Kyma electronics at the Orgelpark in Amsterdam, Friday 7 June 2024, as part of the 2024 International Orgelpark Symposium on the theme of ‘interfaces’ (in the broadest sense of the word).
In his symposium, “New Music, Traditional Rituals“, Danksagmüller wrestles with the question of how new music might play a role in liturgy and religious rituals. While the development of the so-called hyper-organ reveals the strong secular roots of the pipe organ (which, after all, spent the first fifteen centuries of its life outside the church), the instrument has also been significantly shaped by its six centuries within the church, an institution that inspired organ builders and organists to create the magnificent instruments that remain an important part of Europe’s patrimony today.
Is there a role for new music in the church? In their pursuit of “truth” do scientists and theologians share any common ground? Danksagmüller does not shy away from the “big questions”!
Networked collaboration, telematic performances, and online learning have been growing in popularity for several years, but the lockdowns and social-distancing guidelines precipitated by the global COVID-19 pandemic have accelerated the adoption of these modes of interaction. This is a brief (and evolving) report on some of the solutions your fellow Kyma-nauts have found for practicing creative collaborations, live performances, private tutoring, consulting, and teaching large online courses. Thanks for sharing your input, feedback and alternative solutions for distance-collaboration with the Kyma community!
Note: For example configurations showing how to get audio onto your computer and out onto a network, read this first.
Kyma Kata
One of the earliest ongoing examples is Alan Jackson’s Kyma Kata, a regular meeting of peers who practice Kyma programming together, which had been operating online in Google Hangouts for over a year before the crisis and recently celebrated its 100th session! (Did they know something the rest of the world didn’t?) The Kyma Kata currently meets twice a week, on Mondays and Tuesdays. They begin each session with “Prime Minister’s Question Time” (an open question-and-answer session on how to do specific tasks in Kyma), followed by an exercise that each person works on independently for 30 minutes, after which they share and discuss their results. Ostensibly the session lasts for two hours, but when people really get interested in a problem, some of them stick with it for much longer (though there is no honor lost if someone has to leave after two hours).
Kata participants focus on how to do something together, so screen-sharing is important and audio quality has been less important: they often play over the air, using the computer’s built-in microphone to send the audio.
For higher quality audio, Alan uses a small USB mixer plugged into the Mac as the Hangouts audio source. Using the mixer, he can mix the Paca’s output with a microphone, which provides much better quality than going over-the-air through the laptop’s mic, although it’s still limited by Hangouts’ audio quality, delay, and bandwidth.
What is the Kata and How do I sign up?
The term, kata, which comes to us by way of karate, has been adopted by software engineers as a way to regularly practice their craft together by picking a problem and finding several different solutions. The point of a kata is not so much arriving at a correct answer as it is to practice the art of programming.
In the Kyma Kata, a group of aspiring Kymanistas come together regularly via teleconferencing, and the facilitator (Alan) introduces an exercise that everyone works on independently for about half an hour, after which people take turns talking about their solutions. All levels of Kyma ability are welcome, so why not join the fun?
Improvisation with the Unpronounceables
The Unpronounceables are Robert Efroymson in Santa Fe, New Mexico; Ilker Isikyakar in Albuquerque, New Mexico; and Will Klingenmeier (ordinarily based in Colorado but, due to travel restrictions, on extended lockdown in Yerevan, Armenia). To prepare for a live improvisation planned for KISS 2020, Robert proposed setting up some remote sessions using Jamulus.
Collaboration platform
Using the Jamulus software, musicians can engage in real-time improvisation sessions over the Internet. A single server running the Jamulus server software collects audio data from each Jamulus client, mixes the audio data and sends the mix back to each client. Initially, Robert set up a private server for the group, but they now use one of the public Jamulus servers as an alternative. One of the amusing side-effects of using the public server is that they are occasionally joined by uninvited random guests who start jamming with them.
During a session, the Unpronounceables use a Slack channel to communicate with each other by text and Jamulus to time-align and mix the three audio sources and for sending the mix to each of the three locations.
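The server-side model described above (collect a frame from each client, mix, send the mix back) can be illustrated with a toy mixing function. This is only a sketch of the idea; Jamulus's actual implementation is in C++ and far more involved (jitter buffers, per-client fader levels, and so on).

```python
def mix_frames(client_frames, limit=32767):
    """Sum one audio frame (a list of 16-bit samples) per client,
    hard-clipping the result to the signed 16-bit range.
    The mixed frame is what the server sends back to every client."""
    mixed = []
    for samples in zip(*client_frames):
        s = sum(samples)
        mixed.append(max(-limit - 1, min(limit, s)))
    return mixed

# three clients, one tiny (3-sample) frame each -- values are made up
frames = [[1000, -2000, 30000],
          [500,  -500,  10000],
          [0,     0,     5000]]
mix = mix_frames(frames)
```

Note the hard clip on the summed samples: with several clients playing at once, the straight sum can easily overflow the 16-bit range.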
Audio from Kyma
Each Unpronounceable uses a second interface to get audio from Kyma to the host computer. Robert uses a Behringer to come out of Kyma, and an IO/2 to get to his Mac. Ilker sends his MOTU Track 16 audio outputs to a Behringer; then selects the Behringer as an I/O in the Jamulus preference tab. Will uses a ZOOM H4n as his Kyma interface and sends the audio to an M-Audio Fast Track Pro which acts as the interface for Jamulus.
Ecosystemic audio in virtual rooms
Scott Miller and Pat O’Keefe’s HDPHN project has always been an exploration of what it means to be alone together — it’s a live public concert where each member of the audience wears headphones, rather than listening through speakers. When stay-at-home orders made it impossible for Scott in Otsego to meet in person with Pat in St. Paul, Minnesota, they started looking into how to move the live HDPHN performance onto the Internet.
When Earth Day Art Model 2020 shifted from a live to an online festival, Scott and Pat used this as an opportunity to dive in and perform HDPHN along with one of their older pieces Zeitgeist live through the Internet.
Audio from Kyma
Scott describes the audio routing for HDPHN as follows:
Pat’s mic comes over Zoom and out of my desktop headphone audio. It also goes into Kyma input 1 on my Traveller. With Zoom, I can’t get/send stereo from a live source. With two people (we did this Friday), I bring the second person in on a separate Skype/Zoom/FaceTime session on another device, and into Kyma input 2. With two inputs, I can then mathematically cross-process them in a virtual room.
I am sending Kyma’s processed/mixed output (Main 1-2) back into my desktop via a Lynx E22 audio card, going into DSP-Quattro for compression and EQ, then to the iShowU virtual audio interface: 1) -> to Zoom for Pat’s monitoring, and 2) -> to OBS and then to YouTube, synced with Pat’s Zoom video. The YouTube latency was very bad and it wrecked chamber music with one duo, but it was fun for free improv with a different duo.
Live coding at a Virtual Club
On Monday, 11 May 2020, beginning at 20:30 CET, Lucretio will be live-coding in Kyma using a new Tool of his own design at The Circle UXR Zone. The Circle is a virtual club that is a UXR.zone — a decentralized social VR communication platform offering secure communication, free from user-tracking and ads. A spinoff project of the #30daysinVR transhumanist performance by Enea Le Fons, a UXR.zone allows participants to share virtual rooms and communicate via avatars across a range of devices: from VR headsets (the main platform) to desktop and mobile phone browsers.
The avatars of people attending the UXR events are either anonymous (robots) or follow a strict dress code based on the CVdazzle research, to stress the importance of cyber camouflage via aesthetics against dystopian surveillance measures happening in the real world.
The belly of the BEAST
Simon Smith (who is actually quite a slender chap at the BEAST of Birmingham) has recently been tasked with researching online collaboration platforms for the BEAST, so we asked him for some tips from the front lines. He just completed a Sound and Music Workshop with Ximena Alarcon (earlier he helped Alarcon on a telematic performance using the Jacktrip software from Stanford).
In the workshop, she also mentioned:
Artsmesh: a network music and performance management tool. Content creators run the Artsmesh client, which streams live media point-to-point; audiences run a light Artsmesh client to watch the shows.
SoundJack is a realtime communication system with adjustable quality and latency parameters. Depending on the physical distance, network capacities, network conditions and routing, some degree of musical interaction is possible.
How have you been collaborating, teaching, consulting, creating during the lockdown? We’re interested in hearing your stories, solutions and experiences.
Have you used YouTube or Vimeo for live streaming with Kyma?
What’s your preferred video conferencing software for sending computer audio (Zoom, BigBlueButton, Meet)?
Have you been using remote desktop software (like Chrome Remote Desktop, Jump) to access your studio computer from home?
We welcome hearing about alternate solutions for this ongoing report.
Sound designers, electronic/computer musicians, and researchers are invited to join us in Busan, South Korea, 29 August through 1 September 2019 for the 11th annual Kyma International Sound Symposium (KISS2019) — four days and nights of hands-on workshops, live electronic music performances, and research presentations on the theme: Resonance (공명).
“Resonance”, from the Latin words resonare (re-sound) and resonantia (echo), can be the result of an actual physical reflection, of an electronic feedback loop (as in an analog filter), or even the result of “bouncing” ideas off each other during a collaboration. When we say that an idea “resonates”, it suggests that we may even think of our minds as physical systems that can vibrate in sympathy to familiar concepts or ideas.
At KISS2019, the concept of resonance will be explored through an opening concert dedicated to “ecosystemic” electronics (live performances in which all sounds are derived from the natural resonances of the concert hall interacting with the electronic resonances of speaker-microphone loops), through paper sessions dedicated to modal synthesis and the implementation of virtual analog filters in Kyma, through live music performances based on gravity waves, sympathetic brain waves, the resonances of found objects, and the resonance of the Earth excited by an earthquake, and in a final rooftop concert for massive corrugaphone orchestra processed through Kyma, where the entire audience will get to perform together by swinging resonant tubes around their heads to experience collective resonance.

Sounds of Busan — two hands-on workshops open to all participants — focuses on the sounds and datasets of the host city: Busan, South Korea. In part one, participants will take time series data from Busan Metropolitan City (for example, barometric pressure and sea level changes) and map those data into sound in order to answer the question: can we use our ears (as well as our eyes) to help discover patterns in data? In part two, participants will learn how to record, process, and manipulate 3D audio field recordings of Busan for virtual and augmented reality applications.
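The mapping step at the heart of the sonification workshop (time series in, pitches out) can be sketched in a few lines. The pressure readings and the note range below are invented for illustration; the workshop will use real Busan Metropolitan City datasets and Kyma rather than Python.

```python
def sonify(series, note_lo=48, note_hi=84):
    """Map each data point to a MIDI note number in [note_lo, note_hi],
    then convert each note to a frequency in Hz (A4 = 440 Hz = note 69)."""
    lo, hi = min(series), max(series)
    notes = [round(note_lo + (v - lo) / (hi - lo) * (note_hi - note_lo))
             for v in series]
    freqs = [440.0 * 2 ** ((n - 69) / 12) for n in notes]
    return notes, freqs

# hypothetical hourly barometric-pressure readings (hPa)
pressure = [1013.2, 1012.8, 1011.5, 1009.9, 1012.1, 1014.0]
notes, freqs = sonify(pressure)
```

Quantizing to equal-tempered pitches (rather than mapping straight to frequency) makes small fluctuations in the data easier to hear as discrete melodic steps.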
Several live performances also focus on the host city: a piece celebrating the impact of shipping containers on the international economy and on the port city of Busan; a piece inspired by Samul nori, traditional Korean folk music, in which four performers will play a large gong fitted with contact mics to create feedback loops; and a live performance of variations on the Korean folk song: Milyang Arirang, using hidden Markov models.
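The Milyang Arirang variations use hidden Markov models; as a much simpler cousin of that idea, a first-order (non-hidden) Markov chain can already generate variations by learning note-to-note transitions from a melody. The pitch fragment below is invented, not the actual folk song material.

```python
import random
from collections import defaultdict

def markov_table(melody):
    """Count first-order transitions between successive notes."""
    table = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        table[a].append(b)
    return table

def vary(melody, length, seed=0):
    """Random-walk a new phrase through the learned transitions."""
    rng = random.Random(seed)
    table = markov_table(melody)
    note = melody[0]
    phrase = [note]
    for _ in range(length - 1):
        note = rng.choice(table[note]) if table[note] else melody[0]
        phrase.append(note)
    return phrase

# hypothetical pentatonic fragment (MIDI note numbers)
source = [64, 67, 69, 67, 64, 62, 60, 62, 64]
variation = vary(source, 8)
```

A hidden Markov model adds a layer of unobserved states above this, which lets the variations capture phrase-level structure rather than only adjacent-note statistics.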
Hands-on Practice-based Workshops
In addition to a daily program of technical presentations and nightly concerts (https://kiss2019.symbolicsound.com/program-overview/), afternoons at KISS2019 are devoted to palindromic concerts (where composer/performers share technical tips immediately following the performance) and hands-on workshops open to all participants, including:
• Sounds of Busan I: DATA SONIFICATION
What do the past 10 years of meteorological data sound like? In this hands-on session, we will take time series data related to the city of Busan and map the data to sound. Can we hear patterns in data that we might not otherwise detect?
• The Shape Atlas: MATHS FOR CONTROLLING SOUND
How can you control the way sound parameters evolve over time? Participants will work together to compile a dictionary associating control signal shapes with mathematical functions of time for controlling sound parameters.
• Sounds of Busan II: 3D SOUND TECHNIQUES
Starting with a collection of 3D ambisonic recordings from various locations in and around Busan, we will learn how to process, spatialize, and mix them down for interactive binaural presentation in games and VR.
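The Shape Atlas workshop pairs control-signal shapes with mathematical functions of time; a tiny hypothetical version of such a dictionary might look like this (the shape names and formulas are illustrative choices, not the workshop's actual atlas):

```python
import math

# A tiny "shape atlas": control shapes as functions of normalized
# time t in [0, 1], each rising from 0 to 1.
SHAPES = {
    "linear":        lambda t: t,
    "exponential":   lambda t: (math.exp(3 * t) - 1) / (math.exp(3) - 1),
    "raised_cosine": lambda t: 0.5 - 0.5 * math.cos(math.pi * t),
}

def sample_shape(name, n=11):
    """Evaluate a named shape at n evenly spaced points in [0, 1]."""
    f = SHAPES[name]
    return [f(i / (n - 1)) for i in range(n)]

ramp = sample_shape("raised_cosine")
```

Because every entry is normalized to rise from 0 to 1, any shape can be scaled and offset into whatever range a given sound parameter expects, which is what makes a shared atlas useful.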
Networking Opportunities
Participants can engage with presenters and fellow symposiasts during informal discussions after presentations, workshops, and concerts over coffee, tea, lunches and dinners (all included with registration). After the symposium, participants can join their new-found professional contacts and friends on a tour of Busan (as a special benefit for people who register before July 1).
Sponsors and Organizers
Daedong College Department of New Music (http://eng.daedong.ac.kr/main.do)
Dankook University Department of New Music (http://www.dankook.ac.kr/en/web/international)
Symbolic Sound Corporation (https://kyma.symbolicsound.com/)
Busan Metropolitan City (http://english.busan.go.kr/index)