The inaugural event of the ligeti center's “Artists and Scientists in Residence” program, and the christening of its brand-new spatial audio system, was the 4 April 2024 world premiere of Fredrik Mathias Josefson's spatial audio composition “Above a Sea of Fog,” an immersive electroacoustic environment that employs recording, Kyma synthesis, and “negative space” to weave a spatial soundscape. Josefson also provided insights into his creative processes and spatial audio methodologies and discussed his two-month residency at the ligeti center.
Bringing together expertise from the realms of art, science, and technology, the “Production Lab” at the heart of the ligeti center serves as a presentation venue and an open production space for creative exploration. From immersive spatial audio to haptics, from cyber instruments to music therapy, and beyond, it is designed as a hub for interaction, collaboration, and critical discourse. As a fully equipped studio space, “Production Lab” provides creators with a range of options for immersive sound production and performance, including motion capture technologies, VR/XR solutions, and a free space for creative thinking and experimentation.
Listen to composer/performer Anne La Berge sustained by the Kyma infinite reverb during a joint performance of Orphax & MAZE Ensemble at the Rewire Festival in Den Haag on 7 April 2024.
Orphax’s recent work, En de Stilstaande Tijd, explores the concept of “frozen time”, while MAZE is all about “real time” — interacting with acoustic instruments and electronics in the moment. Check out this interview with Anne La Berge, Dario Calderone, and Gareth Davis to find out how these time-bending artists managed to find a common language for their live collaboration.
“I used to have 30 guys helping me do this,” quips Jones as he adjusts a mic stand. “…but here I am.”
And indeed, there he was, all alone on the stage, performing on a dizzying variety of instruments: lap steel, mandolin, triple-neck guitar, grand piano, pipe organ, Kyma electronics, and of course his signature bass. Winning the prize for the festival’s most audacious entrance, Jones rose from the orchestra pit playing “Your Time Is Gonna Come” on a bright red-painted theater organ, to delighted squeals of recognition, masterfully tweaking his audience’s collective memories and alternately coaxing them along with him into the future with live electro-acoustic performances like this one:
Bassist, keyboardist, mandolinist, composer (and much more), John Paul Jones – once in a little band called Led Zeppelin — has more recently toured as Them Crooked Vultures with Dave Grohl and Joshua Homme, appeared in Mark-Anthony Turnage’s opera ‘Anna Nicole,’ toured the US with Dave Rawlings and Gillian Welch, released Cloud to Ground (debut album of Minibus Pimps), and made several guest appearances with acclaimed Norwegian avant-garde group Supersilent.
John was also one of the first adopters of Kyma (on the original Capybara), and he continues to expand the frontiers of live interactive electronics performance today using the newest incarnation of the APU hardware — the Pacamara Ristretto — in conjunction with his bass, piano, and a menagerie of tiny controllers and other instruments (expect the unexpected).
“One of the world’s biggest music bashes” (NYT), the Big Ears Festival features visionary composers and musicians in concerts that transcend genre — bringing together trailblazers and iconoclasts, blending classical and contemporary composition, jazz, rock, world music, pop, avant-garde, ambient, experimental, and beyond.
Big Ears offers 200 performances in historic downtown Knoxville — at restored historic theaters, churches, refurbished warehouse spaces, museums, galleries, and clubs — with pop-up events and performances, exhibitions, films, literary readings, workshops, markets, and talks taking place in cafes, bars, hotels, and restaurants, all within walking distance of one another or served by a dedicated trolley to nearby hotels.
MOXsonic 2024 (the Missouri Experimental Sonic Arts Festival), March 14-16, is an exploration of sonic possibilities and includes several opportunities to hear live Kyma electronic performances:
Mei-ling Lee’s “Summoner” is a mythical narrative that evokes the nocturnal summoning of peacocks and owls.
Mark Zaki’s “Masks” explores identity in the digital age, using violin, Kyma, and live video to delve into themes of self-curation and virtual anonymity.
In Mark Phillips’ “Dream Dance”, dreamy vocoders give way to an algorithmic synth groove, but only partially … and not for long.
Chi Wang will perform AEON on a new instrument she designed — the Yuan, a round, bamboo-bodied, interactive, data-driven controller with connected sensors.
Experience innovative music and engage with the creative minds shaping the future of sound! Tickets to all events are FREE and OPEN to the general public.
Anne La Berge (flute & text) joins forces with guitarist and electronic musician Tom Baker (guitar & theremin) on a concert tour exploring themes of racism and resilience. Their improvised performances, enhanced by Kyma electronics, will take place in Brussels (March 13), Rotterdam (March 17), and Amsterdam (March 18).
La Berge and Baker draw inspiration from Louise Erdrich’s The Plague of Doves, a novel exploring the enduring impact of a racist act committed against four Ojibwe people in North Dakota in 1896.
Anne La Berge’s passion for the extremes in both composed and improvised music has led her to the fringes of storytelling and sound art as her sources of musical inspiration. Tom Baker is a composer, guitarist, improviser, and electronic musician, active in the Seattle new-music scene.
Audiences will experience Simon Hutchinson’s music projected from a linear array of 15 giant speakers positioned underneath the 420-ft Boggy Creek overpass in Rosewood Park in east Austin, Texas, on November 19, 2022, from 6:30-9:30 pm.
Hutchinson is one of six composers commissioned to create works inspired by the past, present and (imagined) future sounds of transportation, utilizing the dramatic sonic movement capabilities of a new sound system designed and built by the Rolling Ryot arts collective to create an immersive audio experience.
“It’s been very interesting to think about what ‘multichannel’ means when channels are spread out across 420 ft, and especially fun problem-solving this in Kyma,” Simon explains. “Using Kyma, I’ve set up some sounds to translate across the system very quickly, but in a unified motion. At other times, the panning happens very slowly, so slowly that someone walking could outpace the sound as they walk across the space.”
In other sections, Hutchinson treats the individual channels as the keys of a giant instrument, and the “melody” traverses the broad, 420-ft space, with each speaker assigned only a single note. More details on this process are revealed in a video on Simon’s YouTube channel.
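To make those two ideas concrete, here is a minimal Python sketch of the arithmetic involved: one note assigned per channel, and a pan position that sweeps the 420-ft array more slowly than a person walking at roughly 3 mph. It is an illustration of the concept only, not Hutchinson’s actual Kyma Sounds; the note choices, sweep time, and crossfade are all assumptions.

```python
# Illustrative sketch: 15 speakers spread over 420 ft, one note per channel,
# and a pan that moves slower than walking pace.

SPEAKERS = 15
ARRAY_LENGTH_FT = 420.0
WALKING_SPEED_FTS = 4.4          # roughly 3 mph, for comparison

# "Giant instrument" idea: channel i always plays melody[i] (MIDI note numbers, invented here).
melody = [48 + 2 * i for i in range(SPEAKERS)]

def pan_position(t, sweep_seconds=180.0):
    """Pan position in feet at time t; the default sweep (2.33 ft/s) is slower than a walker."""
    speed = ARRAY_LENGTH_FT / sweep_seconds
    return (speed * t) % ARRAY_LENGTH_FT

def channel_gains(position_ft):
    """Simple linear crossfade between the two speakers nearest the pan position."""
    spacing = ARRAY_LENGTH_FT / (SPEAKERS - 1)
    gains = [0.0] * SPEAKERS
    left = int(position_ft // spacing)
    frac = (position_ft - left * spacing) / spacing
    gains[left] = 1.0 - frac
    if left + 1 < SPEAKERS:
        gains[left + 1] = frac
    return gains

if __name__ == "__main__":
    for t in (0, 60, 120):                       # seconds into the sweep
        gains = channel_gains(pan_position(t))
        active = [(melody[i], round(g, 2)) for i, g in enumerate(gains) if g > 0]
        print(f"t={t:3d}s  sounding (MIDI note, gain): {active}")
```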
The sound materials for the piece are from field recordings and analog synthesis samples processed through Kyma. Simon “unfolded” ambisonic field recordings across the 15-channel space, worked with mid-side transformations of stereo recordings, and leveraged the multichannel panning of the Kyma Timeline.
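For readers unfamiliar with one of those techniques, mid-side processing re-expresses a stereo pair as a sum (“mid”) and difference (“side”) signal that can be treated independently before being folded back to left and right. A generic Python sketch of the transform (not Simon’s specific Kyma patch):

```python
import numpy as np

def ms_encode(left, right):
    """Mid-side encode: mid = (L+R)/2 carries the center image, side = (L-R)/2 the width."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def ms_decode(mid, side, width=1.0):
    """Decode back to L/R; width > 1 exaggerates the side content, width = 0 collapses to mono."""
    left = mid + width * side
    right = mid - width * side
    return left, right

# Example: widen a (dummy) stereo buffer by 50%
left = np.random.randn(48000)
right = np.random.randn(48000)
mid, side = ms_encode(left, right)
wide_l, wide_r = ms_decode(mid, side, width=1.5)
```

Processing the mid and side signals differently (for example, reverberating only the side) is one common way to reshape the spatial image of a stereo recording.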
The Ghost Line X event is free and open to the public. All ages are welcome, and the audience is encouraged to bring lawn chairs and blankets and to bike or ride-share to Rosewood Park.
On November 28 2022 at the Prater in Vienna, the audience will board a Luftwaggon of the Wiener Riesenrad ferris wheel to experience Bruno Liberda’s new composition still-kreisen-drehen-stehn / frieren die glockentöne am eingebildeten eis (still-circling-turning-standing / bell tones freeze on imaginary ice) for double carillon & Kyma.
Two carillons are played live and fed through Kyma — repeating, turning, or standing still through various granulations, feedbacks, ring modulations, pitch deteriorations, moving reverbs and more — creating a frosty new soundscape, while the public has a moving view over Vienna.
28 November 2022: Wiener Riesenrad, Riesenradplatz 1, 1020 Wien, Austria
First performance: 18:00
Second performance: 19:00
Four ferris wheel wagons serve as floating, circling stages for works by:
Bruno Liberda, Masao Ono, Anita Steinwidder, Christine Schörkhuber, Verena Dürr, Sophie Eidenberger, Stefanie Prenn.
And he lets it all go on,
just as it will;
he turns, and his hurdy-gurdy
never stands still.
(Wilhelm Müller, 1824)
I am Violet the Organ Grinder
And I grind all the live long day
I live for the organ, that I am grinding
I’ll die, but I won’t go away
(Prince, 1991)
Networked collaboration, telematic performances, and online learning have been growing in popularity for several years, but the lockdowns and social-distancing guidelines precipitated by the global COVID-19 pandemic have accelerated the adoption of these modes of interaction. This is a brief (and evolving) report on some of the solutions your fellow Kyma-nauts have found for practicing creative collaborations, live performances, private tutoring, consulting, and teaching large online courses. Thanks for sharing your input, feedback and alternative solutions for distance-collaboration with the Kyma community!
Note: For example configurations showing how to get audio onto your computer and out onto a network, read this first.
Kyma Kata
One of the earliest ongoing examples is Alan Jackson’s Kyma Kata, a regular meeting of peers who practice Kyma programming together, which had already been operating online in Google Hangouts for over a year before the crisis and recently celebrated its 100th session! (Did they know something the rest of the world didn’t?) The Kyma Kata currently meets twice a week, on Mondays and Tuesdays. They begin each session with “Prime Minister’s Question Time” (an open question-and-answer session on how to do specific tasks in Kyma), followed by an exercise that each person works on independently for 30 minutes, after which they share and discuss their results. Ostensibly the session lasts for 2 hours, but when people really get interested in a problem, some of them stick with it for much longer (though there is no honor lost if someone has to leave after two hours).
Kata participants focus on how to do something together, so screen-sharing is important and audio quality has been less important: they often play over the air, using the computer’s built-in microphone to send the audio.
For higher-quality audio, Alan uses a small USB mixer plugged into the Mac as the Hangouts audio source. Using the mixer, he can mix the Paca’s output with a microphone, which provides much better quality than over-the-air through the laptop’s mic, although it’s still limited by Hangouts’ audio quality, delay, and bandwidth.
What is the Kata and How do I sign up?
The term, kata, which comes to us by way of karate, has been adopted by software engineers as a way to regularly practice their craft together by picking a problem and finding several different solutions. The point of a kata is not so much arriving at a correct answer as it is to practice the art of programming.
In the Kyma Kata, a group of aspiring Kymanistas comes together regularly via teleconferencing, and the facilitator (Alan) introduces an exercise that everyone works on independently for about half an hour, after which people take turns talking about their solutions. All levels of Kyma ability are welcome, so why not join the fun?
Improvisation with the Unpronounceables
The Unpronounceables are Robert Efroymson in Santa Fe, New Mexico; Ilker Isikyakar in Albuquerque, New Mexico; and Will Klingenmeier (ordinarily based in Colorado but, due to travel restrictions, on extended lockdown in Yerevan, Armenia). To prepare for a live improvisation planned for KISS 2020, Robert proposed setting up some remote sessions using Jamulus.
Collaboration platform
Using the Jamulus software, musicians can engage in real-time improvisation sessions over the Internet. A single server running the Jamulus server software collects audio data from each Jamulus client, mixes the audio data and sends the mix back to each client. Initially, Robert set up a private server for the group, but they now use one of the public Jamulus servers as an alternative. One of the amusing side-effects of using the public server is that they are occasionally joined by uninvited random guests who start jamming with them.
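Conceptually, the server side of this arrangement is simple: gather one block of audio from every connected client, sum the blocks, and send the mix back out. The Python sketch below illustrates just that mixing step; it is a toy model of the idea, not Jamulus’s actual code, which additionally deals with network jitter, audio coding, and per-client mix settings.

```python
import numpy as np

FRAME = 128  # samples per block; purely illustrative

def mix_frame(client_frames):
    """Sum one block of float audio from each connected client into a single mix."""
    mix = np.zeros(FRAME, dtype=np.float32)
    for frame in client_frames.values():
        mix += frame
    peak = np.max(np.abs(mix))
    if peak > 1.0:          # crude safety scaling so the summed signal doesn't clip
        mix /= peak
    return mix

# Example round trip: three clients each contribute one block, and everyone gets the mix back
clients = {name: np.random.uniform(-0.3, 0.3, FRAME).astype(np.float32)
           for name in ("client_a", "client_b", "client_c")}
mix_to_send_back = mix_frame(clients)
```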
During a session, the Unpronounceables use a Slack channel to communicate with each other by text, and Jamulus to time-align and mix the three audio sources and to send the mix to each of the three locations.
Audio from Kyma
Each Unpronounceable uses a second interface to get audio from Kyma to the host computer. Robert uses a Behringer to come out of Kyma, and an IO/2 to get to his Mac. Ilker sends his MOTU Track 16 audio outputs to a Behringer; then selects the Behringer as an I/O in the Jamulus preference tab. Will uses a ZOOM H4n as his Kyma interface and sends the audio to an M-Audio Fast Track Pro which acts as the interface for Jamulus.
Ecosystemic audio in virtual rooms
Scott Miller and Pat O’Keefe’s HDPHN project has always been an exploration of what it means to be alone together — it’s a live public concert where each member of the audience wears headphones, rather than listening through speakers. When stay-at-home orders made it impossible for Scott in Otsego to meet in person with Pat in St. Paul, Minnesota, they started looking into how to move the live HDPHN performance onto the Internet.
When Earth Day Art Model 2020 shifted from a live to an online festival, Scott and Pat used it as an opportunity to dive in and perform HDPHN, along with one of their older pieces, Zeitgeist, live over the Internet.
Audio from Kyma
Scott describes the audio routing for HDPHN as follows:
Pat’s mic comes over Zoom and out of my desktop headphone audio. It also goes into Kyma input 1 on my Traveller. With Zoom, I can’t get/send stereo from a live source. With two people (we did this Friday), I bring the second person in on a separate Skype/Zoom/FaceTime session on another device, and into Kyma input 2. With 2 inputs, I then mathematically cross-process them in a virtual room.
I am sending Kyma’s processed/mixed output (Main 1-2) back into my desktop via a Lynx E22 audio card, into DSP-Quattro for compression and EQ, then to the iShowU virtual audio interface, which feeds 1) Zoom, for Pat’s monitoring, and 2) OBS and then YouTube, synced with Pat’s Zoom video. The YouTube latency was very bad and it wrecked chamber music with the duo, but it was fun for free improv with a different duo.
Live coding at a Virtual Club
On Monday, 11 May 2020, beginning at 20:30 CET, Lucretio will be live-coding in Kyma, using a new Tool of his own design, at The Circle UXR Zone. The Circle is a virtual club on UXR.zone — a decentralized social VR communication platform offering secure communication, free from user-tracking and ads. A spinoff project of the #30daysinVR transhumanist performance by Enea Le Fons, a UXR.zone allows participants to share virtual rooms and communicate via avatars across a range of devices: from VR headsets (the main platform) to desktop and mobile phone browsers.
The avatars of people attending the UXR events are either anonymous (robots) or follow a strict dress code based on the CVdazzle research to stress the importance of cyber camouflage via aesthetics against dystopian surveillance measures happening in the real world. Click to enter the club
The belly of the BEAST
Simon Smith (who is actually quite a slender chap at the BEAST of Birmingham) has recently been tasked with researching online collaboration platforms for the BEAST, so we asked him for some tips from the front lines. He just completed a Sound and Music workshop with Ximena Alarcon (earlier, he helped Alarcon with a telematic performance using the JackTrip software from Stanford).
In the workshop, she also mentioned:
Artsmesh: a network music and performance management tool. Content creators run the Artsmesh client, which streams live media point-to-point; audiences run a light Artsmesh client to watch the shows.
SoundJack is a realtime communication system with adjustable quality and latency parameters. Depending on the physical distance, network capacities, network conditions and routing, some degree of musical interaction is possible.
How have you been collaborating, teaching, consulting, creating during the lockdown? We’re interested in hearing your stories, solutions and experiences.
Have you used YouTube or Vimeo for live streaming with Kyma?
What’s your preferred video conferencing software for sending computer audio (Zoom, BigBlueButton, Meet)?
Have you been using remote desktop software (like Chrome Remote Desktop, Jump) to access your studio computer from home?
We welcome hearing about alternate solutions for this ongoing report.
Sound designers, electronic/computer musicians, and researchers are invited to join us in Busan, South Korea, 29 August through 1 September 2019, for the 11th annual Kyma International Sound Symposium (KISS2019) — four days and nights of hands-on workshops, live electronic music performances, and research presentations on the theme: Resonance (공명).
“Resonance”, from the Latin words resonare (re-sound) and resonantia (echo), can be the result of an actual physical reflection, of an electronic feedback loop (as in an analog filter), or even the result of “bouncing” ideas off each other during a collaboration. When we say that an idea “resonates”, it suggests that we may even think of our minds as physical systems that can vibrate in sympathy to familiar concepts or ideas.
At KISS2019, the concept of resonance will be explored through an opening concert dedicated to “ecosystemic” electronics (live performances in which all sounds are derived from the natural resonances of the concert hall interacting with the electronic resonances of speaker-microphone loops); through paper sessions dedicated to modal synthesis and the implementation of virtual analog filters in Kyma; through live music performances based on gravity waves, sympathetic brain waves, the resonances of found objects, and the resonance of the Earth excited by an earthquake; and in a final rooftop concert for massive corrugaphone orchestra processed through Kyma, where the entire audience will get to perform together by swinging resonant tubes around their heads to experience collective resonance.
Sounds of Busan — two hands-on workshops open to all participants — focus on the sounds and datasets of the host city: Busan, South Korea. In part one, participants will take time series data from Busan Metropolitan City (for example, barometric pressure and sea level changes) and map those data into sound in order to answer the question: can we use our ears (as well as our eyes) to help discover patterns in data? In part two, participants will learn how to record, process, and manipulate 3D audio field recordings of Busan for virtual and augmented reality applications.
Several live performances also focus on the host city: a piece celebrating the impact of shipping containers on the international economy and on the port city of Busan; a piece inspired by Samul nori, traditional Korean folk music, in which four performers will play a large gong fitted with contact mics to create feedback loops; and a live performance of variations on the Korean folk song: Milyang Arirang, using hidden Markov models.
Hands-on Practice-based Workshops
In addition to a daily program of technical presentations and nightly concerts (https://kiss2019.symbolicsound.com/program-overview/), afternoons at KISS2019 are devoted to palindromic concerts (where composer/performers share technical tips immediately following the performance) and hands-on workshops open to all participants, including:
• Sounds of Busan I: DATA SONIFICATION
What do the past 10 years of meteorological data sound like? In this hands-on session, we will take time series data related to the city of Busan and map the data to sound. Can we hear patterns in data that we might not otherwise detect? (A small illustrative sketch of one such mapping appears after this list of workshops.)
• The Shape Atlas: MATHS FOR CONTROLLING SOUND
How can you control the way sound parameters evolve over time? Participants will work together to compile a dictionary associating control-signal shapes with mathematical functions of time for controlling sound parameters. (A toy example of such a dictionary also appears after this list.)
• Sounds of Busan II: 3D SOUND TECHNIQUES
Starting with a collection of 3D ambisonic recordings from various locations in and around Busan, we will learn how to process, spatialize, and mix them down for interactive binaural presentation in games and VR.
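For the Sounds of Busan I workshop above, here is one possible flavor of a time-series-to-sound mapping, sketched in Python rather than Kyma: each data point becomes a pitch in a melody. The data, scaling, and note range are illustrative assumptions, not the actual workshop materials.

```python
import numpy as np

def sonify(series, low_note=48, high_note=84, note_dur=0.125, sr=44100):
    """Map each value in a time series to a MIDI note, then render one sine tone per note."""
    series = np.asarray(series, dtype=float)
    span = (series.max() - series.min()) or 1.0
    norm = (series - series.min()) / span                    # scale values to 0..1
    notes = low_note + norm * (high_note - low_note)         # map to a pitch range
    freqs = 440.0 * 2 ** ((notes - 69) / 12)                 # MIDI note -> Hz
    t = np.arange(int(note_dur * sr)) / sr
    return np.concatenate([0.2 * np.sin(2 * np.pi * f * t) for f in freqs])

# Illustrative stand-in for daily barometric pressure readings (hPa), not real Busan data
pressure = 1013 + 10 * np.sin(np.linspace(0, 40 * np.pi, 3650)) + np.random.randn(3650)
audio = sonify(pressure[:365])   # one "year" of data as a rising and falling melody
```

Listening to the result, slow periodic swings in the data become audible contours that might be harder to spot in a plot.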
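And for The Shape Atlas workshop, a tiny Python sketch of what a dictionary pairing shape names with functions of normalized time might look like; the particular names and formulas here are invented for illustration, not the workshop’s actual atlas.

```python
import math

# Each entry maps a shape name to a function of normalized time t in [0, 1]
# that returns a control value in [0, 1], suitable for driving a sound parameter.
SHAPES = {
    "linear ramp":      lambda t: t,
    "exponential rise": lambda t: (math.exp(4 * t) - 1) / (math.exp(4) - 1),
    "s-curve":          lambda t: 0.5 - 0.5 * math.cos(math.pi * t),
    "decaying bounce":  lambda t: abs(math.sin(3 * math.pi * t)) * (1 - t),
}

def sample_shape(name, n=8):
    """Sample a named shape at n evenly spaced points across its duration."""
    f = SHAPES[name]
    return [round(f(i / (n - 1)), 3) for i in range(n)]

for name in SHAPES:
    print(f"{name:18s}", sample_shape(name))
```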
Networking Opportunities
Participants can engage with presenters and fellow symposiasts during informal discussions after presentations, workshops, and concerts over coffee, tea, lunches and dinners (all included with registration). After the symposium, participants can join their new-found professional contacts and friends on a tour of Busan (as a special benefit for people who register before July 1).
Sponsors and Organizers
Daedong College Department of New Music (http://eng.daedong.ac.kr/main.do)
Dankook University Department of New Music (http://www.dankook.ac.kr/en/web/international)
Symbolic Sound Corporation (https://kyma.symbolicsound.com/)
Busan Metropolitan City (http://english.busan.go.kr/index)
Composer/saxophonist Andrew Raffo Dewar will be at Mills College Center for Contemporary Music on Monday, February 4th, 2019 to perform his work, Ghosts in the Uncanny Valley II — a 35-minute composition for acoustic quartet improvising with live electronics programmed in the Kyma sound design environment. Kyma analyzes, manipulates, and expands upon the sounds of the acoustic instruments in real time, creating an electronically “extended” (and altered) quartet. The quartet features the composer on sax, Gino Robair (prepared piano), Kyle Bruckmann (oboe/English horn), and John Shiurba (acoustic guitar).
On January 31st 2019, Dewar will be at the Santa Monica Library to perform the premiere of his new piece for soprano sax and Kyma as part of their Sound Waves series. This performance starts at 7:30 pm and is free and open to the public.