RUIS is one of several pieces that have emerged from Kuit’s years of research conducted with Kyma on the phenomenon of noise.
Part of a cycle interpreting our increasingly complex world of information, RUIS explores collisions, dialogs, interplay, patterns, spreading, merging — sometimes the noise is even pulled loose and spread into ticks in the twelve-meter-wide horizontal panorama, each tap shifting in space as an audio object.
A study in spaciousness and complexity, RUIS stacks frequencies or moves them through the space, sometimes in a static distribution, at other times in pseudo-chaotic movements. Each loudspeaker can serve either as an autonomous sound object or as an intrinsic part of the whole.
Likewise, the listener can take a seat or move through the room, receiving continuously changing reflective angles of incidence at the ears.
Kyma artists participated in multiple concerts during the SEAMUS ’25 conference at Purdue University in March 2025.
The winner of the 2025 SEAMUS/SWEETWATER commission prize was Yao Hsiao for her composition Daiyu, for voice, iPad and Kyma. Yao is a doctoral student at the University of Oregon, studying with Jeffrey Stolet. She completed her master’s degree with Chi Wang at Indiana University, where the piece was composed.
Scott Miller opened the conference with his COINCIDENT#7, performing live Kyma electronics with collaborators Sam Wells and Adam Vidiksis.
Featured on the same concert, Michael Wittgraf performed his piece for EWI and Kyma: Absence of Hope.
Sound artist/composer Mike Wittgraf performing with EWI & Kyma at SEAMUS ’25
In other SEAMUS concerts, Chi Wang performed her piece Fusion of the Horizons, for JoyCon, RingCon, Max, and Kyma.
Chi Wang at SEAMUS ’25 Photo by Mike Wittgraf
John Ritz presented his composition for cello and live adaptive Kyma signal processing.
Kelsey Wang performed her composition Mahjong: Peng Gang Chi, for Wacom tablet and Kyma.
Kelsey Wang performing with Kyma & Wacom tablet, photo by Mike Wittgraf
Among the fixed media pieces, several utilized Kyma for the sound design:
Minho Kang, The Mist, fixed media
Wyatt Cannon, Something About Birds, fixed media (8-channel, Kyma)
Mei-ling Lee, RUN
Brian Belet, My Last Tape Piece
The closing event of the conference was Places | Spaces | Traces, a large telematic improvisation ensemble organized by conference host Tae Hong Park, which included Brian Belet performing electric bass and Kyma via remote connection from his home studio in Hawai’i.
Wittgraf’s Turning is a play on words inspired by the mechanism of Bost-Sandberg’s Kingma System® flute with Glissando Headjoint® and, according to Wittgraf, Bost-Sandberg “thrilled audience members every time she played” at EMM.
Bost-Sandberg also performed Wittgraf’s A Vox Novus short for flute and fixed media (produced with Kyma).
Mike Wittgraf’s Pacamara is apparently capable of drawing power from the air during the dress rehearsal for his fixed media piece.
Sharing a concert with Wittgraf, Brian Belet‘s fixed media composition My Last Tape Piece was performed at the conference in an 8.4.1 configuration (8 surround speakers at floor level, 4 elevated quad speakers, plus a subwoofer).
EMM features a 12-channel immersive system, Yamaha powered speakers and subwoofers, and a DiGiCo S21 mixer (named “EMMilia”).
Two Kyma artists were featured in this year’s MOXsonic (Missouri Experimental Sonic Arts) Festival — three days of concerts, research presentations, workshops, installations, and conversations, 19-21 March 2025.
Wyatt Cannon presented “7”, a multichannel fixed media piece with sounds generated in Kyma. Wyatt, currently pursuing a Master of Music in Computer Music Composition at Indiana University with Chi Wang and John Gibson, uses sound manipulations inspired by evolution and astronomy to create long-form narratives that express the human urge to understand our place in the universe.
Mark Phillips performed his composition Waiting, for EWI and Kyma, featuring semi-autonomous algorithms guided and conducted using audio signals and MIDI data from the EWI while the performer responds to and interacts with the Kyma algorithms. No prerecorded audio or MIDI files were used in the live performance.
Here’s a 2018 lecture Mark presented on his use of the EWI with Kyma in another piece:
Anssi Laiho’s Teknofobia Ensemble is a live-electronics piece that combines installation and concert forms: an installation, because its sound is generated by machines; a concert piece, because it has a time-dependent structure and musical directions for performers. The premiere was 13 November 2024 at Valvesali in Oulu, Finland.
Laiho views technophobia, the fear of new technological advancements, as a subcategory of xenophobia, the fear of the unknown or of outsiders. His goal was to present both of these phobias in an absurd setting.
The composer writes that “the basic concept of technophobia — that ‘machines will replace us and make us irrelevant’— is particularly relevant today, as programs using artificial intelligence are becoming mainstream and are widely used across many industries.”
Teknofobia Ensemble poses the question: What if there were a planet inhabited by a mechanical species, and these machines came to Earth and tried to communicate with us via music? What would the music sound like, and would they first try to learn and imitate our culture in order to communicate with us?
Laiho’s aim was to reproduce the live-electronics environment he would normally work in, but to replace the human musicians with robots — not androids or simulants but “mechanical musicians”.
He asked himself, “What would it mean for my music and creative process if this basic assumption were to become true? As a composer living in the 2020s, do I still need musicians to perform my compositions? Wouldn’t it be easier to work with machines that always fulfill my requests? Can a mechanical musician interpret a musical piece on an emotional level, as a human being does, or does it simply apply virtuosity to the technical execution of the task?”
He then set out to prove himself wrong!
Teknofobia Ensemble consists of five prepared violins, each equipped with a Raspberry Pi that controls various types of electronic motors (solenoids, DC motors, stepper motors, and servos) through a Python program. This program converts OSC commands received from Kyma into PWM signals on the Raspberry Pi pins, which are connected to motor drivers.
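The OSC-to-PWM idea can be sketched in a few lines. This is a hedged illustration, not Laiho’s actual program: the OSC addresses and GPIO pin numbers are hypothetical, and a real version on the Raspberry Pi would receive messages with a library such as python-osc and drive the pins through a motor-driver board.

```python
# Hypothetical map from an OSC address (as sent by Kyma) to the GPIO pin
# driving one motor on a prepared violin. Addresses/pins are illustrative.
MOTOR_PINS = {
    "/violin/1/bow": 18,   # e.g. DC motor controlling bowing speed
    "/violin/1/damp": 19,  # e.g. solenoid damper
}

def osc_to_duty(value: float) -> float:
    """Clamp a 0..1 OSC control value and scale it to a 0..100% PWM duty cycle."""
    return max(0.0, min(1.0, value)) * 100.0

def handle_osc(address: str, value: float):
    """Return the (pin, duty) pair that a PWM driver would receive,
    or None for an unrecognized address."""
    pin = MOTOR_PINS.get(address)
    if pin is None:
        return None
    return pin, osc_to_duty(value)
```

In this sketch, Kyma’s role as “conductor” amounts to sending timed OSC messages whose values land on motor pins as duty cycles; the clamping step keeps an out-of-range value from over-driving a motor.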
In live performances, Kyma acts as the conductor for the ensemble, while Laiho views his role as primarily that of a “mixer for the band”.
The piece is structured as a 26-minute-long Kyma timeline, consisting of OSC instructions (the musical notation of the piece) for the mechanical violins. The live sound produced by the violins is routed back to Kyma via custom-made contact microphones for live electronic processing.
At the IRCAM Forum Workshops @Seoul 6-8 November 2024, composer Steve Everett presented a talk on the compositional processes he used to create FIRST LIFE: a 75-minute mixed media performance for string quartet, live audio and motion capture video, and audience participation.
FIRST LIFE is based on work that Everett carried out at the Center for Chemical Evolution, an NSF/NASA-funded project at multiple universities examining the possibility of the building blocks of life forming in early-Earth environments. He worked with stochastic data generated by Georgia Tech biochemical engineer Martha Grover and mapped them to standard compositional structures (not as a scientific sonification, but to help educate the public about the work of the center through a musical performance).
Data from IRCAM software and PyMOL were mapped to parameters of physical models of instrumental sounds in Kyma. For example, up to ten data streams generated by the formation of monomers and polymers in Grover’s lab were used to control parameters of the “Somewhat stringish” model in Kyma (such as delay rate, BowRate, position, decay, etc.). Everett presented a poster about this work at the 2013 NIME Conference in Seoul and has uploaded some videos from the premiere of FIRST LIFE at Emory University.
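The general mapping technique is a linear rescaling from a data stream’s range into a synthesis parameter’s range. The sketch below is an illustration of that idea only, not Everett’s actual code; the data values and the “BowRate”-like output range are hypothetical.

```python
def scale(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly map x from the interval [in_lo, in_hi] to [out_lo, out_hi]."""
    t = (x - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

# Hypothetical: monomer-formation samples (raw simulation values in 0..10)
# driving a bow-rate-style parameter assumed to live in 0.1..4.0.
data = [0.0, 2.5, 5.0, 10.0]
bow_rates = [scale(x, 0.0, 10.0, 0.1, 4.0) for x in data]
```

Each of the ten data streams mentioned above would get its own input range and target parameter range; the mapping itself stays this simple.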
Currently on the music composition faculty of the City University of New York (CUNY), Professor Everett is teaching a doctoral seminar on timbre in the spring (2025) semester and next fall he will co-teach a course on music and the brain with Patrizia Casaccia, director of the Neuroscience Initiative at the CUNY Advanced Science Research Center.
Silvia Matheus will be performing with Kyma in Oakland, California on 20 October 2024. A computer and electronic music composer, sound artist, and improviser, Silvia is teaming up with Ric Louchard, a pianist, composer, and improviser, to create The VisitoR: a performance where electronics and acoustics meet and interact.
The VisitoR goes beyond a simple two-person conversation, embracing a lively interplay of free-flowing actions. Matheus has designed hybrid environments with multiple speakers, allowing the musicians to move in and out at their own pace and engage as they wish. This setup encourages spontaneous interactions that evolve based on each participant’s desire to immerse themselves. Her soundscape blends the algorithmically generated sounds from Kyma with fixed composed material and layers in live-processed acoustic elements to create a rich and dynamic auditory experience.
The concert is part of the WEST OAKLAND SOUND SERIES, a weekly new music and experimental sound series in the Dresher Ensemble Studio:
Sunday 20 October 2024
Dresher Ensemble Studio
2201 Poplar Street
Oakland, California
Future Music Oregon composers — past and present — brought their signature brand of live interactive electronic music performance to the Musicacoustica Hangzhou 2024 conference in September.
Characterized by custom controllers, exceptional Kyma sound design, live interactive graphics, and virtuosic stage presence, their performances left a lasting impression on the audience of fellow electroacoustic music composers. So much so that rumor has it the University of Oregon Summer Academy for electronic music (on hiatus due to pandemic disruptions) may resume this summer, opening the door to future collaborations among US and Chinese musicians.
Some highlights follow (photos courtesy of Musicacoustica Hangzhou):
“Realm”, composed and performed by Fang WAN, professor at Hangzhou Conservatory of Music
“Summoner”, for Leap Motion and custom software created with Max and Kyma, composed and performed by Mei-ling Lee, professor at Haverford College
“Balance”, for Kyma, Max, and GameTrak, composed and performed by Jeffrey Stolet, professor at University of Oregon
“Dimension reduction approach in the context of real time sound synthesis”, a talk by Chi WANG, professor at Indiana University
“Beyond Landscape”, for contact microphone, Dry Garden, and Kyma, composed and performed by Tao LI, doctoral student at University of Oregon
“Fusion of Horizons”, for Nintendo Ring-Con, Joy-Con, Max & Kyma, composed & performed by Chi WANG, professor at Indiana University
“Developing Musical Intuition as a Pathway for Creating Artificial Intelligence Models”, a talk by Jeffrey Stolet, professor at University of Oregon
Gametrak Trio: Jeffrey Stolet performs with his former graduate students, Fang WAN (currently professor at Hangzhou Conservatory, CN) and Chi WANG (currently professor at Indiana University)
Anssi Laiho is the sound designer and performer for Laboratorio — a concept developed by choreographer Milla Virtanen and video artist Leevi Lehtinen as a collection of “experiments” that can be viewed either as modules of the same piece or as independent pieces of art, each with its own theme. The first performance of Laboratorio took place in November 2021 in Kuopio, Finland.
Laboratorio Module 24, featuring Anssi performing a musical saw with live Kyma processing, was performed in the Armastuse hall at Aparaaditehas in Tartu, Estonia, and is dedicated to the theme of identity and inspiration.
Anssi’s hardware setup, both in the studio and live on stage, consists of a Paca connected to a Metric Halo MIO2882 interface via bidirectional ADAT in a 4U mobile rack. Laiho has used this system for 10 years and finds it intuitive, because Metric Halo’s MIOconsole mixer interface gives him the opportunity to route audio between Kyma, the analog domain, and the computer in every imaginable way. When creating content as a sound designer, he often tries things out in Kyma in real-time by opening a Kyma Sound with audio input and listening to it on the spot. If it sounds good, he can route it back to his computer via MIOconsole and record it for later use.
His live setup for Laboratorio Module 24 is based on the same system setup. The aim of the hardware setup was to have as small a physical footprint as possible, because he was sharing the stage with two dancers. On stage, he had a fader-controller for the MIOconsole (to control feedback from microphones), an iPad running Kyma Control displaying performance instructions, a custom-made Raspberry Pi Wi-Fi footswitch sending OSC messages to Kyma, and a musical saw.
Kyma Control showing “Kiitos paljon” (“Thank you” in Finnish), Raspberry Pi foot switch electronics, rosin for the bow & foot switch controller
The instrument used in the performance is a Finnish Pikaterä Speliplari musical saw (speliplari means ‘play blade’), designed by the Finnish musician Aarto Viljamaa. The plaintive sound of the saw is routed to Kyma through two microphones, whose signals are processed by a Kyma Timeline. A custom-made piezo contact microphone and preamp are used to create percussive and noise elements for the piece, and a small-diaphragm shotgun microphone is employed for the softer harmonic material.
The way Anssi works with live electronics is by recording single notes or note patterns with multiple Kyma MemoryWriter Sounds. These sound recordings are then sampled in real-time or kept for later use in a Kyma timeline. He likes to think of this as a way of reintroducing a motive of the piece as is done in classical music composition. This also breaks the inherent tendency of adding layers when using looping samplers, which, in Anssi’s opinion, often becomes a burden for the listener at some point.
The Kyma Sounds in the performance Timeline focus on capturing and resampling the sound played on the saw, with their parameters controlled live through timeline automation, presets, or algorithmic changes programmed in Capytalk.
Laiho’s starting point for the design was to create random harmonies and arpeggiations that could then be used as accompaniment for an improvised melody. For this, he used the Live Looper from the Kyma Sound Library and added a Capytalk expression to its Rate parameter that selects a new frequency from a predefined selection of frequencies (intervals relative to a predefined starting note) to create modal harmony. He also created a quadrophonic version of the Looper and controlled the Angle parameter of each loop with a controlled random Capytalk expression that makes each individual note travel around the space.
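The rate-selection idea can be illustrated outside of Kyma. The sketch below is a hedged Python analogue only (the actual piece uses a Capytalk expression in the Live Looper’s Rate parameter): each trigger picks an interval from a predefined set relative to the starting note and converts it to a playback-rate ratio. The interval set here is hypothetical.

```python
import random

# Hypothetical interval set in semitones above the starting note,
# chosen to yield a modal-sounding harmony.
INTERVALS = (0, 2, 3, 5, 7, 9, 10, 12)

def next_rate(rng=random):
    """Pick an interval from the predefined set and return the
    corresponding playback-rate ratio (equal temperament: 2^(n/12))."""
    semitones = rng.choice(INTERVALS)
    return 2.0 ** (semitones / 12.0)
```

Transposing a loop by changing its playback rate in this way is what turns a single captured note into randomized, harmonically related accompaniment; a similar controlled-random expression on the Angle parameter moves each note around the quadraphonic space.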
Another Sound used in the performance, one he created long ago, is named Retrosampler. It captures only a very short sample of live sound and creates 4 replicated loops, each less than one second long. Each replicated sample has its own parameters, which he controls with presets. This, together with the sine-wave quality of the saw, creates a result resembling a beeping analog sine-wave synthesizer. Because the Sound is replicated four times, he can play up to 16 samples if he presses “capture” 4 times.
The Retrosampler Sound is also quadraphonic, and its parameters are controlled by presets. His favorite preset is called “Line Busy”, which is exactly what it sounds like. [Editor’s note: the question is which busy signal?]
For the noise and percussion parts of the performance, he used a sound called LiveCyclicGrainSampler, which is a recreation of an example from Jeffrey Stolet’s Kyma and the SumOfSines Disco Club book. This sound consists of a live looping MemoryWriter as a source for granular reverb and 5 samples with individual angle and rate parameter settings. These parameters were then controlled with timeline automation to create variation in the patterns they create.
Anssi also used his two favorite reverbs in the live processing: the NeverEngine Labs Stereo Verb, and Johannes Regnier’s Dattorro Plate.
Kyma is also an essential part of Laiho’s sound design work in the studio. One of the tracks in the performance, “Experiment 0420”, is his “laboratory experiment” of Kyma processing the sound of an aluminum heat sink from an Intel i5 3570K CPU played with a guitar pick. Another scene of the performance contains a song called “Tesseract Song”, composed of an erratic piano chord progression and synthetic noise looped in Kyma, accompanied by Anssi singing through a Kyma harmonizer.
The sound design for the 50-minute performance comprises 11–12 minutes of live electronics, music composed in the studio, and “Spring” by Antonio Vivaldi. The overall design goal was to create a kaleidoscopic experience in which the audience is taken to new places by surprising turns of events.
Circle 6 of Dante’s Inferno contains the fiery tombs of heretical souls, and in the venue for the NeverEngineLabs (Cristian Vogel) 6 April octophonic concert — a car park six floors beneath the Earth’s surface in the Providencia area of Santiago, Chile — the lighting appears eerily appropriate.
NeverEngineLabs’ live setup in Santiago Chile 2024
Vogel, whose work focuses on experimental electronic music, electroacoustic music, and pushing compositional boundaries beyond style and convention, writes: “The sound check was a veritable wall of noise, never heard Ambisonics sound so intense… the subs were so full, car alarms were setting off on the floors above!”
Cristian’s goals for 2024 and beyond are to explore socially relevant musical narratives that challenge the listener and expand the boundaries of what is possible in experimental electronic music.