True Crime

In early 2025, listen closely to the soundtrack of Lady of the Dunes, the new true crime series airing on the Oxygen Network. The series looks at a recently solved 50-year-old cold case.

The director wanted the music to have a haunting quality to tie into the mystery and brutality of the case. That’s when they called on composer/editor John Balcom.

Balcom started out by recording plaintive, mysterious sounds, like wine glasses and a waterphone. Then he took sections from those recordings and treated them using Kyma’s Multigrid. There was a performative aspect to all of this; he would manipulate the sounds going into Kyma, as well as adjust sounds in real time in the Multigrid, resulting in some disturbingly haunting sounds. Balcom then incorporated those sounds to build out full compositions.

Here are some audio examples of the textures he created using Kyma, along with a couple of tracks that incorporate those textures (Warning: the following clips contain haunting sounds and may be disturbing for all audiences. Listener discretion is advised! ⚠️ ):

 

Here’s a screenshot of one of Balcom’s Multigrids:

Balcom concludes by saying: “The music would not have been the same without Kyma!”

New Kyma course offerings

There are so many ways to learn Kyma (the online documentation, asking questions in the Kyma Discord, working with a private coach or group of friends…). This semester there are also two new university courses where, not only can you learn Kyma, you’ll also have a chance to work on creative projects with a composer/mentor and interact with fellow Kyma sound designers in the studio while also earning credit toward a degree.

At Haverford, Bryn Mawr, and Swarthmore, you can sign up for MUSC H268A Sonic Narratives – Storytelling through Sound Synthesis with Professor Mei-ling Lee.

In “Sonic Narratives” you’ll learn to combine traditional instruments and electronic music technologies to explore storytelling through sound. Treating the language of sound as a potent narrative tool, the course covers advanced sound synthesis techniques such as Additive, Subtractive, FM, Granular, and Wavetable Synthesis using state-of-the-art tools like Kyma and Logic Pro. Beyond technical proficiency, students will explore how these synthesis techniques contribute to diverse fields, from cinematic soundtracks to social media engagement.


2,332 miles (3,753 km) to the west, Professor Garth Paine is offering a Kyma course at the ASU Herberger Institute for Design and the Arts in Tempe, Arizona.

From the course catalog: The Kyma System is an advanced real-time sound synthesis and electro-acoustic music composition and performance software/hardware instrument. It is widely used in major film sound design studios, by composers across the globe, and in scientific data sonification. The Kyma system is a patcher-like environment that can also be scripted and driven externally via OSC and MIDI. Algorithms can be placed in timelines for dynamic instantiation based on musical events, in grids, or as fixed patches. The system has several very powerful FFT and spectral processing approaches which can also be used live. In this class, learn about the potential of the system and several of the ways in which it can be used in creating innovative sound design and live electronics with instruments. The class is focused on students who are interested in electroacoustic music composition and real-time performance, and more broadly in sound design.


These are not the only institutions of higher learning where Kyma knowledge is on offer this fall. Here’s a sampling of some other schools offering courses where you’ll learn to apply Kyma skills to sound design, composition, and data sonification:

  • University of Oregon (Jeffrey Stolet, Jon Bellona, Zachary Boyt)
  • University of New Haven (Simon Hutchinson)
  • Indiana University (Chi Wang)
  • Zhejiang Conservatory of Music (Fang Wang)
  • Sichuan Conservatory of Music (Iris Lu)
  • Wuhan Conservatory of Music (Sunhuimei Xia)
  • Dankook University, Seoul (Kiyoung Lee)
  • Musikhochschule Lübeck (Franz Danksagmüller)

If you are teaching a Kyma course this year, and don’t see yourself on the list, please let us know.

Generative events

Explorations in the generative control of notes and rhythms, scales and modes — from human to algorithmic gesture

Composer Giuseppe Tamborrino uses computer tools not just for timbre, but also for the production of events in deferred time or in real time. This idea forms the basis for generative music — examples of which can be found throughout music history, for example Mozart’s Musical Dice Game.
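The dice-game idea mentioned above is easy to sketch in code. The following Python toy (an illustration, not Tamborrino's method) builds a piece bar by bar: each bar is chosen from a table of pre-composed measures by a two-dice roll, just as in the Würfelspiel attributed to Mozart. The measure labels are placeholders.

```python
import random

def roll_piece(bars=16, rng=random):
    """Choose one pre-composed measure per bar via a two-dice roll.
    Totals 2..12 index eleven alternatives; labels are placeholders."""
    table = {total: f"measure_{total}" for total in range(2, 13)}
    return [table[rng.randint(1, 6) + rng.randint(1, 6)] for _ in range(bars)]

piece = roll_piece(rng=random.Random(7))  # seeded, so the "composition" repeats
```

Because the randomness is seeded, the same "roll" always reproduces the same piece, which is one simple way generative systems can be made repeatable.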

In his research, Tamborrino carries out this process in various ways and with different software, but the goal is always the same: the generation of instructions for instruments — which he calls an “Electronic score”.

Here’s an example from one of his generative scores:

 

As part of the process, Tamborrino has always designed in a certain degree of variability, using various stochastic or totally random procedures to speed up the processes of abstraction and improvisation which, once launched, are invariant. Often, though, this way of working resulted in small sections that he wanted to cut, correct, or improve.

This motivated him to use Kyma to pursue a new research direction — called post-controlled generative events — with the aim of being able to correct and manage micro events.

This is a three-step process:

  • A generational setting phase (pre-time)
  • A performance phase, recording all values in the Timeline (real time)
  • A post-editing phase of the automatic “generative” events (after time)
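The three phases above can be sketched schematically. This Python fragment is a hypothetical illustration (the actual work is done in Kyma's Timeline): events are generated stochastically, captured as a timestamped list, and then a single micro-event can be corrected afterwards without re-running the whole generator.

```python
import random

def generate_events(n, rng):
    """Phases 1-2: generate and record n stochastic events as
    (time, pitch) dictionaries. Ranges are arbitrary placeholders."""
    t = 0.0
    events = []
    for _ in range(n):
        t += rng.uniform(0.1, 0.5)                  # stochastic inter-onset time
        events.append({"time": round(t, 3), "pitch": rng.randint(48, 72)})
    return events

def post_edit(events, index, **changes):
    """Phase 3: post-edit one recorded micro-event, leaving the rest of
    the recorded performance untouched."""
    edited = [dict(e) for e in events]
    edited[index].update(changes)
    return edited
```

The point of the third function is the research direction itself: once the generative output is recorded as data, individual events become addressable and correctable.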

Tamborrino shared some of the results of his research on the Kyma Discord, and he invites others to experiment with his approach and to engage in an online discussion of these ideas.

Vision of things unseen

Motion Parallels, a collaboration between composer James Rouvelle and visual artist Lili Maya, was developed earlier this year in connection with a New York Arts Program artist’s residency in NYC.

For the live performance Rouvelle compressed and processed the output of a Buchla Easel through a Kyma Timeline; on a second layer, the Easel was sampled, granulated, and spectralized and then played via a MIDI keyboard. The third layer of Sounds was entirely synthesized in Kyma and played via Kyma Control on the iPad.

For their performative works, Rouvelle and Maya develop video and audio structures that they improvise through. Both the imagery and sound incorporate generative/interactive systems, so the generative elements can respond to the performers, and vice versa.

Rouvelle adds, “I work with Kyma every day, and I love it!”

Phantom of the Waltershausen organ

Franz Danksagmüller was in Waltershausen 25-28 August 2024 presenting workshops on how to integrate live electronics with the pipe organ. Here’s a photo of “the hand” controller designed by Franz, alongside an Emotiv EEG headband and Kyma Control.

Although from this vantage point, one might think that the organ loft is nearly paradisiacal…


…the organ builder took pains to remind the organist of the alternative (or at the very least, to have a laugh).

Audience choice: SEAMUS 2023

You can hear Kyma sounds in two of the selections of Volume 33 of audience-selected works from the 2023 SEAMUS National Conference in New York City.

Composer Chi Wang describes her instrumentation in Transparent Affordance as a “data-driven instrument.” Data is derived from touching, tilting, and rotating an iPad along with a “custom-made box” of her own design. Near the end of the piece Wang uses the “box” to great effect. She places the iPad in the box and taps on the outside. Data is still transmitted to Kyma even without direct interaction, though the sounds are now generated in a much more sparse manner, as though the iPad needs to rest or the performer decides to contain it. Finally the top is put on the box as one last flurry of sound escapes.

Scott L. Miller describes his piece Eidolon as “…a phantom film score I thought I heard on a transatlantic flight…” Although the film may not exist to guide us through a narrative, the soundtrack does exist and takes the place of visual storytelling. Eidolon, scored for flute, violin, clarinet, and percussion with Kyma, blurs the lines between what the performers are creating and what the electronic sounds are, an aspect of how we hear the world: we can alternately isolate and focus on sounds to identify them and we can also widen our ears to absorb a sonic texture or a soundscape. Electronic sounds were synthesized in Kyma using additive synthesis, mostly partials 17 – 32 of sub-audio fundamentals.

The Seeing is Good


Rick Stevenson is a technology entrepreneur with an impressive track record: he co-founded three successful tech startups and played an instrumental role in the growth of several others. In addition to maintaining a nearly five decade relationship with the University of Queensland’s School of Information Technology and Electrical Engineering as a student, mentor, and industry advisor, Stevenson is also an accomplished astrophotographer, and one of his images was selected as the NASA Astronomy Picture of the Day.

NASA's Astronomy Picture of the Day


Eighth Nerve [EN]: Your “Rungler” patch recently made a big splash on the Kyma Discord Community. Could you give a high-level explanation of how it works?

Rick Stevenson [RS]: The Rungler is a hardware component of a couple of instruments, the Benjolin and the Blippoo box, designed by Rob Hordijk. It’s based on an 8-bit shift register driven by two square wave oscillators. One oscillator is the clock for the shift register and the other is sampled to provide the binary data input to the shift register. When the clock signal becomes positive, the data signal is sampled to provide a new binary value. That new bit is pushed into the shift register, the rest of the bits are shuffled along and the oldest bit is discarded.

Rungler circuit

The value of the Rungler is read out of the shift register by a digital to analog converter. In the simplest version of this (the original Benjolin design) the oldest three bits in the shift register are interpreted as a binary number with a value between 0 and 7.

That part is fairly straightforward. The interesting wrinkle is that the frequency of each oscillator is modulated by the other oscillator and also the value of the Rungler. The result of this clever feedback architecture is that the Rungler exhibits an interesting controlled chaotic behavior. It settles into complex repeating (or almost repeating) patterns. Nudge the parameters and it will head off in a new direction before settling into a different pattern. Despite the simplicity of the design it can generate very interesting and intricate “melodies” and rhythms.
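Rick's description translates readily into code. The following Python sketch models the core he describes: an 8-bit shift register clocked by one square-wave oscillator, fed data bits sampled from the other, read out Benjolin-style from the oldest three bits, with the Rungler value and each oscillator cross-modulating the oscillator rates. It is an illustrative toy, not Hordijk's circuit; all frequencies and modulation depths are arbitrary choices.

```python
class Rungler:
    """Toy model of the Rungler core: an 8-bit shift register."""

    def __init__(self, bits=8):
        self.reg = [0] * bits            # index 0 = newest bit, index -1 = oldest

    def clock(self, data_bit):
        # Rising clock edge: push the sampled bit in, shuffle the rest
        # along, discard the oldest.
        self.reg = [data_bit & 1] + self.reg[:-1]

    def value(self):
        # Benjolin-style DAC: read the oldest three bits as a number 0-7.
        b = self.reg[-3:]
        return (b[0] << 2) | (b[1] << 1) | b[2]


def simulate(steps=2000):
    """Run the feedback loop: each oscillator's rate is nudged by the
    other oscillator and by the current Rungler value, which is what
    produces the controlled-chaotic patterns. Increments per sample are
    arbitrary, chosen only for illustration."""
    rungler = Rungler()
    phase1 = phase2 = 0.0
    f1, f2 = 0.013, 0.007
    prev_clock = 0
    out = []
    for _ in range(steps):
        r = rungler.value() / 7.0
        osc2 = 1 if phase2 % 1.0 < 0.5 else 0        # data oscillator
        phase1 += f1 * (1 + 0.5 * osc2 + 0.5 * r)    # cross-modulation
        osc1 = 1 if phase1 % 1.0 < 0.5 else 0        # clock oscillator
        phase2 += f2 * (1 + 0.5 * osc1 + 0.5 * r)
        if osc1 == 1 and prev_clock == 0:            # rising edge: sample data
            rungler.clock(osc2)
        prev_clock = osc1
        out.append(rungler.value())
    return out
```

Feeding the stream of 0–7 values to an oscillator's pitch input is one simple way to hear the "intricate melodies" Rick mentions.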

NOTE: Rick shared his Rungler Sound in the Kyma Community Library!


Artist/animator Rio Roye helped Rick test the Rungler Sound and came up with a pretty astounding range of sonic results!

 


[EN]: What is your music background? Is it something you’ve been interested in since childhood? Or a more recent development?

[RS]: I didn’t learn an instrument as a child. I’m not sure why. We had a piano in the house and my mother played. At high school I taught myself to play guitar and bass and I kept that up for a couple of years at University but eventually gave up due to lack of time and motivation. Quite a few years later when work demands became manageable and our children were semi-independent, I took up guitar again and started tinkering with valve amp building. These days I’m learning Touch Guitar and tinkering with synths and computer music. I spent some time learning Max and Supercollider then decided to take the plunge into Kyma.

[EN]: What were the best parts of your “new user experience” with Kyma?

[RS]: I really liked the wide range of sounds in the library. It’s great to be able to find an interesting sound, play with it, and then dig inside and try to figure out how it works.

I also appreciated the power of individual sounds coupled with the expressiveness of Capytalk. I spent quite a bit of time learning Max and moving to Kyma was a bit like switching from assembler to a high level language.


[EN]: I think you might be the first astrophotographer I’ve ever met! Could you please introduce us (as total novices) to the equipment and software that you use to capture & process images like these

False color image of the Helix Nebula, resembling a human iris
Helix Nebula – Narrowband bi-color + RGB stars. Image provided by Rick Stevenson.

[RS]: You can do astrophotography with a digital camera and conventional lens. A tripod is sufficient for nightscapes but you need some sort of simple tracking mount to do longer exposures. At the other end of the spectrum (haha) are telescopes, cameras and mounts specifically designed for astrophotography. The telescopes range from small refractors (with lenses) to large catadioptric scopes combining a large mirror (500mm in diameter or even more) with specialized glass optics.

Cameras consist of a CCD or CMOS sensor in a chamber protected from moisture, thermally bonded to a multistage Peltier cooler. Noise is the enemy of faint signals, so it is not uncommon to cool the sensor to 30 °C or more below ambient temperature. The cameras are usually monochromatic and attached to a wheel containing filters. Mounts for astrophotography are solid, high-precision devices that can throw around a heavy load and also track the movement of the sky with great accuracy. Summary: there’s lots of fancy hardware used for amateur astrophotography, ranging from relatively affordable to the price of a nice house!

On the software side, the main components are capture software which runs the mount, filterwheel and the camera(s), and processing software which turns the raw data into an image. Normal procedure is to take many exposures of a deep sky object from seconds to many minutes long and “stack” them to increase the signal to noise ratio. We also take a few different types of calibration images that are used to remove artifacts and nonlinearities from the camera sensor and optical train. Processing the raw data can be an involved process and is something of an art in itself. There are quite a few software packages and religious wars between their proponents are not uncommon.
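The reason for stacking is statistical: averaging N registered sub-exposures leaves the signal unchanged but divides the random noise by roughly the square root of N. This small Python simulation (synthetic data, not real frames) demonstrates the effect with 25 noisy "exposures" of a flat patch of sky.

```python
import random
import statistics

def make_frame(signal, noise_sigma, rng):
    """One simulated sub-exposure: true pixel values plus Gaussian read
    noise. 'signal' is a flat list standing in for a 2D sensor."""
    return [s + rng.gauss(0.0, noise_sigma) for s in signal]

def stack(frames):
    """Average registered sub-exposures pixel by pixel: the signal is
    preserved while the noise drops by about sqrt(len(frames))."""
    n = len(frames)
    return [sum(pix) / n for pix in zip(*frames)]

rng = random.Random(42)
truth = [100.0] * 5000                    # a flat "sky" at level 100
single = make_frame(truth, 10.0, rng)
stacked = stack([make_frame(truth, 10.0, rng) for _ in range(25)])

noise_single = statistics.pstdev(single)
noise_stacked = statistics.pstdev(stacked)
# With 25 frames the residual noise should drop by about a factor of 5.
```

Real pipelines also subtract dark frames and divide by flat fields (the calibration images Rick mentions) before stacking, but the square-root-of-N payoff is the same.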

[EN]: Speaking of software packages, what is PixInsight?

[RS]: PixInsight is the image processing and analysis software that I use. It has been developed by a small team of astronomers and software engineers based in Spain. It has a somewhat unfair reputation for being complicated and difficult to use. This is partly down to a GUI which is the same on Windows, Mac and Linux (but not native to any of them) and partly because it includes a wide range of different tools and the philosophy of the developers is to expose all of the useful parameters. When I started doing astrophotography I tried a few different software packages and PixInsight was the one that produced the best results for me. Some of the other commonly used packages hide a lot of the ugly details and offer a more guided experience which suits some types of imagers better. A more recent development is the use of machine learning in astronomical processing to do things like noise reduction and sharpening. I haven’t quite decided how I feel about that yet.

I think there are some interesting parallels between PixInsight and Kyma. Apart from the cross-platform problem, both have to maintain a careful balance: offering a complex, highly technical set of features in a way that can satisfy the gamut of users from casual to expert.

[EN]: Do you have your own telescope? Do you visit telescopes in other parts of the world?

[RS]: I have a few telescopes. Unfortunately, I live in the city and the light pollution prevents me from doing much data collection from home. When I get the opportunity I have a few favorite locations not too far away where I can set up under dark skies. Apart from dark skies, a steady atmosphere is important for high resolution imaging. This is called the “seeing” and it’s usually not that great around here.

Rick Stevenson and his C300

For several years I have been a member of a few small teams with automated telescopes in remote locations. That solves a lot of problems apart from fixing things when they go wrong! My first experience was at Sierra Remote Observatory near Yosemite in California. I also shared a scope with some friends in New Mexico and now I’m using one in the Atacama Desert in Chile. The skies in the Atacama are very dark and the seeing is amazing! I haven’t ever visited any of the sites but I did catch up with some teammates on a couple of business trips.

[EN]: Does your camera give you access to data files? From some of the caption descriptions, it sounds like your camera has sensors for “non-visible” regions of the spectrum, is that true?

[RS]: The local and the remote scopes all deliver the raw image and calibration data in an image format called “FITS” which is basically a header followed by 2D arrays of data values. A single image will almost always be produced from very many individual sub-exposures. The normal process is to calibrate and stack the data for each filter before combining the stacks to produce a color image.

The camera sensors will usually detect some UV and near-infrared as well as visible light, but they aren’t commonly used in ground-based imaging. I use red, green and blue filters for true color imaging, or I image in narrowband. Narrowband uses filters that pass very narrow frequency ranges corresponding to the emissions from specific ionized elements, usually Hydrogen alpha (a reddish color), Oxygen III (greenish blue) and Sulphur II (a different reddish color). Narrowband has the advantage of working even in light-polluted skies (and to some extent rejecting moonlight) and can show details in the structure of astronomical objects not visible in RGB images. The downside is that you need very long exposure times and narrowband images are false color. Many of the Hubble images you’ve undoubtedly seen are false color images using the SHO palette (red is Sulphur II, green is Hydrogen alpha and blue is Oxygen III).

Tarantula region in SHO palette, where SHO is an Acronym that stands for Sulphur II (mapped to Red), Hydrogen Alpha (mapped to Green) & Oxygen III (mapped to Blue). Three images are captured sequentially using 3 narrowband filters and then combined to create a false color image.
Tarantula region in SHO palette. Image provided by Rick Stevenson
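The SHO mapping Rick describes is, at its core, a channel assignment plus a per-channel stretch. This Python sketch (a crude stand-in for real narrowband processing, which uses nonlinear stretches and per-channel weighting) shows the idea: stretch each narrowband channel to 0–1, then assign S II to red, H-alpha to green, and O III to blue.

```python
def stretch(channel):
    """Linear stretch: rescale a channel so its faintest pixel maps to
    0.0 and its brightest to 1.0."""
    lo, hi = min(channel), max(channel)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in channel]

def sho_composite(s2, ha, o3):
    """Combine three stretched narrowband channels into false-color RGB
    pixels using the Hubble SHO palette: S II -> red, H-alpha -> green,
    O III -> blue. Each argument is a flat list of pixel values."""
    return list(zip(stretch(s2), stretch(ha), stretch(o3)))
```

Swapping the argument order gives other palettes (e.g. HOO for bi-color images like the Helix Nebula shot above), which is why the same three exposures can yield very different-looking results.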

[EN]: What is your connection with the Astronomical Association of Queensland?

[RS]: I have been a member of the AAQ for well over a decade. From 2013 to 2023 I was the director of the Astrophotography section. I helped members learn how to do astrophotography, organized annual astrophotography competitions and curated images for the club calendar.


[EN]: Have you ever seen a platypus in real life?

[RS]: Several times at zoos. Only a handful of times in the wild. They are mostly nocturnal and also very shy!

[EN]: In New Mexico (where I grew up), we all learned a song in elementary school with the lyrics: “Kookaburra sits in the old gum tree, merry merry king of the bush is he, Laugh Kookaburra, laugh Kookaburra, gay your life must be”. Is it a pseudo-Australian song invented for American kids or did you learn it in Australia too?

[RS]: I don’t know the origin but we have the same song. I have a toddler grandson who sings it! We have Kookaburras in the bushland behind our house that start doing their group calls around 3 or 4am.

Speaking of the Kookaburra song, there was a story about it in the news a few years ago when the people who own the rights to the song won a plagiarism lawsuit against the band Men at Work.

Listen for the Kookaburra song in the flute riff!

[EN]: What was the best part about growing up in Australia that those of us growing up in other parts of the world missed out on?

[RS]: Vegemite, perhaps? 🙂

The area around Brisbane in South-East Queensland has a lot going for it. The climate is pleasant and mild (except for some hot and humid days in summer). There are great beaches within an hour or so as well as forested areas for hiking. The educational and health systems are good, even the public / “free” parts (true in most of the major centres in Australia). Brisbane is large enough to have some culture and work opportunities without being too busy and fast-paced. It’s pretty good, but not perfect.


[EN]: What was your favorite course to teach at the School of Information Technology and Electrical Engineering, University of Queensland St Lucia?

[RS]: I was an Adjunct Professor at UQ ITEE for 21 years but I didn’t teach any courses. I acted as an industry advisor, was involved in bids to set up research centers in embedded systems and security, and I hired a lot of their graduates!

[EN]: Do you have a “favorite” programming language or family of languages?

[RS]: I rather liked Simula 67 though I only used it for one (grad student) project. I think it was one of the first OO languages if not the first. Most of my work programming experience was in assembler on microprocessors and C on micro and minicomputers. I was very comfortable in C but wouldn’t say I loved it. These days I quite like the Lisp family of languages.

[EN]: Are you one of the founders of OpenGear? In that role, do you do a lot of international travel to the company’s other offices?

[RS]: I was one of the founders and worked in a few roles up until Opengear was acquired in late 2019. Having stayed on after acquisitions in a couple of previous lives I was pleased that the new owners didn’t want me to hang around and I exited just in time for COVID. While at Opengear and in previous startups I made regular business trips, mostly to North America but also the UK and Europe and occasional trips to South-East Asia.

[EN]: What do you identify as the top three “open problems” or “grand challenges” in technology right now?

[RS]: In no particular order and not claiming to have any deep insights…

  • Achieving AGI [artificial general intelligence] is still an open problem and one that, in my opinion, is not going to get solved as quickly as a lot of people think. Maybe that’s a good thing?
  • I spent about a decade working on computer security products starting in the early 2000s and despite a lot of money and effort spent on band-aid solutions, things have only become worse since then. Fixing our whack-a-mole computer security model is certainly a grand challenge, not helped by the incentives that security vendors have to protect their recurring revenues. The recent outage caused by the CrowdStrike security agent crashing Microsoft computers demonstrates the fragility of our current approach.
  • The ability to create practical quantum computers still seems a bit of a reach though I hear that Wassenaar Arrangement countries are all quietly introducing export controls on quantum computers. Perhaps they know something I don’t.

[EN]: What’s next on the horizon for you? What Kyma project(s) are you planning to tackle next?

[RS]: I have a long list of potential projects but I think the next two I’ll tackle are some baby steps into sonification of astronomical data (thanks for the encouragement) and a Blippoo Box – another of Rob Hordijk’s chaotic generative synths.

[EN]: What are you most looking forward to learning about Kyma over the next year?

[RS]: There are many areas of Kyma that I have only dabbled in and even more that I haven’t touched at all. I’d like to get a lot more fluent with Smalltalk and Capytalk, do some projects with the Spectral and Morphing sounds and also plumb the mysteries (to me) of the Timeline and Multigrid. That should keep me busy for a while!

[EN]: Outside of Kyma, what would you most like to learn more about in the coming year?

[RS]: I have a couple of relatively new synths, a Synclavier Regen and a Buchla Easel, that I would like to spend a lot more time learning my way around. I also want to keep progressing with my Touch Guitar studies.


[EN]: Rick, thank you for the thought-provoking discussion! Can people get in touch with you on the Kyma Discord if they have questions, feedback, or proposals for collaboration?

[RS]: Yes!

Come up to the Lab

Anssi Laiho is the sound designer and performer for Laboratorio — a concept developed by choreographer Milla Virtanen and video artist Leevi Lehtinen as a collection of “experiments” that can be viewed either as modules of the same piece or as independent pieces of art, each with its own theme. The first performance of Laboratorio took place in November 2021 in Kuopio, Finland.

Laboratorio Module 24, featuring Anssi performing a musical saw with live Kyma processing, was performed in the Armastuse hall at Aparaaditehas in Tartu, Estonia, and is dedicated to the theme of identity and inspiration.

Anssi’s hardware setup, both in the studio and live on stage, consists of a Paca connected to a Metric Halo MIO2882 interface via bidirectional ADAT in a 4U mobile rack. Laiho has used this system for 10 years and finds it intuitive, because Metric Halo’s MIOconsole mixer interface gives him the opportunity to route audio between Kyma, the analog domain, and the computer in every imaginable way. When creating content as a sound designer, he often tries things out in Kyma in real-time by opening a Kyma Sound with audio input and listening to it on the spot. If it sounds good, he can route it back to his computer via MIOconsole and record it for later use.

His live setup for Laboratorio Module 24 is based on the same system setup. The aim of the hardware setup was to have as small a physical footprint as possible, because he was sharing the stage with two dancers. On stage, he had a fader-controller for the MIOconsole (to control feedback from microphones), an iPad running Kyma Control displaying performance instructions, a custom-made Raspberry Pi Wi-Fi footswitch sending OSC messages to Kyma, and a musical saw.
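A footswitch like Anssi's only needs to fire small OSC packets over UDP. As a sketch of what such a controller does, here is a minimal OSC 1.0 encoder in Python using only the standard library. The address, host, and port are placeholders; the real values depend on the Paca(rana)'s network setup and the parameters exposed by the running Sound.

```python
import socket
import struct

def osc_pad(chunk: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    chunk += b"\x00"
    return chunk + b"\x00" * (-len(chunk) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message: padded address string, a ',f'
    type-tag string, then the value as a big-endian 32-bit float."""
    return osc_pad(address.encode("ascii")) + osc_pad(b",f") + struct.pack(">f", value)

def send_to_kyma(address: str, value: float, host: str, port: int = 8000) -> None:
    """Fire one OSC message over UDP (host, port, and address here are
    hypothetical examples, not Laiho's actual configuration)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(osc_message(address, value), (host, port))

# e.g. send_to_kyma("/footswitch/1", 1.0, "192.168.0.10")
```

On a Raspberry Pi, a GPIO interrupt handler would simply call `send_to_kyma` on each press.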

Kyma Control showing “Kiitos paljon” (“Thank you” in Finnish), Raspberry Pi foot switch electronics, rosin for the bow & foot switch controller

The instrument used in the performance is a Finnish Pikaterä Speliplari musical saw (speliplari means ‘play blade’), designed by the Finnish musician Aarto Viljamaa. The plaintive sound of the saw is routed to Kyma through two microphones and processed by a Kyma Timeline. A custom-made piezo contact microphone and preamp is used to create percussive and noise elements for the piece, and a small-diaphragm shotgun microphone is employed for the softer harmonic material.

The way Anssi works with live electronics is by recording single notes or note patterns with multiple Kyma MemoryWriter Sounds. These sound recordings are then sampled in real-time or kept for later use in a Kyma timeline. He likes to think of this as a way of reintroducing a motive of the piece as is done in classical music composition. This also breaks the inherent tendency of adding layers when using looping samplers, which, in Anssi’s opinion, often becomes a burden for the listener at some point.

The Kyma sounds used in the performance Timeline are focused on capturing and resampling the sound played on the saw and controlling the parameters of these Sounds live, in timeline automation, presets, or through algorithmic changes programmed in Capytalk.

Laiho’s starting point for the design was to create random harmonies and arpeggiations that could then be used as accompaniment for an improvised melody. For this, he used the Live Looper from the Kyma Sound Library and added a Capytalk expression to its Rate parameter that selects a new frequency from a predefined selection of frequencies (intervals relative to a predefined starting note) to create modal harmony. He also created a quadrophonic version of the Looper and controlled the Angle parameter of each loop with a controlled random Capytalk expression that makes each individual note travel around the space.
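The Rate trick above can be sketched outside Kyma. This Python fragment stands in for the Capytalk expression Laiho describes: each time the looper retriggers, a random interval is drawn from a predefined modal set and converted into a playback rate. The Dorian interval set here is an illustrative choice, not necessarily his.

```python
import random

# Semitone intervals above the starting note; a Dorian octave as an example.
DORIAN_SEMITONES = [0, 2, 3, 5, 7, 9, 10, 12]

def next_loop_rate(rng=random):
    """Pick a random interval from the mode and convert it to a sample
    playback rate: rate = 2 ** (semitones / 12), so 0 semitones plays
    at the original pitch and 12 plays an octave up at double speed."""
    semitones = rng.choice(DORIAN_SEMITONES)
    return 2 ** (semitones / 12)
```

Because every loop is retuned only to notes of the mode, the layered loops always land in a shared modal harmony, however random the individual choices are. The same idea, applied to an Angle parameter instead of Rate, gives the controlled-random spatial motion of his quadraphonic version.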

Another Sound used in the performance is one he created a long time ago named Retrosampler. This sound captures only a very short sample of live sound and creates 4 replicated loops, each less than 1 second long. Each replicated sample has its own parameters that he controls with presets. This, together with the sine wave quality of the saw, creates a result that resembles a beeping sine wave analog synthesizer. The sound is replicated four times, so he has the possibility to play 16 samples if he presses “capture” 4 times.

The Retrosampler sound is also quadraphonic and its parameters are controlled by presets. His favorite preset is called “Line Busy” which is exactly what it sounds like. [Editor’s note: the question is which busy signal?]

For the noise and percussion parts of the performance, he used a sound called LiveCyclicGrainSampler, which is a recreation of an example from Jeffrey Stolet’s Kyma and the SumOfSines Disco Club book. This sound consists of a live looping MemoryWriter as a source for granular reverb and 5 samples with individual angle and rate parameter settings. These parameters were then controlled with timeline automation to create variation in the patterns they create.

Anssi also used his two favorite reverbs in the live processing: the NeverEngine Labs Stereo Verb, and Johannes Regnier’s Dattorro Plate.

Kyma is also an essential part of Laiho’s sound design work in the studio. One of the tracks in the performance is called “Experiment 0420” and it is his “Laboratory experiment” of Kyma processing the sound of an aluminum heat sink from an Intel i5 3570K CPU played with a guitar pick. Another scene of the performance contains a song called “Tesseract Song” that is composed of an erratic piano chord progression and synthetic noise looped in Kyma and accompanied by Anssi singing through a Kyma harmonizer.

The sound design for the 50-minute performance consists of 11-12 minutes of live electronics, music composed in the studio, and “Spring” by Antonio Vivaldi. The overall design goal was to create a kaleidoscopic experience where the audience is taken to new places by surprising turns of events.

Sounding the ocean

Graph of ocean-bottom pressure over one month, from sensors on the Juan de Fuca plate in the NE Pacific Ocean: daily tidal fluctuations run across the entire record, interrupted by a large shift in bottom pressure on April 24, 2015, when the seafloor dropped as a result of a volcanic eruption.

In June 2024, Jon Bellona presented a paper at the 2024 International Conference on Auditory Display in Troy, NY on behalf of the NSF-funded Accessible Oceans project team. The paper, Iterative Design of Auditory Displays Involving Data Sonifications and Authentic Ocean Data, outlines their auditory display framework and their use of Kyma for all of their data sonification. The program, with links to all papers, is accessible online at https://icad2024.icad.org/program/

During the presentation, Jon played the full 2015 Axial Seamount Eruption. When an audience member asked about his use of earcon wrappers around each sonification, Jon shared a story about how his interviews with teachers at Perkins School for the Blind led him to include this feature.

In the coming year, Bellona plans to continue his work doing sonification with Kyma as part of a new project funded through NOAA: A Sanctuary in Sound: Increasing Accessibility to Gray’s Reef Data through Auditory Displays with Jessica Roberts, director of the Technology-Integrated Learning Environments (TILEs) Lab at Georgia Tech.
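At its simplest, the kind of parameter-mapping sonification described above turns each data sample into a pitch. This Python sketch (an illustration only; the project's sonifications were built in Kyma, and the numbers here are synthetic, not the Axial Seamount data) maps a series of pressure readings linearly into a two-octave frequency range, so a drop in bottom pressure is heard as a drop in pitch.

```python
def map_to_freq(samples, lo_hz=220.0, hi_hz=880.0):
    """Linearly rescale data samples into [lo_hz, hi_hz]. The range and
    the linear mapping are illustrative choices, not the project's
    actual design."""
    vmin, vmax = min(samples), max(samples)
    span = (vmax - vmin) or 1.0
    return [lo_hz + (v - vmin) / span * (hi_hz - lo_hz) for v in samples]

# Synthetic "bottom pressure" series with a sudden drop mid-stream:
pressures = [10.0, 10.2, 9.9, 10.1, 6.0, 6.2, 5.9]
frequencies = map_to_freq(pressures)
```

The resulting frequency list could drive any oscillator; the sudden downward step in the data becomes an unmistakable downward jump in pitch, which is the perceptual point of the display.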

Willful Devices in Dublin

Composer Scott Miller’s Kyma control surface

Ecosystemics, the guiding principle for much of composer Scott L. Miller’s work over the past two decades, constitutes an ecological approach to composition in which form is a dynamic process intimately tied to the ambience of the space in which the music occurs. In a live ecosystemic environment, Kyma Sounds are parametrically coupled with the environment via sound. As Miller explains in his two-part article for INSIGHTs magazine — Ecosystemic Programming and Composition:

In ecosystemic music, change in the sonic environment is continuously measured by Kyma with input from microphones. This change produces data that is mapped to control the production of sound. Environmental change may be instigated by performers, audience members, sound produced by the computer itself, and the ambience of the space(s) in general.

Sam Wells and Adam Vidiksis, collaborators on Miller’s new album of telematic ecosystemic music, Human Capital, describe performing with Miller’s Kyma environments as “like interacting with a living entity”.

On 2 August 2024, Scott will be joined by clarinetist Pat O’Keefe to perform as Willful Devices (a clarinet plus Kyma duo) at ClarinetFest in Dublin.

Collaborators since 2003, the duo’s name derives from the fact that both of them manipulate devices — one a clarinet, one a computer — to generate music. And that, despite their best efforts, these devices are never fully under their control, at times almost seeming to have a mind of their own. Rather than bemoaning this fact, Scott and Pat welcome the potential for unimagined sonic discoveries inherent in this unpredictability.

Friday’s setlist includes:

  • Piano – Forte I, Piano – Forte II, and Piano – Forte III telematic collaborations
  • Semai Seddi-Araban by Tanburi Cemil Bey, the premiere of the duo’s take on a classic Turkish semai.
  • Mirror Inside from Shape Shifting (2004), for clarinet and Kyma
  • Fragrance of Distant Sundays, the duo’s tribute to Carei Thomas, the Minneapolis improviser/composer who passed away in 2020