Umano Post Umano

On 19 December 2024, composer Agostino Di Scipio premiered his ambitious intermedia work, featuring live Kyma signal processing, at the Nuova Consonanza festival.

Umano Post Umano is, in some sense, a retrospective of Di Scipio’s artistic work, surveying the full range of performance practices he has explored over the last 25 years (since he first introduced Audible Ecosystemics). Umano Post Umano features music, electroacoustic environments, and digital audio processing by Agostino Di Scipio; video projections and stage design by Matias Guerra; texts collected from job advertisements, legal and financial advertising, and from Hesiod’s Theogony; as well as multiple performers on acoustic instruments, each of whom also operates a live camera.

The work has its origins in a 2008 commission by the Società dei Concerti “Barattelli” in L’Aquila, but the project had to be suspended in the aftermath of the devastating L’Aquila earthquake of April 2009. Revived and rethought for the Nuova Consonanza 2024 festival and the Pelanda space, the chamber music theater project evolved and emerged with a new title and concept: Umano Post Umano.

Central to the work is the concept of sound as the interface. Sound events are never of solely human origin but are always hybrid and distributed according to ecosystemic dynamics in which each component is in contact with every other part. The piece reflects, through the experience of sound, today’s exasperating conditions of precarious work, in which the human is conceived as a resource managed by mechanistic and algorithmic agents.

Each aspect of the performance – according to a reduced economy of instrumental, electroacoustic, computer and telematic means – evolves in relative autonomy but as a function of the specific performance space – “in real time” but above all “in real space”. What emerges is a coherent ensemble that is nevertheless precarious and subject to disorientation, drifts and possible failures. The overall whole emerges from the material interdependence of performers, shared spaces and creative appropriation of means.

Microphones and loudspeakers are scattered irregularly throughout the space, and light, semi-transparent sheets cut across it. In the soft light, there are workstations with laptops, small percussion instruments and various accessories – as well as musical instruments: flutes, cello, bass clarinet, timpani (all equipped with electronic prostheses that “increase” but also “decrease” and over-determine the instrumental gestures). Each workstation is an autonomous instrumental-electroacoustic-computer chain, not subject to centralized direction or management.

The instruments themselves are mechanical components of a system, from whose functioning they are not independent. The performers have their own service lights (headlamps) and low-resolution webcams through which they watch (monitor?) each other. Lights and images pass through the sheets, projecting onto the walls, onto the performers, and onto the listeners. Some operators wander around in the dim light providing “emergency” technical maintenance to the various workstations.

In the economy of means thus designed, the occurrence and articulation of sounds remain tied to the here-and-now of performative circumstances and contingencies: is it possible that – from silence, from background noise, from acoustic residues of the place, from the mere co-presence of humans and machines – frictions and contacts are formed, that signals and a meaning arise? The sounds take shape from distributed relations, from uncertain and open co- and inter-dependencies, heard at times as atmospheric textures, and, at other times, as clear transient gestures.

Music is made first of all by listening: listening is an active part of the performative dynamic. One acts and is constantly acted upon, one is bound by what one intends to bind: here “music” is the tension of this being more and less of oneself. Performative tension in unstable equilibrium, for whose (tragic) precariousness we must be grateful. In sound we listen to (welcome) the conditions of the happening of events and the conditions of our welcoming them (listening).


On 8 November 2024, Di Scipio performed Machine Milieu in Essen, Germany, with his former student Dario Sanfilippo, as part of the Philharmonie Essen NOW! festival’s “Laissez vibrer” Late Night Concert. Machine Milieu is a joint live-electronics project in which the two performers’ computer music systems are networked with each other. The idea is to view the human performer, the equipment, and the performance space as three places connected to each other through the medium of sound. According to Di Scipio and Sanfilippo, Machine Milieu can “develop an integral and potentially autonomous performance ecosystem based exclusively on location-specific acoustic information.” For this performance, the acoustic details were provided by the RWE Pavilion in Essen.


Di Scipio is the chair of the Electronic Music department at the Conservatory of L’Aquila and the chief coordinator of the doctoral board, supervising all PhDs in “artistic research in music”.

In 2020, he completed his PhD at the University of Paris VIII with the dissertation What is “living” in live electronics performance? An ecosystemic perspective on sound art and music creative practices, in which he explores the question of “liveness” in the performance of live electronic music, particularly in view of the fact that any performance approach today relies on a large set of heterogeneous technological infrastructures. He proposes a “systemic” view of liveness and describes the operational details of his own artistic research endeavors. Finally, moving from the “living” character of electronic performance to the “lived” experience of sound, he poses the question of an “ecosystem consciousness” in the cognitive process of listening, particularly as it relates to compositional and sound art practices based on a strict economy of means.

Kyma and the space of computable sound @ ADC24

“…each instrument, each tool… implies an imaginable and explorable universe” — Jacques Attali

To Symbolic Sound co-founder Carla Scaletti, every tool — from a user interface to a programming language to an LLM — is a “map” to some underlying functionality. How do the design choices we make affect what people can imagine creating with those maps? How can we begin to navigate an abstract, completely unknown (and potentially infinite) space — like the space of all computable sound? How (and why) is the Kyma Sound graph radically different from other “visual” programming languages for sound and music?

Find out by watching her keynote address, presented at the 2024 Audio Developer Conference (ADC 24).

Violins abducted by aliens

They come in peace!

Anssi Laiho’s Teknofobia Ensemble is a live-electronics piece that combines installation and concert forms: an installation, because its sound is generated by machines; a concert piece, because it has a time-dependent structure and musical directions for performers. The premiere was 13 November 2024 at Valvesali in Oulu, Finland.

Laiho views technophobia, the fear of new technological advancements, as a subcategory of xenophobia, the fear of the unknown or of outsiders. His goal was to present both of these phobias in an absurd setting.

The composer writes that “the basic concept of technophobia — that ‘machines will replace us and make us irrelevant’— is particularly relevant today, as programs using artificial intelligence are becoming mainstream and are widely used across many industries.”

Teknofobia Ensemble poses the question: What if there were a planet inhabited by a mechanical species, and these machines came to Earth and tried to communicate with us via music? What would the music sound like, and would they first try to learn and imitate our culture in order to communicate with us?

Laiho’s aim was to reproduce the live-electronics environment he would normally work in, but to replace the human musicians with robots — not androids or simulants but “mechanical musicians”.

He asked himself, “What would it mean for my music and creative process if this basic assumption were to become true? As a composer living in the 2020s, do I still need musicians to perform my compositions? Wouldn’t it be easier to work with machines that always fulfill my requests? Can a mechanical musician interpret a musical piece on an emotional level, as a human being does, or does it simply apply virtuosity to the technical execution of the task?”

He then set out to prove himself wrong!

Teknofobia Ensemble consists of five prepared violins, each equipped with a Raspberry Pi that controls various types of electronic motors (solenoids, DC motors, stepper motors, and servos) through a Python program. This program converts OSC commands received from Kyma into PWM signals on the Raspberry Pi pins, which are connected to motor drivers.
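The OSC-to-PWM translation step can be sketched in a few lines of Python. This is a hedged illustration only: the OSC addresses, value ranges, and pin assignments below are hypothetical, since Laiho’s actual address scheme and motor-driver wiring are not published. The hardware call is shown as a comment so the mapping logic stands on its own:

```python
# Minimal sketch of an OSC-to-PWM bridge in the spirit of Teknofobia Ensemble.
# Assumptions (not from the piece itself): OSC addresses such as
# "/violin/solenoid", values normalized to 0.0-1.0, one GPIO pin per motor.

def osc_to_duty(value, min_duty=0.0, max_duty=100.0):
    """Clamp a normalized OSC value and scale it to a PWM duty cycle (%)."""
    v = max(0.0, min(1.0, value))
    return min_duty + v * (max_duty - min_duty)

# Hypothetical routing table: OSC address -> Raspberry Pi GPIO pin.
PIN_FOR_ADDRESS = {
    "/violin/solenoid": 18,
    "/violin/bow_motor": 19,
}

def handle_osc(address, value):
    """Translate one incoming OSC message into a PWM update."""
    pin = PIN_FOR_ADDRESS.get(address)
    if pin is None:
        return None  # ignore unknown addresses
    duty = osc_to_duty(value)
    # On the Pi this would drive the motor driver, e.g. with RPi.GPIO:
    #   pwm_channels[pin].ChangeDutyCycle(duty)
    return (pin, duty)
```

In a real setup, a small OSC server (for example, one built with the python-osc package) would receive messages from Kyma over the network and dispatch each one to a handler like `handle_osc`.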

In live performances, Kyma acts as the conductor for the ensemble, while Laiho views his role as primarily that of a “mixer for the band”.

The piece is structured as a 26-minute-long Kyma timeline, consisting of OSC instructions (the musical notation of the piece) for the mechanical violins. The live sound produced by the violins is routed back to Kyma via custom-made contact microphones for live electronic processing.

Toe and Shell (IRL)

Remember the lockdowns? Throughout those bleak days when musicians couldn’t perform in public, a small group of Kyma artists managed to find a way to continue meeting and making music together via Zoom in two virtual concerts they called Toe and Shell. In early November 2024 they decided to meet in person at Splendor, a former bathhouse that Anne La Berge has converted into a musical mecca in the center of Amsterdam, The Netherlands.

Setting up for a multi-system jam session

The first order of business was for the participants to collectively agree on a schedule of presentations, discussions, and public concerts. Then, over the course of the three-day meetup, everyone had a chance to ask questions, experiment, share expertise, and improvise together.

Pete Johnston, Anne La Berge, Charlie Norton sharing expertise

As part of the Toe and Shell meetup, Anne La Berge hosted three public concerts at Splendor.

Alan Jackson performing with Anne La Berge
Steve Ricks improvising with trombone and Kyma

Alan Jackson conducting the collective schedule-making session


Emergent life, mind, and music

At the IRCAM Forum Workshops @ Seoul, 6–8 November 2024, composer Steve Everett presented a talk on the compositional processes he used to create FIRST LIFE: a 75-minute mixed-media performance for string quartet, live audio and motion-capture video, and audience participation.

FIRST LIFE is based on work that Everett carried out at the Center for Chemical Evolution, an NSF/NASA-funded project spanning multiple universities that examined how the building blocks of life could have formed in early Earth environments. He worked with stochastic data generated by Georgia Tech biochemical engineer Martha Grover and mapped them to standard compositional structures (not as a scientific sonification, but to help educate the public about the work of the center through a musical performance).

Data from IRCAM software and PyMOL were mapped to parameters of physical models of instrumental sounds in Kyma. For example, up to ten data streams generated by the formation of monomers and polymers in Grover’s lab were used to control parameters of the “Somewhat stringish” model in Kyma (such as delay rate, BowRate, position, decay, etc). Everett presented a poster about this work at the 2013 NIME Conference in Seoul, and has uploaded some videos from the premiere of First Life at Emory University.
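A data-to-parameter mapping of this kind usually comes down to rescaling each incoming stream into the usable range of one synthesis parameter. The sketch below is illustrative only: the parameter names echo the “Somewhat stringish” controls mentioned above, but the ranges and the linear mapping are assumptions, not Everett’s actual mapping functions.

```python
def rescale(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly map x from [in_lo, in_hi] to [out_lo, out_hi], clamping."""
    if in_hi == in_lo:
        return out_lo
    t = (x - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

# Hypothetical target ranges for string-model parameters (illustrative only).
PARAM_RANGES = {
    "BowRate": (0.1, 10.0),   # bow strokes per second
    "Position": (0.0, 1.0),   # bowing position along the string
    "Decay": (0.05, 4.0),     # decay time in seconds
}

def map_stream(name, samples, in_lo, in_hi):
    """Map one stochastic data stream onto one synthesis parameter."""
    out_lo, out_hi = PARAM_RANGES[name]
    return [rescale(s, in_lo, in_hi, out_lo, out_hi) for s in samples]
```

With up to ten simultaneous streams, each stream would get its own call to a function like `map_stream`, one per controlled parameter.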

Currently on the music composition faculty of the City University of New York (CUNY), Professor Everett is teaching a doctoral seminar on timbre in the spring (2025) semester and next fall he will co-teach a course on music and the brain with Patrizia Casaccia, director of the Neuroscience Initiative at the CUNY Advanced Science Research Center.

Anne La Berge at the Cortona Sessions

A highlight of this year’s Cortona Sessions for New Music will be special guest artist Anne La Berge. Known for her work blending composed and improvised music, sound art, and storytelling, Anne will work closely with composers and coach performers on improvisation with live Kyma electronics!

Anne La Berge at KISS2017

The Cortona Sessions for New Music is scheduled for 20 July – 1 August 2025 in Ede, Netherlands, and includes twelve days of intensive exploration of contemporary music, collaboration, and discussions on what it takes to make a career as a 21st-century musician.

Anne is eager to work with instrumentalists and composers looking to expand their solo or ensemble performances through live electronics, so if you or someone you know is interested in working with Anne this summer, consider applying for the 2025 Cortona Sessions!

Applications are open now (Deadline: 1 February 2025). You can apply as a Composer, a Performer, or as a Groupie (auditor). A full-tuition audio/visual fellowship is available for applicants who can provide audio/visual documentation services and/or other technological support.

Generative sound design at University of West London

At the invitation of UWL Lecturer Charlie Norton, Carla Scaletti presented a lecture/demonstration on Generative Sound Design in Kyma for students, faculty and guests at University of West London on 14 November 2024. As an unanticipated prelude, Pete Townshend (who, along with Joseph Townshend, works extensively with Kyma) welcomed the Symbolic Sound co-founders to his alma mater and invited attendees to tour the Townshend Studio following the lecture.

After the seminar, graduating MA students Vinayak Arora and Sabin Pavel (hat) posed with Kurt Hebel & Carla Scaletti (center) and Charlie Norton (distant upper right background)
UWL Professor of Music Production Justin Paterson and Trombonist/Composer/Kyma Sound Installation artist Robert Jarvis discuss the extensive collection of instruments in the Townshend Studio

It seems that anywhere you look in the Townshend Studio, you see another rock legend. John Paul Jones (whose most recent live Kyma collaborations include Sons of Chipotle, Minibus Pimps, and Supersilent, among others) recognized an old friend from across the room: a Yamaha GX-1 (1975), otherwise known as ‘The Dream Machine’ — the same model JPJ played when touring with Led Zeppelin and when recording the 1979 album “In Through The Out Door”. The GX-1 was Yamaha’s first foray into synthesizers; only 10 were ever manufactured, and it featured a ribbon controller and a keyboard that could also move laterally for vibrato. Other early adopters included ELP, Stevie Wonder, and ABBA.

JPJ recollects his days of touring with the GX1 and the roadie who took up temporary accommodation in its huge flight case, as Alan, two students, Bruno and Robert look on.
Charlie Norton with Alan Jackson (back), JP Jones (at Yamaha GX-1), Carla Scaletti & Kurt Hebel in the Townshend Studio
Alan Jackson and Pete Johnston pondering the EMS Vocoder
Composer Bruno Liberda (the tall one) with Symbolic Sound co-founder Carla Scaletti


Brainwaves, Hyper-organs, DJ sets & Opposites

Franz Danksagmüller has had a densely packed fall 2024 concert season of hyper-organ plus Kyma performances in London, Köln, Helsinki, Lübeck and Berlin.

Between October performances of Nosferatu with his students from the Royal Academy of Music in London, Franz Danksagmüller traveled to Cologne to perform a concert with Kyma and the hyper-organ at Kunst-Station St. Peter on 18 October 2024. Here are some photos from Cologne, including one taken from inside the organ (placing the mics for live Kyma input can be a risky business!).

Franz Danksagmüller placing mics inside the organ at Kunst-Station St. Peter, Cologne (narrow and treacherous paths…)
Cologne, Kunst-Station St. Peter (which despite its name is a functioning church with a congregation and services): looking down from the balcony to the mobile organ console
Cologne, Kunst-Station St. Peter organ console with Franz Danksagmüller’s glove controller and Muse EEG scanner
Cologne, Kunst-Station St. Peter, the horizontal trumpets…
Cologne, Kunst-Station St. Peter with paper score & iPad running Kyma Control

After the film festival in St Albans, it was on to Helsinki for Danksagmüller, where he performed with Kyma and the new hyper-organ at the Musiikkitalo on 11 November 2024 — the first time that this organ had been played with live electronics and video! Danksagmüller performed his new composition “there is no free will” for EEG scanner and Kyma, as well as a live-electronics “remix” of a prelude by D. Buxtehude.

Helsinki, Musiikkitalo, view from the stage of the organ console
Helsinki, Musiikkitalo, organ console
Helsinki, Musiikkitalo, Franz Danksagmüller featured on a sign outside the concert hall
Helsinki Musiikkitalo; the entire organ is in a swell box with wooden shutters (in addition to the individual swell boxes) for dynamics. The twisted organ pipes are fully functional!
Helsinki organ console with laptop and Kyma Control on iPad

Following the Helsinki performance, Danksagmüller returned to Lübeck to perform an hour-long DJ set with Kyma and a variety of controllers at an after party following a concert at the Kunsttankstelle Lübeck.

Lübeck, Kunsttankstelle, Franz Danksagmüller’s live DJ setup
Kunsttankstelle Lübeck: the first part of the performance (Danksagmüller joined at the end of the first part and played the last part)

And the busy concert season is not over yet! Danksagmüller’s students are performing their own compositions for hyper-organ and Kyma at the Musikhochschule Lübeck on 8 December, and Franz will perform with organ and Kyma in Berlin on Friday the 13th at the Zum Heilsbronnen church as part of their Opposites Attract series. Danksagmüller’s “Light and Shadow” — featuring music by W. Byrd, J. S. Bach and F. Danksagmüller — is framed by other concerts on themes like “Fire and Water”, “Life and Death”, “Beginnings and Endings”, and other evocative opposite-pairings.

Audio Developer Conference 2024

Carla Scaletti, creator of the Kyma language and co-founder of Symbolic Sound, was invited to present the closing keynote for the 2024 Audio Developer Conference in Bristol, UK on 13 November 2024.

Scaletti spoke about strategies for exploring the boundary between the known and the unknown through “recursive construction” and reminded software developers that the risky kinds of thinking required for creating something truly new are best supported by facing the unknown together.

She also touched on some of the design decisions behind the Kyma Sound (why it is a function, rather than a wiring diagram), the ways in which a user-interface defines an explorable universe of computable sound, the influence of programming languages on what you can imagine and create, and the co-evolution of the Kyma hardware and software.

ADC is an annual event celebrating all audio development technologies, from music applications and game audio to audio processing and embedded systems. Its mission is to help attendees acquire and develop new skills and build a network that will support their career development; it also showcases academic research and facilitates collaborations between research and industry. ADC will begin releasing videos of the ADC 2024 presentations on YouTube in 2025.


Kyma developers visit DiGiCo

DiGiCo R&D staff with their flagship console

On Friday afternoon, 15 November 2024, Pete Johnston (software department) and Michael Aitchison (head of R&D) invited Carla Scaletti to present a seminar on sound synthesis for the R&D team at DiGiCo.

Following the lecture, Pete Johnston (who routinely prototypes and tests new signal processing algorithms in Kyma first before implementing them on the embedded processors in the live consoles) led the guests on a tour of DiGiCo’s testing facility and answered questions about the fully redundant, live fall-back dual consoles and the on-call 24/7 worldwide user support that DiGiCo provides for their live pro consoles.

Matt, Carla, Alan, Pete, and Robin discussing Capytalk expressions after the lecture at DiGiCo