Violins abducted by aliens

They come in peace!

Anssi Laiho’s Teknofobia Ensemble is a live-electronics piece that combines installation and concert forms: an installation, because its sound is generated by machines; a concert piece, because it has a time-dependent structure and musical directions for performers. The premiere was 13 November 2024 at Valvesali in Oulu, Finland.

Laiho views technophobia, the fear of new technological advancements, as a subcategory of xenophobia, the fear of the unknown or of outsiders. His goal was to present both of these phobias in an absurd setting.

The composer writes that “the basic concept of technophobia — that ‘machines will replace us and make us irrelevant’ — is particularly relevant today, as programs using artificial intelligence are becoming mainstream and are widely used across many industries.”

Teknofobia Ensemble poses the question: What if there were a planet inhabited by a mechanical species, and these machines came to Earth and tried to communicate with us via music? What would the music sound like, and would they first try to learn and imitate our culture in order to communicate with us?

Laiho’s aim was to reproduce the live-electronics environment he would normally work in, but to replace the human musicians with robots — not androids or simulants but “mechanical musicians”.

He asked himself, “What would it mean for my music and creative process if this basic assumption were to become true? As a composer living in the 2020s, do I still need musicians to perform my compositions? Wouldn’t it be easier to work with machines that always fulfill my requests? Can a mechanical musician interpret a musical piece on an emotional level, as a human being does, or does it simply apply virtuosity to the technical execution of the task?”

He then set out to prove himself wrong!

Teknofobia Ensemble consists of five prepared violins, each equipped with a Raspberry Pi that controls various types of electronic motors (solenoids, DC motors, stepper motors, and servos) through a Python program. This program converts OSC commands received from Kyma into PWM signals on the Raspberry Pi pins, which are connected to motor drivers.
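Laiho’s actual program isn’t published in the article, but the core idea — receiving an OSC message and converting it into a PWM duty cycle for a motor-driver pin — can be sketched in a few lines of Python. The address scheme (`/motor/1/speed`) and pin mapping below are hypothetical, and the hardware call is stubbed out so the sketch stays hardware-independent:

```python
# Minimal sketch: convert an incoming OSC value (0.0-1.0) into a PWM
# duty-cycle percentage for a motor-driver pin. The OSC addresses and
# pin numbers are hypothetical; on a real Raspberry Pi, set_duty()
# would call a GPIO/PWM library instead of returning a tuple.

MOTOR_PINS = {"/motor/1/speed": 18, "/motor/2/speed": 19}  # hypothetical map

def osc_to_duty(value: float) -> float:
    """Clamp a 0.0-1.0 OSC value and scale it to a 0-100% PWM duty cycle."""
    return max(0.0, min(1.0, value)) * 100.0

def set_duty(pin: int, duty: float):
    """Stand-in for a hardware PWM call on the given pin."""
    return (pin, duty)

def handle_osc(address: str, value: float):
    """Dispatch one OSC message to the pin it addresses."""
    pin = MOTOR_PINS.get(address)
    if pin is None:
        return None          # ignore addresses we don't control
    return set_duty(pin, osc_to_duty(value))
```

In a real setup, a library such as python-osc would register `handle_osc` as the callback for incoming UDP packets from Kyma.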

In live performances, Kyma acts as the conductor for the ensemble, while Laiho views his role as primarily that of a “mixer for the band”.

The piece is structured as a 26-minute-long Kyma timeline, consisting of OSC instructions (the musical notation of the piece) for the mechanical violins. The live sound produced by the violins is routed back to Kyma via custom-made contact microphones for live electronic processing.

Emergent life, mind, and music

At the IRCAM Forum Workshops @Seoul 6-8 November 2024, composer Steve Everett presented a talk on the compositional processes he used to create FIRST LIFE: a 75-minute mixed media performance for string quartet, live audio and motion capture video, and audience participation.

FIRST LIFE is based on work that Everett carried out at the Center for Chemical Evolution, an NSF/NASA-funded project spanning multiple universities that examined how the building blocks of life could have formed in early-Earth environments. He worked with stochastic data generated by Georgia Tech biochemical engineer Martha Grover and mapped them onto standard compositional structures (not as a scientific sonification, but to help educate the public about the center’s work through a musical performance).

Data from IRCAM software and PyMOL were mapped to parameters of physical models of instrumental sounds in Kyma. For example, up to ten data streams generated by the formation of monomers and polymers in Grover’s lab were used to control parameters of the “Somewhat stringish” model in Kyma (such as delay rate, BowRate, position, decay, etc.). Everett presented a poster about this work at the 2013 NIME Conference in Seoul, and has uploaded some videos from the premiere of FIRST LIFE at Emory University.

Currently on the music composition faculty of the City University of New York (CUNY), Professor Everett is teaching a doctoral seminar on timbre in the spring (2025) semester and next fall he will co-teach a course on music and the brain with Patrizia Casaccia, director of the Neuroscience Initiative at the CUNY Advanced Science Research Center.

Anne La Berge at the Cortona Sessions

A highlight of this year’s Cortona Sessions for New Music will be Special Guest Artist, Anne La Berge. Known for her work blending composed and improvised music, sound art, and storytelling, Anne will be working closely with composers and will be coaching performers on improvisation with live Kyma electronics!

Anne La Berge at KISS2017

The Cortona Sessions for New Music is scheduled for 20 July – 1 August 2025 in Ede, the Netherlands, and includes twelve days of intensive exploration of contemporary music, collaboration, and discussions on what it takes to make a career as a 21st-century musician.

Anne is eager to work with instrumentalists and composers looking to expand their solo or ensemble performances through live electronics, so if you or someone you know is interested in working with Anne this summer, consider applying for the 2025 Cortona Sessions!

Applications are open now (Deadline: 1 February 2025). You can apply as a Composer, a Performer, or as a Groupie (auditor). A full-tuition audio/visual fellowship is available for applicants who can provide audio/visual documentation services and/or other technological support.

Percussion Robot

Ping Pong Percussion represents a new, hybrid art form created by experimental composer/visual artist Giuseppe Tamborrino and featuring a robotic instrument that he designed and built.

Part robot, part sound sculpture, part musical composition, part video art — Ping Pong Percussion (Experimental Sampling with Wi-Fi Servo Motor & Live Granular Synthesis, Laterza (TA), 6 October 2024) blends robotically actuated acoustic percussion sounds with live Kyma granular synthesis and leads the viewer on a path from the real world to imaginary visuals.

Giuseppe Tamborrino’s Wi-Fi servo motor-controlled sound sculpture

In his compositions, Laterza-based composer Giuseppe Tamborrino combines jazz scales, Greek modes, and personalized scales with partially tonal, tonal, and non-tonal timbres, blended with instrumental acoustic effects. Each work takes a different form: some are stochastically shaped, others reflect the golden section, and others take on algorithmic structures experimentally generated by custom computer-music software.

Generative events

Explorations in the generative control of notes and rhythms, scales and modes — from human to algorithmic gesture

Composer Giuseppe Tamborrino uses computer tools not just for timbre, but also for the production of events in deferred time or in real time. This idea forms the basis for generative music, examples of which can be found throughout music history, such as Mozart’s Musical Dice Game.
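Mozart’s Musical Dice Game assembles a minuet by rolling two dice to pick each measure from a table of precomposed options, and its core mechanism can be sketched in a few lines of Python. The measure table below is a placeholder, not Mozart’s actual table:

```python
import random

# Sketch of a musical dice game: each of 8 measure slots has a column of
# precomposed choices, and a two-dice roll (2-12) picks one choice per slot.
# The table contents are placeholders, not Mozart's actual measure numbers.
MEASURE_TABLE = [[f"m{slot}.{roll}" for roll in range(2, 13)] for slot in range(8)]

def roll_minuet(rng: random.Random) -> list:
    """Select one measure per slot using a roll of two six-sided dice."""
    piece = []
    for column in MEASURE_TABLE:
        roll = rng.randint(1, 6) + rng.randint(1, 6)  # two dice: 2..12
        piece.append(column[roll - 2])                # rows indexed by roll
    return piece
```

Every run of `roll_minuet` yields a different but always well-formed sequence of eight measures, which is precisely the sense in which such a score is generative.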

In his research, Tamborrino carries out this process in various ways and with different software, but the goal is always the same: the generation of instructions for instruments — which he calls an “Electronic score”.

Here’s an example from one of his generative scores:

 

As part of the process, Tamborrino has always designed in a certain degree of variability, using various stochastic or entirely random objectives to speed up the process of abstraction and improvisation, processes that, once launched, run unchanged. Often, though, this way of working produced small sections that he wanted to cut, correct, or improve.

This motivated him to use Kyma to pursue a new research direction — called post-controlled generative events — with the aim of being able to correct and manage micro events.

This is a three-step process:

  • A generative setting phase (pre-time)
  • A performance phase, recording all values in the Timeline (real time)
  • A post-editing phase of the automatic generative events (after time)
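Kyma’s Timeline and Capytalk aren’t shown in the article, but the three phases can be sketched abstractly in Python: configure a generator, record every generated value as a timestamped event, then edit individual micro-events afterward. The event format here is invented for illustration:

```python
import random

# Sketch of "post-controlled generative events": (1) configure a generator,
# (2) record every generated value as a timestamped event, and (3) edit a
# single micro-event afterward without regenerating the rest.
# The {"time", "pitch"} event format is invented for illustration.

def generate_events(n: int, rng: random.Random) -> list:
    """Phases 1-2: generate n stochastic events, recorded with onset times."""
    return [{"time": i * 0.25, "pitch": rng.randint(48, 72)} for i in range(n)]

def edit_event(events: list, index: int, **changes) -> list:
    """Phase 3: correct one recorded micro-event, leaving the recording intact."""
    edited = [dict(e) for e in events]   # copy so the original stays unchanged
    edited[index].update(changes)
    return edited
```

The point of the third phase is exactly what the copy-then-update pattern shows: the stochastic run is preserved as data, so a single unwanted micro-event can be corrected without relaunching the whole generative process.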

Tamborrino shared some of the results of his research on the Kyma Discord, and he invites others to experiment with his approach and to engage in an online discussion of these ideas.

Vision of things unseen

Motion Parallels, a collaboration between composer James Rouvelle and visual artist Lili Maya, was developed earlier this year in connection with a New York Arts Program artist’s residency in NYC.

For the live performance, Rouvelle compressed and processed the output of a Buchla Easel through a Kyma Timeline; on a second layer, the Easel was sampled, granulated, and spectralized, then played via a MIDI keyboard. The third layer of Sounds was entirely synthesized in Kyma and played via Kyma Control on the iPad.

For their performative works, Rouvelle and Maya develop video and audio structures that they improvise through. Both the imagery and the sound incorporate generative/interactive systems, so the generative elements can respond to the performers, and vice versa.

Rouvelle adds, “I work with Kyma every day, and I love it!”

Come up to the Lab

Anssi Laiho is the sound designer and performer for Laboratorio — a concept developed by choreographer Milla Virtanen and video artist Leevi Lehtinen as a collection of “experiments” that can be viewed either as modules of the same piece or as independent pieces of art, each with its own theme. The first performance of Laboratorio took place in November 2021 in Kuopio, Finland.

Laboratorio Module 24, featuring Anssi performing on a musical saw with live Kyma processing, was performed in the Armastuse hall at Aparaaditehas in Tartu, Estonia, and is dedicated to the theme of identity and inspiration.

Anssi’s hardware setup, both in the studio and live on stage, consists of a Paca connected to a Metric Halo MIO2882 interface via bidirectional ADAT in a 4U mobile rack. Laiho has used this system for 10 years and finds it intuitive, because Metric Halo’s MIOconsole mixer interface gives him the opportunity to route audio between Kyma, the analog domain, and the computer in every imaginable way. When creating content as a sound designer, he often tries things out in Kyma in real-time by opening a Kyma Sound with audio input and listening to it on the spot. If it sounds good, he can route it back to his computer via MIOconsole and record it for later use.

His live setup for Laboratorio Module 24 is based on the same system setup. The aim of the hardware setup was to have as small a physical footprint as possible, because he was sharing the stage with two dancers. On stage, he had a fader-controller for the MIOconsole (to control feedback from microphones), an iPad running Kyma Control displaying performance instructions, a custom-made Raspberry Pi Wi-Fi footswitch sending OSC messages to Kyma, and a musical saw.
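A footswitch like the one described sends standard OSC packets: a 4-byte-padded address string, a type-tag string, then big-endian arguments. A minimal pure-Python encoder for such a message might look like this (the address `/fs/1` is a guess for illustration, not Laiho’s actual namespace):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, value: int) -> bytes:
    """Encode an OSC message carrying one int32 argument."""
    return (osc_pad(address.encode("ascii"))
            + osc_pad(b",i")                 # type-tag string: one int32
            + struct.pack(">i", value))      # big-endian argument
```

On the Raspberry Pi, a GPIO pin change would trigger something like `socket.sendto(osc_message("/fs/1", 1), (kyma_host, port))` over Wi-Fi; in practice a library such as python-osc handles this encoding.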

Kyma Control showing “Kiitos paljon” (“Thank you” in Finnish), Raspberry Pi foot switch electronics, rosin for the bow & foot switch controller

The instrument used in the performance is a Finnish Pikaterä Speliplari musical saw (speliplari means ‘play blade’), designed by the Finnish musician Aarto Viljamaa. The plaintive sound of the saw is routed to Kyma through two microphones, whose signals are processed by a Kyma Timeline. A custom-made piezo contact microphone and preamp is used to create percussive and noise elements for the piece, and a small-diaphragm shotgun microphone is employed for the softer harmonic material.

Anssi works with live electronics by recording single notes or note patterns with multiple Kyma MemoryWriter Sounds. These recordings are then resampled in real time or kept for later use in a Kyma Timeline. He likes to think of this as a way of reintroducing a motif of the piece, as is done in classical composition. It also breaks the inherent tendency of looping samplers to pile up layers, which, in Anssi’s opinion, often becomes a burden for the listener at some point.

The Kyma Sounds used in the performance Timeline focus on capturing and resampling the sound of the saw and on controlling the parameters of these Sounds live, via timeline automation, presets, or algorithmic changes programmed in Capytalk.

Laiho’s starting point for the design was to create random harmonies and arpeggiations that could then be used as accompaniment for an improvised melody. For this, he used the Live Looper from the Kyma Sound Library and added a Capytalk expression to its Rate parameter that selects a new frequency from a predefined set of frequencies (intervals relative to a predefined starting note) to create modal harmony. He also created a quadraphonic version of the Looper and controlled the Angle parameter of each loop with a controlled-random Capytalk expression that makes each individual note travel around the space.
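The Capytalk expression itself isn’t quoted in the article, but the underlying idea, picking a loop playback rate from a fixed set of intervals relative to a starting note, can be sketched in Python. The interval set below is an arbitrary modal choice for illustration, not Laiho’s actual selection:

```python
import random

# Sketch: choose a loop playback rate from a predefined set of intervals
# (in semitones above the starting note), as the Capytalk expression on the
# Looper's Rate parameter does. Raising a loop's rate by a factor of
# 2**(n/12) transposes it up by n semitones. The interval set is an
# arbitrary modal choice for illustration.
INTERVALS = [0, 2, 3, 5, 7, 10, 12]  # a Dorian-flavored pitch set

def next_rate(rng: random.Random) -> float:
    """Return a playback-rate multiplier for a randomly chosen interval."""
    return 2 ** (rng.choice(INTERVALS) / 12)
```

Because every rate lands on a member of the same pitch set, the layered loops stay inside one mode, which is how random selection can still yield coherent modal harmony.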

Another Sound used in the performance is one he created long ago, named Retrosampler. This Sound captures only a very short sample of live sound and creates four replicated loops, each less than one second long. Each replicated sample has its own parameters, which he controls with presets. Together with the sine-wave quality of the saw, this creates a result resembling a beeping sine-wave analog synthesizer. The Sound is replicated four times, so he can play up to 16 samples if he presses “capture” four times.

The Retrosampler sound is also quadraphonic and its parameters are controlled by presets. His favorite preset is called “Line Busy” which is exactly what it sounds like. [Editor’s note: the question is which busy signal?]

For the noise and percussion parts of the performance, he used a Sound called LiveCyclicGrainSampler, a recreation of an example from Jeffrey Stolet’s Kyma and the SumOfSines Disco Club book. This Sound consists of a live-looping MemoryWriter as a source for granular reverb, plus five samples with individual Angle and Rate parameter settings. These parameters were then controlled with timeline automation to vary the patterns they produce.

Anssi also used his two favorite reverbs in the live processing: the NeverEngine Labs Stereo Verb, and Johannes Regnier’s Dattorro Plate.

Kyma is also an essential part of Laiho’s sound design work in the studio. One of the tracks in the performance, “Experiment 0420”, is his “laboratory experiment” of Kyma processing the sound of an aluminum heat sink from an Intel i5-3570K CPU played with a guitar pick. Another scene of the performance contains a song called “Tesseract Song”, composed of an erratic piano chord progression and synthetic noise looped in Kyma, accompanied by Anssi singing through a Kyma harmonizer.

The sound design for the 50-minute performance consists of 11-12 minutes of live electronics, music composed in the studio, and “Spring” by Antonio Vivaldi. The overall design goal was to create a kaleidoscopic experience where the audience is taken to new places by surprising turns of events.

Acrylic Sounds

Giuseppe Dante Tamborrino asks:

Why is it that a Picasso painting can be widely known and understood by everyone, while sound abstractions are still considered academic and incomprehensible?

Tamborrino’s answer to that question, Acrylic Sounds, was born in January 2024 in the garage of the professor and composer of electro-acoustic music in Laterza, in the province of Taranto (Puglia, Italy).

Between 2019 and 2021 (during the COVID-19 period), Tamborrino created a series of abstract paintings using some of the same algorithms he has been using to generate Csound scores for the past 10 years. His idea was to bring his students closer to the concept of sound abstraction by applying the same principles of abstraction to paintings as to musical scores.

Tamborrino does not call himself a painter and has always argued that painting is painting, sound is sound, and sculpture is sculpture; each has a different role, but sometimes they use the same concepts.

While waiting for his score generation algorithms to compile, Tamborrino engaged in creative outbursts with a sponge and a brush as he drew lines or experimented with random color transformations obtained by sponging, likening it to techniques of sound morphing.

Taking these 60 semi-casual paintings as inspiration, he then realized them as Sounds in Kyma. For each painting, he created a formal pre-design and customized Smalltalk scripts to get closer to the meaning of the picture under analysis.

La Sinusoide by Giuseppe Tamborrino

 
For the sonorization of the painting “La Sinusoide”, he used a Capytalk expression that controls the formants of a filter, and the index of those formants, with the Dice tool of the Kyma Virtual Control Surface (VCS), gradually and quickly generating several layers with the “smooth” Capytalk function.

He also used the Kyma RE Analysis Tool for the generation of a Resonator-Exciter filter, creating transformations of the classic sine wave with the human voice.

He used a real melismatic choir, because the painting represents a talking machine…

 

“Le Radio” by Giuseppe Tamborrino

For “Le Radio”, Tamborrino tried to simulate the search for the right radio station, transforming songs into one another. To do this, he repeatedly layered sounds and music drawn from a family of related songs, using a simple ring modulator to suggest AM radio, and used a Capytalk expression to emulate the gradual spectral transformation of switching stations, combined with random gestures to simulate the classic noise between a station and the music. Finally, he used granular synthesis to create glitchy rhythmic transitions and figurations, and combined the selected abstract material as multiple tracks of a Timeline.

“The Mask of the Seagulls” by Tamborrino

“The Mask of the Seagulls” was inspired by the sight of an elderly man annoyed by his anti-COVID mask while seagulls repeatedly circled around him, calling and crying as if nature were making fun of him.

To express the man’s annoyance, Tamborrino simply recorded his own breathing through a mask.

For the creation of this synesthesia, Tamborrino emulated the behavior of cheerful, playful seagulls with a script that manages the density, frequency, and duration of the grains of granular synthesis; in exponential mode, with decelerations, accelerations, and friction functions for physics-based controls and swarming.
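The exponential grain behavior described above, a stream of grains that accelerates or decelerates like a flock speeding up and slowing down, can be sketched by generating onset times whose spacing shrinks or grows by a constant factor. The parameter values are illustrative only, not Tamborrino’s:

```python
# Sketch of exponential grain scheduling: successive inter-onset gaps are
# multiplied by a constant ratio, so ratio < 1 accelerates the grain stream
# and ratio > 1 decelerates it. Parameter values are illustrative only.

def grain_onsets(n: int, first_gap: float, ratio: float) -> list:
    """Return n grain onset times with exponentially changing gaps."""
    times, t, gap = [], 0.0, first_gap
    for _ in range(n):
        times.append(round(t, 6))
        t += gap
        gap *= ratio     # exponential acceleration or deceleration
    return times
```

Driving grain density and duration with curves like this, rather than with a constant rate, is what gives the granular texture its swarming, flock-like quality.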

All of these layers were then assembled in a multi-track Timeline.

 
Tamborrino plans to publish the work as a book of paintings with QR codes for listening and will exhibit the work as paintings paired with performances on an acousmonium.

Some of Tamborrino’s recent work can be heard at the upcoming New York City Electroacoustic Music Festival, and he has recently released a new album under the Stradivarius Records label.

Sound Artist, Giuseppe Dante Tamborrino, with his “magic wand” (used for both recycled percussion & painting)