Composer Chi Wang describes her instrumentation in Transparent Affordance as a “data-driven instrument.” Data is derived from touching, tilting, and rotating an iPad along with a “custom-made box” of her own design. Near the end of the piece, Wang uses the box to great effect: she places the iPad inside and taps on the outside. Data is still transmitted to Kyma even without direct interaction, though the sounds are now generated much more sparsely, as though the iPad needs to rest or the performer has decided to contain it. Finally, the top is put on the box as one last flurry of sound escapes.
Scott L. Miller describes his piece Eidolon as “…a phantom film score I thought I heard on a transatlantic flight…” Although the film may not exist to guide us through a narrative, the soundtrack does, and it takes the place of visual storytelling. Eidolon, scored for flute, violin, clarinet, and percussion with Kyma, blurs the line between what the performers are creating and what the electronic sounds are, reflecting an aspect of how we hear the world: we can alternately isolate and focus on sounds to identify them, or widen our ears to absorb a sonic texture or a soundscape. The electronic sounds were synthesized in Kyma using additive synthesis, mostly partials 17–32 of sub-audio fundamentals.
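The piece’s actual Kyma implementation isn’t shown here, but the additive idea — summing high-numbered partials of a sub-audio fundamental — can be sketched in a few lines of NumPy. The 8 Hz fundamental below is an illustrative assumption, not a value from the score:

```python
import numpy as np

def additive_partials(fundamental_hz, partials, duration_s, sample_rate=44100):
    """Sum equal-amplitude sinusoidal partials of a common fundamental.

    With a sub-audio fundamental (below ~20 Hz) the fundamental itself is
    inaudible, but partials 17-32 of, say, 8 Hz land at 136-256 Hz,
    forming a dense cluster of closely spaced sines.
    """
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    tone = sum(np.sin(2 * np.pi * fundamental_hz * n * t) for n in partials)
    return tone / len(partials)  # normalize so the peak stays within [-1, 1]

# Partials 17-32 of an (illustrative) 8 Hz sub-audio fundamental
tone = additive_partials(8.0, range(17, 33), duration_s=1.0)
```

Because the partials are closely spaced in frequency, they beat against one another, which is part of what makes this kind of spectrum hard to attribute to either the instruments or the electronics.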
During the presentation, Jon played his full sonification of the 2015 Axial Seamount eruption. When an audience member asked about his use of earcon wrappers around each sonification, Jon shared a story about how his interviews with teachers at the Perkins School for the Blind led him to include this feature.
In the July 11, 2024 episode of the Physics Magazine podcast, host Julie Gould speaks with scientists who rely on senses other than sight, such as hearing and touch, to interpret data and communicate their research. They use sonification — the transformation of data into sound — to “listen” to hydrogen bonds, interpret gravitational-wave signals, and communicate a wide range of astrophysical data. Sonification also offers tools for visually impaired researchers and scientific outreach.
Included in the podcast are Martin Gruebele (University of Illinois at Urbana-Champaign) and Carla Scaletti (Symbolic Sound Corporation) describing how they could hear patterns of hydrogen-bond formation during protein folding that had been missed when relying solely on visual representations. (Their research is described in the 28 May 2024 issue of Proceedings of the National Academy of Sciences: “Hydrogen bonding heterogeneity correlates with protein folding transition state passage time as revealed by data sonification”).
Data sonification, often used for outreach, education and accessibility, is also an effective tool for scientific exploration and discovery!
Working from the Lindorff-Larsen et al. (Science, 2011) atomic-level molecular dynamics simulation of multiple folding and unfolding events in the WW domain, we heard (and analytically confirmed) correlations between hydrogen bond dynamics and the speed of a protein (un)folding transition.
(1) Symbolic Sound Corporation, Champaign, IL 61820, United States;
(2) Department of Chemistry, University of Illinois Urbana-Champaign, IL 61801, United States;
(3) Center for Biophysics and Quantitative Biology, University of Illinois Urbana-Champaign, IL 61801, United States;
(4) School of Chemical Sciences, University of Illinois Urbana-Champaign, IL 61801, United States;
(5) Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, IL 61801, United States;
(6) National Center for Supercomputing Applications, University of Illinois Urbana-Champaign, IL 61801, United States;
(7) School of Music, University of Illinois Urbana-Champaign, IL 61801, United States;
(8) Department of Physics, University of Illinois Urbana-Champaign, IL 61801, United States;
(9) Musikhochschule Lübeck, 23552 Lübeck, Germany
Amy Bower (WHOI) and Jon Bellona (UO) invite you to participate in a survey for “Accessible Oceans”, a project focused on designing effective auditory displays to enhance the perception and understanding of ocean data for visitors to museums, aquariums and science centers, including those who are blind or have low vision. Your feedback is crucial in evaluating the effectiveness of their auditory display prototypes.
In this survey, you will listen to a series of auditory displays featuring data sonification, narration, and music. The survey will take approximately 15 minutes to complete. Please note that this is not a test of your abilities; rather, it is an opportunity for you to contribute to the advancement of inclusive learning experiences for everyone.
You can participate in this study if you are 12 years old or older, understand English, and are physically located in the US. To participate, simply click on this survey link. Headphones are recommended for the best listening experience. The survey closes on June 1, 2024.
Sound designers, electronic/computer musicians and researchers are invited to join us in Busan, South Korea, 29 August through 1 September 2019 for the 11th annual Kyma International Sound Symposium (KISS2019) — four days and nights of hands-on workshops, live electronic music performances, and research presentations on the theme: Resonance (공명).
“Resonance”, from the Latin words resonare (re-sound) and resonantia (echo), can be the result of an actual physical reflection, of an electronic feedback loop (as in an analog filter), or even the result of “bouncing” ideas off each other during a collaboration. When we say that an idea “resonates”, it suggests that we may even think of our minds as physical systems that can vibrate in sympathy to familiar concepts or ideas.
At KISS2019, the concept of resonance will be explored through an opening concert dedicated to “ecosystemic” electronics (live performances in which all sounds are derived from the natural resonances of the concert hall interacting with the electronic resonances of speaker-microphone loops); through paper sessions dedicated to modal synthesis and the implementation of virtual analog filters in Kyma; through live music performances based on gravity waves, sympathetic brain waves, the resonances of found objects, and the resonance of the Earth excited by an earthquake; and in a final rooftop concert for massive corrugaphone orchestra processed through Kyma, where the entire audience will get to perform together by swinging resonant tubes around their heads to experience collective resonance.

Sounds of Busan — two hands-on workshops open to all participants — focus on the sounds and datasets of the host city: Busan, South Korea. In part one, participants will take time series data from Busan Metropolitan City (for example, barometric pressure and sea level changes) and map those data into sound in order to answer the question: can we use our ears (as well as our eyes) to help discover patterns in data? In part two, participants will learn how to record, process, and manipulate 3D audio field recordings of Busan for virtual and augmented reality applications.
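The workshops themselves use Kyma, but the core idea of parameter-mapping sonification — linearly rescaling each data point onto an audible range — can be sketched in plain Python. The pressure values below are made up for illustration:

```python
def sonify(series, low_hz=220.0, high_hz=880.0):
    """Parameter-mapping sonification: linearly rescale each data point
    onto a frequency range (low values -> low pitch, high values -> high)."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0  # guard against a constant-valued series
    return [low_hz + (x - lo) / span * (high_hz - low_hz) for x in series]

# e.g. a week of (made-up) barometric pressure readings in hPa
pressures = [1012.5, 1013.1, 1009.8, 1007.2, 1011.0, 1015.4, 1014.9]
frequencies = sonify(pressures)  # one oscillator frequency per reading
```

Each resulting frequency would drive an oscillator for one time step; patterns such as trends, cycles, or outliers in the data then become audible as melodic contours.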
Several live performances also focus on the host city: a piece celebrating the impact of shipping containers on the international economy and on the port city of Busan; a piece inspired by Samul nori, traditional Korean folk music, in which four performers will play a large gong fitted with contact mics to create feedback loops; and a live performance of variations on the Korean folk song: Milyang Arirang, using hidden Markov models.
Hands-on Practice-based Workshops
In addition to a daily program of technical presentations and nightly concerts (https://kiss2019.symbolicsound.com/program-overview/), afternoons at KISS2019 are devoted to palindromic concerts (where composer/performers share technical tips immediately following the performance) and hands-on workshops open to all participants, including:
• Sounds of Busan I: DATA SONIFICATION
What do the past 10 years of meteorological data sound like? In this hands-on session, we will take time series data related to the city of Busan and map the data to sound. Can we hear patterns in data that we might not otherwise detect?
• The Shape Atlas: MATHS FOR CONTROLLING SOUND
How can you control the way sound parameters evolve over time? Participants will work together to compile a dictionary associating control signal shapes with mathematical functions of time for controlling sound parameters.
• Sounds of Busan II: 3D SOUND TECHNIQUES
Starting with a collection of 3D ambisonic recordings from various locations in and around Busan, we will learn how to process, spatialize, and mix them down for interactive binaural presentation in games and VR.
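The Shape Atlas workshop describes compiling a dictionary that associates control signal shapes with mathematical functions of time. A plain-Python analogue of such a dictionary might map shape names to functions of normalized time t in [0, 1]; these particular shapes are illustrative choices, not the workshop’s actual atlas:

```python
import math

# Each entry maps a shape name to a function of normalized time t in [0, 1],
# returning a control value in [0, 1] that could drive any sound parameter.
SHAPES = {
    "linear":      lambda t: t,
    "exponential": lambda t: (math.exp(t) - 1) / (math.e - 1),
    "sine":        lambda t: 0.5 - 0.5 * math.cos(math.pi * t),  # ease in/out
    "triangle":    lambda t: 2 * t if t < 0.5 else 2 * (1 - t),
}

def sample_shape(name, steps=8):
    """Evaluate a named shape at evenly spaced points over one cycle."""
    f = SHAPES[name]
    return [f(i / (steps - 1)) for i in range(steps)]
```

Because every shape shares the same interface (normalized time in, normalized value out), shapes can be swapped, scaled, and composed freely when controlling parameters such as amplitude, filter cutoff, or pitch.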
Networking Opportunities
Participants can engage with presenters and fellow symposiasts during informal discussions after presentations, workshops, and concerts over coffee, tea, lunches and dinners (all included with registration). After the symposium, participants can join their new-found professional contacts and friends on a tour of Busan (as a special benefit for people who register before July 1).
Sponsors and Organizers
Daedong College Department of New Music (http://eng.daedong.ac.kr/main.do)
Dankook University Department of New Music (http://www.dankook.ac.kr/en/web/international)
Symbolic Sound Corporation (https://kyma.symbolicsound.com/)
Busan Metropolitan City (http://english.busan.go.kr/index)
After the 2010 El Mayor-Cucapah magnitude 7.2 earthquake in northern Mexico, seismologist Alejandro González Ortega interviewed Don Chayo, a Cucapah native who witnessed the surface rupture. When Don Chayo drew parallels to the origin stories of the Cucapah people, González began to wonder whether these stories might recount earlier seismic events passed down over the generations.
Over the next several years, González and his colleague, choreographer/physicist Minerva Muñoz, created a performance piece based on 3D seismological data collected by 12 measurement stations during the event. Muñoz enlisted the help of composer Carla Scaletti to map the data to sound using Kyma and artist David Olivares to map the data to video using Unity.
As Muñoz and González conducted further research and interviews with the Cucapah elders, a much more disturbing story began to emerge — that of a displaced people whose livelihood was being cut off and whose very language was being forgotten. What had originally been intended as a science/art collaboration about seismic activity began to morph into a deeper metaphor for displacement, disruption and loss.
The result — Wà Shpá, A journey in bare feet — is a poem in movement, images, sounds and words that explores pilgrimage, displacement, change, the relationship of humans with the environment, transformation and resilience.
The sound and visuals were created from seismological data and satellite geodetic measurements of the El Mayor-Cucapah Mw 7.2 earthquake that occurred on April 4, 2010. Consistent in many details with the cosmogony myths narrated by Don Chayo that had been passed down over generations, El Mayor-Cucapah Mw 7.2 was the most intense earthquake recorded in this region over the last century.
Wà Shpá, A journey in bare feet is an elegy to the ancestors and to the women and men of today; to the people of the river, of the earth, fire and wind. It is a glimpse into a universe in which animals are gods, and meaning is associated with each of the four cardinal directions, colors, the power of nature and of the land.
“Cosmogony of an Event, El Mayor-Cucapah Mw 7.2” is an inter- and transdisciplinary dialog of artistic creation and research, combining the myths of Cucapah cosmogenesis with the scientific studies of the El Mayor-Cucapah Mw 7.2 earthquake, weaving a network of collaboration, tradition, scientific research, knowledge and experiences, and, above all, creating a dialog between the scientists, artists, native community, collaborators and general public who participate in this live performance/ritual.
Credits:
Direction, stage creation and interpretation: Minerva Muñoz *
Production: Alejandro González, Minerva Muñoz / La Machina Productions
Scenic Advisor: Jorge Folgueira
Lighting: Minerva Muñoz
Composition and sound design: Carla Scaletti
Visual Art: David Olivares
Video: Marco Meza, Rommel Vázquez
Aerial Video (drone): Alejandro González
Photography: Alfredo Ruiz and Rommel Vázquez
Science: Javier González-García and Alejandro González
Audio Engineer: Rommel Vázquez
Scenography: Leoncio García
Makeup: Rosario Martínez
Lighting technician: Miguel Tamayo
Communication and networks: Stephanie Lozano
Support: Juan Sánchez
Music 507: Design Patterns for Sound and Music. The art of sound design is vast, but certain patterns seem to come up over and over again. Once you start to recognize those patterns, you can immediately apply them in new situations and start combining them to create your own sound design and live performance environments. We will look at design patterns that show up in audio synthesis and processing, parameter control logic, and live performance environments, all in the context of the Kyma sound design environment.
Registration open starting November 2017 (course starts in January 2018)
Music 507 Spring 2018
Mondays, 7:00 – 8:50 pm
Music Building 4th floor, Experimental Music Studios
University of Illinois Urbana-Champaign
Robert Jarvis’ sound art installation aroundNorth allows listeners to experience the near universe as they have never heard it before. As the Earth spins on its axis, and day becomes night becomes day, our view of the near universe changes as the positions of the stars in the sky shift. One star appears to stay stationary (the North Star), while the rest take about 23 hours, 56 minutes, and 4 seconds (one sidereal day) to complete a full revolution.
‘aroundNorth’ offers listeners an opportunity to hear this phenomenon in real time. As each star crosses one of the equally spaced virtual lines emanating from the celestial North Pole, a corresponding sound is heard that maps the star’s position in the sky, size, distance from Earth, brightness and temperature, creating a mesmerising sound map of the universe as viewed from our turning planet.
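The installation’s actual mapping has not been published in detail, so the following is a purely hypothetical sketch of how star attributes might be mapped to sound parameters in the spirit of the description above; every range and constant here is an illustrative guess:

```python
def star_to_sound(magnitude, temp_kelvin, distance_ly):
    """Hypothetical mapping of star attributes to sound parameters:
    brighter (lower magnitude) -> louder, hotter -> higher pitch,
    nearer -> longer note. All ranges are illustrative assumptions,
    not the mapping used in aroundNorth."""
    amplitude = max(0.05, min(1.0, 1.0 - magnitude / 6.0))  # mag 6 ~ naked-eye limit
    pitch_hz = 110.0 * (temp_kelvin / 5800.0)  # a Sun-like 5800 K star -> 110 Hz
    duration_s = max(0.2, 5.0 / (1.0 + distance_ly / 100.0))
    return amplitude, pitch_hz, duration_s

# Polaris: visual magnitude ~2.0, ~6000 K, ~430 light-years (approximate values)
amp, hz, dur = star_to_sound(2.0, 6000.0, 430.0)
```

Each star crossing a virtual line would then trigger one note with these parameters, so the sky itself becomes the score.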
‘aroundNorth’ humanizes the astronomical, giving us an emotional key to help us relate the unfathomable heavens to our own experiences of time and space. With echoes of a Neolithic monument of ancient myth, the installation introduces us to a universe full of interest, encouraging us to think differently about the cosmos and our place within it.
Jarvis presented his installation on 15 October 2016 in a rather Neolithic setting — the Beaghmore Stone Circles complex — preceded by an installation performance at Antrim Castle Gardens.
For more information, future showings, or to invite Robert Jarvis to create an aroundNorth experience in your city, see the aroundNorth web site.
When PhD candidate Madison Heying discovered that there was a Kyma system at the University of California at Santa Cruz and that Kristin Erickson, Technical Coordinator of the Digital Arts & New Media center, also had a personal Kyma system, they decided to organize the Kyma Klub — an informal group of students and staff members who meet weekly to read through Kyma X Revealed and teach themselves Kyma. The first public performance by club members was AQULAQUTAQU — a sci-fi operetta by Madison Heying & Kristin Erickson (voice & Kyma) with Matthew Galvin (voice & video), David Kant (voice), and Maya Galvin (narrator) — that they premiered at KISS2015 in Bozeman, Montana (home of first contact).
In early February 2016, UCSC faculty composer Larry Polansky invited Kyma creators Carla Scaletti and Kurt Hebel to UC Santa Cruz where Carla presented a graduate colloquium on data sonification and a seminar on sound design in Kyma 7.
Here, Madison and Kristin are presenting some of the generative algorithms they implemented in Kyma for AQULAQUTAQU:
After the seminar, the Kyma Klub invited Kurt and Carla to Kristin’s studio, where David Kant interviewed Kurt and Kristin Erickson interviewed Carla, while Matthew Galvin filmed their responses in front of a green screen for an as-yet-undisclosed proposal the Kyma Klub members have in mind for KISS2016.
Note the special blacklight Kyma Klub T-shirts (with matching event posters) designed and printed by Kristin and Madison for the visit.