Sounding the ocean

A graph depicting changes in ocean bottom pressure over one month, recorded by sensors on the Juan de Fuca plate in the NE Pacific Ocean. Daily fluctuations of the tidal cycle run across the entire record, interrupted midway by a large shift in bottom pressure on April 24, 2015. The pressure shifted as a result of the volcanic eruption, an indication that the seafloor dropped.

In June 2024, Jon Bellona presented a paper at the 2024 International Conference on Auditory Display in Troy, NY on behalf of the NSF-funded Accessible Oceans project team. The paper, “Iterative Design of Auditory Displays Involving Data Sonifications and Authentic Ocean Data,” outlines the team’s auditory display framework and their use of Kyma for all of their data sonification. The conference program, with links to all papers, is available online at https://icad2024.icad.org/program/
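
As a rough illustration of what a parameter-mapping sonification of a record like this could look like, here is a minimal Python sketch that maps a synthetic bottom-pressure series (a daily tidal cycle plus an abrupt drop) onto pitch and writes the result to a WAV file. The data, the pressure-to-pitch mapping, and the time compression are all invented for illustration; this is not the Accessible Oceans team’s Kyma implementation, which is described in the paper.

```python
# Hypothetical parameter-mapping sonification sketch (not the Accessible Oceans
# team's Kyma design): map a month of synthetic "bottom pressure" samples to pitch.
import numpy as np
import wave

SR = 44100               # audio sample rate (Hz)
DURATION = 20.0          # compress one month of data into 20 seconds of audio

# --- Synthetic stand-in for the bottom-pressure record -----------------------
# Daily tidal oscillation across the whole month, plus an abrupt drop
# (the April 24, 2015 eruption signature) halfway through.
days = np.linspace(0, 30, 3000)
pressure = np.sin(2 * np.pi * days)            # ~daily tidal cycle (arbitrary units)
pressure[len(days) // 2:] -= 2.5               # sudden seafloor drop

# --- Map pressure to pitch ----------------------------------------------------
# Normalize to 0..1, then map onto a 220-880 Hz range.
p = (pressure - pressure.min()) / (pressure.max() - pressure.min())
freqs = 220.0 + p * (880.0 - 220.0)

# Resample the frequency contour to audio rate and integrate phase.
t = np.linspace(0, DURATION, int(SR * DURATION))
freq_audio = np.interp(t, np.linspace(0, DURATION, len(freqs)), freqs)
phase = 2 * np.pi * np.cumsum(freq_audio) / SR
audio = 0.3 * np.sin(phase)

# --- Write a 16-bit mono WAV file --------------------------------------------
with wave.open("bottom_pressure_sonification.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(SR)
    wf.writeframes((audio * 32767).astype(np.int16).tobytes())
```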

During the presentation, Jon played the full sonification of the 2015 Axial Seamount eruption. When an audience member asked about his use of earcon wrappers around each sonification, Jon shared a story about how his interviews with teachers at Perkins School for the Blind led him to include this feature.

In the coming year, Bellona plans to continue his sonification work with Kyma as part of a new NOAA-funded project, “A Sanctuary in Sound: Increasing Accessibility to Gray’s Reef Data through Auditory Displays,” with Jessica Roberts, director of the Technology-Integrated Learning Environments (TILEs) Lab at Georgia Tech.

The Sounds of Data

In a July 11, 2024 podcast for Physics Magazine, host Julie Gould speaks with scientists who rely on senses other than sight, such as hearing and touch, to interpret data and communicate their research. They use sonification, the transformation of data into sound, to “listen” to hydrogen bonds, interpret gravitational-wave signals, and communicate a wide range of astrophysical data. Sonification also offers tools for visually impaired researchers and for scientific outreach.

Included in the podcast are Martin Gruebele (University of Illinois at Urbana-Champaign) and Carla Scaletti (Symbolic Sound Corporation) describing how they could hear patterns of hydrogen-bond formation during protein folding that had been missed when relying solely on visual representations. (Their research is described in the 28 May 2024 issue of Proceedings of the National Academy of Sciences: “Hydrogen bonding heterogeneity correlates with protein folding transition state passage time as revealed by data sonification”).

Data sonification for scientific exploration & discovery

Data sonification, often used for outreach, education and accessibility, is also an effective tool for scientific exploration and discovery!

Working from the Lindorff-Larsen et al. (Science, 2011) atomic-level molecular dynamics simulation of multiple folding and unfolding events in the WW domain, we heard (and analytically confirmed) correlations between hydrogen bond dynamics and the speed of a protein (un)folding transition.
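
The “analytically confirmed” step can be pictured as a correlation check between some per-transition measure of hydrogen-bond heterogeneity and the transition passage time. The sketch below is purely illustrative: the heterogeneity metric (the spread of simulated bond-formation times) and all of the data are invented here and are not the measures or results reported in the PNAS paper.

```python
# Hypothetical correlation check: relate a crude per-transition measure of
# hydrogen-bond heterogeneity to transition passage time. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

n_transitions = 40
# Simulated passage times (arbitrary units) for each (un)folding transition.
passage_time = rng.lognormal(mean=0.0, sigma=0.5, size=n_transitions)

# Simulated formation times of 20 key H-bonds within each transition,
# constructed so that slower transitions have more spread-out bond formation.
heterogeneity = np.empty(n_transitions)
for i, tau in enumerate(passage_time):
    bond_formation_times = rng.normal(loc=0.5 * tau, scale=0.2 * tau, size=20)
    heterogeneity[i] = bond_formation_times.std()   # spread = "heterogeneity"

# Pearson correlation between heterogeneity and passage time.
r = np.corrcoef(heterogeneity, passage_time)[0, 1]
print(f"Pearson r between H-bond heterogeneity and passage time: {r:.2f}")
```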

The results were published this week in The Proceedings of the National Academy of Sciences (PNAS), vol. 121 no. 22, 28 May 2024: “Hydrogen bonding heterogeneity correlates with protein folding transition state passage time as revealed by data sonification”

Congratulations to everyone in the Biophysics Sonification Group:

Carla Scaletti (1), Premila P. Samuel Russell (2), Kurt J. Hebel (1), Meredith M. Rickard (2), Mayank Boob (2), Franz Danksagmüller (9), Stephen A. Taylor (7), Taras V. Pogorelov (2,3,4,5,6), and Martin Gruebele (2,3,5,8)

(1) Symbolic Sound Corporation, Champaign, IL 61820, United States;
(2) Department of Chemistry, University of Illinois Urbana-Champaign, IL 61801, United States;
(3) Center for Biophysics and Quantitative Biology, University of Illinois Urbana-Champaign, IL 61801, United States;
(4) School of Chemical Sciences, University of Illinois Urbana-Champaign, IL 61801, United States;
(5) Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, IL 61801, United States;
(6) National Center for Supercomputer Applications, University of Illinois Urbana-Champaign, IL 61801, United States;
(7) School of Music, University of Illinois Urbana-Champaign, IL 61801, United States;
(8) Department of Physics, University of Illinois Urbana-Champaign, IL 61801, United States;
(9) Musikhochschule Lübeck, 23552 Lübeck, Germany

Kyma creator on the cover of Computer Music Journal


The Spring 2016 issue of Computer Music Journal (Volume 40, Issue 1) includes a transcript of Carla Scaletti's keynote address for the 41st International Computer Music Conference.

In “Looking Back, Looking Forward,” Scaletti uses mythology, evolutionary anthropology, nostalgia research, and a story about the origins of Kyma to illustrate the idea that software is “hardware with cognitive fluidity”.


The “Listening Back, Listening Forward” issue marks the beginning of Computer Music Journal's 40th year of publication, and, citing recent research showing that nostalgia enhances creativity, CMJ editor Douglas Keislar invites readers to share their own computer music stories for possible publication as letters to the editor throughout this anniversary year.

Also in this issue: Silvia Matheus reviews The Seventh KYMA International Sound Symposium (KISS2015)!

Effects of pitch-shifting on stuttering


Torrey Loucks, professor of Speech and Hearing Science and researcher with the Beckman Institute’s Cognitive Neuroscience group and the Illinois International Stuttering Research Program

When you listen over headphones to your own voice through a pitch-shifter, do you find yourself mimicking the pitch shift? Or do you automatically counter the pitch shift by shifting in the opposite direction?  According to speech and hearing science professor Torrey Loucks, most people compensate by shifting in the opposite direction to correct the deviation. But there is another group who follow the pitch changes.  We don’t yet understand why these individual differences in auditory vocal responses occur.

In a recently published study, Professor Loucks and his graduate students HeeCheong Chon and Woojae Han, in the Department of Speech and Hearing Science at the University of Illinois, used Kyma for real-time pitch shifting in an experiment aimed at better understanding the neural mechanisms underlying stuttering.

In this study, a group of adults who stutter and a group of typically fluent adults were asked to produce the same vowel sound while monitoring their own voices through headphones. Kyma was used to shift the pitch of each participant's voice up or down by as much as 200 cents (2 half-steps) for a duration of 500 ms.
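
For a sense of the numbers involved, the sketch below converts the 200-cent perturbation into a frequency ratio (2^(cents/1200), about 1.122) and applies it to a synthetic sustained tone for a 500 ms window. This is only an offline illustration of the perturbation parameters, not the real-time Kyma feedback pipeline used in the study; the fundamental frequency and the timing of the window are assumed values.

```python
# Offline illustration of the perturbation parameters (200 cents for 500 ms)
# applied to a synthetic sustained tone. Not the study's real-time Kyma setup.
import numpy as np

SR = 44100
F0 = 150.0                           # assumed fundamental of a sustained vowel (Hz)
CENTS = 200                          # perturbation size: 2 half-steps
SHIFT_RATIO = 2 ** (CENTS / 1200)    # 200 cents -> frequency ratio of about 1.122

duration = 3.0                       # seconds of phonation
t = np.arange(int(SR * duration)) / SR

# Instantaneous frequency: F0 everywhere, shifted up by 200 cents for the
# 500 ms window starting at t = 1.0 s.
freq = np.full_like(t, F0)
window = (t >= 1.0) & (t < 1.5)
freq[window] *= SHIFT_RATIO

# Integrate frequency to phase and synthesize the tone.
phase = 2 * np.pi * np.cumsum(freq) / SR
tone = 0.3 * np.sin(phase)

print(f"Shift ratio for {CENTS} cents: {SHIFT_RATIO:.4f}")
```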

In most cases, the participants adapted by lowering or raising the pitch of their voices to counter (rather than mirror) the pitch shift imposed by Kyma. In the stuttering participants, the adaptive response to the pitch shift was significantly delayed compared with the responses of non-stutterers. The stuttering participants also tended to produce smaller-magnitude pitch-shift responses.

In their report, “Audiovocal integration in adults who stutter,” recently published in the International Journal of Language & Communication Disorders, the authors analyze and discuss some of the implications of this result.

One theoretical prediction is that persons who stutter rely more strongly on auditory feedback to produce speech, whereas typically fluent speakers use a more robust internal predictive model of the expected result and are less reliant on audio feedback. On the other hand, there are also studies suggesting that persons who stutter tend to have slower auditory reaction times and are less adept at pitch tracking than typically fluent speakers. Loucks et al. conclude that, although their results “do not negate arguments that adults who stutter are more dependent on feedback, their dependence is not expressed through a more reactive pitch-shift response”.

In an article describing his research with the Beckman Institute’s Cognitive Science group, Professor Loucks explains some of the wide-ranging implications of research on stuttering:

Stuttering is very interesting because it appears to occur at the junction point between formulating what you want to say and actually being able to express it.

He also corrects some outdated preconceptions on the phenomenon of stuttering:

There are no predisposing events that make a person stutter that could be prevented either by being a better parent or being a different sort of child. No one is to blame for the occurrence of stuttering because it is a biological disorder.

Nonetheless, the stigma of stuttering can be reduced considerably by realizing that we need to accept communication disorders as occurring in the population. There’s nothing negative about having a communication disorder and people should know that stuttering does not affect a person’s ability to learn, to succeed academically, all of those things. It has nothing to do with intelligence and should not be in any way a barrier to a person realizing their full potential. It is a neurobiological disorder that no one could prevent, but which we can find better ways to treat and possibly cure in the future.

Professor Loucks on the University of Illinois campus