Problem Statement: Fluidity, Timbre, and Time


Dear Kyle, Dear Emma — 

Thanks so much for your enthusiastic responses to my request! I’m so happy to share these ideas with you and hopefully find a way to connect & collaborate on this.

Below, I’ll include a couple of musical and visual examples of some of my working materials, what I feel could serve as viable “next steps” for this work, what seems to be missing or beyond my capabilities, etc. I look forward to bouncing some of these ideas around!

Previous Work: Musical Analogies for Naturally-Occurring Fluid Systems


I recently discussed some of these ideas in a virtual presentation & roundtable with other composers at a conference: Spatialization, Orchestration and Perception: IRCAM Forum Workshops ‘Hors Les Murs’ Montréal 2021. My talk focused on two of my recent pieces of music, but I’ve excerpted just a couple of minutes for you, outlining my specific attempt to create swirling, vortex-like musical “objects” (this includes musical examples too, so you can also hear what’s going on) —

Since I wrote this music in 2019, I’ve had some discussion with Leigh Orf at UW-Madison about future projects in which I use some of his data modeling violent storm systems to create 3-dimensional trajectories for sonic “particles,” i.e. short, continuous bits of sound waves, whose movement can be traced in multichannel speaker systems or over headphones in a virtual sound field. As described in the video, there are also these interesting timbral artefacts that result from the fast-moving shapes of these kinds of trajectories, which can be used to create wholly new musical objects and to understand what we might presently hear in natural forms of movement (e.g. wind, rain, etc.).

With the pandemic putting these plans on hold, though, we’re mostly at a standstill.

In the meantime, my attention has turned toward understanding how specific fluid-like motion can be modeled by electronic and acoustic instruments, and therefore translated into a musical language that can be (1) perceived by the listener, and (2) notated in a score for living, breathing musicians (i.e. not to limit findings to the world of infinite possibilities that can be realized only with digital means; in a studio, with synthesizers, etc.).

So, I’ve worked a little bit with mocking up chaotic attractors for musical generation. Here’s a short excerpt of one of these attempts: a combination of different attractors whose output is cast in 3 dimensions. The data is used to generate the white lines below (they’re trajectories for movement), and it also governs the movement of sound “particles” in a virtual space, as well as their pitch and various timbral distortion levels. This isn’t the most interesting musical example, but it’s a start(!) It’s recorded in a binaural sound field, so it sounds best over headphones — 
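To give a concrete flavor of the kind of mapping I mean, here’s a minimal Python sketch — not my actual patch, and using a single Lorenz attractor for simplicity (the pitch range, the pan mapping, and the initial conditions are all placeholder choices). The equations generate a 3D trajectory, and the coordinates are then rescaled into musical parameters:

```python
import numpy as np

def lorenz_trajectory(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with simple Euler steps and
    return an (n_steps, 3) array of positions."""
    xyz = np.empty((n_steps, 3))
    x, y, z = 1.0, 1.0, 1.0  # arbitrary initial condition
    for i in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        xyz[i] = (x, y, z)
    return xyz

def to_musical_params(xyz, pitch_lo=48, pitch_hi=84):
    """Rescale the trajectory into musical parameters: a MIDI pitch
    (from z) and a left-right pan position in [-1, 1] (from x) —
    a crude stereo stand-in for full binaural spatialization."""
    x, z = xyz[:, 0], xyz[:, 2]
    pan = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    pitch = pitch_lo + (pitch_hi - pitch_lo) * (z - z.min()) / (z.max() - z.min())
    return pitch, pan
```

In the actual piece the third coordinate (and several derived quantities) also drive timbral distortion levels; this sketch only shows the basic rescaling idea.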

Current Work: Finding Instrumental Solutions for Fluid/Stream-like Target Sounds


In order to find the best approximations of these sounds by instruments, I’ve taken underwater recordings made with a hydrophone, segmented them into individual “grains” of sound in a stream-like fashion (e.g. one grain or “particle” for each “bubble”-like segment in the original recording), and performed audio descriptor matching to find the closest match given a corpus of audio samples. By “corpus” I just mean a large directory of recordings of individual notes and playing techniques on individual instruments; their combination and superimposition can be used to roughly approximate what an orchestra might sound like. This way, each component in time of the “target” (the underwater recording) is matched with the best possible instrumental “solution.”
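As a rough sketch of the matching step (my real descriptor set is much richer — these two features, loudness and spectral centroid, are only for illustration), each target grain is compared against every corpus grain in a normalized descriptor space:

```python
import numpy as np

def grain_features(grain, sr=44100):
    """A tiny descriptor vector per grain: RMS loudness and
    spectral centroid in Hz. Real systems use many more descriptors."""
    rms = np.sqrt(np.mean(grain ** 2))
    spectrum = np.abs(np.fft.rfft(grain))
    freqs = np.fft.rfftfreq(len(grain), d=1.0 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([rms, centroid])

def match_grains(target_grains, corpus_grains, sr=44100):
    """For each target grain, return the index of the corpus grain
    whose normalized descriptors are closest in Euclidean distance."""
    t = np.array([grain_features(g, sr) for g in target_grains])
    c = np.array([grain_features(g, sr) for g in corpus_grains])
    # normalize each descriptor dimension so RMS and Hz are comparable
    mu, sd = c.mean(axis=0), c.std(axis=0) + 1e-12
    t, c = (t - mu) / sd, (c - mu) / sd
    dists = np.linalg.norm(t[:, None, :] - c[None, :, :], axis=-1)
    return dists.argmin(axis=1)
```

The returned indices say which corpus sample (which note or playing technique) best “solves” each moment of the target.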

I’m currently working on a piece whose instrumentation includes bass flute and tenor saxophone. For these particular sounds, I’ve found that a combination of key click actions in the woodwinds (when a performer simply opens & closes the keys of the instrument with their fingers alone, causing a resonance along its conical bore; i.e. without blowing any air into the mouthpiece), and a series of soft beatings on various small and medium-sized wood blocks (in the percussion), closely approximate the timbral qualities of water in this source recording. These results can then be notated, although what you see below has to be worked over a bit more (like, “cleaned up”!) before I can show this to a performing musician.

First I play back the original water stream (target), broken up into individual “grains,” represented by the small dots (notes) in the uppermost staff. The green playhead of course indicates the traversal of the target sound file from left to right. Then, I mute the target sounds (uppermost “grains” then turn purple), so that you can hear the instrumental approximation directly. Finally, I unmute and play everything together:

This provides a somewhat satisfactory result to my ear at least; some, but not all, of the changes in spectral energy are properly matched to the wood block sounds. But if I take another recording, let’s say, of a more turbulent, eddying, vortex-like underwater motion, and run the same calculation, the results aren’t as convincing:

Sorry for the noisiness of the playback on the target sound at the beginning (it’s because these grains aren’t “windowed” properly). I’d have to really massage this to get it to work, and the palette of sounds needed to recreate this effect would have to extend beyond what I’ve assigned these instruments. Here, I’m using contrabass flute and tubax (contrabass saxophone) to fill out the orchestration with increasingly lower key click sounds — in response to my observation of much more low-frequency energy in the turbulent water recording. I’m keeping at it, but it takes a long period of study to properly optimize how this grain matching scheme captures the sonic character I’m after — which, no doubt, is a far more global property than the “instant-to-instant” microscopic nature of individual grain matching.
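The windowing fix I mention is simple in principle: each grain gets a fade-in/fade-out envelope before being summed back into the stream, so the raw segment boundaries don’t click. A minimal sketch (Hann window, fixed hop size — my real grains have variable sizes):

```python
import numpy as np

def window_grain(grain):
    """Apply a Hann window so the grain fades in and out,
    removing the clicks heard when raw segments start and stop abruptly."""
    return grain * np.hanning(len(grain))

def overlap_add(grains, hop):
    """Reassemble windowed grains into one stream,
    each grain starting `hop` samples after the previous one."""
    length = hop * (len(grains) - 1) + len(grains[-1])
    out = np.zeros(length)
    for i, g in enumerate(grains):
        out[i * hop : i * hop + len(g)] += window_grain(g)
    return out
```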

On further reflection, one of the difficulties here is that the hydrophone recordings are monaural — the lack of at least a stereo capture means we’ll hear no sense of spatial morphology, so at best only timbral and temporal matching are possible this way. Underwater recordings are also heavily low-pass filtered due to the water’s density, which removes many of the higher-frequency components listeners would need as aural cues to locate the position of sounds. This approach may certainly help me get somewhat closer, but it’s beyond my capacities to derive from it any sort of rule set, or any general grammar that would describe these patterns of movement in time, timbre, or space. As with the underwater examples, there are extreme challenges to recording the chaos of, say, the inside of a cyclone, so a generative model for this kind of motion would certainly work better.

There is a musical research group in Marseille that tries to categorize certain qualities in movement and shape (i.e. “morphology”) for different sounds we’ve encountered in modern music — a topic that’s been explored by semiotics scholars who have tried to describe the abstract, semi-pitched sounds of electronic and computer music, and even present-day contemporary instrumental music that often imitates these electronic sounds. They have currently identified just one category to describe any sensation of circular motion, which they simply call “Vortex.”

Obviously, there are so many different qualities of circular motion in the real world! The more I listen and attempt to capture these qualities in my environment — and to recreate them through musical mimesis — the more I realize what I need is just a deeper understanding of how this nonlinear motion works — unlike linear motion, which underlies the most traditional forms of musical notation (though interestingly enough, not of dance notation…).

Ultimately, I hope to build on this kind of syntactical research with a more data-driven approach to the representation of turbulence in music, where physical models of turbulent motion are used to describe rhythmic & timbral shapes that can be translated into higher grammatical levels of music — that is, not just individual notes but larger phrases or spans of music, even whole sections or whole pieces (i.e. at the “song”-level) that can be built on new topologies of chaotic, fluid-like, wind-like, or otherwise irrational patterns or sensations of motion.

Towards this end, it would help for someone like me to have a better understanding of how not only to measure, but to generate, different patterns of chaotic movement — or to derive the initial conditions that tend to produce qualities of chaotic motion that are perceivable to a listener. What do certain types of swirling eddies sound like? What is the effect of a change in Reynolds number on certain musical systems? How might listeners distinguish between a more “laminar” sensation of movement (i.e. movement in rhythm, pitch, “layers” of material in different sections of an orchestra, etc.) versus some category of turbulent motion? From my vantage point, these complex questions are highly constrained by the specific instrumental and technological forces at a composer’s disposal, which change from piece to piece but nevertheless mark a complex interaction between the various musical parameters.

Music Analysis: Observing + Measuring Chaos in Preexisting Music


The most immediate context in which I can put this to work is for an upcoming doctoral exam I need to give in April, if that’s not too soon. Having a general sense of how chaotic rhythms/durations can be generated from changes in the input parameters to attractor equations, for example, might help me describe music such as this (if you listen to just a few seconds you’ll get the gist):

This section displays a kind of “granular synthesis,” and suggests a semi-cyclic pattern of motion that can be described, I think, in its temporal profile and in the quality of patterns of “grains” that occasionally repeat themselves. Moreover, though, there is for me a general kind of “sensation” of this movement. I certainly hear emergent shapes and patterns here, but we basically don’t have a language to discern what’s going on as listeners to this extremely dense musical material; composers may know what they’re doing when they create these sounds, but we have limited analytical faculties on the other side of the stage…

Still, I’d like to find ways of describing this motion, perhaps deciding whether it comes close to some ideal topological “model” of fluid motion. With audio analysis software, it’s easy to measure exact points in a recording where changes in energy occur, such as a new note onset (i.e. the jump in energy in a signal where a pianist strikes a key, etc.). Vertical markers placed in the audio file then measure the inter-onset time delays between events, like this:

Here, a sample of a person singing has been segmented into individual syllables.
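The marker placement can be sketched as a simple energy-based detector. This toy version (fixed frame/hop sizes and a crude threshold — not the actual analysis software) returns onset times in seconds:

```python
import numpy as np

def onset_times(signal, sr, frame=512, hop=256, thresh_ratio=0.1):
    """Return onset times in seconds: the moments where frame energy
    first rises above a fraction of the loudest frame's energy."""
    n = (len(signal) - frame) // hop + 1
    energy = np.array([np.mean(signal[i * hop : i * hop + frame] ** 2)
                       for i in range(n)])
    active = energy > thresh_ratio * energy.max()
    # an onset is a silence-to-sound transition (a rising edge)
    rises = np.flatnonzero(active[1:] & ~active[:-1]) + 1
    if active[0]:
        rises = np.insert(rises, 0, 0)
    return rises * hop / sr
```

Differencing the returned times then gives exactly the inter-onset delay profile the markers describe.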

In our software, it’s also easy to derive the inter-onset time delays from patterns in the output of a nonlinear equation, such as this example using a Lorenz attractor:
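In code, the idea is roughly this: integrate the equations, treat each local maximum of one coordinate as an “event,” and read off the delays between consecutive events (the Euler integration, the z-coordinate, and the peak-picking rule are all arbitrary illustrative choices here, not what our software actually does):

```python
import numpy as np

def lorenz_z(n_steps, dt=0.005):
    """The z-coordinate of a Lorenz trajectory (simple Euler steps)."""
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    x, y, z = 1.0, 1.0, 1.0
    zs = np.empty(n_steps)
    for i in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        zs[i] = z
    return zs

def inter_onset_delays(signal, dt):
    """Treat each local maximum as an 'event' and return the
    delays (in seconds) between consecutive events."""
    interior = signal[1:-1]
    peaks = np.flatnonzero((interior > signal[:-2]) & (interior > signal[2:])) + 1
    return np.diff(peaks) * dt
```

The resulting delay sequence is irregular but patterned — exactly the kind of chaotic rhythmic profile I’m after.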

This is the sort of technique I’m using in the video above (the second video from the top, “Meditation on a Nonlinear System”) to derive 3D positions for those binaural sound particles. Such tools make it at least foreseeable that I could analyze an audio stream for instantaneous changes in energy, or for movement in some other audio descriptor (e.g. the spectral centroid, kurtosis, rolloff, crest, etc.), and place markers automatically — or place markers manually according to what I hear directly (though that is fiercely time-consuming). The excerpt’s inter-onset profile can then be compared to an “ideal” succession of inter-onset delays in a model to measure how close the observed motion comes to some form of turbulent motion. It could be compared with other excerpts and given a rating, or perhaps used as input to some kind of automated learning algorithm for deriving properties.
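One crude way to turn that comparison into a single rating — just a sketch; the mean-normalization and the distance measure are placeholder choices, not a settled method — is to compare the two inter-onset profiles as distributions, ignoring overall tempo:

```python
import numpy as np

def ioi_similarity(observed, model):
    """A crude rating between two inter-onset profiles: normalize each
    to mean 1 (removing tempo), resample the sorted values to a common
    length, and return the mean absolute difference
    (0 = identical distributions; larger = less alike)."""
    def norm_sorted(ioi, n=100):
        ioi = np.sort(np.asarray(ioi, dtype=float)) / np.mean(ioi)
        # resample the sorted profile (an empirical quantile curve) to n points
        return np.interp(np.linspace(0, 1, n),
                         np.linspace(0, 1, len(ioi)), ioi)
    return float(np.mean(np.abs(norm_sorted(observed) - norm_sorted(model))))
```

A perfectly regular pulse compared with itself scores 0, while a regular pulse compared against a highly irregular profile scores well above it — which at least gives excerpts a comparable number, as a starting point for ranking or for feeding a learning algorithm.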

These are just a few initial ideas, but before writing more I’ll leave it at that. I’m so curious to know what your reactions might be to these ideas.

Thank you so much once again for taking the time to read this, and for your curiosity and interest!

Many thanks and looking forward to speaking soon — 
Louis