Colormatrics: Sounds of Mars

︎︎︎Visualization, Sonification
︎︎︎Generative, Interactive

NASA's recording "Sounds of Mars" was converted into a giant monochrome image and then turned back into sound using my line-by-line sonification system. The user can navigate the data, zoom in and out, and warp the sonification time through hand gestures, and can select five representative colors to flexibly change the tonal character of the sound.
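As a rough illustration of the general idea only (not the actual Colormatrics engine), a line-by-line sonification can be sketched as reading a monochrome image row by row and treating each pixel column as the amplitude of one sine partial. The file name, image size, partial spacing, and frame length below are assumptions for demonstration.

```python
# Minimal, generic sketch of line-by-line image sonification:
# each image row becomes one short audio frame whose pixel brightnesses
# set the amplitudes of harmonically spaced sine partials.
# File name, image size, partial spacing, and frame length are assumptions.
import numpy as np
from PIL import Image

SR = 44100            # sample rate (Hz)
FRAME = 2048          # samples rendered per image row
BASE_HZ = 110.0       # frequency of the lowest partial

img = Image.open("mars_mono.png").convert("L").resize((64, 512))
data = np.asarray(img, dtype=float) / 255.0          # rows x columns, 0..1
freqs = BASE_HZ * (1 + np.arange(data.shape[1]))     # one partial per column
phase = np.zeros(data.shape[1])                      # running phase per partial

frames = []
for row in data:                                     # scan the image line by line
    t = np.arange(FRAME) / SR
    frame = np.zeros(FRAME)
    for k, amp in enumerate(row):
        if amp > 0.01:                               # skip near-black pixels
            frame += amp * np.sin(2 * np.pi * freqs[k] * t + phase[k])
    phase = (phase + 2 * np.pi * freqs * FRAME / SR) % (2 * np.pi)
    frames.append(frame / max(1.0, row.sum()))       # rough normalization
audio = np.concatenate(frames)                       # mono, len(rows) * FRAME samples
```

In the interactive piece, navigating, zooming, and warping the sonification time through hand gestures would correspond, in a sketch like this, to changing the scanned row range and the per-row frame length on the fly.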

Coming in January.

2022




Silent Colormatrics

︎︎︎Visualization
︎︎︎Generative

Supernova 7th Dimension presented by Denver Digerati

Silent Screen at 16th and Arapahoe, September 17

2022




Immersive Mindfulness

︎︎︎Sonification, Visualization
︎︎︎Responsive, Interactive

Six users performed together, and interactive sound generated from each user's EEG was transformed into an immersive 360-degree audiovisual experience through 128 multi-layered speakers in the Cube at the Moss Arts Center.

For more information, see the solo demo below.

*Science, Engineering, Art and Design (SEAD) grant award, ICAT, Virginia Tech

Sonification
Woohun Joo

Visualization
Woohun Joo, Boyoung Lee

Project Lead
Zach Gould

Thanks to
Tanner Upthegrove (ICAT)
Brandon Hale (ICAT)
David Franusich (ICAT)
Chris Aimone (Muse)
John Sheeran (Muse)

2022










Immersive Mindfulness (Solo Demo)

︎︎︎Sonification, Visualization
︎︎︎Responsive, Interactive

Immersive Mindfulness is an EEG-driven sonification for meditation. The sonification engine consists of a four-voice sampler and a custom synth that I designed.

The sampler section plays four choir voices mixed at four different pitches, and the synthesizer section has three components, each with seven voices at different pitches. Each voice has two main oscillators (triangle and square), and their mixing level changes according to the incoming data.

The pitch of the sampler and the synth, the timbre of the synth, the sound modulation level, and the mixing ratio between the sampler and the synth change continuously, driven by heart rate, stillness, EEG amplitude, and alpha power from the Muse S.
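The shape of that mapping can be pictured with a small sketch like the one below. The numeric ranges, smoothing constant, and scaling choices are hypothetical placeholders, not the mapping actually used in the piece; only the four inputs and four parameter groups come from the description above.

```python
# Hedged sketch of a biosignal-to-parameter mapping of the kind described above.
# Value ranges, smoothing constant, and scaling choices are assumptions for
# illustration only; they are not the mapping used in Immersive Mindfulness.
from dataclasses import dataclass
from typing import Optional

def scale(x, lo, hi):
    """Clamp x to [lo, hi] and rescale to the range 0..1."""
    return (min(max(x, lo), hi) - lo) / (hi - lo)

@dataclass
class SynthParams:
    pitch: float               # 0..1 -> transposition of sampler and synth
    timbre: float              # 0..1 -> triangle/square oscillator mix
    modulation: float          # 0..1 -> sound modulation depth
    sampler_synth_mix: float   # 0 = sampler only, 1 = synth only

def map_biosignals(heart_rate, stillness, eeg_amplitude, alpha_power,
                   prev: Optional[SynthParams] = None, smooth=0.9) -> SynthParams:
    target = SynthParams(
        pitch=scale(heart_rate, 50, 110),                # bpm -> pitch region (assumed)
        timbre=scale(eeg_amplitude, 0.0, 1.0),           # EEG amplitude -> oscillator mix
        modulation=1.0 - scale(stillness, 0.0, 1.0),     # more movement -> more modulation
        sampler_synth_mix=scale(alpha_power, 0.0, 1.0),  # relaxation -> more synth (assumed)
    )
    if prev is None:
        return target
    # One-pole smoothing so parameters change continuously rather than in steps.
    blend = lambda a, b: smooth * a + (1 - smooth) * b
    return SynthParams(
        pitch=blend(prev.pitch, target.pitch),
        timbre=blend(prev.timbre, target.timbre),
        modulation=blend(prev.modulation, target.modulation),
        sampler_synth_mix=blend(prev.sampler_synth_mix, target.sampler_synth_mix),
    )
```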

*Science, Engineering, Art and Design (SEAD) grant award, ICAT, Virginia Tech

Credits
Zach Gould
Aanu Ojelade

Thanks to
Chris Aimone (Muse)
John Sheeran (Muse)

2022






Point, Line, and Plane 1 (Beta)

︎︎︎Sonification, Visualization
︎︎︎Generative, Interactive

I conducted five sonification user studies to examine whether people can associate minimalistic shapes with sound without any pre-training. Surprisingly, the rate of correct answers in each study was very high regardless of age, educational background, or listening device.

Point, Line, and Plane is the first artistic audiovisual work based on the same sonification method as the user studies, except for the colors and the number of shapes. Additive, subtractive, and waveshaping synthesis are used, and the character of the sound ranges from soft, pad-like textures to chaotic sounds.

The line-by-line method and the object-oriented method that I newly developed for image sonification are applied to the background and the objects, respectively, and each object is controllable through MIRA. The sound of each object is spatialized in multi-layered speaker environments or in virtual listening environments (i.e., headphones or earphones). Please note that, due to the nature of (universal) HRTF-based binaural panners, the spatial sound in the demo video may not be clearly recognizable.
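Purely as an illustration of how an object-oriented image sonification might assign an independent voice to each shape, here is a minimal sketch; the property names, pitch range, and simple stereo pan are hypothetical, not the mapping or spatialization used in this work.

```python
# Illustrative sketch only: one way an "object-oriented" sonification could give
# each shape its own voice, with the shape's position feeding a simple stereo pan.
# Property names, pitch range, and pan law are assumptions, not the actual piece.
import numpy as np

SR = 44100

def object_voice(x_norm, y_norm, size_norm, dur=1.0):
    """Render a stereo tone for one shape: vertical position -> pitch,
    size -> loudness, horizontal position -> pan."""
    t = np.arange(int(SR * dur)) / SR
    freq = 110 * 2 ** (4 * (1 - y_norm))           # higher on screen -> higher pitch
    mono = size_norm * np.sin(2 * np.pi * freq * t)
    pan = x_norm                                   # 0 = hard left, 1 = hard right
    return np.stack([mono * (1 - pan), mono * pan], axis=1)

shapes = [dict(x=0.2, y=0.8, size=0.5), dict(x=0.7, y=0.3, size=0.9)]
mix = sum(object_voice(s["x"], s["y"], s["size"]) for s in shapes)
```

In the actual work, the background is rendered with the line-by-line method and mixed with the per-object voices, and the simple pan above would be replaced by spatialization over a multi-layered speaker array or a binaural renderer.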

︎︎︎Related Publication (upcoming)

2022