Spatial Audio and Psychoacoustics in AI Contexts
Exploring AI-enhanced spatial audio for adaptive environments and psychoacoustic impact.
Spatial audio forms an important part of my research, particularly in relation to its psychoacoustic effects and its applications in AI-enhanced environments such as virtual reality and gaming. My work at CEMI with the 44-speaker dome involves advanced spatialization techniques such as Vector-Based Amplitude Panning (VBAP) and Ambisonics. I developed Python integrations with Ableton Live via the AbletonOSC library (to which I am a named contributor) and with SpatGRIS, enabling object-level spatial movement in a flexible, venue-agnostic framework. This work investigates how AI and automated systems can enhance spatial sound experiences, making them adaptive to both the environment and audience feedback.
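To illustrate the pairwise panning at the heart of VBAP, the minimal sketch below computes constant-power gains for one loudspeaker pair in 2D. The function name, arguments, and speaker layout are mine for illustration, not drawn from the CEMI codebase:

```python
import numpy as np

def vbap_2d_gains(source_az_deg, spk_az_deg):
    """Pairwise 2D VBAP gains for one active loudspeaker pair.

    source_az_deg: source azimuth in degrees.
    spk_az_deg: (az1, az2) azimuths of the two loudspeakers.
    Returns power-normalized gains (g1, g2).
    """
    p = np.array([np.cos(np.radians(source_az_deg)),
                  np.sin(np.radians(source_az_deg))])
    # Rows of L are unit vectors pointing toward each loudspeaker.
    L = np.array([[np.cos(np.radians(a)), np.sin(np.radians(a))]
                  for a in spk_az_deg])
    g = p @ np.linalg.inv(L)      # solve p = g1*l1 + g2*l2
    return g / np.linalg.norm(g)  # constant-power normalization

# A source straight ahead, between speakers at +/-45 degrees,
# receives equal gains of ~0.707 in each speaker.
print(vbap_2d_gains(0.0, (45.0, -45.0)))
```

In a full dome setup the same idea extends to triplets of speakers in 3D, with the active triplet chosen per source position; tools like SpatGRIS handle that selection internally.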
Additionally, my stereo field analysis project, conducted in MATLAB, explores the semiotic interpretation of stereo field dynamics in popular music, advancing our understanding of spatial audio's communicative potential.
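As a small illustration of the kind of measurement such analysis can involve, the sketch below computes a side/mid energy ratio as a crude stereo-width metric. This is a hypothetical Python sketch of one such metric, not the MATLAB project itself:

```python
import numpy as np

def stereo_width(left, right):
    """Side/mid energy ratio as a simple stereo-width metric.

    0 means mono (left == right); larger values indicate a
    wider (less correlated) stereo image.
    """
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    # Small epsilon avoids division by zero for silent mid channels.
    return float(np.sum(side**2) / (np.sum(mid**2) + 1e-12))

# Identical channels collapse to mono, so the width metric is 0.
l = np.sin(np.linspace(0, 2 * np.pi, 1000))
print(stereo_width(l, l))
```

Tracking a metric like this over time, per frequency band, is one way to turn stereo field dynamics into data that can then be read semiotically against a song's structure.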