Abstract Accepted for Conference!

Rai Sato

Rai's submitted abstract has been accepted: he will attend and give a presentation at the 24th International Congress on Acoustics (ICA2022) in Korea. Congratulations!


Research title

Real-Time Spatial Sound Rendering System Using LiDAR Sensor for Auditory Augmented Reality Application

Authors

Rai Sato (Embodied Cognitive Science Unit, Okinawa Institute of Science and Technology, Japan), Takumi Ito (Gatari Inc., Japan) and Sungyoung Kim (College of Engineering Technology, Rochester Institute of Technology, USA)

Abstract

The advent of affordable technologies for interactive spatial audio has facilitated the delivery of auditory augmented reality (AAR) experiences. The basic idea behind AAR is to superimpose sounds on the real world as the user moves through it, unlike games and VR, where people experience pseudo-interactions in a virtual space. However, the nature of AR means developers cannot know in advance in which space users will experience the application, which makes spatially aware sound processing more difficult than in games and VR. We therefore developed a real-time spatialized audio reproduction system that uses spatial data automatically acquired by the AR application. The system provides dynamic spatial acoustics adapted to the space in which it operates through the following processes: 1) capturing the room shape from mesh data generated by a LiDAR sensor; 2) calculating a room impulse response for each sound source and user position using the image-source method; and 3) rotating the sound image according to the user's head orientation and delivering spatialized sound in binaural format. Because these processes require only the game engine's AR library and existing audio middleware, AR and audio researchers can achieve personalized spatial acoustics and evaluate their plausibility.
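
To illustrate step 2, below is a minimal sketch of the image-source method for a rectangular (shoebox) room. This is not the paper's implementation: the function name, its parameters, and the single uniform wall reflection coefficient `beta` are illustrative assumptions, and the sketch omits refinements a real renderer would need, such as fractional-delay interpolation and frequency-dependent wall absorption.

```python
import numpy as np

def image_source_rir(room_dim, src, mic, fs=48000, c=343.0,
                     beta=0.8, max_order=10, rir_len=None):
    """Room impulse response for a shoebox room via the image-source
    method, assuming one uniform reflection coefficient `beta`."""
    room_dim, src, mic = map(np.asarray, (room_dim, src, mic))

    # Per-axis image positions and reflection counts: walls sit at 0 and L,
    # so images lie at 2*q*L + s (|2q| reflections) and 2*q*L - s
    # (|2q - 1| reflections) for integer q.
    axes = []
    for L, s in zip(room_dim, src):
        images = []
        for q in range(-max_order, max_order + 1):
            images.append((2 * q * L + s, abs(2 * q)))      # even images
            images.append((2 * q * L - s, abs(2 * q - 1)))  # odd images
        axes.append(images)

    if rir_len is None:
        # Long enough to hold the arrival from the farthest image source.
        d_max = np.linalg.norm(2 * max_order * room_dim + room_dim)
        rir_len = int(fs * d_max / c) + 1
    rir = np.zeros(rir_len)

    # Combine the per-axis images into 3-D image sources and add each
    # arrival as a delayed, attenuated impulse (1/4*pi*d spreading loss,
    # beta per wall bounce).
    for x, rx in axes[0]:
        for y, ry in axes[1]:
            for z, rz in axes[2]:
                d = np.linalg.norm(np.array([x, y, z]) - mic)
                n = int(round(fs * d / c))  # nearest-sample delay
                if n < rir_len:
                    rir[n] += beta ** (rx + ry + rz) / (4 * np.pi * max(d, 1e-3))
    return rir

# Example: a 6 m x 4 m x 3 m room, one source and one listener position.
h = image_source_rir([6, 4, 3], src=[2, 1.5, 1.2], mic=[4, 2.5, 1.5],
                     max_order=6)
```

Convolving a dry source signal with the returned impulse response (e.g. via numpy.convolve) yields the reverberant signal for that source/listener pair; step 3 of the pipeline would then rotate the result according to head-tracking data and render it binaurally with HRTF filtering.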