eess.AS - 2023-11-06

HRTF Estimation in the Wild

  • paper_url: http://arxiv.org/abs/2311.03560
  • repo_url: None
  • paper_authors: Vivek Jayaram, Ira Kemelmacher-Shlizerman, Steven M. Seitz
  • for: This paper aims to create more realistic spatial audio experiences through personalized HRTF estimation.
  • methods: The paper proposes a personalized HRTF estimation method based on binaural recordings and head-tracking data, requiring no specialized equipment or dedicated measurement sessions.
  • results: The study shows that by analyzing binaural recordings captured across different environments, personalized HRTFs can be accurately estimated, improving sound localization and reducing front-back confusion in virtual environments.
    Abstract Head Related Transfer Functions (HRTFs) play a crucial role in creating immersive spatial audio experiences. However, HRTFs differ significantly from person to person, and traditional methods for estimating personalized HRTFs are expensive, time-consuming, and require specialized equipment. We imagine a world where your personalized HRTF can be determined by capturing data through earbuds in everyday environments. In this paper, we propose a novel approach for deriving personalized HRTFs that only relies on in-the-wild binaural recordings and head tracking data. By analyzing how sounds change as the user rotates their head through different environments with different noise sources, we can accurately estimate their personalized HRTF. Our results show that our predicted HRTFs closely match ground-truth HRTFs measured in an anechoic chamber. Furthermore, listening studies demonstrate that our personalized HRTFs significantly improve sound localization and reduce front-back confusion in virtual environments. Our approach offers an efficient and accessible method for deriving personalized HRTFs and has the potential to greatly improve spatial audio experiences.
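    The abstract describes HRTFs as the key to spatializing audio: once a listener's HRTF is known (whether measured in an anechoic chamber or estimated in the wild), a mono source is rendered binaurally by filtering it with the left- and right-ear impulse responses. A minimal sketch of that rendering step, using toy placeholder HRIRs rather than the paper's estimated ones:

    ```python
    import numpy as np

    def render_binaural(mono, hrir_left, hrir_right):
        """Spatialize a mono signal by convolving it with left/right
        head-related impulse responses (time-domain HRTFs)."""
        left = np.convolve(mono, hrir_left)
        right = np.convolve(mono, hrir_right)
        return np.stack([left, right])

    # Toy placeholder HRIRs: real ones come from measurement or, as in
    # this paper, estimation from in-the-wild recordings.
    rng = np.random.default_rng(0)
    mono = rng.standard_normal(1000)
    hrir_l = rng.standard_normal(128) * np.exp(-np.arange(128) / 16.0)
    hrir_r = np.roll(hrir_l, 8)  # crude interaural time difference

    binaural = render_binaural(mono, hrir_l, hrir_r)
    print(binaural.shape)  # (2, 1127): full convolution length 1000 + 128 - 1
    ```

    The stacked output would be played to the two ears; personalizing `hrir_l`/`hrir_r` is exactly what distinguishes an individualized HRTF from a generic one.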