for: The paper aims to improve automatic speech recognition (ASR) performance for low-resource languages, specifically Indian languages such as Bengali and Bhojpuri.
methods: The paper uses self-supervised learning (SSL) based acoustic models such as wav2vec2 and large-scale multilingual training as in Whisper, and explores adaptation and fine-tuning techniques to overcome the low-resource nature of the data.
results: The paper seeks to understand the importance of each modality (acoustics and text) in building a reliable ASR system for low-resource languages, and to explore the applicability of these approaches to various languages spoken around the world.

Abstract
Automatic speech recognition (ASR) performance has improved drastically in recent years, mainly enabled by self-supervised learning (SSL) based acoustic models such as wav2vec2 and large-scale multilingual training as in Whisper. A major challenge still exists for low-resource languages, where the availability of both audio and text is limited. This is further complicated by the presence of multiple dialects, as in Indian languages. However, many Indian languages can be grouped into the same families and share the same script and grammatical structure. This is where a wide range of adaptation and fine-tuning techniques can be applied to overcome the low-resource nature of the data by utilising well-resourced similar languages. In such scenarios, it is important to understand the extent to which each modality, acoustics and text, contributes to building a reliable ASR system. It could be the case that an abundance of acoustic data in a language reduces the need for large text-only corpora; alternatively, given the availability of various pretrained acoustic models, the reverse could also be true. In this proposed special session, we encourage the community to explore these ideas with the data in two low-resource Indian languages, Bengali and Bhojpuri. These approaches are not limited to Indian languages; the solutions are potentially applicable to various languages spoken around the world.
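To make the multilingual-pretraining setting concrete, below is a minimal sketch, assuming the Hugging Face transformers library (a reasonably recent version) and the public openai/whisper-small checkpoint; both are illustrative choices, not the session's baseline system. It shows how a large multilingual model can be pointed at Bengali by forcing the decoder's language and task. Notably, Bhojpuri is not among Whisper's pretraining languages, which is exactly the kind of gap that the adaptation and fine-tuning techniques discussed above aim to close.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library and
# the public "openai/whisper-small" checkpoint; both are illustrative
# assumptions, not the session's baseline system.
import numpy as np
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model_id = "openai/whisper-small"  # larger variants generally help low-resource languages
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

# Whisper consumes 16 kHz mono audio; a one-second silent clip stands
# in for a real Bengali recording here.
sampling_rate = 16_000
waveform = np.zeros(sampling_rate, dtype=np.float32)
inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")

# Force Bengali transcription so the model neither guesses the language
# nor translates into English.
predicted_ids = model.generate(
    inputs.input_features, language="bengali", task="transcribe"
)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```

Fine-tuning such a checkpoint on in-language audio-text pairs, or instead adapting an SSL encoder such as wav2vec2 with a new CTC output layer over the target script, follows a similar processor/model pattern; which modality deserves the investment is precisely the question this session poses.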