cs.SD - 2023-07-21

Integrating Pretrained ASR and LM to Perform Sequence Generation for Spoken Language Understanding

  • paper_url: http://arxiv.org/abs/2307.11005
  • repo_url: None
  • paper_authors: Siddhant Arora, Hayato Futami, Yosuke Kashiwagi, Emiru Tsunoo, Brian Yan, Shinji Watanabe
  • for: This work proposes a three-pass end-to-end spoken language understanding (E2E SLU) system that integrates pretrained speech recognition (ASR) and language models (LM), addressing the vocabulary mismatch between the pretrained ASR and LM components.
  • methods: The three-pass E2E SLU system first uses an ASR subnetwork to predict the ASR transcript, then an LM subnetwork to make an initial SLU prediction, and finally a deliberation subnetwork that conditions on the representations of the ASR and LM subnetworks to make the final prediction.
  • results: Experiments on two SLU datasets (SLURP and SLUE) show that the proposed three-pass E2E SLU system performs better on acoustically challenging utterances, particularly on the SLUE dataset.
    Abstract There has been an increased interest in the integration of pretrained speech recognition (ASR) and language models (LM) into the SLU framework. However, prior methods often struggle with a vocabulary mismatch between pretrained models, and the LM cannot be directly utilized as it diverges from the NLU formulation. In this study, we propose a three-pass end-to-end (E2E) SLU system that effectively integrates ASR and LM subnetworks into the SLU formulation for sequence generation tasks. In the first pass, our architecture predicts ASR transcripts using the ASR subnetwork. This is followed by the LM subnetwork, which makes an initial SLU prediction. Finally, in the third pass, the deliberation subnetwork conditions on representations from the ASR and LM subnetworks to make the final prediction. Our proposed three-pass SLU system shows improved performance over cascaded and E2E SLU models on two benchmark SLU datasets, SLURP and SLUE, especially on acoustically challenging utterances.
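    The three-pass data flow described in the abstract can be sketched as below. This is a minimal illustration of the control flow only, not the paper's implementation: the subnetwork functions, their names, and the toy outputs are all hypothetical stand-ins (real subnetworks would be neural encoder-decoders).

    ```python
    import numpy as np

    # Hypothetical stand-ins for the three subnetworks; each real subnetwork
    # would be a trained neural model. Here they are toy functions so the
    # three-pass data flow is visible.

    def asr_subnetwork(speech_features):
        """Pass 1: predict an ASR transcript plus a hidden representation."""
        transcript = "turn on the lights"               # placeholder hypothesis
        asr_hidden = np.mean(speech_features, axis=0)   # toy encoder state
        return transcript, asr_hidden

    def lm_subnetwork(transcript):
        """Pass 2: an LM conditioned on the transcript makes an initial SLU prediction."""
        initial_slu = {"intent": "lights_on"}                # placeholder prediction
        lm_hidden = np.ones(4) * len(transcript.split())     # toy LM representation
        return initial_slu, lm_hidden

    def deliberation_subnetwork(asr_hidden, lm_hidden, initial_slu):
        """Pass 3: condition on both ASR and LM representations for the final prediction."""
        fused = np.concatenate([asr_hidden, lm_hidden])  # combine both streams
        final_slu = dict(initial_slu)                    # refine the initial hypothesis
        final_slu["score"] = float(fused.mean())
        return final_slu

    def three_pass_slu(speech_features):
        transcript, asr_hidden = asr_subnetwork(speech_features)
        initial_slu, lm_hidden = lm_subnetwork(transcript)
        return deliberation_subnetwork(asr_hidden, lm_hidden, initial_slu)

    features = np.random.rand(50, 4)  # 50 frames of 4-dim toy acoustic features
    print(three_pass_slu(features))
    ```

    The point of the sketch is the dependency structure: the deliberation pass sees representations from both earlier passes, so it can recover from ASR errors (helpful on acoustically challenging utterances) while still exploiting the LM's initial SLU hypothesis.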