cs.SD - 2023-07-14

Towards dialect-inclusive recognition in a low-resource language: are balanced corpora the answer?

  • paper_url: http://arxiv.org/abs/2307.07295
  • repo_url: None
  • paper_authors: Liam Lonergan, Mengjie Qian, Neasa Ní Chiaráin, Christer Gobl, Ailbhe Ní Chasaide
  • for: This study examines how to make speech recognition systems perform accurately across the different dialects of a language.
  • methods: Twelve ASR systems were trained, first on a baseline dialect-balanced training corpus and then on modified versions of the baseline in which dialect-specific material was subtracted or added (a minimal corpus-ablation sketch follows this entry).
  • results: A dialect-balanced corpus does not yield similar performance across the dialects: the Ulster (Ul) dialect consistently underperforms, while Munster (Mu) yields the lowest WERs. There is a close but asymmetric relationship between the Connacht (Co) and Mu dialects. These results will guide future corpus collection and system-building strategies to optimise for cross-dialect performance equity.
    Abstract ASR systems are generally built for the spoken 'standard', and their performance declines for non-standard dialects/varieties. This is a problem for a language like Irish, where there is no single spoken standard, but rather three major dialects: Ulster (Ul), Connacht (Co) and Munster (Mu). As a diagnostic to quantify the effect of the speaker's dialect on recognition performance, 12 ASR systems were trained, firstly using baseline dialect-balanced training corpora, and then using modified versions of the baseline corpora, where dialect-specific materials were either subtracted or added. Results indicate that dialect-balanced corpora do not yield a similar performance across the dialects: the Ul dialect consistently underperforms, whereas Mu yields lowest WERs. There is a close relationship between Co and Mu dialects, but one that is not symmetrical. These results will guide future corpus collection and system building strategies to optimise for cross-dialect performance equity.
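A minimal sketch of the corpus-ablation diagnostic described above, assuming hypothetical per-dialect utterance lists: the dialect codes (Ul, Co, Mu) come from the paper, but the utterance IDs, set sizes, and configuration names are invented for illustration and do not reproduce the authors' corpus design.

```python
import random

# Hypothetical per-dialect utterance inventories; each string stands in for one recording.
corpus = {
    "Ul": [f"ul_{i:04d}" for i in range(3000)],
    "Co": [f"co_{i:04d}" for i in range(3000)],
    "Mu": [f"mu_{i:04d}" for i in range(3000)],
}

def balanced(corpus, per_dialect=2000, seed=0):
    """Baseline dialect-balanced training set: equal amounts of material per dialect."""
    rng = random.Random(seed)
    return {d: rng.sample(utts, per_dialect) for d, utts in corpus.items()}

def modified(baseline, dialect, mode):
    """Subtract or add the dialect-specific material for a single dialect."""
    out = {d: list(u) for d, u in baseline.items()}
    if mode == "subtract":
        out[dialect] = []                      # drop that dialect's material entirely
    elif mode == "add":
        out[dialect] = corpus[dialect]         # use all available material for that dialect
    return out

baseline = balanced(corpus)
configs = {"baseline": baseline}
for d in corpus:
    configs[f"minus_{d}"] = modified(baseline, d, "subtract")
    configs[f"plus_{d}"] = modified(baseline, d, "add")

# Each configuration would then be used to train one ASR system and scored per dialect (WER).
print(sorted(configs))
```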

Replay to Remember: Continual Layer-Specific Fine-tuning for German Speech Recognition

  • paper_url: http://arxiv.org/abs/2307.07280
  • repo_url: None
  • paper_authors: Theresa Pekarek Rosin, Stefan Wermter
  • for: This paper investigates how well large-scale automatic speech recognition (ASR) models can be adapted to smaller domains, and how stable their performance remains.
  • methods: A large-scale multilingual model is adapted to a smaller domain by selectively freezing parts of the model during fine-tuning, and Experience Replay is applied for continual learning (a minimal freezing-plus-replay sketch follows this entry).
  • results: By adding only a fraction of data from the original domain, the model reaches Word-Error-Rates (WERs) below 5% on the new domain while stabilizing overall speech recognition performance.
    Abstract While Automatic Speech Recognition (ASR) models have shown significant advances with the introduction of unsupervised or self-supervised training techniques, these improvements are still only limited to a subsection of languages and speakers. Transfer learning enables the adaptation of large-scale multilingual models to not only low-resource languages but also to more specific speaker groups. However, fine-tuning on data from new domains is usually accompanied by a decrease in performance on the original domain. Therefore, in our experiments, we examine how well the performance of large-scale ASR models can be approximated for smaller domains, with our own dataset of German Senior Voice Commands (SVC-de), and how much of the general speech recognition performance can be preserved by selectively freezing parts of the model during training. To further increase the robustness of the ASR model to vocabulary and speakers outside of the fine-tuned domain, we apply Experience Replay for continual learning. By adding only a fraction of data from the original domain, we are able to reach Word-Error-Rates (WERs) below 5\% on the new domain, while stabilizing performance for general speech recognition at acceptable WERs.
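A minimal sketch of the layer-freezing plus experience-replay recipe, assuming a toy model and dummy tensors in place of the pretrained multilingual ASR model and the SVC-de corpus; the replay fraction, shapes, and hyperparameters are illustrative assumptions, not the paper's settings.

```python
import random

import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader, Subset, TensorDataset

# Dummy stand-ins for the two domains; real work would use audio features and
# transcripts, so the shapes here are illustrative only.
new_domain = TensorDataset(torch.randn(200, 80), torch.randint(0, 30, (200,)))
old_domain = TensorDataset(torch.randn(2000, 80), torch.randint(0, 30, (2000,)))

# Experience replay: mix a small fraction of original-domain data back in.
replay_fraction = 0.1                       # assumed ratio, not from the paper
idx = random.sample(range(len(old_domain)), int(replay_fraction * len(old_domain)))
train_set = ConcatDataset([new_domain, Subset(old_domain, idx)])

# Toy "ASR" model: a frozen lower layer plus a trainable head, standing in for
# selectively freezing parts of a large pretrained model during fine-tuning.
model = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 30))
for p in model[0].parameters():             # freeze the "encoder" layer
    p.requires_grad = False

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
for features, labels in DataLoader(train_set, batch_size=16, shuffle=True):
    optimizer.zero_grad()
    loss_fn(model(features), labels).backward()
    optimizer.step()
```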

AudioInceptionNeXt: TCL AI LAB Submission to EPIC-SOUND Audio-Based-Interaction-Recognition Challenge 2023

  • paper_url: http://arxiv.org/abs/2307.07265
  • repo_url: https://github.com/stevenlauhkhk/audioinceptionnext
  • paper_authors: Kin Wai Lau, Yasar Abbas Ur Rehman, Yuyang Xie, Lan Ma
  • for: This report presents a submission to the 2023 Epic-Kitchen EPIC-SOUNDS Audio-Based Interaction Recognition Challenge; the goal is to learn the mapping from audio samples to their corresponding action labels.
  • methods: The authors propose AudioInceptionNeXt, a simple yet effective single-stream Convolutional Neural Network (CNN) architecture that operates on the time-frequency log-mel-spectrogram. Inspired by the design of InceptionNeXt, the AudioInceptionNeXt block uses parallel multi-scale depthwise separable convolution kernels, which help the model learn time and frequency information more effectively (a minimal block sketch follows this entry).
  • results: The method achieves 55.43% top-1 accuracy on the challenge test set, ranking 1st on the public leaderboard. Code is available at https://github.com/StevenLauHKHK/AudioInceptionNeXt.git.
    Abstract This report presents the technical details of our submission to the 2023 Epic-Kitchen EPIC-SOUNDS Audio-Based Interaction Recognition Challenge. The task is to learn the mapping from audio samples to their corresponding action labels. To achieve this goal, we propose a simple yet effective single-stream CNN-based architecture called AudioInceptionNeXt that operates on the time-frequency log-mel-spectrogram of the audio samples. Motivated by the design of the InceptionNeXt, we propose parallel multi-scale depthwise separable convolutional kernels in the AudioInceptionNeXt block, which enable the model to learn the time and frequency information more effectively. The large-scale separable kernels capture the long duration of activities and the global frequency semantic information, while the small-scale separable kernels capture the short duration of activities and local details of frequency information. Our approach achieved 55.43% of top-1 accuracy on the challenge test set, ranked as 1st on the public leaderboard. Codes are available anonymously at https://github.com/StevenLauHKHK/AudioInceptionNeXt.git.
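A minimal sketch of the central idea, parallel multi-scale depthwise separable convolutions over a log-mel spectrogram; the kernel sizes, channel count, and block layout below are assumptions and do not reproduce the actual AudioInceptionNeXt block.

```python
import torch
from torch import nn

class MultiScaleDWBlock(nn.Module):
    """Toy block: parallel depthwise convolutions at several kernel sizes,
    followed by a pointwise convolution that mixes channels."""
    def __init__(self, channels, kernel_sizes=(3, 11, 23)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes)                          # depthwise (separable) branches
        self.pointwise = nn.Conv2d(channels, channels, 1)   # pointwise mixing
        self.act = nn.GELU()

    def forward(self, x):
        # Small kernels capture short events and local frequency detail; large
        # kernels capture long activities and global frequency context.
        x = sum(branch(x) for branch in self.branches)
        return self.act(self.pointwise(x))

# Example: batch of 2 feature maps with 8 channels, 128 mel bins, 400 frames.
feats = torch.randn(2, 8, 128, 400)
print(MultiScaleDWBlock(channels=8)(feats).shape)  # torch.Size([2, 8, 128, 400])
```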

Mega-TTS 2: Zero-Shot Text-to-Speech with Arbitrary Length Speech Prompts

  • paper_url: http://arxiv.org/abs/2307.07218
  • repo_url: None
  • paper_authors: Ziyue Jiang, Jinglin Liu, Yi Ren, Jinzheng He, Chen Zhang, Zhenhui Ye, Pengfei Wei, Chunfeng Wang, Xiang Yin, Zejun Ma, Zhou Zhao
  • for: To synthesize speech for unseen speakers with arbitrary-length prompts.
  • methods: A multi-reference timbre encoder and a prosody language model are combined with arbitrary-source prompts and a phoneme-level auto-regressive duration model (a minimal multi-reference timbre-pooling sketch follows this entry).
  • results: The model synthesizes identity-preserving speech from a short prompt of an unseen speaker and achieves improved performance with longer speech prompts.
    Abstract Zero-shot text-to-speech aims at synthesizing voices with unseen speech prompts. Previous large-scale multispeaker TTS models have successfully achieved this goal with an enrolled recording within 10 seconds. However, most of them are designed to utilize only short speech prompts. The limited information in short speech prompts significantly hinders the performance of fine-grained identity imitation. In this paper, we introduce Mega-TTS 2, a generic zero-shot multispeaker TTS model that is capable of synthesizing speech for unseen speakers with arbitrary-length prompts. Specifically, we 1) design a multi-reference timbre encoder to extract timbre information from multiple reference speeches; 2) and train a prosody language model with arbitrary-length speech prompts; With these designs, our model is suitable for prompts of different lengths, which extends the upper bound of speech quality for zero-shot text-to-speech. Besides arbitrary-length prompts, we introduce arbitrary-source prompts, which leverages the probabilities derived from multiple P-LLM outputs to produce expressive and controlled prosody. Furthermore, we propose a phoneme-level auto-regressive duration model to introduce in-context learning capabilities to duration modeling. Experiments demonstrate that our method could not only synthesize identity-preserving speech with a short prompt of an unseen speaker but also achieve improved performance with longer speech prompts. Audio samples can be found in https://mega-tts.github.io/mega2_demo/.
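A minimal sketch of one component, a multi-reference timbre encoder that pools embeddings from several reference utterances of the same speaker into a single timbre vector; the GRU encoder, attention pooling, and dimensions are assumptions and do not reproduce Mega-TTS 2's actual architecture.

```python
import torch
from torch import nn

class MultiRefTimbreEncoder(nn.Module):
    """Illustrative multi-reference timbre encoder: encode each reference
    utterance, then attention-pool the per-utterance embeddings."""
    def __init__(self, mel_dim=80, d_model=256):
        super().__init__()
        self.frame_enc = nn.GRU(mel_dim, d_model, batch_first=True)
        self.attn = nn.Linear(d_model, 1)

    def forward(self, refs):
        # refs: list of (T_i, mel_dim) mel-spectrograms from the same speaker.
        utt_embs = torch.stack([self.frame_enc(r.unsqueeze(0))[1][0, 0] for r in refs])
        weights = torch.softmax(self.attn(utt_embs), dim=0)   # (num_refs, 1)
        return (weights * utt_embs).sum(dim=0)                # (d_model,)

# Example: three reference clips of different lengths from one speaker.
refs = [torch.randn(t, 80) for t in (120, 340, 90)]
timbre = MultiRefTimbreEncoder()(refs)
print(timbre.shape)  # torch.Size([256])
```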

Low Rank Properties for Estimating Microphones Start Time and Sources Emission Time

  • paper_url: http://arxiv.org/abs/2307.07096
  • repo_url: None
  • paper_authors: Faxian Cao, Yongqiang Cheng, Adil Mehmood Khan, Zhijing Yang, S. M. Ahsan Kazmi, Yingxiu Chang
  • for: This paper addresses uncertainty in timing information, namely the unknown start times of microphone recordings and emission times of sources, which hampers applications such as joint microphone and source localization.
  • methods: A method based on the low-rank property (LRP) is used: the low-rank structure is exploited to form linear constraints on the unknown timing information (UTIm), resolving its uncertainty (a minimal low-rank projection sketch follows this entry).
  • results: Experimental results show that the method outperforms existing state-of-the-art approaches, measured in terms of both the recovery number and reduced estimation errors of UTIm.
    Abstract Uncertainty in timing information pertaining to the start time of microphone recordings and sources' emission time pose significant challenges in various applications, such as joint microphones and sources localization. Traditional optimization methods, which directly estimate this unknown timing information (UTIm), often fall short compared to approaches exploiting the low-rank property (LRP). LRP encompasses an additional low-rank structure, facilitating a linear constraint on UTIm to help formulate related low-rank structure information. This method allows us to attain globally optimal solutions for UTIm, given proper initialization. However, the initialization process often involves randomness, leading to suboptimal, local minimum values. This paper presents a novel, combined low-rank approximation (CLRA) method designed to mitigate the effects of this random initialization. We introduce three new LRP variants, underpinned by mathematical proof, which allow the UTIm to draw on a richer pool of low-rank structural information. Utilizing this augmented low-rank structural information from both LRP and the proposed variants, we formulate four linear constraints on the UTIm. Employing the proposed CLRA algorithm, we derive global optimal solutions for the UTIm via these four linear constraints. Experimental results highlight the superior performance of our method over existing state-of-the-art approaches, measured in terms of both the recovery number and reduced estimation errors of UTIm.
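A minimal sketch of the basic operation behind low-rank-property methods, projecting a noisy matrix onto a fixed-rank set via truncated SVD; this is a generic illustration, not the paper's CLRA algorithm or its four linear constraints on UTIm.

```python
import numpy as np

def low_rank_project(M, rank):
    """Project matrix M onto the set of rank-`rank` matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Toy illustration: a rank-2 matrix corrupted by noise is recovered far better
# by exploiting its known low-rank structure than by using it directly.
rng = np.random.default_rng(0)
clean = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))   # exactly rank 2
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
recovered = low_rank_project(noisy, rank=2)
print(np.linalg.norm(noisy - clean), np.linalg.norm(recovered - clean))
```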

Leveraging Pretrained ASR Encoders for Effective and Efficient End-to-End Speech Intent Classification and Slot Filling

  • paper_url: http://arxiv.org/abs/2307.07057
  • repo_url: None
  • paper_authors: He Huang, Jagadeesh Balam, Boris Ginsburg
  • for: To initialize an end-to-end Conformer-Transformer model with an ASR-pretrained encoder and achieve new state-of-the-art intent classification and slot filling (SICSF) results on the SLURP dataset.
  • methods: An end-to-end Conformer-Transformer model initialized from an ASR-pretrained encoder is trained on SLURP. Self-supervised learning (SSL) pretraining is compared with ASR pretraining, showing that ASR pretraining is more effective for SICSF. To explore parameter efficiency, the encoder is frozen and Adapter modules are added, showing that parameter efficiency is only achievable with an ASR-pretrained encoder (a minimal adapter sketch follows this entry).
  • results: The model achieves new state-of-the-art results on SLURP, with 90.14% intent accuracy and 82.27% SLURP-F1. An in-depth comparison of end-to-end models with cascaded models (ASR+NLU) shows that E2E models are better in both parameter efficiency and performance unless an oracle ASR model is provided; this is the first E2E model to match the performance of cascaded models with oracle ASR.
    Abstract We study speech intent classification and slot filling (SICSF) by proposing to use an encoder pretrained on speech recognition (ASR) to initialize an end-to-end (E2E) Conformer-Transformer model, which achieves the new state-of-the-art results on the SLURP dataset, with 90.14% intent accuracy and 82.27% SLURP-F1. We compare our model with encoders pretrained on self-supervised learning (SSL), and show that ASR pretraining is much more effective than SSL for SICSF. To explore parameter efficiency, we freeze the encoder and add Adapter modules, and show that parameter efficiency is only achievable with an ASR-pretrained encoder, while the SSL encoder needs full finetuning to achieve comparable results. In addition, we provide an in-depth comparison on end-to-end models versus cascading models (ASR+NLU), and show that E2E models are better than cascaded models unless an oracle ASR model is provided. Last but not least, our model is the first E2E model that achieves the same performance as cascading models with oracle ASR. Code, checkpoints and configs are available.
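A minimal sketch of the parameter-efficient setup, a bottleneck Adapter placed after a frozen encoder layer so that only the adapter is trained; the TransformerEncoderLayer stands in for a pretrained Conformer block, and all dimensions are assumptions rather than the paper's configuration.

```python
import torch
from torch import nn

class Adapter(nn.Module):
    """Bottleneck adapter with a residual connection, the standard recipe for
    parameter-efficient tuning; dimensions here are illustrative only."""
    def __init__(self, d_model=512, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

# Hypothetical frozen encoder layer (standing in for a pretrained Conformer block).
encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
for p in encoder_layer.parameters():
    p.requires_grad = False          # freeze the pretrained weights

adapter = Adapter()                  # only these parameters would be trained
x = torch.randn(2, 100, 512)         # (batch, frames, features)
out = adapter(encoder_layer(x))
print(out.shape, sum(p.numel() for p in adapter.parameters()))
```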

Adapting an ASR Foundation Model for Spoken Language Assessment

  • paper_url: http://arxiv.org/abs/2307.09378
  • repo_url: None
  • paper_authors: Rao Ma, Mengjie Qian, Mark J. F. Gales, Kate M. Knill
  • for: To adapt the output of large-scale pretrained ASR models so that they provide accurate transcriptions for candidate assessment and feedback.
  • methods: Two solutions are proposed: fine-tuning and soft prompt tuning. Both are evaluated on public speech corpora and an English learner dataset (a minimal soft-prompt sketch follows this entry).
  • results: Experiments show that fine-tuning and soft prompt tuning can effectively alter Whisper's decoding behaviour so that it generates the exact words spoken by the candidate.
    Abstract A crucial part of an accurate and reliable spoken language assessment system is the underlying ASR model. Recently, large-scale pre-trained ASR foundation models such as Whisper have been made available. As the output of these models is designed to be human readable, punctuation is added, numbers are presented in Arabic numeric form and abbreviations are included. Additionally, these models have a tendency to skip disfluencies and hesitations in the output. Though useful for readability, these attributes are not helpful for assessing the ability of a candidate and providing feedback. Here a precise transcription of what a candidate said is needed. In this paper, we give a detailed analysis of Whisper outputs and propose two solutions: fine-tuning and soft prompt tuning. Experiments are conducted on both public speech corpora and an English learner dataset. Results show that we can effectively alter the decoding behaviour of Whisper to generate the exact words spoken in the response.
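A minimal sketch of soft prompt tuning, where a small set of learnable embeddings is prepended to a frozen decoder's inputs and is the only set of trained parameters; the toy decoder below is a stand-in and does not use Whisper's actual interface.

```python
import torch
from torch import nn

class SoftPromptedDecoder(nn.Module):
    """Generic soft prompt tuning sketch: learnable prompt embeddings are
    prepended to the frozen model's input embeddings and trained alone."""
    def __init__(self, vocab=1000, d_model=256, n_prompt=16):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.decoder = nn.TransformerEncoder(        # causal masking omitted for brevity
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.out = nn.Linear(d_model, vocab)
        self.soft_prompt = nn.Parameter(torch.randn(n_prompt, d_model) * 0.02)
        # Freeze everything except the soft prompt.
        for name, p in self.named_parameters():
            p.requires_grad = name == "soft_prompt"

    def forward(self, tokens):
        b = tokens.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(b, -1, -1)
        x = torch.cat([prompt, self.embed(tokens)], dim=1)
        return self.out(self.decoder(x))[:, prompt.size(1):]   # drop prompt positions

model = SoftPromptedDecoder()
logits = model(torch.randint(0, 1000, (2, 20)))
print(logits.shape)                                                   # torch.Size([2, 20, 1000])
print(sum(p.numel() for p in model.parameters() if p.requires_grad))  # prompt params only
```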