[ICASSP 2020] End-to-End Multi-speaker Speech Recognition with Transformer

Johns Hopkins University Ph.D. candidate Xuankai Chang presents his paper titled "End-to-End Multi-speaker Speech Recognition with Transformer" for the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), held virtually May 4-8, 2020. The paper was co-authored with Wangyou Zhang and Yanmin Qian (Shanghai Jiao Tong University), Jonathan Le Roux (MERL), and Shinji Watanabe (Johns Hopkins University).

Paper: https://ieeexplore.ieee.org/document/..., https://www.merl.com/publications/TR2...

Abstract: Recently, fully recurrent neural network (RNN) based end-to-end models have been proven to be effective for multi-speaker speech recognition in both the single-channel and multi-channel scenarios. In this work, we explore the use of Transformer models for these tasks by focusing on two aspects. First, we replace the RNN-based encoder-decoder in the speech recognition model with a Transformer architecture. Second, in order to use the Transformer in the masking network of the neural beamformer in the multi-channel case, we modify the self-attention component to be restricted to a segment rather than the whole sequence in order to reduce computation. Besides the model architecture improvements, we also incorporate an external dereverberation preprocessing, the weighted prediction error (WPE), enabling our model to handle reverberated signals. Experiments on the spatialized wsj1-2mix corpus show that the Transformer-based models achieve 40.9% and 25.6% relative WER reduction, down to 12.1% and 6.4% WER, under the anechoic condition in single-channel and multi-channel tasks, respectively, while in the reverberant case, our methods achieve 41.5% and 13.8% relative WER reduction, down to 16.5% and 15.2% WER.
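
The abstract mentions restricting self-attention to a segment rather than the whole sequence to cut the computation of the masking network. The sketch below is not the authors' implementation; it is a minimal illustration of the general idea of segment-restricted self-attention, where each frame attends only to frames in its own fixed-length segment via a block-diagonal mask. The names (restricted_self_attention, segment_size) and the omission of learned projections are simplifying assumptions.

```python
# Minimal sketch of segment-restricted self-attention (illustrative, not the paper's code).
# Each frame attends only within its own fixed-length segment, so the attention
# cost grows with segment size instead of the full sequence length.
import numpy as np

def restricted_self_attention(x, segment_size):
    """Scaled dot-product self-attention with a block-diagonal (segment) mask.

    x: (T, d) array of frame features; queries, keys, and values are x itself
       in this sketch (no learned projections, single head).
    """
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)                 # (T, T) attention logits
    seg_id = np.arange(T) // segment_size         # segment index of each frame
    mask = seg_id[:, None] == seg_id[None, :]     # True only inside a segment
    scores = np.where(mask, scores, -np.inf)      # block cross-segment attention
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True) # softmax over each segment
    return weights @ x                            # (T, d) attended output

# Usage example: 1000 frames of 64-dim features, attention limited to 100-frame segments.
feats = np.random.randn(1000, 64).astype(np.float32)
out = restricted_self_attention(feats, segment_size=100)
print(out.shape)  # (1000, 64)
```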
