
Exploring self-attention mechanisms for speech separation - 2023

Research Paper on Exploring Self-Attention Mechanisms for Speech Separation

Research Area:  Machine Learning

Abstract:

Transformers have enabled impressive improvements in deep learning. They often outperform recurrent and convolutional models in many tasks while taking advantage of parallel processing. Recently, we proposed the SepFormer, which obtains state-of-the-art performance in speech separation with the WSJ0-2/3 Mix datasets. This paper studies in-depth Transformers for speech separation. In particular, we extend our previous findings on the SepFormer by providing results on more challenging noisy and noisy-reverberant datasets, such as LibriMix, WHAM!, and WHAMR!. Moreover, we extend our model to perform speech enhancement and provide experimental evidence on denoising and dereverberation tasks. Finally, we investigate, for the first time in speech separation, the use of efficient self-attention mechanisms such as Linformers, Longformers, and Reformers. We found that they reduce memory requirements significantly. For example, we show that the Reformer-based attention outperforms the popular Conv-TasNet model on the WSJ0-2Mix dataset while being faster at inference and comparable in terms of memory consumption.
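
For context, the sketch below shows standard scaled dot-product self-attention in PyTorch. This is not the SepFormer itself, only the basic mechanism it builds on; the quadratic time-by-time score matrix formed here is the memory cost that the efficient variants studied in the paper (Linformer, Longformer, Reformer) aim to reduce. Tensor shapes and sizes are illustrative assumptions.

import torch
import torch.nn.functional as F

def scaled_dot_product_self_attention(x, w_q, w_k, w_v):
    """Self-attention over a sequence x of shape (batch, time, dim).

    w_q, w_k, w_v are (dim, dim) projection matrices. The (time, time)
    score matrix computed below is what efficient attention variants
    approximate or sparsify to cut memory usage.
    """
    q = x @ w_q                                    # queries (batch, time, dim)
    k = x @ w_k                                    # keys    (batch, time, dim)
    v = x @ w_v                                    # values  (batch, time, dim)
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5    # (batch, time, time)
    weights = F.softmax(scores, dim=-1)            # attention distribution
    return weights @ v                             # (batch, time, dim)

# Illustrative example: 100 encoded frames with 64 features each.
x = torch.randn(1, 100, 64)
w = [torch.randn(64, 64) for _ in range(3)]
out = scaled_dot_product_self_attention(x, *w)
print(out.shape)  # torch.Size([1, 100, 64])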

Keywords:  

Author(s) Name:  Cem Subakan, Mirco Ravanelli, Samuele Cornell, Francois Grondin, Mirko Bronzi

Journal name:  IEEE/ACM Transactions on Audio, Speech, and Language Processing

Conference name:  

Publisher name:  IEEE

DOI:  10.1109/TASLP.2023.3282097

Volume Information:  Volume 31, Pages 2169-2180 (2023)