Publication result detail

Multilingual Sequence-to-Sequence Speech Recognition: Architecture, Transfer Learning, and Language Modeling

CHO, J.; BASKAR, M.; LI, R.; WIESNER, M.; MALLIDI, S.; YALTA, N.; KARAFIÁT, M.; WATANABE, S.; HORI, T.

Original Title

Multilingual Sequence-to-Sequence Speech Recognition: Architecture, Transfer Learning, and Language Modeling

English Title

Multilingual Sequence-to-Sequence Speech Recognition: Architecture, Transfer Learning, and Language Modeling

Type

Paper in proceedings (conference paper)

Original Abstract

The sequence-to-sequence (seq2seq) approach to low-resource ASR is a relatively new direction in speech research. The approach benefits from performing model training without a lexicon or alignments. However, this poses a new problem of requiring more data compared to conventional DNN-HMM systems. In this work, we attempt to use data from 10 BABEL languages to build a multilingual seq2seq model as a prior model, and then port it to 4 other BABEL languages using a transfer learning approach. We also explore different architectures for improving the prior multilingual seq2seq model. The paper also discusses the effect of integrating a recurrent neural network language model (RNNLM) with a seq2seq model during decoding. Experimental results show that the transfer learning approach from the multilingual model yields substantial gains over monolingual models across all 4 BABEL languages. Incorporating an RNNLM also brings significant improvements in terms of %WER, and achieves recognition performance comparable to models trained with twice as much training data.

English abstract

The sequence-to-sequence (seq2seq) approach to low-resource ASR is a relatively new direction in speech research. The approach benefits from performing model training without a lexicon or alignments. However, this poses a new problem of requiring more data compared to conventional DNN-HMM systems. In this work, we attempt to use data from 10 BABEL languages to build a multilingual seq2seq model as a prior model, and then port it to 4 other BABEL languages using a transfer learning approach. We also explore different architectures for improving the prior multilingual seq2seq model. The paper also discusses the effect of integrating a recurrent neural network language model (RNNLM) with a seq2seq model during decoding. Experimental results show that the transfer learning approach from the multilingual model yields substantial gains over monolingual models across all 4 BABEL languages. Incorporating an RNNLM also brings significant improvements in terms of %WER, and achieves recognition performance comparable to models trained with twice as much training data.
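Two techniques named in the abstract lend themselves to a short illustration: porting the multilingual prior model to a new BABEL language, and integrating an RNNLM with the seq2seq model during decoding (shallow fusion). The PyTorch sketches below are illustrative only and do not reproduce the paper's ESPnet setup; all class names, layer sizes, the checkpoint path, and the LM weight are assumptions made for the example.

# Sketch 1 (assumed shapes/paths): transfer learning from a multilingual seq2seq prior.
# The prior's encoder/decoder weights are reused; the output layer is re-sized to the
# target-language character set and the whole model is then fine-tuned on target data.
import torch
import torch.nn as nn

class Seq2SeqASR(nn.Module):
    """Minimal stand-in for an attention-based encoder-decoder ASR model."""
    def __init__(self, feat_dim: int, hidden: int, vocab_size: int):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.decoder = nn.LSTMCell(hidden, hidden)
        self.output = nn.Linear(hidden, vocab_size)

prior = Seq2SeqASR(feat_dim=80, hidden=320, vocab_size=600)    # pooled 10-language charset
# prior.load_state_dict(torch.load("multilingual_prior.pt"))   # hypothetical checkpoint
target = Seq2SeqASR(feat_dim=80, hidden=320, vocab_size=75)    # target-language charset
state = {k: v for k, v in prior.state_dict().items() if not k.startswith("output.")}
target.load_state_dict(state, strict=False)                    # keep encoder/decoder, new output layer
optimizer = torch.optim.Adadelta(target.parameters())          # fine-tune on target-language data

# Sketch 2: shallow fusion of the seq2seq decoder with an external RNNLM during beam
# search, i.e. score(y) = log P_s2s(y|x) + lm_weight * log P_lm(y).
class TinyRNNLM(nn.Module):
    """Toy RNNLM used only to show the fusion arithmetic."""
    def __init__(self, vocab_size: int, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, prefix: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(self.embed(prefix))
        return torch.log_softmax(self.out(h[:, -1]), dim=-1)   # log P_lm(next | prefix)

lm = TinyRNNLM(vocab_size=75)
prefix = torch.tensor([[1, 7, 3]])                              # partial hypothesis
am_logprobs = torch.log_softmax(torch.randn(1, 75), dim=-1)     # stand-in for seq2seq decoder output
fused = am_logprobs + 0.3 * lm(prefix)                          # lm_weight = 0.3 (assumed value)
next_token = fused.argmax(dim=-1)                               # extend the beam with top candidates

In practice the fused score is applied to every active hypothesis at every decoding step, and the LM weight is tuned on a development set.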

Keywords

Automatic speech recognition (ASR), sequence-to-sequence, multilingual setup, transfer learning, language modeling

Key words in English

Automatic speech recognition (ASR), sequence-to-sequence, multilingual setup, transfer learning, language modeling

Authors

CHO, J.; BASKAR, M.; LI, R.; WIESNER, M.; MALLIDI, S.; YALTA, N.; KARAFIÁT, M.; WATANABE, S.; HORI, T.

RIV year

2020

Released

18.12.2018

Publisher

IEEE Signal Processing Society

Location

Athens

ISBN

978-1-5386-4334-1

Book

Proceedings of 2018 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY (SLT 2018)

Pages from

521

Pages to

527

Pages count

7

URL

https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8639655

BibTex

@inproceedings{BUT163489,
  author="CHO, J. and BASKAR, M. and LI, R. and WIESNER, M. and MALLIDI, S. and YALTA, N. and KARAFIÁT, M. and WATANABE, S. and HORI, T.",
  title="Multilingual Sequence-to-Sequence Speech Recognition: Architecture, Transfer Learning, and Language Modeling",
  booktitle="Proceedings of 2018 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY (SLT 2018)",
  year="2018",
  pages="521--527",
  publisher="IEEE Signal Processing Society",
  address="Athens",
  doi="10.1109/SLT.2018.8639655",
  isbn="978-1-5386-4334-1",
  url="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8639655"
}
