Publication result detail

Integration of Variational Autoencoder and Spatial Clustering for Adaptive Multi-Channel Neural Speech Separation

ŽMOLÍKOVÁ, K.; DELCROIX, M.; BURGET, L.; NAKATANI, T.; ČERNOCKÝ, J.

Original title

Integration of Variational Autoencoder and Spatial Clustering for Adaptive Multi-Channel Neural Speech Separation

English title

Integration of Variational Autoencoder and Spatial Clustering for Adaptive Multi-Channel Neural Speech Separation

Type

Paper in proceedings indexed in WoS or Scopus

Original abstract

In this paper, we propose a method combining a variational autoencoder model of speech with a spatial clustering approach for multichannel speech separation. The advantage of integrating spatial clustering with a spectral model was shown in several works. As the spectral model, previous works used either factorial generative models of the mixed speech or discriminative neural networks. In our work, we combine the strengths of both approaches by building a factorial model based on a generative neural network, a variational autoencoder. By doing so, we can exploit the modeling power of neural networks but, at the same time, keep a structured model. Such a model can be advantageous when adapting to new noise conditions, as only the noise part of the model needs to be modified. We show experimentally that our model significantly outperforms a previous factorial model based on a Gaussian mixture model (DOLPHIN), performs comparably to the integration of permutation invariant training with spatial clustering, and enables us to easily adapt to new noise conditions.

English abstract

In this paper, we propose a method combining a variational autoencoder model of speech with a spatial clustering approach for multichannel speech separation. The advantage of integrating spatial clustering with a spectral model was shown in several works. As the spectral model, previous works used either factorial generative models of the mixed speech or discriminative neural networks. In our work, we combine the strengths of both approaches by building a factorial model based on a generative neural network, a variational autoencoder. By doing so, we can exploit the modeling power of neural networks but, at the same time, keep a structured model. Such a model can be advantageous when adapting to new noise conditions, as only the noise part of the model needs to be modified. We show experimentally that our model significantly outperforms a previous factorial model based on a Gaussian mixture model (DOLPHIN), performs comparably to the integration of permutation invariant training with spatial clustering, and enables us to easily adapt to new noise conditions.
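
To make the abstract's central idea concrete, here is a minimal sketch of a variational autoencoder used as a spectral model of speech, written in Python with PyTorch. This is an illustrative assumption, not the authors' implementation: all names and dimensions (SpeechVAE, FEAT_DIM, LATENT_DIM, elbo_loss) are hypothetical, and the actual factorial model and its integration with spatial clustering are described in the paper itself.

# Minimal sketch of a VAE speech prior (illustrative; NOT the paper's model).
# A VAE trained on clean speech spectra gives a structured generative model:
# at separation time its decoder can serve as the speech component of a
# factorial model, while a separate noise component is re-estimated for new
# noise conditions.
import torch
import torch.nn as nn

FEAT_DIM = 257    # spectral feature dimension, e.g. STFT bins (assumption)
LATENT_DIM = 16   # size of the latent speech code (assumption)

class SpeechVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(FEAT_DIM, 128), nn.Tanh())
        self.to_mu = nn.Linear(128, LATENT_DIM)
        self.to_logvar = nn.Linear(128, LATENT_DIM)
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.Tanh(), nn.Linear(128, FEAT_DIM)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z ~ N(mu, diag(exp(logvar)))
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    # Negative ELBO: Gaussian reconstruction error plus the KL divergence
    # between the approximate posterior and the standard normal prior.
    recon = ((x - x_hat) ** 2).sum(dim=-1)
    kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1.0).sum(dim=-1)
    return (recon + kl).mean()

# Example: one training step on a batch of 32 random "frames".
model = SpeechVAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, FEAT_DIM)
x_hat, mu, logvar = model(x)
loss = elbo_loss(x, x_hat, mu, logvar)
loss.backward()
optimizer.step()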

Keywords

Multi-channel speech separation, variational autoencoder, spatial clustering, DOLPHIN

Keywords in English

Multi-channel speech separation, variational autoencoder, spatial clustering, DOLPHIN

Authors

ŽMOLÍKOVÁ, K.; DELCROIX, M.; BURGET, L.; NAKATANI, T.; ČERNOCKÝ, J.

RIV year

2022

Published

19.01.2021

Publisher

IEEE Signal Processing Society

Place

Shenzhen - virtual

ISBN

978-1-7281-7066-4

Book

2021 IEEE Spoken Language Technology Workshop, SLT 2021 - Proceedings

Pages from

889

Pages to

896

Number of pages

8

URL

https://ieeexplore.ieee.org/document/9383612

BibTeX

@inproceedings{BUT175809,
  author="Kateřina {Žmolíková} and Marc {Delcroix} and Lukáš {Burget} and Tomohiro {Nakatani} and Jan {Černocký}",
  title="Integration of Variational Autoencoder and Spatial Clustering for Adaptive Multi-Channel Neural Speech Separation",
  booktitle="2021 IEEE Spoken Language Technology Workshop, SLT 2021 - Proceedings",
  year="2021",
  pages="889--896",
  publisher="IEEE Signal Processing Society",
  address="Shenzhen - virtual",
  doi="10.1109/SLT48900.2021.9383612",
  isbn="978-1-7281-7066-4",
  url="https://ieeexplore.ieee.org/document/9383612"
}

Documents