Publication result detail

Multi-Channel Speech Separation with Cross-Attention and Beamforming

MOŠNER, L.; PLCHOT, O.; PENG, J.; BURGET, L.; ČERNOCKÝ, J.

Original Title

Multi-Channel Speech Separation with Cross-Attention and Beamforming

English Title

Multi-Channel Speech Separation with Cross-Attention and Beamforming

Type

Paper in proceedings (conference paper)

Original Abstract

Single-channel source separation originally attracted more research interest, which resulted in immense progress. Multichannel (MC) separation comes with new challenges posed by adverse indoor conditions, making it an important field of study. We seek to combine promising ideas from the two worlds. First, we build MC models by extending current single-channel time-domain separators, relying on their strengths. Our approach allows reusing pre-trained models by inserting a lightweight reference channel attention (RCA) combiner, the only trained module. It comprises two blocks: the former attends to different parts of other channels w.r.t. the reference one, and the latter provides an attention-based combination of channels. Second, like many successful MC models, our system incorporates beamforming and allows for the fusion of the network and beamformer outputs. We compare our approach with SOTA models on the SMS-WSJ dataset and show better or similar performance.

English abstract

Single-channel source separation originally attracted more research interest, which resulted in immense progress. Multichannel (MC) separation comes with new challenges posed by adverse indoor conditions, making it an important field of study. We seek to combine promising ideas from the two worlds. First, we build MC models by extending current single-channel time-domain separators, relying on their strengths. Our approach allows reusing pre-trained models by inserting a lightweight reference channel attention (RCA) combiner, the only trained module. It comprises two blocks: the former attends to different parts of other channels w.r.t. the reference one, and the latter provides an attention-based combination of channels. Second, like many successful MC models, our system incorporates beamforming and allows for the fusion of the network and beamformer outputs. We compare our approach with SOTA models on the SMS-WSJ dataset and show better or similar performance.
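
The two-block RCA combiner described in the abstract can be illustrated with a short sketch. The PyTorch code below is not the authors' implementation: the class name RCACombinerSketch, the tensor layout (batch, channels, frames, features), the use of nn.MultiheadAttention, and the learned per-channel scoring are all assumptions chosen only to show the idea of cross-attention from the reference channel to the other channels, followed by an attention-based combination of channels.

# A minimal sketch (not the paper's implementation) of a two-block
# reference channel attention (RCA) combiner as described in the abstract.
# All module names, dimensions, and layout choices here are assumptions.
import torch
import torch.nn as nn


class RCACombinerSketch(nn.Module):
    def __init__(self, feat_dim: int, num_heads: int = 4):
        super().__init__()
        # Block 1: cross-attention with the reference-channel features as
        # queries and the other channels' features as keys/values.
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        # Block 2: per-channel scores for an attention-based channel combination.
        self.channel_score = nn.Linear(feat_dim, 1)

    def forward(self, feats: torch.Tensor, ref_idx: int = 0) -> torch.Tensor:
        # feats: (batch, channels, frames, feat_dim) features of all microphones
        b, c, t, d = feats.shape
        ref = feats[:, ref_idx]                                     # (b, t, d)
        others = feats[:, [i for i in range(c) if i != ref_idx]]    # (b, c-1, t, d)

        # Block 1: align each non-reference channel to the reference channel
        # by letting the reference query it.
        aligned = []
        for i in range(others.shape[1]):
            out, _ = self.cross_attn(ref, others[:, i], others[:, i])
            aligned.append(out)
        stacked = torch.stack([ref] + aligned, dim=1)               # (b, c, t, d)

        # Block 2: attention weights over channels, then a weighted sum.
        weights = torch.softmax(self.channel_score(stacked), dim=1)  # (b, c, t, 1)
        combined = (weights * stacked).sum(dim=1)                    # (b, t, d)
        return combined


# Usage: combine 6-microphone features into a single representation.
if __name__ == "__main__":
    x = torch.randn(2, 6, 100, 64)   # (batch, channels, frames, feat_dim)
    combiner = RCACombinerSketch(feat_dim=64)
    print(combiner(x).shape)          # torch.Size([2, 100, 64])

In the paper's setting, the combiner output would feed a pre-trained single-channel separator, and the network output could further be fused with a beamformer; neither step is shown in this sketch.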

Keywords

multi-channel source separation, cross-channel attention, beamforming

Key words in English

multi-channel source separation, cross-channel attention, beamforming

Authors

MOŠNER, L.; PLCHOT, O.; PENG, J.; BURGET, L.; ČERNOCKÝ, J.

RIV year

2024

Released

20.08.2023

Publisher

International Speech Communication Association

Location

Dublin

Book

Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH

ISSN

1990-9772

Periodical

Proceedings of Interspeech

Volume

2023

Number

08

State

French Republic

Pages from

1693

Pages to

1697

Pages count

5

URL

https://www.isca-speech.org/archive/interspeech_2023/mosner23_interspeech.html

BibTeX

@inproceedings{BUT185571,
  author="Ladislav {Mošner} and Oldřich {Plchot} and Junyi {Peng} and Lukáš {Burget} and Jan {Černocký}",
  title="Multi-Channel Speech Separation with Cross-Attention and Beamforming",
  booktitle="Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH",
  year="2023",
  journal="Proceedings of Interspeech",
  volume="2023",
  number="08",
  pages="1693--1697",
  publisher="International Speech Communication Association",
  address="Dublin",
  doi="10.21437/Interspeech.2023-2537",
  issn="1990-9772",
  url="https://www.isca-speech.org/archive/interspeech_2023/mosner23_interspeech.html"
}
