Project detail

Multi-linguality in speech technologies

Project period: 01.01.2020 – 31.08.2023

Funding sources

Ministry of Education, Youth and Sports of the Czech Republic - INTER-EXCELLENCE - INTER-ACTION subprogramme 19LTAIN

- fully funding (2020-01-01 - 2022-12-31)

About the project

Speech data mining technologies and speech-based human-machine interfaces have witnessed significant advances in the past decade, and numerous applications have been successfully commercialized. However, they usually work correctly only in favorable scenarios: in languages with an abundance of training data and in relatively clean environments such as an office or an apartment. In fast-developing large markets such as the Indian one, severe problems make the exploitation of speech difficult: a multitude of languages (some of them with limited or missing resources), highly noisy conditions (a lot of business is simply done on the streets of Indian cities), and highly variable numbers of speakers in a conversation (from the usual two up to whole families). These factors complicate the development of automatic speech recognition (ASR), speaker recognition (SR), and speaker diarization (determining who spoke when, SD). In this project, two established academic laboratories with significant track records in multi-lingual ASR, robust SR, and SD, Brno University of Technology (BUT) and IIT Madras (IIT-M), have teamed up with Samsung R&D Institute India-Bangalore (SRI-B), an important player on the Indian and global personal electronics markets, and propose significant advances in these technologies, notably in multi-lingual low-resource ASR. While BUT and IIT-M will provide top-level speech mining research (building, among others, on the U.S. IARPA Babel and MATERIAL programs, on victories in the IARPA ASpIRE evaluation and in the Interspeech 2018 Low Resource Speech Recognition Challenge for Indian Languages, and on the Indian MANDI project), SRI-B will provide data and industrial guidance and will produce technology demonstrators.
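
The project's central technical theme, multi-lingual low-resource ASR via transfer learning (see the keywords below), can be illustrated with a minimal, hypothetical sketch: a wav2vec 2.0 encoder pretrained on speech from 53 languages (XLSR-53) is reused, and only a fresh CTC output head is trained on the limited target-language data. The checkpoint name, vocabulary size, and dummy training step below are illustrative assumptions, not the project's actual recipe.

    # Hypothetical sketch: transfer learning for low-resource multilingual ASR.
    # A multilingual pretrained wav2vec 2.0 encoder is reused; only the new
    # CTC head is trained for the target language. All settings are illustrative.
    import torch
    from transformers import Wav2Vec2ForCTC

    model = Wav2Vec2ForCTC.from_pretrained(
        "facebook/wav2vec2-large-xlsr-53",  # encoder pretrained on 53 languages
        vocab_size=40,                      # assumed target-language character set
        ctc_loss_reduction="mean",
    )
    model.freeze_feature_encoder()          # keep low-level acoustic layers fixed

    # One dummy optimization step: 2 s of 16 kHz audio with a short transcript.
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
    audio = torch.randn(1, 32000)               # placeholder waveform
    labels = torch.tensor([[4, 12, 7, 1, 9]])   # placeholder token ids (< vocab_size)
    loss = model(input_values=audio, labels=labels).loss
    loss.backward()
    optimizer.step()
    print(f"CTC loss: {loss.item():.3f}")

Freezing the convolutional feature encoder is a common choice when target-language data is scarce, as its low-level acoustic representations transfer well across languages; the higher transformer layers and the CTC head adapt to the new language.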

Keywords

multi-linguality, speech recognition, machine learning, data, transfer learning

Project code

LTAIN19087

Original language

Czech

Investigators

Černocký Jan, prof. Dr. Ing. - principal investigator
Egorova Ekaterina, Ing., Ph.D. - co-investigator
Skácel Miroslav, Ing. - co-investigator

Departments

Department of Computer Graphics and Multimedia
- beneficiary (19.07.2019 - 31.08.2023)

Results

LOZANO DÍEZ, A.; SILNOVA, A.; PULUGUNDLA, B.; ROHDIN, J.; VESELÝ, K.; BURGET, L.; PLCHOT, O.; GLEMBEK, O.; NOVOTNÝ, O.; MATĚJKA, P. BUT Text-Dependent Speaker Verification System for SdSV Challenge 2020. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Shanghai: International Speech Communication Association, 2020. p. 761-765. ISSN: 1990-9772.

PENG, J.; PLCHOT, O.; STAFYLAKIS, T.; MOŠNER, L.; BURGET, L.; ČERNOCKÝ, J. Improving Speaker Verification with Self-Pretrained Transformer Models. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Dublin: International Speech Communication Association, 2023. p. 5361-5365. ISSN: 1990-9772.

YUSUF, B.; ONDEL YANG, L.; BURGET, L.; ČERNOCKÝ, J.; SARAÇLAR, M. A Hierarchical Subspace Model for Language-Attuned Acoustic Unit Discovery. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toronto, Ontario: IEEE Signal Processing Society, 2021. p. 3710-3714. ISBN: 978-1-7281-7605-5.

BASKAR, M.; BURGET, L.; WATANABE, S.; ASTUDILLO, R.; ČERNOCKÝ, J. EAT: Enhanced ASR-TTS for Self-Supervised Speech Recognition. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toronto, Ontario: IEEE Signal Processing Society, 2021. p. 6753-6757. ISBN: 978-1-7281-7605-5.

ŽMOLÍKOVÁ, K.; DELCROIX, M.; BURGET, L.; NAKATANI, T.; ČERNOCKÝ, J. Integration of Variational Autoencoder and Spatial Clustering for Adaptive Multi-Channel Neural Speech Separation. In 2021 IEEE Spoken Language Technology Workshop, SLT 2021 - Proceedings. Shenzhen - virtual: IEEE Signal Processing Society, 2021. p. 889-896. ISBN: 978-1-7281-7066-4.

KOCOUR, M.; CÁMBARA, G.; LUQUE, J.; BONET, D.; FARRÚS, M.; KARAFIÁT, M.; VESELÝ, K.; ČERNOCKÝ, J. BCN2BRNO: ASR System Fusion for Albayzin 2020 Speech to Text Challenge. Proceedings of IberSPEECH 2021. Valladolid: International Speech Communication Association, 2021. p. 113-117.

PENG, J.; QU, X.; WANG, J.; GU, R.; XIAO, J.; BURGET, L.; ČERNOCKÝ, J. ICSpk: Interpretable Complex Speaker Embedding Extractor from Raw Waveform. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Brno: International Speech Communication Association, 2021. p. 511-515. ISSN: 1990-9772.

ŽMOLÍKOVÁ, K.; DELCROIX, M.; RAJ, D.; WATANABE, S.; ČERNOCKÝ, J. Auxiliary Loss Function for Target Speech Extraction and Recognition with Weak Supervision Based on Speaker Characteristics. In Proceedings of 2021 Interspeech. Proceedings of Interspeech. Brno: International Speech Communication Association, 2021. p. 1464-1468. ISSN: 1990-9772.

PENG, J.; QU, X.; GU, R.; WANG, J.; XIAO, J.; BURGET, L.; ČERNOCKÝ, J. Effective Phase Encoding for End-To-End Speaker Verification. In Proceedings of Interspeech 2021. Proceedings of Interspeech. Brno: International Speech Communication Association, 2021. p. 2366-2370. ISSN: 1990-9772.

MOŠNER, L.; PLCHOT, O.; BURGET, L.; ČERNOCKÝ, J. MultiSV: Dataset for Far-Field Multi-Channel Speaker Verification. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Singapore: IEEE Signal Processing Society, 2022. p. 7977-7981. ISBN: 978-1-6654-0540-9.

MOŠNER, L.; PLCHOT, O.; BURGET, L.; ČERNOCKÝ, J. Multi-Channel Speaker Verification with Conv-TasNet Based Beamformer. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Singapore: IEEE Signal Processing Society, 2022. p. 7982-7986. ISBN: 978-1-6654-0540-9.

HAN, J.; LONG, Y.; BURGET, L.; ČERNOCKÝ, J. DPCCN: Densely-Connected Pyramid Complex Convolutional Network for Robust Speech Separation and Extraction. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Singapore: IEEE Signal Processing Society, 2022. p. 7292-7296. ISBN: 978-1-6654-0540-9.

EGOROVA, E.; VYDANA, H.; BURGET, L.; ČERNOCKÝ, J. Spelling-Aware Word-Based End-to-End ASR. IEEE SIGNAL PROCESSING LETTERS, 2022, vol. 29, p. 1729-1733. ISSN: 1558-2361.

SILNOVA, A.; STAFYLAKIS, T.; MOŠNER, L.; PLCHOT, O.; ROHDIN, J.; MATĚJKA, P.; BURGET, L.; GLEMBEK, O.; BRUMMER, J. Analyzing speaker verification embedding extractors and back-ends under language and channel mismatch. Proceedings of The Speaker and Language Recognition Workshop (Odyssey 2022). Beijing: International Speech Communication Association, 2022. p. 9-16.

PENG, J.; ZHANG, C.; ČERNOCKÝ, J.; YU, D. Progressive contrastive learning for self-supervised text-independent speaker verification. Proceedings of The Speaker and Language Recognition Workshop (Odyssey 2022). Beijing: International Speech Communication Association, 2022. p. 17-24.

ALAM, J.; BURGET, L.; GLEMBEK, O.; MATĚJKA, P.; MOŠNER, L.; PLCHOT, O.; ROHDIN, J.; SILNOVA, A.; STAFYLAKIS, T. Development of ABC systems for the 2021 edition of NIST Speaker Recognition evaluation. Proceedings of The Speaker and Language Recognition Workshop (Odyssey 2022). Beijing: International Speech Communication Association, 2022. p. 346-353.

PENG, J.; GU, R.; MOŠNER, L.; PLCHOT, O.; BURGET, L.; ČERNOCKÝ, J. Learnable Sparse Filterbank for Speaker Verification. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Incheon: International Speech Communication Association, 2022. p. 5110-5114. ISSN: 1990-9772.

KOCOUR, M.; ŽMOLÍKOVÁ, K.; ONDEL YANG, L.; ŠVEC, J.; DELCROIX, M.; OCHIAI, T.; BURGET, L.; ČERNOCKÝ, J. Revisiting joint decoding based multi-talker speech recognition with DNN acoustic model. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Incheon: International Speech Communication Association, 2022. p. 4955-4959. ISSN: 1990-9772.

BASKAR, M.; HERZIG, T.; NGUYEN, D.; DIEZ SÁNCHEZ, M.; POLZEHL, T.; BURGET, L.; ČERNOCKÝ, J. Speaker adaptation for Wav2vec2 based dysarthric ASR. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Incheon: International Speech Communication Association, 2022. p. 3403-3407. ISSN: 1990-9772.

ŠVEC, J.; ŽMOLÍKOVÁ, K.; KOCOUR, M.; DELCROIX, M.; OCHIAI, T.; MOŠNER, L.; ČERNOCKÝ, J. Analysis of impact of emotions on target speech extraction and speech separation. In Proceedings of The 17th International Workshop on Acoustic Signal Enhancement (IWAENC 2022). Bamberg: IEEE Signal Processing Society, 2022. p. 1-5. ISBN: 978-1-6654-6867-1.

DE BENITO GORRON, D.; ŽMOLÍKOVÁ, K.; TORRE TOLEDANO, D. Source Separation for Sound Event Detection in domestic environments using jointly trained models. In Proceedings of The 17th International Workshop on Acoustic Signal Enhancement (IWAENC 2022). Bamberg: IEEE Signal Processing Society, 2022. p. 1-5. ISBN: 978-1-6654-6867-1.

KOCOUR, M.; UMESH, J.; KARAFIÁT, M.; ŠVEC, J.; LOPEZ, F.; BENEŠ, K.; DIEZ SÁNCHEZ, M.; SZŐKE, I.; LUQUE, J.; VESELÝ, K.; BURGET, L.; ČERNOCKÝ, J. BCN2BRNO: ASR System Fusion for Albayzin 2022 Speech to Text Challenge. Proceedings of IberSPEECH 2022. Granada: International Speech Communication Association, 2022. p. 276-280.

LANDINI, F.; GLEMBEK, O.; MATĚJKA, P.; ROHDIN, J.; BURGET, L.; DIEZ SÁNCHEZ, M.; SILNOVA, A. Analysis of the BUT Diarization System for VoxConverse Challenge. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toronto, Ontario: IEEE Signal Processing Society, 2021. p. 5819-5823. ISBN: 978-1-7281-7605-5.

PENG, J.; PLCHOT, O.; STAFYLAKIS, T.; MOŠNER, L.; BURGET, L.; ČERNOCKÝ, J. An attention-based backend allowing efficient fine-tuning of transformer models for speaker verification. In 2022 IEEE Spoken Language Technology Workshop, SLT 2022 - Proceedings. Doha: IEEE Signal Processing Society, 2023. p. 555-562. ISBN: 978-1-6654-7189-3.

STAFYLAKIS, T.; MOŠNER, L.; KAKOUROS, S.; PLCHOT, O.; BURGET, L.; ČERNOCKÝ, J. Extracting speaker and emotion information from self-supervised speech models via channel-wise correlations. In 2022 IEEE Spoken Language Technology Workshop, SLT 2022 - Proceedings. Doha: IEEE Signal Processing Society, 2023. p. 1136-1143. ISBN: 978-1-6654-7189-3.

SILNOVA, A.; SLAVÍČEK, J.; MOŠNER, L.; KLČO, M.; PLCHOT, O.; MATĚJKA, P.; PENG, J.; STAFYLAKIS, T.; BURGET, L. ABC System Description for NIST LRE 2022. Proceedings of NIST LRE 2022 Workshop. Washington DC: National Institute of Standards and Technology, 2023. p. 1-5.

KESIRAJU, S.; BENEŠ, K.; TIKHONOV, M.; ČERNOCKÝ, J. BUT Systems for IWSLT 2023 Marathi - Hindi Low Resource Speech Translation Task. In 20th International Conference on Spoken Language Translation, IWSLT 2023 - Proceedings of the Conference. Toronto (in-person and online): Association for Computational Linguistics, 2023. p. 227-234. ISBN: 978-1-959429-84-5.

PENG, J.; STAFYLAKIS, T.; GU, R.; PLCHOT, O.; MOŠNER, L.; BURGET, L.; ČERNOCKÝ, J. Parameter-Efficient Transfer Learning of Pre-Trained Transformer Models for Speaker Verification Using Adapters. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Rhodes Island: IEEE Signal Processing Society, 2023. p. 1-5. ISBN: 978-1-7281-6327-7.

ŽMOLÍKOVÁ, K.; DELCROIX, M.; OCHIAI, T.; ČERNOCKÝ, J.; KINOSHITA, K.; YU, D. Neural Target Speech Extraction: An overview. IEEE SIGNAL PROCESSING MAGAZINE, 2023, vol. 40, no. 3, p. 8-29. ISSN: 1558-0792.

MOŠNER, L.; PLCHOT, O.; PENG, J.; BURGET, L.; ČERNOCKÝ, J. Multi-Channel Speech Separation with Cross-Attention and Beamforming. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Dublin: International Speech Communication Association, 2023. p. 1693-1697. ISSN: 1990-9772.

MATĚJKA, P.; SILNOVA, A.; SLAVÍČEK, J.; MOŠNER, L.; PLCHOT, O.; KLČO, M.; PENG, J.; STAFYLAKIS, T.; BURGET, L. Description and Analysis of ABC Submission to NIST LRE 2022. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Dublin: International Speech Communication Association, 2023. p. 511-515. ISSN: 1990-9772.

KOCOUR, M.; UMESH, J.; KARAFIÁT, M.; ŠVEC, J.; LOPEZ, F.; BENEŠ, K.; DIEZ SÁNCHEZ, M.; SZŐKE, I.; LUQUE, J.; VESELÝ, K.; BURGET, L.; ČERNOCKÝ, J.: R1-LTAIN19087; BCN2BRNO Automatic speech recognition system for Albayzin 2022 Speech to Text Challenge. Contact: https://www.fit.vut.cz/person/cernocky/ or https://www.fit.vut.cz/person/ikocour/. URL: https://www.fit.vut.cz/research/product/797/. (software)