Project detail

Robust processing of recordings for operations and security

Duration: 1.10.2020 — 30.9.2025

Funding resources

Ministerstvo vnitra ČR - PROGRAM STRATEGICKÁ PODPORA ROZVOJE BEZPEČNOSTNÍHO VÝZKUMU ČR 2019-2025 (IMPAKT 1) PODPROGRAMU 1 SPOLEČNÉ VÝZKUMNÉ PROJEKTY (BV IMP1/1VS)

On the project

Cílem projektu je zvýšení kompetencí, sjednocení a větší koordinace dvou předních českých výzkumných pracovišť v oboru dolování informací z řeči z reálných nahrávek pro oblast bezpečnosti a úzká spolupráce s bezpečnostními sbory na uvádění výsledků výzkumu do praxe vyšetřování a zpravodajství. Tento cíl zahrnuje posun v robustním automatickém rozpoznávání řeči (ASR), trénování/adaptaci ASR pro různá prostředí, určení, kdy kdo mluví v nahrávce (diarizace), a výzkum prohledávání nahrávek pomocí akustických dotazů (Query by Example).

Description in English
The aim of the project is to increase the competencies of, unify, and better coordinate two leading Czech research institutes in the field of mining information from speech in real recordings for the security domain, and to cooperate closely with security forces on putting research results into the practice of investigation and intelligence. This goal includes advances in robust automatic speech recognition (ASR), training/adaptation of ASR for different environments, determining who speaks when in a recording (diarization), and research into searching recordings using acoustic queries (Query by Example).
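
As an illustration of the Query by Example task mentioned above, the sketch below matches a short spoken query against a longer recording using subsequence dynamic time warping over MFCC features. This is only a toy example under assumed inputs ("query.wav" and "recording.wav" are hypothetical file names), not the detector developed in the project.

import librosa

# Hypothetical example files: a short spoken query and a longer recording
query, sr = librosa.load("query.wav", sr=16000)
recording, _ = librosa.load("recording.wav", sr=16000)

# Frame-level MFCC features (13 coefficients, default 512-sample hop)
Q = librosa.feature.mfcc(y=query, sr=sr, n_mfcc=13)
R = librosa.feature.mfcc(y=recording, sr=sr, n_mfcc=13)

# Subsequence DTW aligns the whole query against any contiguous stretch of the recording
D, wp = librosa.sequence.dtw(X=Q, Y=R, subseq=True)

# Recording frames covered by the best alignment, converted to seconds
start_f, end_f = int(wp[:, 1].min()), int(wp[:, 1].max())
start_t, end_t = librosa.frames_to_time([start_f, end_f], sr=sr, hop_length=512)
print(f"Best match: {start_t:.2f}-{end_t:.2f} s (cost {D[-1, :].min():.2f})")

A production system would replace the MFCC/DTW pair with learned acoustic representations and an indexed search, but the principle (scoring how well the query aligns with each region of the recording) is the same.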

Keywords
rozpoznávání řeči, robustní, nahrávky, operativa, bezpečnost

Keywords in English
speech recognition, robust, recordings, operations, security

Mark

VJ01010108

Default language

Czech

People responsible

Karafiát Martin, Ing., Ph.D. - principal person responsible
Beneš Karel, Ing., Ph.D. - fellow researcher
Brukner Jan, Ing. - fellow researcher
Kesiraju Santosh, Ph.D. - fellow researcher
Malenovský Vladimír, Ing., Ph.D. - fellow researcher
Mošner Ladislav, Ing. - fellow researcher
Pálka Petr, Ing. - fellow researcher
Plchot Oldřich, Ing., Ph.D. - fellow researcher
Sedláček Šimon, Ing. - fellow researcher
Schwarz Petr, Ing., Ph.D. - fellow researcher
Silnova Anna, M.Sc., Ph.D. - fellow researcher
Švec Ján, Ing. - fellow researcher
Veselý Karel, Ing., Ph.D. - fellow researcher
Yusuf Bolaji - fellow researcher

Units

Department of Computer Graphics and Multimedia - responsible department (23.3.2020 - not assigned)
Speech Data Mining Research Group BUT Speech@FIT - internal (23.3.2020 - 30.9.2025)
Department of Computer Graphics and Multimedia - beneficiary (23.3.2020 - 30.9.2025)

Results

KARAFIÁT, M.; DIEZ SÁNCHEZ, M.; ŠVEC, J.; ČERNOCKÝ, J.; SZŐKE, I.; ŠMÍDL, L.; ZAJÍC, Z.: SW2 Robustní diarizace. URL: https://www.fit.vut.cz/research/product/750/. (Software)

ŠMÍDL, L.; LEHEČKA, J.; ŠVEC, J.; KARAFIÁT, M.: SW5 Řečové technologie s automatickou adaptací na konkrétní prostředí. URL: https://www.fit.vut.cz/research/product/870/. (Software)

YUSUF, B.; KARAFIÁT, M.; ŠVEC, J.; ŠMÍDL, L.: SW4 Detektor akustických vzorů. URL: https://www.fit.vut.cz/research/product/835/. (Software)

KARAFIÁT, M.; LEHEČKA, J.; SZŐKE, I.; ŠMÍDL, L.; ŠVEC, J.: SW1: ASR asijského jazyka. URL: https://www.fit.vut.cz/research/product/697/. (Software)

ŠMÍDL, L.; KARAFIÁT, M.; ŠVEC, J.; LEHEČKA, J.; MOŠNER, L.; BRUKNER, J.: SW3 ASR pro akusticky náročná prostředí. URL: https://www.fit.vut.cz/research/product/795/. (Software)

ŠMÍDL, L.; KARAFIÁT, M.; LEHEČKA, J.; SZŐKE, I.; ŠVEC, J.: SW6 ASR systém východoevropského jazyka. URL: https://www.fit.vut.cz/research/product/836/. (Software)

SILNOVA, A.; SLAVÍČEK, J.; MOŠNER, L.; KLČO, M.; PLCHOT, O.; MATĚJKA, P.; PENG, J.; STAFYLAKIS, T.; BURGET, L. ABC System Description for NIST LRE 2022. Proceedings of NIST LRE 2022 Workshop. Washington DC: National Institute of Standards and Technology, 2023. p. 1-5.

ZHANG, L.; WANG, X.; COOPER, E.; DIEZ SÁNCHEZ, M.; LANDINI, F.; EVANS, N.; YAMAGISHI, J. Spoof Diarization: "What Spoofed When" in Partially Spoofed Audio. In Proceedings of Interspeech 2024. Proceedings of Interspeech. Kos: International Speech Communication Association, 2024. no. 9, p. 502-506. ISSN: 1990-9772.

PEŠÁN, J.; JUŘÍK, V.; RŮŽIČKOVÁ, A.; SVOBODA, V.; JANOUŠEK, O.; NĚMCOVÁ, A.; BOJANOVSKÁ, H.; ALDABAGHOVÁ, J.; KYSLÍK, F.; VODIČKOVÁ, K.; SODOMOVÁ, A.; BARTYS, P.; CHUDÝ, P.; ČERNOCKÝ, J. Speech production under stress for machine learning: multimodal dataset of 79 cases and 8 signals. Scientific Data, 2024, vol. 11, no. 1, p. 1-9. ISSN: 2052-4463.

KUNEŠOVÁ, M.; ZAJÍC, Z.; ŠMÍDL, L.; KARAFIÁT, M. Comparison of wav2vec 2.0 models on three speech processing tasks. International Journal of Speech Technology, 2024, vol. 27, no. 4, p. 847-859. ISSN: 1572-8110.

YUSUF, B.; ČERNOCKÝ, J.; SARAÇLAR, M. Pretraining End-to-End Keyword Search with Automatically Discovered Acoustic Units. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Kos: International Speech Communication Association, 2024. no. 9, p. 5068-5072. ISSN: 1990-9772.

BRUMMER, J.; SWART, A.; MOŠNER, L.; SILNOVA, A.; PLCHOT, O.; STAFYLAKIS, T.; BURGET, L. Probabilistic Spherical Discriminant Analysis: An Alternative to PLDA for length-normalized embeddings. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Incheon: International Speech Communication Association, 2022. no. 9, p. 1446-1450. ISSN: 1990-9772.

KARAFIÁT, M.; VESELÝ, K.; ČERNOCKÝ, J.; PROFANT, J.; NYTRA, J.; HLAVÁČEK, M.; PAVLÍČEK, T. Analysis of X-Vectors for Low-Resource Speech Recognition. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toronto, Ontario: IEEE Signal Processing Society, 2021. p. 6998-7002. ISBN: 978-1-7281-7605-5.

MATĚJKA, P.; SILNOVA, A.; SLAVÍČEK, J.; MOŠNER, L.; PLCHOT, O.; KLČO, M.; PENG, J.; STAFYLAKIS, T.; BURGET, L. Description and Analysis of ABC Submission to NIST LRE 2022. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Dublin: International Speech Communication Association, 2023. no. 08, p. 511-515. ISSN: 1990-9772.

HAN, J.; LANDINI, F.; ROHDIN, J.; SILNOVA, A.; DIEZ SÁNCHEZ, M.; BURGET, L. Leveraging Self-Supervised Learning for Speaker Diarization. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Hyderabad: IEEE Signal Processing Society, 2025. p. 1-5. ISBN: 979-8-3503-6874-1.

PENG, J.; ASHIHARA, T.; DELCROIX, M.; OCHIAI, T.; PLCHOT, O.; ARAKI, S.; ČERNOCKÝ, J. TS-SUPERB: A Target Speech Processing Benchmark for Speech Self-Supervised Learning Models. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Hyderabad: IEEE Signal Processing Society, 2025. p. 1-5. ISBN: 979-8-3503-6874-1.

ZHANG, L.; STAFYLAKIS, T.; LANDINI, F.; DIEZ SÁNCHEZ, M.; SILNOVA, A.; BURGET, L. Do End-to-End Neural Diarization Attractors Need to Encode Speaker Characteristic Information?. Proceedings of Odyssey 2024: The Speaker and Language Recognition Workshop. Québec City: International Speech Communication Association, 2024. p. 123-130.

LANDINI, F.; GLEMBEK, O.; MATĚJKA, P.; ROHDIN, J.; BURGET, L.; DIEZ SÁNCHEZ, M.; SILNOVA, A. Analysis of the BUT Diarization System for Voxconverse Challenge. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toronto, Ontario: IEEE Signal Processing Society, 2021. p. 5819-5823. ISBN: 978-1-7281-7605-5.

LANDINI, F.; DIEZ SÁNCHEZ, M.; LOZANO DÍEZ, A.; BURGET, L. Multi-Speaker and Wide-Band Simulated Conversations as Training Data for End-to-End Neural Diarization. In Proceedings of ICASSP 2023. Rhodes Island: IEEE Signal Processing Society, 2023. p. 1-5. ISBN: 978-1-7281-6327-7.

STAFYLAKIS, T.; MOŠNER, L.; KAKOUROS, S.; PLCHOT, O.; BURGET, L.; ČERNOCKÝ, J. Extracting speaker and emotion information from self-supervised speech models via channel-wise correlations. In 2022 IEEE Spoken Language Technology Workshop, SLT 2022 - Proceedings. Doha: IEEE Signal Processing Society, 2023. p. 1136-1143. ISBN: 978-1-6654-7189-3.

STAFYLAKIS, T.; MOŠNER, L.; PLCHOT, O.; ROHDIN, J.; SILNOVA, A.; BURGET, L.; ČERNOCKÝ, J. Training Speaker Embedding Extractors Using Multi-Speaker Audio with Unknown Speaker Boundaries. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Incheon: International Speech Communication Association, 2022. no. 9, p. 605-609. ISSN: 1990-9772.

PENG, J.; MOŠNER, L.; ZHANG, L.; PLCHOT, O.; STAFYLAKIS, T.; BURGET, L.; ČERNOCKÝ, J. CA-MHFA: A Context-Aware Multi-Head Factorized Attentive Pooling for SSL-Based Speaker Verification. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Hyderabad: IEEE Signal Processing Society, 2025. p. 1-5. ISBN: 979-8-3503-6874-1.

LANDINI, F.; DIEZ SÁNCHEZ, M.; STAFYLAKIS, T.; BURGET, L. DiaPer: End-to-End Neural Diarization With Perceiver-Based Attractors. IEEE Transactions on Audio Speech and Language Processing, 2024, vol. 32, no. 7, p. 3450-3465. ISSN: 1558-7916.

MOŠNER, L.; PLCHOT, O.; PENG, J.; BURGET, L.; ČERNOCKÝ, J. Multi-Channel Speech Separation with Cross-Attention and Beamforming. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Dublin: International Speech Communication Association, 2023. no. 08, p. 1693-1697. ISSN: 1990-9772.

MOŠNER, L.; PLCHOT, O.; BURGET, L.; ČERNOCKÝ, J. Multi-Channel Speaker Verification with Conv-Tasnet Based Beamformer. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Singapore: IEEE Signal Processing Society, 2022. p. 7982-7986. ISBN: 978-1-6654-0540-9.

PÁLKA, P.; LANDINI, F.; KLEMENT, D.; DIEZ SÁNCHEZ, M.; SILNOVA, A.; BURGET, L.; DELCROIX, M. Joint Training of Speaker Embedding Extractor, Speech and Overlap Detection for Diarization. Palermo: IEEE Signal Processing Society, 2025. p. 31-35. ISBN: 978-9-46-459362-4.

MOŠNER, L.; SERIZEL, R.; BURGET, L.; PLCHOT, O.; VINCENT, E.; PENG, J.; ČERNOCKÝ, J. Multi-Channel Extension of Pre-trained Models for Speaker Verification. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Kos: International Speech Communication Association, 2024. no. 9, p. 2135-2139. ISSN: 1990-9772.

SILNOVA, A.; STAFYLAKIS, T.; MOŠNER, L.; PLCHOT, O.; ROHDIN, J.; MATĚJKA, P.; BURGET, L.; GLEMBEK, O.; BRUMMER, J. Analyzing speaker verification embedding extractors and back-ends under language and channel mismatch. Proceedings of The Speaker and Language Recognition Workshop (Odyssey 2022). Beijing: International Speech Communication Association, 2022. p. 9-16.

HAN, J.; LANDINI, F.; ROHDIN, J.; DIEZ SÁNCHEZ, M.; BURGET, L.; CAO, Y.; LU, H.; ČERNOCKÝ, J. Diacorrect: Error Correction Back-End for Speaker Diarization. In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Seoul: IEEE Signal Processing Society, 2024. p. 11181-11185. ISBN: 979-8-3503-4485-1.

KLEMENT, D.; DIEZ SÁNCHEZ, M.; LANDINI, F.; BURGET, L.; SILNOVA, A.; DELCROIX, M.; TAWARA, N. Discriminative Training of VBx Diarization. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Seoul: IEEE Signal Processing Society, 2024. p. 11871-11875. ISBN: 979-8-3503-4485-1.

YUSUF, B.; SARAÇLAR, M. Written Term Detection Improves Spoken Term Detection. IEEE-ACM Transactions on Audio Speech and Language Processing, 2024, vol. 32, no. 06, p. 3213-3223. ISSN: 2329-9290.

PENG, J.; STAFYLAKIS, T.; GU, R.; PLCHOT, O.; MOŠNER, L.; BURGET, L.; ČERNOCKÝ, J. Parameter-Efficient Transfer Learning of Pre-Trained Transformer Models for Speaker Verification Using Adapters. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Rhodes Island: IEEE Signal Processing Society, 2023. p. 1-5. ISBN: 978-1-7281-6327-7.

MOŠNER, L.; PLCHOT, O.; BURGET, L.; ČERNOCKÝ, J. Multisv: Dataset for Far-Field Multi-Channel Speaker Verification. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Singapore: IEEE Signal Processing Society, 2022. p. 7977-7981. ISBN: 978-1-6654-0540-9.

ALAM, J.; BURGET, L.; GLEMBEK, O.; MATĚJKA, P.; MOŠNER, L.; PLCHOT, O.; ROHDIN, J.; SILNOVA, A.; STAFYLAKIS, T. Development of ABC systems for the 2021 edition of NIST Speaker Recognition evaluation. Proceedings of The Speaker and Language Recognition Workshop (Odyssey 2022). Beijing: International Speech Communication Association, 2022. p. 346-353.

KAKOUROS, S.; STAFYLAKIS, T.; MOŠNER, L.; BURGET, L. Speech-Based Emotion Recognition with Self-Supervised Models Using Attentive Channel-Wise Correlations and Label Smoothing. In Proceedings of ICASSP 2023. Rhodes Island: IEEE Signal Processing Society, 2023. p. 1-5. ISBN: 978-1-7281-6327-7.

ALAM, J.; BARAHONA QUIRÓS, S.; BOBOŠ, D.; BURGET, L.; CUMANI, S.; DAHMANE, M.; HAN, J.; HLAVÁČEK, M.; KODOVSKÝ, M.; LANDINI, F.; MOŠNER, L.; PÁLKA, P.; PAVLÍČEK, T.; PENG, J.; PLCHOT, O.; RAJASEKHAR, P.; ROHDIN, J.; SILNOVA, A.; STAFYLAKIS, T.; ZHANG, L. ABC SYSTEM DESCRIPTION FOR NIST SRE 2024. Proceedings of NIST SRE 2024. San Juan: National Institute of Standards and Technology, 2024. p. 1-9.

KOCOUR, M.; UMESH, J.; KARAFIÁT, M.; ŠVEC, J.; LOPEZ, F.; BENEŠ, K.; DIEZ SÁNCHEZ, M.; SZŐKE, I.; LUQUE, J.; VESELÝ, K.; BURGET, L.; ČERNOCKÝ, J. BCN2BRNO: ASR System Fusion for Albayzin 2022 Speech to Text Challenge. Proceedings of IberSpeech 2022. Granada: International Speech Communication Association, 2022. p. 276-280.

LANDINI, F.; LOZANO DÍEZ, A.; DIEZ SÁNCHEZ, M.; BURGET, L. From Simulated Mixtures to Simulated Conversations as Training Data for End-to-End Neural Diarization. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Incheon: International Speech Communication Association, 2022. no. 9, p. 5095-5099. ISSN: 1990-9772.