Project detail

Augmented Multi-party Interaction

Project period: 01.01.2004 - 31.12.2006

About the project

Jointly managed by Prof. Hervé Bourlard (IDIAP, http://www.idiap.ch) and Prof. Steve Renals (University of Edinburgh, http://www.iccs.informatics.ed.ac.uk), AMI targets computer-enhanced multi-modal interaction in the context of meetings. The project aims to substantially advance the state of the art in important underpinning technologies such as human-human communication modeling, speech recognition, computer vision, and multimedia indexing and retrieval. It will also produce tools for off-line and on-line browsing of multi-modal meeting data, including meeting-structure analysis and summarization functions. The project also makes recorded and annotated multi-modal meeting data widely available to the European research community, thereby contributing to the research infrastructure in the field.


Keywords
multi-modal interaction, speech recognition, video processing, multi-modal recognition, meeting data collection, meeting data annotation


Identifier

506811-AMI

Original language

English

Investigators

Departments

Department of Computer Graphics and Multimedia
- co-beneficiary (01.01.2004 - 31.12.2006)

Results

HEROUT, A., ZEMČÍK, P. Animated Particle Rendering in DSP and FPGA. In SCCG 2004 Proceedings. Bratislava: Slovak University of Technology in Bratislava, 2004. p. 237-242. ISBN: 80-223-1918-X.

BERAN, V., POTÚČEK, I. Real-Time Reconstruction of Incomplete Human Model Using Computer Vision. In Proceedings of the 10th Conference and Competition STUDENT EEICT 2004, Volume 2. Brno: 2004. p. 298-302. ISBN: 80-214-2635-7.

ČERNOCKÝ, J., POTÚČEK, I., SUMEC, S., ZEMČÍK, P. AMI Mobile Meeting Capture and Analysis System. Washington: 2006.

BURGET, L. Complementarity of Speech Recognition Systems and System Combination. Brno: Faculty of Information Technology BUT, 2004.

SCHWARZ, P., MATĚJKA, P., ČERNOCKÝ, J. Phoneme Recognition. AMI Workshop. 2004. p. 1.

FAPŠO, M., SCHWARZ, P., SZŐKE, I., ČERNOCKÝ, J., SMRŽ, P., BURGET, L., KARAFIÁT, M. Search Engine for Information Retrieval from Multi-modal Records. Edinburgh: 2005.

GRÉZL, F. Spectral plane investigation for probabilistic features for ASR. Edinburgh: 2005. p. 82.

KARAFIÁT, M., GRÉZL, F., BURGET, L. Combination of MFCC and TRAP features for LVCSR of meeting data. Martigny: 2004.

MOTLÍČEK, P., BURGET, L., ČERNOCKÝ, J. Phoneme Recognition of Meetings Using Audio-Visual Data. AMI Workshop. Martigny: 2004.

BERAN, V. Augmented Multi-User Communication System. In Proceedings of the working conference on Advanced visual interfaces. Gallipoli: Association for Computing Machinery, 2004. p. 257-260. ISBN: 1-58113-867-9.

KARAFIÁT, M., GRÉZL, F., ČERNOCKÝ, J. TRAP based features for LVCSR of meeting data. In Proc. 8th International Conference on Spoken Language Processing. Jeju Island: Sunjin Printing Co, 2004. p. 437-440. ISSN: 1225-4111.

BURGET, L. Combination of Speech Features Using Smoothed Heteroscedastic Linear Discriminant Analysis. In Proc. 8th International Conference on Spoken Language Processing. Jeju island: Sunjin Printing Co, 2004. p. 2549-2552.

MOTLÍČEK, P., ČERNOCKÝ, J. Multimodal Phoneme Recognition of Meeting Data. In Text, Speech and Dialogue: 7th International Conference, TSD 2004, Brno, Czech Republic, September 2004, Proceedings. Lecture Notes in Computer Science. Brno: Springer Verlag, 2004. p. 379-384. ISBN: 3-540-23049-1. ISSN: 0302-9743.

FUČÍK, O., ZEMČÍK, P., TUPEC, P., CRHA, L., HEROUT, A. The Networked Photo-Enforcement and Traffic Monitoring System. In Proceedings of Engineering of Computer-Based Systems. Los Alamitos: IEEE Computer Society, 2004. p. 423-428. ISBN: 0-7695-2125-8.

HEROUT, A., ZEMČÍK, P., BERAN, V., KADLEC, J. Image and Video Processing Software Framework for Fast Application Development. In Joint AMI/PASCAL/IM2/M4 workshop. Martigny: Institute for Perceptual Artificial Intelligence, 2004.

PEČIVA, J. Collaborative Virtual Environments. In Poster at MLMI'04 workshop. Martigny: Institute for Perceptual Artificial Intelligence, 2004. p. 1.

MOTLÍČEK, P. Segmentace nahrávek živých jednání podle mluvčího. In Sborník příspěvků a prezentací akce Odborné semináře 2004. REL03V. Brno: Ústav radioelektroniky FEKT VUT, 2004.

SCHWARZ, P., MATĚJKA, P., ČERNOCKÝ, J. Phoneme Recognition from a Long Temporal Context. In Poster at Joint AMI/PASCAL/IM2/M4 Workshop on Multimodal Interaction and Related Machine Learning Algorithms. Martigny: Institute for Perceptual Artificial Intelligence, 2004. p. 1.

FOUSEK, P., SVOJANOVSKÝ, P., GRÉZL, F., HEŘMANSKÝ, H. New Nonsense Syllables Database - Analyses and Preliminary ASR Experiments. In Proc. 8th International Conference on Spoken Language Processing. Jeju Island: Sunjin Printing Co, 2004. p. 348-351. ISSN: 1225-4111.

SUMEC, S., KADLEC, J. Event Editor - The Multi-Modal Annotation Tool. In Workshop on Multimodal Interaction and Related Machine Learning Algorithms (MLMI). Edinburgh: 2005. p. 1.

POTÚČEK, I. Automatic Image Stabilization for Omni-Directional Systems. In Proceedings of the Fifth IASTED International Conference on Visualization, Imaging, and Image Processing. Benidorm: ACTA Press, 2005. p. 338-342.

SZŐKE, I., SCHWARZ, P., BURGET, L., FAPŠO, M., KARAFIÁT, M., ČERNOCKÝ, J., MATĚJKA, P. Comparison of Keyword Spotting Approaches for Informal Continuous Speech. In Interspeech'2005 - Eurospeech - 9th European Conference on Speech Communication and Technology. European Conference EUROSPEECH. Lisbon: 2005. p. 633-636. ISSN: 1018-4074.

SZŐKE, I., SCHWARZ, P., MATĚJKA, P., BURGET, L., FAPŠO, M., KARAFIÁT, M., ČERNOCKÝ, J. Comparison of Keyword Spotting Approaches for Informal Continuous Speech. In 2nd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms. Edinburgh: 2005. p. 1.

NIJHOLT, A., ZWIERS, J., PEČIVA, J. The Distributed Virtual Meeting Room Exercise. In Proceedings ICMI 2005 Workshop on Multimodal multiparty meeting processing. Trento: 2005. p. 93-99.

ZHU, Q., CHEN, B., GRÉZL, F., MORGAN, N. Improved MLP Structures for Data-Driven Feature Extraction for ASR. In Interspeech'2005 - Eurospeech - 9th European Conference on Speech Communication and Technology. European Conference EUROSPEECH. Lisbon: 2005. p. 2129. ISSN: 1018-4074.

STOLCKE, A., ANGUERA, X., BOAKYE, K., CETIN, Ö., GRÉZL, F., JANIN, A., MANDAL, A., PESKIN, B., WOOTERS, C., ZHENG, J. Further Progress in Meeting Recognition: The ICSI-SRI Spring 2005 Speech-to-Text Evaluation System. In Machine Learning for Multimodal Interaction, Second International Workshop, MLMI 2005, Edinburgh, UK, July 11-13, 2005, Revised Selected Papers. Lecture Notes in Computer Science 3869, Springer 2006. Edinburgh, Scotland: University of Edinburgh, 2005. p. 463-475. ISBN: 978-3-540-32549-9.

FAPŠO, M., SMRŽ, P., SCHWARZ, P., SZŐKE, I., BURGET, L., KARAFIÁT, M., ČERNOCKÝ, J. Systém pre efektívne vyhľadávanie v rečových databázach. In Sborník databázové konference DATAKON 2005. Brno: Masaryk University, 2005. s. 323-333. ISBN: 80-210-3813-6.

KARAFIÁT, M., BURGET, L., ČERNOCKÝ, J. Using Smoothed Heteroscedastic Linear Discriminant Analysis in Large Vocabulary Continuous Speech Recognition System. In 2nd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms. Edinburgh: 2005. p. 1.

KARAFIÁT, M. The Development of the AMI System for the Transcription of Speech in Meetings. In Machine Learning for Multimodal Interaction, Second International Workshop, MLMI 2005, Edinburgh, UK, July 11-13, 2005, Revised Selected Papers. Lecture Notes in Computer Science Volume 3869, Springer 2006. Edinburgh: University of Edinburgh, 2005. p. 344-356. ISBN: 978-3-540-32549-9.

KARAFIÁT, M. Transcription of Conference Room Meetings: an Investigation. In Interspeech'2005 - Eurospeech - 9th European Conference on Speech Communication and Technology. European Conference EUROSPEECH. Lisbon: International Speech Communication Association, 2005. p. 1. ISSN: 1018-4074.

HAIN, T.; BURGET, L.; DINES, J.; GARAU, G.; KARAFIÁT, M.; LINCOLN, M.; MCCOWAN, I.; MOORE, D.; WAN, V.; ORDELMAN, R.; RENALS, S. The 2005 AMI System for the Transcription of Speech in Meetings. In Machine Learning for Multimodal Interaction, Second International Workshop, MLMI 2005, Edinburgh, UK, July 11-13, 2005, Revised Selected Papers. Lecture Notes in Computer Science Volume 3869, Springer 2006. Edinburgh: University of Edinburgh, 2005. p. 450-462. ISBN: 978-3-540-32549-9.

ASHBY, S., BOURBAN, S., CARLETTA, J., FLYNN, M., GUILLEMOT, M., HAIN, T., KARAISKOS, V., KRAAIJ, W., KRONENTHAL, M., LATHOUD, G., LINCOLN, M., LISOWSKA, A., MCCOWAN, I., POST, W., REIDSMA, D., WELLNER, P., KADLEC, J. The AMI Meeting Corpus: A Pre-Announcement. In Workshop on Multimodal Interaction and Related Machine Learning Algorithms (MLMI). Edinburgh: 2005. p. 1.

MOTLÍČEK, P., BURGET, L., ČERNOCKÝ, J. Non-parametric Speaker Turn Segmentation of Meeting Data. In Interspeech'2005 - Eurospeech - 9th European Conference on Speech Communication and Technology. European Conference EUROSPEECH. Lisbon: International Speech Communication Association, 2005. p. 657-660. ISSN: 1018-4074.

PEČIVA, J. Omnipresent Collaborative Virtual Environments for Open Inventor Applications. In INTETAIN 2005. Springer Lecture Notes in Artificial Intelligence. Madonna di Campiglio: Springer Verlag, 2005. p. 272-276. ISBN: 3-540-30509-2.

SUMEC, S., POTÚČEK, I., ZEMČÍK, P. Automatic Mobile Meeting Room. In Proceedings of 3IA'2005 International Conference in Computer Graphics and Artificial Intelligence. Limoges: 2005. p. 171-177. ISBN: 2-914256-07-8.

MATĚJKA, P. Phoneme Recognition Tuning for Language Identification System. In Proceedings of the 11th conference STUDENT EEICT 2005. Brno: Faculty of Electrical Engineering and Communication BUT, 2005. p. 658-653. ISBN: 80-214-2890-2.

KADLEC, J., POTÚČEK, I., SUMEC, S., ZEMČÍK, P. Evaluation of Tracking and Recognition Methods. In Proceedings of the 11th conference EEICT. Brno: 2005. p. 617-622. ISBN: 80-214-2890-2.

MATĚJKA, P., SCHWARZ, P., ČERNOCKÝ, J., CHYTIL, P. Phonotactic Language Identification. In Proceedings of Radioelektronika 2005. Brno: Faculty of Electrical Engineering and Communication BUT, 2005. p. 140-143. ISBN: 80-214-2904-6.

MATĚJKA, P., SCHWARZ, P., ČERNOCKÝ, J., CHYTIL, P. Phonotactic Language Identification using High Quality Phoneme Recognition. In Interspeech'2005 - Eurospeech - 9th European Conference on Speech Communication and Technology. European Conference EUROSPEECH. Lisbon: International Speech Communication Association, 2005. p. 2237-2240. ISSN: 1018-4074.

ASHBY, S., BOURBAN, S., CARLETTA, J., FLYNN, M., GUILLEMOT, M., HAIN, T., KARAISKOS, V., KRAAIJ, W., KRONENTHAL, M., LATHOUD, G., LINCOLN, M., LISOWSKA, A., MCCOWAN, I., POST, W., REIDSMA, D., WELLNER, P., KADLEC, J. The AMI Meeting Corpus. In Measuring Behavior 2005 Proceedings Book. Wageningen: 2005. p. 1.

SZŐKE, I., SCHWARZ, P., BURGET, L., KARAFIÁT, M., ČERNOCKÝ, J. Phoneme based acoustics keyword spotting in informal continuous speech. In Radioelektronika 2005. Brno: Faculty of Electrical Engineering and Communication BUT, 2005. p. 195-198. ISBN: 80-214-2904-6.

MOTLÍČEK, P., BURGET, L., ČERNOCKÝ, J. Visual Features for Multimodal Speech Recognition. In Radioelektronika 2005. Brno: Faculty of Electrical Engineering and Communication BUT, 2005. p. 187-190. ISBN: 80-214-2904-6.

SMRŽ, P.; FAPŠO, M. Vyhledávání v záznamech přednášek. In Sborník semináře Technologie pro e-vzdělávání. Praha: České vysoké učení technické, 2005. s. 21-26. ISBN: 80-01-03274-4.

PEČIVA, J. Active Transaction Approach for Collaborative Virtual Environments. In ACM International Conference on Virtual Reality Continuum and its Applications (VRCIA). Chinese University of Hong Kong: Association for Computing Machinery, 2006. p. 171-178. ISBN: 1-59593-324-7.

MATĚJKA, P., BURGET, L., SCHWARZ, P., ČERNOCKÝ, J. NIST 2005 Language Recognition Evaluation. In Proceedings of NIST LRE 2005. Washington DC: 2006. p. 1-37.

MATĚJKA, P., BURGET, L., SCHWARZ, P., ČERNOCKÝ, J. NIST Speaker Recognition Evaluation 2006. In Proceedings of NIST Speaker Recognition Evaluation 2006. San Juan: 2006. p. 1-40.

GRÉZL, F.; FOUSEK, P. Optimizing bottle-neck features for LVCSR. 2008 IEEE International Conference on Acoustics, Speech, and Signal Processing. Las Vegas, Nevada: IEEE Signal Processing Society, 2008. p. 4729-4732. ISBN: 1-4244-1484-9.

SZŐKE, I., SCHWARZ, P., BURGET, L., KARAFIÁT, M., MATĚJKA, P., ČERNOCKÝ, J. Phoneme Based Acoustics Keyword Spotting in Informal Continuous Speech. Lecture Notes in Computer Science, 2005, vol. 2005, no. 3658, p. 302. ISSN: 0302-9743.

SMRŽ, P. Parallel Metagrammar for Closely Related Languages - A Case Study of Czech and Russian. Research on Language & Computation, 2005, vol. 3, no. 2, p. 101-128. ISSN: 1570-7075.

MOTLÍČEK, P., ČERNOCKÝ, J. Multimodal Phoneme Recognition of Meeting Data. Lecture Notes in Computer Science, 2004, vol. 2004, no. 3206, p. 379. ISSN: 0302-9743.

MOTLÍČEK, P. Visual Feature Extraction for Phoneme Recognition of Meetings. Brno: Department of Computer Graphics and Multimedia FIT BUT, 2004.

SZŐKE, I.; FAPŠO, M.: VUT-SW-Search; Lattice Spoken Term Detection toolkit (LatticeSTD). URL: http://speech.fit.vutbr.cz/en/software/lattice-spoken-term-detection-toolkit-latticestd. (software)

BERAN, V.; POTÚČEK, I.; SUMEC, S.: TETA; TETA: Tracking Evaluation Tool. The product is available in the BUT FIT web system (http://www.fit.vutbr.cz/research/prod). URL: https://www.fit.vut.cz/research/product/39/. (software)

POTÚČEK, I.; SUMEC, S.; CHALUPNÍČEK, K.; KADLEC, J.; ČERNOCKÝ, J.; ZEMČÍK, P.: Mobile meeting room. URL: https://www.fit.vut.cz/research/product/28/. (prototype)

CHALUPNÍČEK, K.; ČERNOCKÝ, J.; KAŠPÁREK, T.: Web-based system for semi-automatic checks of speech annotations. URL: https://www.fit.vut.cz/research/product/27/. (software)

BURGET, L.; GLEMBEK, O.; KARAFIÁT, M.; KONTÁR, S.; SCHWARZ, P.; ČERNOCKÝ, J.: STK Toolkit. URL: https://www.fit.vut.cz/research/product/26/. (software)

HAIN, T.; BURGET, L.; KARAFIÁT, M.: AMI Large vocabulary continuous speech recognizer. URL: https://www.fit.vut.cz/research/product/25/. (software)

FAPŠO, M.; SZŐKE, I.; SCHWARZ, P.; ČERNOCKÝ, J.: Indexation and search engine for multimodal data. URL: https://www.fit.vut.cz/research/product/24/. (software)

SCHWARZ, P.; MATĚJKA, P.; BURGET, L.; GLEMBEK, O.: VUT-SW-Search; Phoneme recognizer based on long temporal context. URL: http://speech.fit.vutbr.cz/en/software/phoneme-recognizer-based-long-temporal-context. (software)
