Project detail

Social Semantic Emotion Analysis for Innovative Multilingual Big Data Analytics Markets

Project duration: 1.4.2015 – 31.3.2017

Funding sources

European Union - Horizon 2020

About the project

Emotion analysis is central to tracking customer and user behaviour and satisfaction, which can be observed from user interaction in the form of explicit feedback (email, call centre interaction, social media comments, etc.) as well as implicit signals of approval or rejection through facial expressions, speech or other non-verbal feedback.

In Europe specifically, but increasingly also globally, an added factor is that user feedback can arrive in multiple languages, in text as well as in speech and audio-visual content. This implies different cultural backgrounds and thus different ways of producing and perceiving emotions in everyday interactions, on top of each language having its own rules for encoding and decoding emotions.

Making sense of accumulated user interaction from different data sources, modalities and languages is challenging and has not yet been fully explored in an industrial context. Commercial solutions exist, but they do not address the multilingual aspect in a robust, large-scale setting, do not scale to the huge data volumes that need to be processed, and do not integrate emotion analysis observations across data sources and modalities on a meaningful level, i.e. keeping track of the entities involved as well as the connections between them (who said what? to whom? in the context of which event, product or service?).

In MixedEmotions we will implement an integrated Big Linked Data platform for emotion analysis across heterogeneous data sources, languages and modalities, building on existing state-of-the-art tools, services and approaches that will enable the tracking of emotional aspects of user interaction and feedback on an entity level. The MixedEmotions platform will provide an integrated solution for:

- large-scale emotion analysis and fusion on heterogeneous, multilingual text, speech, video and social media data streams, leveraging open-access and proprietary data sources and exploiting social context through social network graphs (a simplified fusion sketch follows below);
- semantic-level aggregation and integration of emotion information through robust extraction of social semantic knowledge graphs for emotion analysis along multidimensional clusters.

The platform will be developed and evaluated in the context of three cross-domain pilot projects that are representative of a variety of data analytics markets: Social TV, Brand Reputation Management and Call Centre Operations. Each of the companies involved in the pilot projects has specific innovation objectives.
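To make the entity-level fusion idea concrete, the following minimal Python sketch aggregates per-modality emotion scores into a single profile per entity. It is an illustration only: the Observation structure, the fuse function and the fixed modality weights are assumptions made for exposition, not the actual MixedEmotions platform interface.

    # Minimal, illustrative sketch (assumed names, not the MixedEmotions API):
    # fuse per-modality emotion scores into one profile per tracked entity.
    from collections import defaultdict
    from dataclasses import dataclass, field

    @dataclass
    class Observation:
        entity: str      # who/what the emotion refers to (speaker, brand, ...)
        modality: str    # "text", "speech" or "video"
        language: str    # ISO 639-1 code of the source language
        emotions: dict = field(default_factory=dict)  # label -> score in [0, 1]

    # Assumed fixed weights; a deployed system would learn these per domain.
    WEIGHTS = {"text": 0.4, "speech": 0.35, "video": 0.25}

    def fuse(observations):
        """Weighted aggregation of emotion scores on an entity level."""
        profiles = defaultdict(lambda: defaultdict(float))
        total_weight = defaultdict(float)
        for obs in observations:
            w = WEIGHTS.get(obs.modality, 0.0)
            total_weight[obs.entity] += w
            for label, score in obs.emotions.items():
                profiles[obs.entity][label] += w * score
        # Normalise by the weight actually observed for each entity,
        # so entities seen in fewer modalities are not penalised.
        return {entity: {label: s / total_weight[entity]
                         for label, s in scores.items()}
                for entity, scores in profiles.items()
                if total_weight[entity] > 0}

    observations = [
        Observation("BrandX", "text", "en", {"anger": 0.7, "joy": 0.1}),
        Observation("BrandX", "speech", "es", {"anger": 0.5, "joy": 0.2}),
    ]
    print(fuse(observations))
    # {'BrandX': {'anger': 0.6066..., 'joy': 0.1466...}}

In the platform described above, such per-entity profiles would additionally be linked to the entities in the extracted social semantic knowledge graph, preserving the who-said-what-to-whom context.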

Description in Czech
The project will develop applications for analysing large multilingual and multimodal data, focusing on building emotional profiles of user behaviour. It will combine textual sources, audio/video (including spoken-language analysis in several languages), social networks, and structured data.

Keywords
social semantics, multilingual multi-modal emotion analysis, knowledge graphs, Social TV, Brand Reputation Management, Call Centre Operations

Keywords in Czech
social semantics, multilingual multimodal emotion analysis, knowledge graphs, social TV

Original language

English

Investigators

Smrž Pavel, doc. RNDr., Ph.D. - principal investigator
Černocký Jan, prof. Dr. Ing. - co-investigator
Dytrych Jaroslav, Ing., Ph.D. - co-investigator
Matějka Jiří, Ing. - co-investigator
Nedeljković Sava, Bc. - co-investigator
Otrusina Lubomír, Ing. - co-investigator
Prexta Dávid, Bc. - co-investigator
Rusiňák Petr, Ing. - co-investigator
Suchánek Jan, Ing. - co-investigator
Švaňa Miloš, Bc. - co-investigator
Zapletal Jakub, Ing. - co-investigator
Zárybnický Jakub, Ing. - co-investigator

Departments

Department of Computer Graphics and Multimedia
- responsible department (8.10.2014 - not specified)
Knowledge Technology Research Group
- internal (8.10.2014 - 31.3.2017)
Department of Computer Graphics and Multimedia
- beneficiary (8.10.2014 - 31.3.2017)

Results

DOLEŽAL, J.; SMRŽ, P.; DYTRYCH, J.; OTRUSINA, L.; KOUŘIL, J.: Semantic Enrichment Component. URL: http://sec.fit.vutbr.cz/. (Software)

HRADIŠ, M.; KOHÚT, J.: Video emotion web service. URL: http://www.fit.vutbr.cz/~ihradis/data/EmotionService-1.0-rc1.tar.bz, http://www.fit.vutbr.cz/~ihradis/prods.php?id=527&notitle=1. (Software)

DYTRYCH, J.; KOUŘIL, J.; KARÁSEK, M.; SMRŽ, P.; DOLEŽAL, J.; OTRUSINA, L.: Corpora Processing Software. URL: http://knot.fit.vutbr.cz/corpproc/. (Software)

OTRUSINA, L.; SMRŽ, P.: Blogs downloader for MixedEmotions project. URL: http://www.fit.vutbr.cz/research/prod/index.php?id=481. (Software)

POPKOVÁ, A.; POVOLNÝ, F.; MATĚJKA, P.; GLEMBEK, O.; GRÉZL, F.; ČERNOCKÝ, J. Investigation of Bottle-Neck Features for Emotion Recognition. In Text, Speech, and Dialogue: 19th International Conference, TSD 2016, Brno, Czech Republic, September 12-16, 2016, Proceedings. Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence). Cham: Springer, 2016. p. 426-434. ISSN: 0302-9743.

OTRUSINA, L.; SMRŽ, P. WTF-LOD - A New Resource for Large-Scale NER Evaluation. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16). Portorož: European Language Resources Association, 2016. p. 3299-3302. ISBN: 978-2-9517408-9-1.

BUITELAAR, P.; WOOD, I.; NEGI, S.; ARCAN, M.; MCCRAE, J.; ABELE, A.; ROBIN, C.; ANDRYUSHECHKIN, V.; ZIAD, H.; SAGHA, H.; SCHMITT, M.; SCHULLER, B.; SÁNCHEZ-RADA, J.; IGLESIAS, C.; NAVARRO, C.; GIEFER, A.; HEISE, N.; MASUCCI, V.; DANZA, F.; CATERINO, C.; SMRŽ, P.; HRADIŠ, M.; POVOLNÝ, F.; KLIMEŠ, M.; MATĚJKA, P.; TUMMARELLO, G. MixedEmotions: An Open-Source Toolbox for Multimodal Emotion Analysis. IEEE Transactions on Multimedia, 2018, vol. 20, no. 9, p. 2454-2465. ISSN: 1520-9210.

SAGHA, H.; MATĚJKA, P.; GAVRYUKOVA, M.; POVOLNÝ, F.; MARCHI, E.; SCHULLER, B. Enhancing multilingual recognition of emotion in speech by language identification. In Proceedings of the 17th Annual Conference of the International Speech Communication Association (INTERSPEECH 2016). San Francisco: International Speech Communication Association, 2016. p. 2949-2953. ISSN: 1990-9772.

DYTRYCH, J.; SMRŽ, P. Advanced User Interfaces for Semantic Annotation of Complex Relations in Text. In Agents and Artificial Intelligence. Lecture Notes in Computer Science. Cham: Springer International Publishing, 2018. p. 205-221. ISBN: 978-3-319-93581-2. ISSN: 0302-9743.

POVOLNÝ, F.; MATĚJKA, P.; HRADIŠ, M.; POPKOVÁ, A.; OTRUSINA, L.; SMRŽ, P.; WOOD, I.; ROBIN, C.; LAMEL, L. Multimodal Emotion Recognition for AVEC 2016 Challenge. In AVEC '16: Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge. Amsterdam: Association for Computing Machinery, 2016. p. 75-82. ISBN: 978-1-4503-4516-3.

POLOK, L.; ILA, V.; SMRŽ, P. 3D Reconstruction Quality Analysis and Its Acceleration on GPU Clusters. In Proceedings of European Signal Processing Conference 2016. Budapest: Institute of Electrical and Electronics Engineers, 2016. p. 1108-1112. ISBN: 978-0-9928626-6-4.

MACHÁČEK, J. BUTknot at SemEval-2016 Task 5: Supervised Machine Learning with Term Substitution Approach in Aspect Category Detection. In SemEval 2016 - 10th International Workshop on Semantic Evaluation. San Diego: Association for Computational Linguistics, 2016. p. 301-305. ISBN: 978-1-941643-95-2.

DYTRYCH, J.; SMRŽ, P. Interaction Patterns in Computer-assisted Semantic Annotation of Text - An Empirical Evaluation. In Proceedings of the 8th International Conference on Agents and Artificial Intelligence. Volume 2: ICAART. Setúbal: SciTePress - Science and Technology Publications, 2016. p. 74-84. ISBN: 978-989-758-172-4.

POLOK, L.; SMRŽ, P. Increasing Double Precision Throughput on NVIDIA Maxwell GPUs. In Proceedings of the 24th High Performance Computing Symposium. Pasadena / Los Angeles: Association for Computing Machinery, 2016. p. 146-153. ISBN: 978-1-5108-2318-1.
