Show simple item record

dc.contributor.author	Buitelaar, Paul
dc.contributor.author	Wood, Ian D.
dc.contributor.author	Negi, Sapna
dc.contributor.author	Arcan, Mihael
dc.contributor.author	McCrae, John P.
dc.contributor.author	Abele, Andrejs
dc.contributor.author	Robin, Cécile
dc.contributor.author	Andryushechkin, Vladimir
dc.contributor.author	Ziad, Housam
dc.contributor.author	Sagha, Hesam
dc.contributor.author	Schmitt, Maximilian
dc.contributor.author	Schuller, Björn W.
dc.contributor.author	Sánchez-Rada, J. Fernando
dc.contributor.author	Iglesias, Carlos A.
dc.contributor.author	Navarro, Carlos
dc.contributor.author	Giefer, Andreas
dc.contributor.author	Heise, Nicolaus
dc.contributor.author	Masucci, Vincenzo
dc.contributor.author	Danza, Francesco A.
dc.contributor.author	Caterino, Ciro
dc.contributor.author	Smrž, Pavel
dc.contributor.author	Hradiš, Michal
dc.contributor.author	Povolný, Filip
dc.contributor.author	Klimeš, Marek
dc.contributor.author	Matějka, Pavel
dc.contributor.author	Tummarello, Giovanni
dc.identifier.citation	Buitelaar, P. and Wood, I. D. and Negi, S. and Arcan, M. and McCrae, J. P. and Abele, A. and Robin, C. and Andryushechkin, V. and Ziad, H. and Sagha, H. and Schmitt, M. and Schuller, B. W. and Sanchez, J. F. and Iglesias, C. A. and Navarro, C. and Giefer, A. and Heise, N. and Masucci, V. and Danza, F. A. and Caterino, C. and Smrz, P. and Hradis, M. and Povolny, F. and Klimes, M. and Matejka, P. and Tummarello, G. (2018). MixedEmotions: An Open-Source Toolbox for Multimodal Emotion Analysis. IEEE Transactions on Multimedia, 20(9), 2454-2465. doi: 10.1109/TMM.2018.2798287	en_IE
dc.description.abstract	Recently, there has been an increasing tendency to embed emotion-recognition functionality for user-generated content, inferring richer profiles of users or content for use in automated systems such as call-center operations, recommendation, and assistive technologies. To date, however, adding this functionality has been a tedious, costly, and time-consuming effort: one had to seek out different tools to suit one's needs and provide different interfaces to use them. The MixedEmotions toolbox addresses this need by providing tools for text, audio, video, and linked-data processing within an easily integrable plug-and-play platform. These functionalities include: (i) for text processing: emotion and sentiment recognition; (ii) for audio processing: emotion, age, and gender recognition; (iii) for video processing: face detection and tracking, emotion recognition, facial landmark localization, head pose estimation, face alignment, and body pose estimation; and (iv) for linked data: knowledge graph. Moreover, the MixedEmotions Toolbox is open source and free. In this article, we present this toolbox in the context of the existing landscape and provide a range of detailed benchmarks on standardized test beds showing its state-of-the-art performance. Furthermore, three real-world use cases show its effectiveness, namely emotion-driven smart TV, call-center monitoring, and brand reputation analysis.	en_IE
dc.relation.ispartof	IEEE Transactions on Multimedia	en
dc.subject	Emotion analysis	en_IE
dc.subject	Open source toolbox	en_IE
dc.subject	Affective computing	en_IE
dc.subject	Linked data	en_IE
dc.subject	Audio processing	en_IE
dc.subject	Text processing	en_IE
dc.subject	Video processing	en_IE
dc.title	MixedEmotions: An open-source toolbox for multi-modal emotion analysis	en_IE
dc.local.contact	Ian Wood. Email:

Attribution-NonCommercial-NoDerivs 3.0 Ireland
This item is available under the Attribution-NonCommercial-NoDerivs 3.0 Ireland licence. No item may be reproduced for commercial purposes. Please refer to the publisher's URL where this is made available, or to notes contained in the item itself. Other terms may apply.
