Title: Detection of challenging dialogue stages using acoustic signals and biosignals
Authors: Egorow, Olga
Wendemuth, Andreas
Citation: WSCG '2016: short communications proceedings: The 24th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2016, in co-operation with EUROGRAPHICS: University of West Bohemia, Plzen, Czech Republic, May 30 - June 3, 2016, p. 137-143.
Issue Date: 2016
Publisher: Václav Skala - UNION Agency
Document type: conference paper (conferenceObject)
URI: wscg.zcu.cz/WSCG2016/!!_CSRN-2602.pdf
http://hdl.handle.net/11025/29697
ISBN: 978-80-86943-58-9
ISSN: 2464-4617
Keywords (Czech): emoce;afekt;afektivní výpočetní techniky;rozpoznávání emoce;rozpoznávání akustické emoce;biosignály
Keywords (English): emotion;affect;affective computing;emotion recognition;acoustic emotion recognition;biosignals
Abstract: Emotions play an important role in human-human interaction. But they are also expressed during human-computer interaction, and should therefore be recognised and responded to in an appropriate way. Emotion recognition is thus an important feature to integrate into human-computer interaction. The task, however, is not an easy one: in "in the wild" scenarios, the occurring emotions are rarely expressive and clear. Different emotions, such as joy and surprise, often occur simultaneously or in a strongly reduced form. That is why, besides recognising clear, categorical emotions like joy and anger, it is also important to recognise more subtle affects. One example of such an affect, crucial for human-computer interaction, is the trouble a user experiences when the dialogue takes an unexpected course. A further difficulty is that a person's emotional state is not necessarily revealed in his or her voice. The same information, however, is contained in the person's physiological reactions, which are much harder to conceal and therefore represent the "true signal". For this reason, physiological signals, or biosignals, should not be disregarded. In this paper we use data from naturalistic human-computer dialogues containing challenging dialogue stages to show that it is possible to differentiate between troubled and untroubled dialogue stages in acoustic as well as in physiological signals. We achieve an unweighted average recall (UAR) of 64% using the acoustic signal, and a UAR of 88% using the biosignals.
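The abstract reports results as unweighted average recall (UAR), i.e. the mean of the per-class recalls, which weights both classes equally even when the data is imbalanced. A minimal sketch of the metric (illustrative only, not taken from the paper):

```python
# Unweighted average recall (UAR): mean of per-class recalls.
# Unlike plain accuracy, a majority-class classifier cannot score
# well on an imbalanced set such as troubled vs. untroubled dialogue.

def unweighted_average_recall(y_true, y_pred):
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        # recall for class c: correctly predicted c / all true c
        true_c = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in true_c if y_pred[i] == c)
        recalls.append(correct / len(true_c))
    return sum(recalls) / len(recalls)

# Illustrative labelling (hypothetical data, not the paper's):
# 2 "troubled" and 8 "untroubled" samples, one troubled sample missed.
y_true = ["troubled"] * 2 + ["untroubled"] * 8
y_pred = ["troubled", "untroubled"] + ["untroubled"] * 8
print(unweighted_average_recall(y_true, y_pred))  # 0.75 = (0.5 + 1.0) / 2
```

Note that plain accuracy on the same example would be 0.9, masking the missed troubled sample; UAR penalises it, which is why it is the preferred metric for imbalanced affect-recognition data.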
Rights: © Václav Skala - UNION Agency
Appears in Collections:WSCG '2016: Short Papers Proceedings

Files in This Item:
File: Egorow.pdf (full text, 370.35 kB, Adobe PDF)


