Title: Detection of challenging dialogue stages using acoustic signals and biosignals
Authors: Egorow, Olga
Wendemuth, Andreas
Source document citation: WSCG '2016: Short Communications Proceedings: The 24th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2016 in co-operation with EUROGRAPHICS, University of West Bohemia, Plzen, Czech Republic, May 30 - June 3, 2016, p. 137-143.
Date of publication: 2016
Publisher: Václav Skala - UNION Agency
Document type: conference paper (conferenceObject)
URI: wscg.zcu.cz/WSCG2016/!!_CSRN-2602.pdf
http://hdl.handle.net/11025/29697
ISBN: 978-80-86943-58-9
ISSN: 2464-4617
Keywords: emotion; affect; affective computing; emotion recognition; acoustic emotion recognition; biosignals
Abstract: Emotions play an important role in human-human interaction, but they are also expressed during human-computer interaction and should therefore be recognised and responded to appropriately. Emotion recognition is thus an important capability to integrate into human-computer interaction. The task is not an easy one: in "in the wild" scenarios, the occurring emotions are rarely expressive and clear. Different emotions such as joy and surprise often occur simultaneously or in a very reduced form. Besides recognising clear, categorical emotions such as joy and anger, it is therefore also important to recognise more subtle affects. One example of such an affect, crucial for human-computer interaction, is the trouble a user experiences when a dialogue takes an unexpected course. A further point is that a person's emotional state is not necessarily revealed in his or her voice; the same information is, however, contained in the person's physiological reactions, which are much harder to conceal and therefore represent the "true signal". For this reason, physiological signals, or biosignals, should not be left out of consideration. In this paper we use data from naturalistic human-computer dialogues containing challenging dialogue stages to show that it is possible to differentiate between troubled and untroubled dialogue in acoustic as well as physiological signals. We achieve an unweighted average recall (UAR) of 64% using the acoustic signal, and a UAR of 88% using the biosignals.
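
Note: The UAR figures quoted above are class-balanced averages of per-class recall, a common choice when troubled and untroubled dialogue stages are unevenly represented. The following minimal Python sketch, not taken from the paper, illustrates how UAR is typically computed for a two-class troubled/untroubled task; the label coding and example predictions are hypothetical.

# Minimal illustration of unweighted average recall (UAR).
# Labels and predictions are invented for demonstration only;
# they are not the paper's data.

def unweighted_average_recall(y_true, y_pred, classes):
    recalls = []
    for c in classes:
        # recall for class c: correctly predicted c / all true instances of c
        true_c = [i for i, y in enumerate(y_true) if y == c]
        if not true_c:
            continue
        hits = sum(1 for i in true_c if y_pred[i] == c)
        recalls.append(hits / len(true_c))
    # unweighted mean over classes, regardless of class frequency
    return sum(recalls) / len(recalls)

# hypothetical coding: 0 = untroubled stage, 1 = troubled stage
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 1, 0]
print(unweighted_average_recall(y_true, y_pred, classes=[0, 1]))  # (0.75 + 0.5) / 2 = 0.625

The same quantity can be obtained with scikit-learn's macro-averaged recall, recall_score(y_true, y_pred, average='macro'), which weights every class equally just as UAR does.
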
Rights: © Václav Skala - UNION Agency
Appears in collections: WSCG '2016: Short Papers Proceedings

Files in this record:
File        Description    Size        Format
Egorow.pdf  Full text      370.35 kB   Adobe PDF


