Full metadata record
DC Field | Value | Language
dc.contributor.author | Egorow, Olga | -
dc.contributor.author | Wendemuth, Andreas | -
dc.contributor.editor | Skala, Václav | -
dc.date.accessioned | 2018-05-18T08:35:40Z | -
dc.date.available | 2018-05-18T08:35:40Z | -
dc.date.issued | 2016 | -
dc.identifier.citation | WSCG '2016: short communications proceedings: The 24th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2016 in co-operation with EUROGRAPHICS: University of West Bohemia, Plzen, Czech Republic, May 30 - June 3, 2016, p. 137-143. | en
dc.identifier.isbn | 978-80-86943-58-9 | -
dc.identifier.issn | 2464-4617 | -
dc.identifier.uri | wscg.zcu.cz/WSCG2016/!!_CSRN-2602.pdf | -
dc.identifier.uri | http://hdl.handle.net/11025/29697 | -
dc.description.abstract | Emotions play an important role in human-human interaction, but they are also expressed during human-computer interaction and should therefore be recognised and responded to appropriately. Emotion recognition is thus an important feature to integrate into human-computer interaction. The task is not an easy one: in "in the wild" scenarios, the occurring emotions are rarely expressive and clear. Different emotions such as joy and surprise often occur simultaneously or in a strongly reduced form. Besides recognising clear, categorical emotions such as joy and anger, it is therefore also important to recognise more subtle affects. One example of such an affect, crucial for human-computer interaction, is the trouble a user experiences when a dialogue takes an unexpected course. Moreover, a person's emotional state is not necessarily revealed in his or her voice, but the same information is contained in the person's physiological reactions, which are much harder to conceal and therefore represent the "true signal". Physiological signals, or biosignals, should consequently not be overlooked. In this paper we use data from naturalistic human-computer dialogues containing challenging dialogue stages to show that it is possible to differentiate between troubled and untroubled dialogue in acoustic as well as in physiological signals. We achieve an unweighted average recall (UAR) of 64% using the acoustic signal, and a UAR of 88% using the biosignals. | en
dc.format | 7 s. | cs
dc.format.mimetype | application/pdf | -
dc.language.iso | en | en
dc.publisher | Václav Skala - UNION Agency | en
dc.relation.ispartofseries | WSCG '2016: short communications proceedings | en
dc.rights | © Václav Skala - UNION Agency | cs
dc.subject | emoce | cs
dc.subject | afekt | cs
dc.subject | afektivní výpočetní techniky | cs
dc.subject | rozpoznávání emoce | cs
dc.subject | rozpoznávání akustické emoce | cs
dc.subject | biosignály | cs
dc.title | Detection of challenging dialogue stages using acoustic signals and biosignals | en
dc.type | konferenční příspěvek | cs
dc.type | conferenceObject | en
dc.rights.access | openAccess | en
dc.type.version | publishedVersion | en
dc.subject.translated | emotion | en
dc.subject.translated | affect | en
dc.subject.translated | affective computing | en
dc.subject.translated | emotion recognition | en
dc.subject.translated | acoustic emotion recognition | en
dc.subject.translated | biosignals | en
dc.type.status | Peer-reviewed | en
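
The abstract reports results as unweighted average recall (UAR), i.e. the mean of the per-class recalls, which weights the troubled and untroubled classes equally regardless of how many samples each class contributes. A minimal Python sketch of that metric follows; the function name and the toy labels are illustrative only and are not taken from the paper.

```python
from collections import defaultdict

def unweighted_average_recall(y_true, y_pred):
    """Mean of per-class recalls: every class counts equally,
    independent of its number of samples (unlike plain accuracy)."""
    correct = defaultdict(int)  # per-class count of correct predictions
    total = defaultdict(int)    # per-class count of true samples
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Hypothetical example: troubled (1) vs. untroubled (0) dialogue labels.
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 1, 0]
print(unweighted_average_recall(y_true, y_pred))  # (3/4 + 1/2) / 2 = 0.625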
Appears in collections: WSCG '2016: Short Papers Proceedings

Files in this item:
File | Description | Size | Format
Egorow.pdf | Full text | 370.35 kB | Adobe PDF | View/Open


Use this identifier to cite or link to this item: http://hdl.handle.net/11025/29697

All items in DSpace are protected by copyright, with all rights reserved.