Full metadata record
DC field: value (language)
dc.contributor.author: Yamamoto, Tomohiro
dc.contributor.author: Okabe, Makoto
dc.contributor.author: Hijikata, Yusuke
dc.contributor.author: Onai, Rikio
dc.contributor.editor: Oliveira, Manuel M.
dc.contributor.editor: Skala, Václav
dc.date.accessioned: 2014-02-04T10:31:46Z
dc.date.available: 2014-02-04T10:31:46Z
dc.date.issued: 2013
dc.identifier.citation: WSCG 2013: Full Papers Proceedings: 21st International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision in cooperation with EUROGRAPHICS Association, p. 179-186. (en)
dc.identifier.isbn: 978-80-86943-74-9
dc.identifier.uri: http://wscg.zcu.cz/WSCG2013/!_2013-WSCG-Full-proceedings.pdf
dc.identifier.uri: http://hdl.handle.net/11025/10608
dc.description.abstract: We propose a method to synthesize a video of a user-specified piece of band music in which the performers appear to play it well. Given the music and videos of the band members as input, our system synthesizes the resulting video by semi-automatically cutting and concatenating the videos temporally so that they synchronize with the music. To compute the synchronization between music and video, we analyze the timings of their musical notes, which we estimate from the audio signals using techniques including the short-time Fourier transform (STFT), image processing, and sound source separation. Our video retrieval technique then uses the estimated note timings as the feature vector. To efficiently retrieve a part of a video that matches a part of the music, we develop a novel feature matching technique better suited to our feature vector than the dynamic time warping (DTW) algorithm. The output of our system is an Adobe After Effects project file, in which the user can further refine the result interactively. In our experiment, we recorded videos of violin, piano, guitar, bass, and drum performances, each recorded independently per instrument. We demonstrate that our system helps non-expert performers who cannot play the music well to synthesize its performance videos. We also show that, given an arbitrary piece of music as input, our system can synthesize its performance video by semi-automatically cutting and pasting existing videos. (en)
dc.format: 8 s. (cs)
dc.format.mimetype: application/pdf
dc.language.iso: en (en)
dc.publisher: Václav Skala - UNION Agency (cs)
dc.relation.ispartofseries: WSCG 2013: Full Papers Proceedings (en)
dc.rights: © Václav Skala - UNION Agency (en)
dc.subject: syntéza videa (cs)
dc.subject: hudební analýza (cs)
dc.subject: multimédia (cs)
dc.title: Semi-Automatic Synthesis of Videos of Performers Appearing to Play User-Specified Music (en)
dc.type: konferenční příspěvek (cs)
dc.type: conferenceObject (en)
dc.rights.access: openAccess (en)
dc.type.version: publishedVersion (en)
dc.subject.translated: video synthesis (en)
dc.subject.translated: musical analysis (en)
dc.subject.translated: multimedia (en)
dc.type.status: Peer-reviewed (en)
dc.type.driver: info:eu-repo/semantics/conferenceObject (en)
dc.type.driver: info:eu-repo/semantics/publishedVersion (en)
Appears in collections: WSCG 2013: Full Papers Proceedings

Files in this item:
File: Yamamoto.pdf — Description: Full text — Size: 8.45 MB — Format: Adobe PDF — View/Open


Use this identifier to cite or link to this item: http://hdl.handle.net/11025/10608

All items in DSpace are protected by copyright, with all rights reserved.