Full metadata record
DC field: Value [language]
dc.contributor.author: Lee, Hanhaesol
dc.contributor.author: Sa, Jaewon
dc.contributor.author: Chung, Yongwha
dc.contributor.author: Park, Daihee
dc.contributor.author: Kim, Hakjae
dc.contributor.editor: Skala, Václav
dc.date.accessioned: 2019-10-22T08:42:07Z
dc.date.available: 2019-10-22T08:42:07Z
dc.date.issued: 2019
dc.identifier.citation: WSCG 2019: full papers proceedings: 27. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, p. 17-25 [en]
dc.identifier.isbn: 978-80-86943-37-4 (CD-ROM)
dc.identifier.issn: 2464-4617 (print)
dc.identifier.issn: 2464-4625 (CD/DVD)
dc.identifier.uri: http://hdl.handle.net/11025/35605
dc.format: 9 p. [cs]
dc.format.mimetype: application/odt
dc.language.iso: en [en]
dc.publisher: Václav Skala - UNION Agency [cs]
dc.rights: © Václav Skala - UNION Agency [cs]
dc.subject: pig monitoring [cs]
dc.subject: overlapping pigs [cs]
dc.subject: separation [cs]
dc.subject: deep learning [cs]
dc.subject: YOLO [cs]
dc.subject: You Only Look Once [cs]
dc.title: Deep Learning-based Overlapping-Pigs Separation by Balancing Accuracy and Execution Time [en]
dc.type: conference paper [cs]
dc.type: conferenceObject [en]
dc.rights.access: openAccess [en]
dc.type.version: publishedVersion [en]
dc.description.abstract-translated: The crowded environment of a pig farm is highly vulnerable to the spread of infectious diseases such as foot-and-mouth disease, and studies have been conducted to automatically analyze the behavior of pigs in a crowded pig farm through a video surveillance system using a top-view camera. Although overlapping pigs must be correctly separated in order to track each individual pig, extracting the boundaries of each pig quickly and accurately is a challenging issue due to complicated occlusion patterns such as X shapes and T shapes. In this study, we propose a fast and accurate method to separate overlapping pigs that exploits the advantage of You Only Look Once (YOLO), one of the fast deep learning-based object detectors, while overcoming its disadvantage, namely that it is an axis-aligned bounding-box detector, through test-time rotation augmentation. Experimental results with occlusion patterns between overlapping pigs show that the proposed method provides better accuracy and faster processing speed than one of the state-of-the-art deep learning-based segmentation techniques, Mask R-CNN; the performance improvement over Mask R-CNN was about 11-fold in terms of the combined accuracy/processing-speed metric. [en]
dc.subject.translated: pig monitoring [en]
dc.subject.translated: overlapping-pigs [en]
dc.subject.translated: separation [en]
dc.subject.translated: deep learning [en]
dc.subject.translated: YOLO [en]
dc.subject.translated: You Only Look Once [en]
dc.identifier.doi: https://doi.org/10.24132/CSRN.2019.2901.1.3
dc.type.status: Peer-reviewed [en]
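
The abstract describes compensating for YOLO's axis-aligned bounding boxes by running detection on rotated copies of the image at test time. The sketch below is only an illustration of that general idea, not the authors' implementation: the detect_pigs callable is a hypothetical placeholder assumed to return axis-aligned boxes as (x, y, w, h, score) tuples, and the selection rule (keeping the rotation whose two top-scoring boxes overlap least) is an assumption made for demonstration.

# Illustrative sketch only: rotation-based test-time augmentation around a
# hypothetical axis-aligned detector. detect_pigs is a placeholder for a
# trained detector (e.g., a YOLO model) returning (x, y, w, h, score) boxes.
import cv2

def rotate_image(image, angle_deg):
    # Rotate the image around its center, keeping the original size.
    h, w = image.shape[:2]
    matrix = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    return cv2.warpAffine(image, matrix, (w, h))

def box_iou(a, b):
    # Intersection-over-union of two (x, y, w, h, score) boxes.
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def separate_overlapping_pigs(image, detect_pigs, angles=(0, 30, 60, 90, 120, 150)):
    # Run the detector on several rotated copies and keep the rotation in
    # which the two highest-scoring boxes overlap the least, i.e. the
    # orientation that best separates the two overlapping pigs.
    best = None
    for angle in angles:
        boxes = detect_pigs(rotate_image(image, angle))
        if len(boxes) < 2:
            continue
        top2 = sorted(boxes, key=lambda b: b[4], reverse=True)[:2]
        overlap = box_iou(top2[0], top2[1])
        if best is None or overlap < best[0]:
            best = (overlap, angle, top2)
    # Returns (overlap, best angle, boxes in the rotated frame), or None.
    return best

Mapping the selected boxes back to the original image frame would additionally require applying the inverse rotation to their corners; that step is omitted here for brevity.
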
Appears in collections: WSCG 2019: Full Papers Proceedings

Files in this item:
File | Description | Size | Format
Lee.pdf | Full text | 1.09 MB | Adobe PDF


Use this identifier to cite or link to this record: http://hdl.handle.net/11025/35605

All items in DSpace are protected by copyright, with all rights reserved.