Title: Using Vision for Animating Virtual Humans
Authors: Kakadiaris, Ioannis A.
Citation: WSCG '2001: Conference Proceedings: The 9th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2001, University of West Bohemia, Plzen, Czech Republic, February 5-9, 2001, p. vi.
Issue Date: 2001
Publisher: University of West Bohemia
Document type: other
URI: http://wscg.zcu.cz/wscg2001/Keynote/Kakadiaris_WSCG2001.pdf
http://hdl.handle.net/11025/11299
ISBN: 80-7082-713-0
ISSN: 1213-6972
Keywords (Czech): počítačové vidění; animace; virtuální lidé
Keywords (English): computer vision; animation; virtual humans
Abstract: Automatic, non-intrusive, vision-based capture of human body motion opens new possibilities in applications that require geometric and kinematic data from individuals (e.g., virtual reality, performance measurement). If synthesized motion is to be compelling, we must create actors for computer animations and virtual environments that appear natural when they move. In this talk, I will present the computer vision and computer graphics formulations and techniques that we have developed for three-dimensional, model-based motion capture and animation of unconstrained human movement from multiple cameras. Our tracking and animation system consists of a human motion analysis component and a synthesis component. First, I will present novel analytical computer vision techniques that accurately recover the three-dimensional shape and pose of a subject's body parts. These techniques are based on the spatio-temporal analysis of the subject's silhouette in image sequences acquired simultaneously from multiple cameras. We employ physics-based deformable human body models to estimate the position and orientation of the subject's body parts during the analysis steps. This amounts to continuously minimizing the discrepancy between the occluding contour of the deformable model and the silhouette of the human in the acquired image sequences. To mitigate difficulties arising from occlusion, we employ multiple cameras. For efficiency and robustness, we have devised two criteria for the active, time-varying selection of the subset of cameras that provide the most information. These criteria depend on the visibility of the subject's body parts and the observability of their predicted motion from a specific camera. For the motion synthesis component, we have developed techniques that allow efficient and realistic animation of the subject's estimated graphical model. The advantage of our system is that the subject does not have to wear markers or special equipment.
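The contour-to-silhouette minimization described in the abstract can be illustrated with a minimal 2-D sketch. This is not the authors' actual formulation (which fits physics-based deformable 3-D body models per part); the function names, the translation-only pose, and the numerical gradient descent are all illustrative assumptions.

```python
import numpy as np

def contour_discrepancy(model_pts, silhouette_pts):
    """Mean distance from each model-contour point to its nearest
    silhouette point: a crude 2-D stand-in for the occluding-contour /
    silhouette discrepancy minimized during tracking."""
    d = np.linalg.norm(model_pts[:, None, :] - silhouette_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

def fit_pose(model_pts, silhouette_pts, steps=200, lr=0.1, eps=1e-4):
    """Recover a 2-D translation aligning the model contour with the
    observed silhouette by numerical gradient descent on the
    discrepancy (translation-only 'pose' is a simplification)."""
    t = np.zeros(2)
    for it in range(steps):
        base = contour_discrepancy(model_pts + t, silhouette_pts)
        grad = np.zeros(2)
        for i in range(2):
            dt = np.zeros(2)
            dt[i] = eps
            grad[i] = (contour_discrepancy(model_pts + t + dt,
                                           silhouette_pts) - base) / eps
        # decaying step size so the estimate settles instead of oscillating
        t -= lr * (0.99 ** it) * grad
    return t
```

For example, fitting a unit circle to a copy of itself shifted by (1.5, -0.7) recovers the shift to within the sampling density of the contour.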
Finally, I will present motion estimation and animation results demonstrating the generality and robustness of our algorithm.
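The two camera-selection criteria mentioned in the abstract (visibility of the body parts and observability of their predicted motion from a camera) can be sketched as a per-camera score over which the best subset is chosen. The scoring function, its equal weighting of the two terms, and all names here are illustrative assumptions, not the talk's actual criteria.

```python
import numpy as np

def camera_score(cam_pos, part_centers, visible_mask, predicted_vel):
    """Score one camera: a visibility term (fraction of body parts
    visible) plus an observability term (how much of each part's
    predicted motion lies perpendicular to the line of sight, i.e.
    is actually seen as image motion)."""
    visibility = visible_mask.mean()
    observability = 0.0
    for center, vel in zip(part_centers, predicted_vel):
        view = center - cam_pos
        view = view / np.linalg.norm(view)
        # component of the predicted velocity perpendicular to the view ray
        perp = vel - view * (vel @ view)
        observability += np.linalg.norm(perp)
    observability /= max(len(part_centers), 1)
    return visibility + observability

def select_cameras(cam_positions, part_centers, visible_masks,
                   predicted_vel, k=2):
    """Pick the k highest-scoring cameras for the current time step."""
    scores = [camera_score(p, part_centers, m, predicted_vel)
              for p, m in zip(cam_positions, visible_masks)]
    order = np.argsort(scores)[::-1]
    return list(order[:k])
```

With a subject moving along the x-axis, a camera placed on the z-axis (seeing that motion in its image plane) outranks a camera placed on the x-axis (seeing the motion head-on), matching the intuition behind the observability criterion.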
Rights: © University of West Bohemia
Appears in Collections:WSCG '2001: Conference proceedings

Files in This Item:
File: Kakadiaris.pdf | Description: Presentation | Size: 566.92 kB | Format: Adobe PDF


