Virtual Lives' Animation


Overview

Generating the animation of virtual lives is an interesting and important topic in computer animation. It covers how to create the geometry of the virtual lives, how to edit or generate their motion, and how to specify their behavior.

To edit the key-frame poses of a 3D model (of a virtual life), model deformation with a surface (MDS) is a new and efficient technique, proposed by borrowing the idea of texture mapping for free-form surfaces. By using MDS, the shape of the model deformed by a given parametric surface can be predicted easily. For models that are difficult to edit with MDS, free-form deformation (FFD) is a good choice: using automatically generated multiresolution lattices, obtained by refining a bounding box of the model into a set of finer lattices, both global and local deformations of the model can be achieved easily. To extract the skeleton of a 3D model for editing its key poses, the domain connected graph (DCG) is presented.
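As an illustration of the lattice-deformation step that FFD builds on, the following minimal sketch implements a basic trilinear Bezier FFD over the model's bounding box, in the style of Sederberg and Parry; it does not reproduce the automatically generated multiresolution lattices of the proposed method. The function name ffd and the lattice layout are assumptions made for this example.

# A minimal sketch of lattice-based free-form deformation (FFD), assuming a
# Bezier-style control lattice over the model's bounding box; the multiresolution
# refinement of the proposed method is not reproduced here.
import numpy as np
from math import comb

def ffd(points, lattice, bbox_min, bbox_max):
    """Deform `points` (N x 3) by a control lattice of shape (l+1, m+1, n+1, 3)."""
    l, m, n = (s - 1 for s in lattice.shape[:3])
    # Local (s, t, u) coordinates of each point inside the bounding box, in [0, 1].
    stu = (points - bbox_min) / (bbox_max - bbox_min)
    out = np.zeros_like(points)
    for i in range(l + 1):
        bi = comb(l, i) * stu[:, 0] ** i * (1 - stu[:, 0]) ** (l - i)
        for j in range(m + 1):
            bj = comb(m, j) * stu[:, 1] ** j * (1 - stu[:, 1]) ** (m - j)
            for k in range(n + 1):
                bk = comb(n, k) * stu[:, 2] ** k * (1 - stu[:, 2]) ** (n - k)
                out += (bi * bj * bk)[:, None] * lattice[i, j, k]
    return out

# Usage: build an undeformed 3x3x3 lattice over the unit cube, move one control
# point, and deform the model's vertices.
bbox_min, bbox_max = np.zeros(3), np.ones(3)
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 3)] * 3, indexing="ij"), axis=-1)
grid[1, 1, 2] += np.array([0.0, 0.0, 0.3])      # pull the top-center control point up
vertices = np.random.rand(100, 3)               # stand-in for the model's vertices
deformed = ffd(vertices, grid, bbox_min, bbox_max)

Refining such a lattice into a set of finer ones, as the automatically generated multiresolution lattices do, is what adds the finer control points needed for local deformation.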

For simulating human hair animation, we provide a practical approach that runs at an interactive frame rate. In this approach, the hair is modeled as a set of loosely connected particles (LCP) that serve as sampling points for the volume of the hair, covering the whole region where hair is present. The dynamics of the hair, including hair-hair interactions, are simulated through the interacting particles.
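To make the particle-based idea concrete, here is a toy sketch of one simulation step for a single strand: a chain of particles under gravity with springs between neighbors and a crude pairwise repulsion standing in for hair-hair interaction. This is only an illustrative simplification under those assumptions, not the LCP method itself, which connects particles loosely over the whole hair volume.

# A toy sketch of particle-based hair dynamics: each strand is a chain of
# particles connected by springs, plus a crude pairwise repulsion that stands in
# for hair-hair interaction. Illustrative only; not the actual LCP formulation.
import numpy as np

GRAVITY = np.array([0.0, -9.8, 0.0])

def step(pos, vel, rest_len, dt=0.002, k_spring=400.0, k_rep=1e-4, damping=0.98):
    """Advance one strand (P x 3 positions/velocities) by one explicit Euler step."""
    force = np.tile(GRAVITY, (len(pos), 1))
    # Springs between consecutive particles along the strand.
    d = pos[1:] - pos[:-1]
    length = np.linalg.norm(d, axis=1, keepdims=True)
    f = k_spring * (length - rest_len) * d / np.maximum(length, 1e-8)
    force[:-1] += f
    force[1:] -= f
    # Crude repulsion between all particle pairs (stand-in for hair-hair interaction).
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1) + 1e-8
    force += k_rep * (diff / dist[..., None] ** 3).sum(axis=1)
    vel = damping * (vel + dt * force)
    pos = pos + dt * vel
    pos[0] = np.array([0.0, 1.0, 0.0])   # the root particle stays fixed to the scalp
    vel[0] = 0.0
    return pos, vel

# Usage: one strand of 10 particles hanging from a scalp point at (0, 1, 0).
pos = np.stack([np.zeros(10), 1.0 - 0.02 * np.arange(10), np.zeros(10)], axis=1)
vel = np.zeros_like(pos)
for _ in range(1000):
    pos, vel = step(pos, vel, rest_len=0.02)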

To generate the geometry and motion of a character model together, character model creation from cel animation and character animation creation using hand-drawn sketches are proposed. These are easy-to-use approaches for creating a set of consistent 3D polygon models that correspond to the input frames of a cel animation or to hand-drawn sketches. The created models can be used in cel animation editing systems to add shadowing effects or to help produce cel animations.

To duplicate the motion of virtual lives, cloning skeleton-driven animation, including the skeleton, binding weights, and key-frame poses, is an efficient technique. With the proposed technique, users only need to specify a few common features between the source (animated) model and the target (static) ones, and our system transfers the animation data automatically; a newer version is based on consistent volume parameterization (CVP). To reduce the complexity of the animated mesh data, progressive deforming meshes are provided.
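The three ingredients being cloned (skeleton, binding weights, key-frame poses) are the inputs of standard skeleton-driven deformation. The sketch below shows how they act on a mesh using linear blend skinning, a common skinning model; the feature-correspondence and CVP transfer steps themselves are not reproduced, and the function and variable names here are illustrative assumptions.

# A minimal linear blend skinning (LBS) sketch using the three ingredients the
# cloning technique transfers: a skeleton (per-joint 4x4 bind and pose matrices),
# per-vertex binding weights, and a key-frame pose. Illustrative names only.
import numpy as np

def skin(vertices, weights, bind_inv, pose):
    """vertices: N x 3, weights: N x J, bind_inv/pose: J x 4 x 4 joint matrices."""
    v_h = np.hstack([vertices, np.ones((len(vertices), 1))])        # homogeneous coords
    # Per-joint skinning matrices map bind space to the posed space.
    skin_mats = pose @ bind_inv                                      # J x 4 x 4
    # Transform every vertex by every joint, then blend with the binding weights.
    per_joint = np.einsum("jab,nb->nja", skin_mats, v_h)            # N x J x 4
    blended = np.einsum("nj,nja->na", weights, per_joint)           # N x 4
    return blended[:, :3]

# Usage: a 2-joint chain bending a small strip of vertices by 90 degrees at joint 1.
verts = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])
w = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])                  # binding weights
bind = np.array([np.eye(4), np.eye(4)])
bind[1][:3, 3] = [1.0, 0, 0]                                         # joint 1 sits at x = 1
bind_inv = np.linalg.inv(bind)
pose = bind.copy()
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
pose[1][:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]                 # key-frame: bend joint 1
posed = skin(verts, w, bind_inv, pose)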

Conceptual Farm provides a virtual reality platform for generating and observing the behaviors of different autonomous characters (virtual lives); a gesture-based behavior authoring method is also provided.


Personnel


Publications

Progressive Deforming Meshes based on Deformation Oriented Decimation and Dynamic Connectivity Updating
Fu-Chung Huang, Bing-Yu Chen, and Yung-Yu Chuang
Proc. SCA 2006
Progressive Deforming Meshes based on Deformation Oriented Decimation
Fu-Chung Huang, Bing-Yu Chen, and Yung-Yu Chuang
SIGGRAPH 2006 Posters - accepted for Student Research Competition
Skeleton-driven Animation Transfer based on Consistent Volume Parameterization
Yen-Tuo Chang, Bing-Yu Chen, Wan-Chi Luo, and Jian-Bin Huang
Proc. CGI 2006
Domain Connected Graph: the Skeleton of a Closed 3D Shape for Animation
Fu-Che Wu, Wan-Chun Ma, Bing-Yu Chen, Rung-Huei Liang, and Ming Ouhyoung
The Visual Computer
Motion Retargetting and Transition in Different Articulated Figures
Ming-Kei Hsieh, Bing-Yu Chen, and Ming Ouhyoung
Proc. CAD/Graphics 2005
Character Animation Creation using Hand-drawn Sketches
Bing-Yu Chen, Yutaka Ono, and Tomoyuki Nishita
The Visual Computer (Proc. PG 2005)
Toward Gesture-Based Behavior Authoring
Edward Yu-Te Shen and Bing-Yu Chen
Proc. CGI 2005
Cloning Skeleton-driven Animation to Other Models
Wan-Chi Luo, Jian-Bin Huang, Bing-Yu Chen, and Pin-Chou Liu
Proc. ICS 2004
3D Character Model Creation from Cel Animation
Yutaka Ono, Bing-Yu Chen, and Tomoyuki Nishita
Proc. CW 2004
Composite Mouse Gestures: Toward an Easier Tool for Behavior Authoring
Edward Yu-Te Shen, Kuei-Yuan Zheng, and Bing-Yu Chen
SIGGRAPH 2004 Posters - accepted for Student Research Competition
Animating Hand-drawn Sketches
Yutaka Ono, Bing-Yu Chen, and Tomoyuki Nishita
SIGGRAPH 2004 Posters
Prong Features Detection of a 3D Model Based on the Watershed Algorithm
Fu-Che Wu, Bing-Yu Chen, Rung-Huei Liang, and Ming Ouhyoung
SIGGRAPH 2004 Sketches
Conceptual Farm
Shuen-Huei Guan, Sheng-Yao Cho, Yu-Te Shen, Rung-Huei Liang, Bing-Yu Chen, and Ming Ouhyoung
Proc. ICME 2004
Automatic Animation Skeleton Construction Using Repulsive Force Field
Pin-Chou Liu, Fu-Che Wu, Wan-Chun Ma, Rung-Huei Liang, and Ming Ouhyoung
Proc. PG 2003
Animating Hair with Loosely Connected Particles
Yosuke Bando, Bing-Yu Chen, and Tomoyuki Nishita
Computer Graphics Forum (Proc. EG 2003) - Best Paper Award (3rd Prize)
Skeleton Extraction of 3D Objects with Radial Basis Functions
Wan-Chun Ma, Fu-Che Wu, and Ming Ouhyoung
Proc. SMI 2003
Free-Form Deformation with Automatically Generated Multiresolution Lattices
Yutaka Ono, Bing-Yu Chen, Tomoyuki Nishita, and Jieqing Feng
Proc. CW 2002
3D Model Deformation along a Parametric Surface
Bing-Yu Chen, Yutaka Ono, Henry Johan, Masaaki Ishii, Tomoyuki Nishita, and Jieqing Feng
Proc. VIIP 2002


Support

This research is partially supported by:


last update: by robin -a-t- ntu.edu.tw