Spatio-temporal Manifold Learning for Human Motions via Long-horizon Modeling

Wang, He, Ho, Edmond, Shum, Hubert and Zhu, Zhanxing (2019) Spatio-temporal Manifold Learning for Human Motions via Long-horizon Modeling. IEEE Transactions on Visualization and Computer Graphics. ISSN 1077-2626 (In Press)

Spatio_temporal_Recurrent_Neural_Network.pdf - Accepted Version
Official URL: https://doi.org/10.1109/tvcg.2019.2936810

Abstract

Data-driven modeling of human motions is ubiquitous in computer graphics and computer vision applications, such as synthesizing realistic motions or recognizing actions. Recent research has shown that such problems can be approached by learning a natural motion manifold using deep learning on a large amount of data, to address the shortcomings of traditional data-driven approaches. However, previous deep learning methods can be sub-optimal for two reasons. First, skeletal information has not been fully utilized for feature extraction. Unlike images, it is difficult to define spatial proximity in skeletal motions in a way that allows deep networks to be applied for feature extraction. Second, motion is time-series data with strong multi-modal temporal correlations between frames. On the one hand, a frame can be followed by several candidate frames leading to different motions; on the other hand, long-range dependencies exist, where a number of frames in the beginning correlate to a number of frames later. Ineffective temporal modeling would either under-estimate the multi-modality and variance, resulting in featureless mean motion, or over-estimate them, resulting in jittery motions, which is a major source of visual artifacts. In this paper, we propose a new deep network to tackle these challenges by creating a natural motion manifold that is versatile for many applications. The network has a new spatial component for feature extraction. It is also equipped with a new batch prediction model that predicts a large number of frames at once, such that long-term, temporally-based objective functions can be employed to correctly learn the motion multi-modality and variances. With our system, long-duration motions can be predicted/synthesized using an open-loop setup in which the motion retains its dynamics accurately. It can also be used for denoising corrupted motions and synthesizing new motions with given control signals.
We demonstrate that our system creates superior results compared to existing work across multiple applications.

Item Type: Article
Uncontrolled Keywords: Computer Graphics, Computer Animation, Character Animation, Deep Learning
Subjects: G400 Computer Science
G500 Information Systems
Department: Faculties > Engineering and Environment > Computer and Information Sciences
Depositing User: Elena Carlaw
Date Deposited: 20 Aug 2019 13:01
Last Modified: 11 Oct 2019 13:02
URI: http://nrl.northumbria.ac.uk/id/eprint/40409


