Stef, Andreea, Perera, Kaveen, Shum, Hubert P. H. and Ho, Edmond S. L. (2019) Synthesizing Expressive Facial and Speech Animation by Text-to-IPA Translation with Emotion Control. In: 2018 12th International Conference on Software, Knowledge, Information Management & Applications (SKIMA): Phnom Penh, Cambodia, 3–5 December 2018. IEEE, Piscataway, NJ, pp. 70-78. ISBN 9781538691427, 9781538691410
Full text: Stef et al - Synthesizing Expressive Facial and Speech AAM.pdf (Accepted Version, 3MB)
Abstract
Given the complexity of the human facial anatomy, animating facial expressions and lip movements for speech is a very time-consuming and tedious task. In this paper, a new text-to-animation framework for facial animation synthesis is proposed. The core idea is to improve the expressiveness of lip-sync animation by incorporating facial expressions in 3D animated characters. This idea is realized as a plug-in for Autodesk Maya, one of the most popular animation platforms in the industry, so that professional animators can effectively apply the method in their existing work. We evaluate the proposed system by conducting two sets of surveys, in which both novice and experienced users participated to provide feedback and evaluations from different perspectives. The results of the surveys highlight the effectiveness of creating realistic facial animations with the use of emotion expressions. Video demos of the synthesized animations are available online at https://git.io/fx5U3
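The abstract summarizes the pipeline only at a high level. As a rough illustration of the text-to-IPA-to-viseme idea with an emotion layer, the following Python sketch maps words to IPA phonemes, phonemes to viseme blendshape keyframes, and overlays an emotion pose. All dictionaries (WORD_TO_IPA, PHONEME_TO_VISEME, EMOTION_OFFSETS), the function text_to_keyframes, and the timing values are hypothetical assumptions for illustration; the paper's actual plug-in drives Autodesk Maya characters and is not reproduced here.

```python
# Illustrative sketch of a text -> IPA -> viseme keyframe pipeline with an
# emotion overlay. All names and values below are assumptions, not the
# authors' data or their Maya plug-in code.

# Hypothetical lookup of a few words to IPA phoneme sequences. The paper
# performs full text-to-IPA translation; a real system would use a
# pronunciation dictionary or a grapheme-to-phoneme model instead.
WORD_TO_IPA = {
    "hello": ["h", "ə", "l", "oʊ"],
    "world": ["w", "ɜː", "l", "d"],
}

# Hypothetical mapping from IPA phonemes to viseme blendshape names.
PHONEME_TO_VISEME = {
    "h": "viseme_AH", "ə": "viseme_AH", "l": "viseme_L",
    "oʊ": "viseme_O", "w": "viseme_O", "ɜː": "viseme_ER", "d": "viseme_L",
}

# Hypothetical per-emotion offsets layered on top of the lip-sync poses,
# standing in for the paper's emotion control.
EMOTION_OFFSETS = {
    "happy": {"mouthSmile": 0.6, "browRaise": 0.3},
    "sad": {"mouthFrown": 0.5, "browLower": 0.4},
    "neutral": {},
}


def text_to_keyframes(text, emotion="neutral", phoneme_duration=0.12):
    """Return (time_seconds, blendshape, weight) keyframes for an utterance."""
    keyframes = []
    t = 0.0
    for word in text.lower().split():
        for phoneme in WORD_TO_IPA.get(word, []):
            viseme = PHONEME_TO_VISEME.get(phoneme)
            if viseme is not None:
                keyframes.append((round(t, 2), viseme, 1.0))
            t += phoneme_duration
    # Layer the emotion pose over the whole utterance.
    for shape, weight in EMOTION_OFFSETS[emotion].items():
        keyframes.append((0.0, shape, weight))
    return keyframes


if __name__ == "__main__":
    for key in text_to_keyframes("hello world", emotion="happy"):
        print(key)
```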
Item Type: Book Section
Uncontrolled Keywords: Lip-sync, Facial animation, Facial expressions, Emotion, Character animation
Subjects: G400 Computer Science
Department: Faculties > Engineering and Environment > Computer and Information Sciences
Depositing User: Paul Burns
Date Deposited: 02 Nov 2018 10:36
Last Modified: 01 Aug 2021 11:51
URI: http://nrl.northumbria.ac.uk/id/eprint/36475