Predicting face movements from speech acoustics using spectral dynamics
- Authors
-
Abeer Alwan, E. T. Auer Jr., Lynne E. Bernstein, Patricia A. Keating, and Jintao Jiang
- Subjects
Speech acoustics, Dynamics, Computer science, Speech recognition, Face (geometry), Feature extraction, Autocorrelation, Speech synthesis, Filter (signal processing), Speech processing
- Abstract
The paper introduces a new dynamical model that improves the prediction of face movements from speech acoustics. Based on the autocorrelation of the acoustics and of the face movements, a causal and a non-causal filter are proposed to approximate dynamic features of the speech signals. The database consists of sentences recorded acoustically while face movements were captured simultaneously with a Qualisys system using 20 reflectors placed on the face. Speech signals are represented by 16th-order line spectral pairs (LSPs) and log-energy. With the filtered dynamic features, the acoustic features account for more than 80% of the variance of the face movements.
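A minimal sketch of the general approach the abstract describes: dynamic features are obtained by filtering the static acoustic features, and a linear mapping from acoustic features to face-marker trajectories is scored by the fraction of face-movement variance it accounts for. Note the assumptions: the paper derives its causal and non-causal filters from the autocorrelation of the signals, whereas this sketch substitutes the standard fixed-window symmetric (non-causal) delta-regression filter; the data below are synthetic, not the paper's recordings.

```python
import numpy as np

def delta_features(static, win=2):
    # Non-causal dynamic features via the standard symmetric delta-regression
    # filter (an illustrative stand-in for the paper's autocorrelation-derived
    # filters). static: (T, D) array of per-frame features.
    T, D = static.shape
    padded = np.pad(static, ((win, win), (0, 0)), mode="edge")
    denom = 2 * sum(k * k for k in range(1, win + 1))
    delta = np.zeros_like(static, dtype=float)
    for k in range(1, win + 1):
        delta += k * (padded[win + k : win + k + T] - padded[win - k : win - k + T])
    return delta / denom

def variance_accounted_for(acoustic, face):
    # Least-squares linear mapping from acoustic features to face-marker
    # trajectories; returns the fraction of face-movement variance explained.
    X = np.hstack([acoustic, np.ones((acoustic.shape[0], 1))])  # add bias term
    W, *_ = np.linalg.lstsq(X, face, rcond=None)
    resid = face - X @ W
    return 1.0 - resid.var() / face.var()

# Toy demonstration with synthetic data: 17 acoustic dims (16 LSPs + log-energy)
# and 60 face dims (20 markers x 3 coordinates), linked by a random linear map.
rng = np.random.default_rng(0)
acoustic = rng.standard_normal((500, 17))
face = acoustic @ rng.standard_normal((17, 60)) + 0.1 * rng.standard_normal((500, 60))
feats = np.hstack([acoustic, delta_features(acoustic)])  # static + dynamic
vaf = variance_accounted_for(feats, face)
print(f"variance accounted for: {vaf:.2f}")
```

Because the synthetic face data are generated linearly from the acoustics, the toy mapping explains most of the variance; on real data the reported figure is the >80% quoted in the abstract.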
- Published
- 2003