9 results for '"synthetic face"'
Search Results
2. Construction of Facial Composites from Eyewitness Memory
- Author
Tredoux, Colin Getty, Frowd, Charlie, Vredeveldt, Annelies, Scott, Kyra, Shapiro, Leonard, editor, and Rea, Paul M., editor
- Published
- 2023
- Full Text
- View/download PDF
3. Is Training Useful to Detect Deepfakes? A Preliminary Study.
- Author
Mohamed, Nadia Belghazi, Bogdanel, Georgiana, and Gómez Moreno, Hilario
- Subjects
GENERATIVE adversarial networks, DEEPFAKES, DIGITAL technology, ARTIFICIAL neural networks, ACCURACY
- Abstract
Generative Adversarial Networks (GANs) constitute a significant breakthrough due to their ability to generate realistic synthetic data. When a synthetic creation resembles a real person or their voice, it is called a deepfake. Deepfakes are used for many purposes, including malicious ones, and they are a major concern: public awareness of them is limited, and they are generated with ever greater realism, making them increasingly difficult to distinguish from real content. In this study, we evaluate the human ability to differentiate between a real and a synthetic face. The results indicate that people are unable to reliably distinguish real images from deepfakes, but that some training slightly improves performance. Our hypothesis is therefore that human accuracy in identifying synthetic faces could be improved with thorough training in how to detect deepfake faces.
- Published
- 2023
4. Learning 3D Head Pose From Synthetic Data: A Semi-Supervised Approach
- Author
Shubhajit Basak, Peter Corcoran, Faisal Khan, Rachel McDonnell, and Michael Schukat
- Subjects
Head pose estimation, synthetic face, face dataset, visual domain adaptation
- Abstract
Accurate head pose estimation from 2D image data is an essential component of applications such as driver monitoring systems, virtual reality technology, and human-computer interaction, enabling a better determination of user engagement and attentiveness. The most accurate head pose estimators are Deep Neural Networks trained with a supervised approach, and they rely primarily on the accuracy of the training data. Acquiring real head pose data with wide variation in yaw, pitch, and roll is challenging, and publicly available head pose datasets have limitations in size, resolution, annotation accuracy, and diversity. In this work, a methodology is proposed to generate pixel-perfect synthetic 2D headshot images rendered from high-quality 3D synthetic facial models with accurate head pose annotations, covering a diverse range of ages, races, and genders. The resulting dataset includes more than 300k RGB images with corresponding head pose annotations and a wide range of variations in pose, illumination, and background. The dataset is evaluated by training a state-of-the-art head pose estimation model and testing it against the popular Biwi evaluation dataset. Training with purely synthetic data generated by the proposed methodology comes close to the state-of-the-art results achieved by models originally trained on real facial datasets; because of the domain gap between synthetic and real-world images in feature space, however, the initial results fall short of the current state of the art. To reduce the domain gap, a semi-supervised visual domain adaptation approach is proposed that trains simultaneously on the labeled synthetic data and the unlabeled real data (a sketch of this setup follows this entry). When domain adaptation is applied, a significant improvement in model performance is achieved. Additionally, a data fusion-based transfer learning approach yields better results than previously published work on this topic.
- Published
- 2021
- Full Text
- View/download PDF
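The abstract above does not specify the adaptation technique in detail. As an illustration only, the following Python sketch (PyTorch assumed; the architecture and all names are hypothetical, not the paper's) shows one common way to train simultaneously on labeled synthetic and unlabeled real images: a DANN-style gradient-reversal domain classifier that pushes the backbone toward domain-invariant features while the pose head is supervised on synthetic labels only.

```python
# Hypothetical sketch of semi-supervised visual domain adaptation for head
# pose regression: labeled synthetic images plus unlabeled real images are
# trained jointly, with a gradient-reversal domain classifier (DANN-style).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing into the backbone.
        return -ctx.lam * grad_output, None

class HeadPoseDANN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(            # toy CNN backbone
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.pose_head = nn.Linear(32, 3)         # yaw, pitch, roll
        self.domain_head = nn.Linear(32, 2)       # synthetic vs. real

    def forward(self, x, lam=1.0):
        f = self.features(x)
        return self.pose_head(f), self.domain_head(GradReverse.apply(f, lam))

model = HeadPoseDANN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()

# One illustrative step, with random tensors standing in for image batches.
syn_x, syn_pose = torch.randn(8, 3, 64, 64), torch.randn(8, 3)
real_x = torch.randn(8, 3, 64, 64)               # unlabeled real images

pose_pred, syn_dom = model(syn_x)
_, real_dom = model(real_x)                      # pose prediction unused
dom_logits = torch.cat([syn_dom, real_dom])
dom_labels = torch.cat([torch.zeros(8, dtype=torch.long),
                        torch.ones(8, dtype=torch.long)])
loss = mse(pose_pred, syn_pose) + ce(dom_logits, dom_labels)
opt.zero_grad(); loss.backward(); opt.step()
```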
5. Validating Seed Data Samples for Synthetic Identities – Methodology and Uniqueness Metrics
- Author
Viktor Varkarakis, Shabab Bazrafkan, Gabriel Costache, and Peter Corcoran
- Subjects
Artificial intelligence, computer vision, face recognition, generative adversarial networks (GANs), StyleGAN, synthetic face
- Abstract
This work explores the identity attribute of synthetic face samples derived from Generative Adversarial Networks. The goal is to determine whether individual samples are unique in terms of identity, first with respect to the seed dataset that trains the GAN model and second with respect to other synthetic face samples. Two approaches are introduced to enable the comparative analysis of large sets of synthetic face samples. The first uses ROC curves to determine identity uniqueness, with several large publicly available datasets of real facial samples providing reference ROCs as a baseline. The second uses a thresholding technique, again with large publicly available datasets as a reference. For this approach, new metrics are introduced, and a technique is provided to remove the most connected data samples within a large synthetic dataset (a minimal sketch of this pruning idea follows this entry); the remaining synthetic samples can be considered as unique as data samples gathered from different real individuals. Several StyleGAN models are used to create the synthetic datasets, and variations in key model parameters are explored. It is concluded that the resulting synthetic samples exhibit excellent uniqueness when compared with the original training dataset, but significantly less uniqueness when compared within the synthetic dataset. Nevertheless, it is possible to remove the most highly connected synthetic samples; in some cases, up to 92% of the samples in a 20k synthetic dataset can be shown to exhibit uniqueness similar to that of samples taken from real public datasets.
- Published
- 2020
- Full Text
- View/download PDF
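The paper's specific uniqueness metrics are not reproduced in the abstract. The following minimal NumPy sketch (the embeddings, threshold value, and all names are illustrative assumptions) captures the general thresholding-and-pruning idea: treat above-threshold identity similarity as an edge, then greedily remove the most connected samples until no pair matches.

```python
# Minimal sketch (not the paper's exact metrics) of thresholding and pruning:
# count pairs of synthetic-face embeddings whose cosine similarity exceeds a
# "same identity" threshold, then drop the most connected samples first.
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 128))                  # stand-in face embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # unit-normalize

THRESH = 0.35   # in practice, calibrated against real datasets' ROC curves
sim = emb @ emb.T
np.fill_diagonal(sim, -1.0)                        # ignore self-similarity
adj = sim > THRESH                                 # "same identity" graph

keep = np.ones(len(emb), dtype=bool)
while True:
    # degree of each kept sample, counting only edges to other kept samples
    degrees = (adj & keep & keep[:, None]).sum(axis=1) * keep
    if degrees.max() == 0:
        break
    keep[degrees.argmax()] = False                 # remove most connected
print(f"retained {keep.sum()} of {len(emb)} samples as identity-unique")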
7. Implicit face prototype learning from geometric information
- Author
Or, Charles C.-F. and Wilson, Hugh R.
- Subjects
FACE perception, FACE, PROTOTYPES, GEOMETRIC analysis, MEMORY, PHYSIOLOGY, VISION research
- Abstract
There is evidence that humans implicitly learn an average or prototype of previously studied faces, as the unseen face prototype is falsely recognized as having been learned (Solso & McCarthy, 1981). Here we investigated the extent and nature of face prototype formation by testing observers' memory after they studied synthetic faces defined purely in geometric terms in a multidimensional face space. We found a strong prototype effect: the unseen prototype averaged from the studied faces was falsely identified as learned at a rate of 86.3%, whereas individual studied faces were correctly identified 66.3% of the time and distractors were incorrectly identified as learned only 32.4% of the time (a geometric sketch of the prototype idea follows this entry). This prototype learning lasted at least 1 week. Face prototype learning occurred even when the studied faces were farther from the unseen prototype than the median variation in the population. Our models show that prototype memory formation is evident in addition to memory formation for the studied face exemplars. Additional studies showed that the prototype effect generalizes across viewpoints, and that head shape and internal features contribute separately to prototype formation. Thus, implicit face prototype extraction in a multidimensional space is a very general aspect of geometric face learning.
- Published
- 2013
- Full Text
- View/download PDF
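As an illustration of the geometric idea only (the dimensionality, counts, and distance-based familiarity rule below are arbitrary assumptions, not the study's model), a short NumPy sketch shows why the unseen prototype can look more "familiar" than any studied exemplar: the mean of the studied faces is, by construction, the point closest to the studied set's center of mass.

```python
# Illustrative sketch of the prototype effect in a multidimensional "face
# space": faces are points, the unseen prototype is the mean of the studied
# faces, and a simple distance-based familiarity rule endorses the prototype
# more strongly than the studied exemplars or distractors.
import numpy as np

rng = np.random.default_rng(1)
studied = rng.normal(size=(10, 40))        # 10 studied synthetic faces
prototype = studied.mean(axis=0)           # unseen average face
distractors = rng.normal(size=(10, 40))    # unstudied faces

def familiarity(face, memory):
    # higher when a probe face lies near the studied set's center of mass
    return -np.linalg.norm(face - memory.mean(axis=0))

print("prototype     :", familiarity(prototype, studied))   # highest (= 0)
print("studied avg   :", np.mean([familiarity(f, studied) for f in studied]))
print("distractor avg:", np.mean([familiarity(f, studied) for f in distractors]))
```

Under this toy rule the ordering prototype > studied > distractors mirrors the 86.3% > 66.3% > 32.4% pattern reported in the abstract, though the study's own models are richer than a single distance measure.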
8. Visual Contribution to Speech Perception: Measuring the Intelligibility of Animated Talking Heads
- Author
Dominic W. Massaro, Michael M. Cohen, Hope Ishak, and Slim Ouni
- Subjects
Speech perception, intelligibility, visible speech, talking head, animated agent, synthetic face, natural face, audio-visual, FLMP, Sumby and Pollack
- Abstract
Animated agents are becoming increasingly frequent in speech science research and applications. An important challenge is to evaluate the effectiveness of an agent in terms of the intelligibility of its visible speech. In three experiments, we extend and test the Sumby and Pollack (1954) metric to allow the comparison of an agent against a standard or reference, and we propose a new metric based on the fuzzy logical model of perception (FLMP) to describe the benefit provided by a synthetic animated face relative to that provided by a natural face (both measures are sketched after this entry). A valid metric would allow direct comparisons across different experiments and would measure the benefit of a synthetic animated face relative to a natural face (or indeed any two conditions), and how this benefit varies as a function of the type of synthetic face, the test items (e.g., syllables versus sentences), individual differences, and applications.
- Published
- 2007
- Full Text
- View/download PDF
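For reference, the two measures in their commonly cited forms (the paper's extensions of them are not reproduced here, and the example numbers are invented): the Sumby-Pollack measure normalizes the audiovisual gain by the room left for improvement, and the FLMP combines auditory and visual support multiplicatively.

```python
# Sketch of the two measures discussed, in their commonly cited forms.
def sumby_pollack_benefit(p_auditory, p_audiovisual):
    # R = (AV - A) / (1 - A): fraction of the possible improvement realized
    return (p_audiovisual - p_auditory) / (1.0 - p_auditory)

def flmp_response(a, v):
    # a, v: degrees of auditory/visual support for a response alternative;
    # the FLMP combines them multiplicatively with relative-goodness scaling
    return (a * v) / (a * v + (1.0 - a) * (1.0 - v))

# Hypothetical example: auditory-alone 40% correct, audiovisual 70% correct.
print(sumby_pollack_benefit(0.40, 0.70))   # 0.5 -> half the gap was closed
print(flmp_response(0.40, 0.80))           # ~0.73: visual support dominates
```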
9. Evaluation of Synthetic Faces: Human Recognition of Emotional Facial Displays
- Author
Costantini, E., Pianesi, F., and Cosi, P.
- Subjects
Emotions, Evaluation, Synthetic Face
- Abstract
Affective Dialogue Systems Tutorial and Research Workshop, ADS 2004, Kloster Irsee, Germany, June 14-16, 2004
- Published
- 2004
- Full Text
- View/download PDF