
Sample reconstruction with deep autoencoder for one sample per person face recognition

Authors :
Yan Zhang
Hua Peng
Source :
IET Computer Vision, Vol 11, Iss 6, Pp 471-478 (2017)
Publication Year :
2017
Publisher :
Wiley, 2017.

Abstract

One sample per person (OSPP) face recognition is a challenging problem in the face recognition community. A lack of samples is the main reason most algorithms fail in the OSPP setting. In this study, the authors propose a new algorithm that generalises intra-class variations of multi-sample subjects to single-sample subjects via a deep autoencoder and reconstructs new samples. In the proposed algorithm, a generalised deep autoencoder is first trained on all images in the gallery; a class-specific deep autoencoder (CDA) is then fine-tuned for each single-sample subject using its single sample. Samples of the multi-sample subject most similar to the single-sample subject are input to the corresponding CDA to reconstruct new samples. For classification, minimum L2 distance, principal component analysis, a sparse representation-based classifier and softmax regression are used. Experiments on the Extended Yale Face Database B, the AR database and the CMU PIE database demonstrate the validity of the proposed algorithm.
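The abstract outlines a three-step pipeline: train a generalised deep autoencoder on the gallery, fine-tune a class-specific deep autoencoder (CDA) per single-sample subject, and reconstruct new samples by passing a similar multi-sample subject's images through the CDA. The sketch below illustrates that flow in PyTorch; the network widths, 32x32 image size, optimiser settings and the random tensors standing in for face images are assumptions for illustration only and are not taken from the paper.

```python
# Illustrative sketch only: network depth/widths, 32x32 images, Adam settings
# and the random data are assumptions, not details from the paper.
import copy

import torch
import torch.nn as nn


class DeepAutoencoder(nn.Module):
    """Fully connected deep autoencoder over flattened face images."""

    def __init__(self, dim=32 * 32, hidden=(512, 128, 32)):
        super().__init__()
        enc, d = [], dim
        for h in hidden:                             # encoder: dim -> ... -> hidden[-1]
            enc += [nn.Linear(d, h), nn.ReLU()]
            d = h
        dec = []
        for h in list(hidden[:-1])[::-1] + [dim]:    # decoder mirrors the encoder
            dec += [nn.Linear(d, h), nn.ReLU()]
            d = h
        dec[-1] = nn.Sigmoid()                       # pixel intensities in [0, 1]
        self.encoder = nn.Sequential(*enc)
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        return self.decoder(self.encoder(x))


def train(model, images, epochs=50, lr=1e-3):
    """Minimise reconstruction (MSE) error; used both for the generalised
    autoencoder and for per-subject fine-tuning."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), images)
        loss.backward()
        opt.step()
    return model


# Hypothetical stand-ins for real face data (flattened 32x32 grayscale images).
gallery = torch.rand(200, 32 * 32)         # all gallery images
single_sample = torch.rand(1, 32 * 32)     # the one image of a single-sample subject
similar_subject = torch.rand(10, 32 * 32)  # images of the most similar multi-sample subject

# 1) Train a generalised deep autoencoder on the whole gallery.
gda = train(DeepAutoencoder(), gallery)

# 2) Fine-tune a class-specific deep autoencoder (CDA) on the single sample.
cda = train(copy.deepcopy(gda), single_sample, epochs=20, lr=1e-4)

# 3) Pass the similar subject's images through the CDA to reconstruct new
#    virtual samples carrying its intra-class variation.
with torch.no_grad():
    new_samples = cda(similar_subject)
print(new_samples.shape)  # torch.Size([10, 1024])
```

The reconstructed samples, together with the original single sample, would then be handed to whichever classifier is chosen (minimum L2 distance, PCA, a sparse representation-based classifier or softmax regression); that classification stage is omitted from the sketch.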

Details

Language :
English
ISSN :
1751-9640 and 1751-9632
Volume :
11
Issue :
6
Database :
Directory of Open Access Journals
Journal :
IET Computer Vision
Publication Type :
Academic Journal
Accession number :
edsdoj.760c7173812b459682d146d586f85181
Document Type :
article
Full Text :
https://doi.org/10.1049/iet-cvi.2016.0322