Efficient disentangled representation learning for multi-modal finger biometrics.

Authors :
Yang, Weili
Huang, Junduan
Luo, Dacan
Kang, Wenxiong
Source :
Pattern Recognition. Jan 2024, Vol. 145.
Publication Year :
2024

Abstract

Most multi-modal biometric systems use multiple devices to capture different traits and directly fuse the multi-modal data, ignoring the correlation information between modalities. In this paper, finger skin and finger vein images are acquired from the same region of the finger and therefore exhibit higher correlation. To represent the data efficiently, we propose a novel Finger Disentangled Representation Learning Framework (FDRL-Net) based on a factorization concept, which disentangles each modality into shared and private features, thereby improving complementarity for better fusion and extracting modality-invariant features for heterogeneous recognition. In addition, to capture as much finger texture as possible, we utilize three-view finger images to reconstruct full-view multi-spectral finger traits, which increases the identity information and the robustness to finger posture variation. Finally, a Boat-Trackers-based multi-task distillation method is proposed to migrate the feature representation ability to a lightweight multi-task network. Extensive experiments on six single-view multi-spectral finger datasets and two full-view multi-spectral finger datasets demonstrate the effectiveness of our approach.

• FDRL-Net is the first disentangled representation learning method for finger traits.
• A direct Boat-Trackers-based multi-task distillation method is proposed.
• Extensive experiments show the superiority of our proposed method.

[ABSTRACT FROM AUTHOR]
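The abstract does not specify FDRL-Net's architecture, but the core shared/private factorization it describes can be illustrated in general terms. The NumPy sketch below is a hypothetical, simplified illustration (not the authors' method): each modality's embedding is projected into a shared factor and a private factor, the shared factors are aligned across modalities for heterogeneous matching, and the private factors supply complementary information for fusion. All dimensions and projections are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding sizes (not taken from the paper).
D_IN, D_SHARED, D_PRIVATE = 128, 32, 32

# One linear "encoder" per modality and per factor (shared / private).
W_shared_skin = rng.standard_normal((D_IN, D_SHARED)) / np.sqrt(D_IN)
W_private_skin = rng.standard_normal((D_IN, D_PRIVATE)) / np.sqrt(D_IN)
W_shared_vein = rng.standard_normal((D_IN, D_SHARED)) / np.sqrt(D_IN)
W_private_vein = rng.standard_normal((D_IN, D_PRIVATE)) / np.sqrt(D_IN)

def disentangle(x, W_shared, W_private):
    """Project a modality embedding into shared and private factors."""
    return x @ W_shared, x @ W_private

# Toy batch of finger-skin and finger-vein embeddings.
skin = rng.standard_normal((4, D_IN))
vein = rng.standard_normal((4, D_IN))

skin_s, skin_p = disentangle(skin, W_shared_skin, W_private_skin)
vein_s, vein_p = disentangle(vein, W_shared_vein, W_private_vein)

# Fusion keeps both private (complementary) parts plus one copy of
# the shared part; heterogeneous recognition would instead compare
# skin_s against vein_s directly.
fused = np.concatenate([skin_s, skin_p, vein_p], axis=1)

# A training objective would pull the shared factors of the two
# modalities together, e.g. with a simple mean-squared alignment term.
alignment_loss = np.mean((skin_s - vein_s) ** 2)
```

In a trained system the projections would be learned (typically as deep encoders) under reconstruction and alignment losses, so that the shared factors become modality-invariant while the private factors retain modality-specific texture.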

Details

Language :
English
ISSN :
00313203
Volume :
145
Database :
Academic Search Index
Journal :
Pattern Recognition
Publication Type :
Academic Journal
Accession number :
172778090
Full Text :
https://doi.org/10.1016/j.patcog.2023.109944