The Unconstrained Ear Recognition Challenge 2019 - ArXiv Version With Appendix

Authors :
Emeršič, Žiga
V., Aruna Kumar S.
Harish, B. S.
Gutfeter, Weronika
Khiarak, Jalil Nourmohammadi
Pacut, Andrzej
Hansley, Earnest
Segundo, Mauricio Pamplona
Sarkar, Sudeep
Park, Hyeonjung
Nam, Gi Pyo
Kim, Ig-Jae
Sangodkar, Sagar G.
Kaçar, Ümit
Kirci, Murvet
Yuan, Li
Yuan, Jishou
Zhao, Haonan
Lu, Fei
Mao, Junying
Zhang, Xiaoshuang
Yaman, Dogucan
Eyiokur, Fevziye Irem
Özler, Kadir Bulut
Ekenel, Hazım Kemal
Chowdhury, Debbrota Paul
Bakshi, Sambit
Sa, Pankaj K.
Majhi, Banshidhar
Peer, Peter
Štruc, Vitomir
Publication Year :
2019

Abstract

This paper presents a summary of the 2019 Unconstrained Ear Recognition Challenge (UERC), the second in a series of group benchmarking efforts centered around the problem of person recognition from ear images captured in uncontrolled settings. The goal of the challenge is to assess the performance of existing ear recognition techniques on a challenging large-scale ear dataset and to analyze the performance of the technology from various viewpoints, such as generalization to unseen data characteristics, sensitivity to rotations, occlusions, and image resolution, and performance bias on sub-groups of subjects selected based on demographic criteria, i.e., gender and ethnicity. Research groups from 12 institutions entered the competition and submitted a total of 13 recognition approaches, ranging from descriptor-based methods to deep-learning models. The majority of submissions focused on ensemble-based methods combining either representations from multiple deep models or hand-crafted descriptors with learned image descriptors. Our analysis shows that methods incorporating deep-learning models clearly outperform techniques relying solely on hand-crafted descriptors, even though both groups of techniques exhibit similar behaviour with respect to robustness to various covariates, such as the presence of occlusions, changes in (head) pose, or variability in image resolution. The results of the challenge also show that there has been considerable progress since the first UERC in 2017, but that there is still ample room for further research in this area.

Comment: The content of this paper was published in ICB, 2019. This ArXiv version is from before the peer review.

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1106334118
Document Type :
Electronic Resource