
Ensemble of loss functions to improve generalizability of deep metric learning methods

Authors :
Zabihzadeh, Davood
Alitbi, Zahraa
Mousavirad, Seyed Jalaleddin
Publication Year :
2023

Abstract

The success of a deep metric learning (DML) algorithm depends greatly on its loss function. However, no single loss function is perfect; each addresses only some aspects of an optimal similarity embedding. Moreover, existing losses pay little attention to how well the learned DML generalizes to unseen categories. To address these challenges, we propose novel approaches that combine different losses built on top of a shared deep network. The proposed ensemble of losses forces the model to extract features compatible with all of the losses. Since the selected losses are diverse and emphasize different aspects of an optimal embedding, our combining method yields a considerable improvement over any individual loss and generalizes well to unseen classes. It optimizes each loss function and its weight without introducing additional hyper-parameters. We evaluate our methods on several popular datasets in a zero-shot learning setting. The results are encouraging and show that our methods outperform all baseline losses by a large margin on all datasets. Specifically, the proposed method surpasses the best individual loss on the Cars-196 dataset by 10.37% and 9.54% in terms of Recall@1 and kNN accuracy, respectively. Moreover, we develop a novel distance-based compression method that compresses the coefficients and embeddings of the losses into a single embedding vector whose size is identical to that of each baseline learner. Thus, at evaluation time it is as fast as each baseline DML method, yet it still outperforms the best individual loss on the Cars-196 dataset by 8.28% and 7.76% in terms of Recall@1 and kNN accuracy, respectively.
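To make the combining idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: the abstract does not specify the combining rule, so the softmax parameterization of the per-loss weights and all names here (LossEnsemble, contrastive_loss, triplet_loss) are illustrative assumptions. It shows two classic DML losses computed on one shared embedding network, mixed by weights that are learned jointly with the model.

    # Illustrative sketch only: an ensemble of two DML losses over a shared
    # embedding network, with learnable per-loss weights (assumed softmax
    # parameterization; the paper's actual weighting scheme is not given here).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def contrastive_loss(emb, labels, margin=0.5):
        # Pairwise contrastive loss over all pairs in the mini-batch.
        d = torch.cdist(emb, emb)                        # (B, B) distances
        same = labels.unsqueeze(0) == labels.unsqueeze(1)
        eye = torch.eye(len(labels), dtype=torch.bool, device=emb.device)
        pos = d[same & ~eye]                             # pull positives together
        neg = F.relu(margin - d[~same])                  # push negatives apart
        return pos.mean() + neg.mean()

    def triplet_loss(emb, labels, margin=0.2):
        # Batch-all triplet loss: hinge on d(a, p) - d(a, n) + margin.
        d = torch.cdist(emb, emb)
        same = labels.unsqueeze(0) == labels.unsqueeze(1)
        eye = torch.eye(len(labels), dtype=torch.bool, device=emb.device)
        hinge = F.relu(d.unsqueeze(2) - d.unsqueeze(1) + margin)  # (B, B, B)
        valid = (same & ~eye).unsqueeze(2) & (~same).unsqueeze(1)
        return hinge[valid].mean()

    class LossEnsemble(nn.Module):
        # Shared network trained under a convex combination of the losses;
        # the mixing weights are free parameters learned with the model.
        def __init__(self, backbone, losses):
            super().__init__()
            self.backbone = backbone
            self.losses = losses
            self.logits = nn.Parameter(torch.zeros(len(losses)))

        def forward(self, x, labels):
            emb = F.normalize(self.backbone(x), dim=1)   # one shared embedding
            w = torch.softmax(self.logits, dim=0)        # per-loss weights
            return sum(wi * fn(emb, labels) for wi, fn in zip(w, self.losses))

    # Toy usage with a stand-in backbone and a batch that is guaranteed to
    # contain both positive and negative pairs for every class.
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(32, 16))
    model = LossEnsemble(backbone, [contrastive_loss, triplet_loss])
    x = torch.randn(8, 32)
    labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
    model(x, labels).backward()

Note that naively minimizing a learned convex combination can collapse the weight onto whichever loss is easiest to reduce; how the paper's method avoids this, and how the distance-based compression folds the coefficients and embeddings into a single baseline-sized vector, is beyond what the abstract states.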

Details

Database :
OAIster
Notes :
English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1428131263
Document Type :
Electronic Resource
Full Text :
https://doi.org/10.1007/s11042-023-16160-9