Hierarchical knowledge amalgamation with dual discriminative feature alignment.
- Source :
- Information Sciences. Oct 2022, Vol. 613, p556-574. 19p.
- Publication Year :
- 2022
Abstract
- Heterogeneous Knowledge Amalgamation (HKA) algorithms attempt to learn a versatile and lightweight student neural network from multiple pre-trained heterogeneous teachers. They encourage the student not only to produce the same predictions as the teachers but also to imitate each teacher's features separately in a learned Common Feature Space (CFS) using Maximum Mean Discrepancy (MMD). However, there is no theoretical guarantee of the out-of-distribution robustness of teacher models in the CFS, which can cause feature representations to overlap when samples from unknown categories are mapped. Furthermore, global-alignment MMD can easily lead to negative transfer because it considers neither class-level alignment nor the relationships among all teachers. To overcome these issues, we propose a Dual Discriminative Feature Alignment (DDFA) framework consisting of a Discriminative Centroid Clustering Strategy (DCCS) and a Joint Group Feature Alignment (JGFA) method. DCCS promotes the class-separability of the teachers' features to alleviate the overlap issue, while JGFA decouples the complex discrepancy among teachers and the student at both category and group levels, extending MMD to align the features discriminatively. We test our model on a range of benchmarks and demonstrate that the learned student is robust and even outperforms its teachers in most cases. [ABSTRACT FROM AUTHOR]
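For readers unfamiliar with the alignment term mentioned in the abstract, below is a minimal sketch of an RBF-kernel Maximum Mean Discrepancy (MMD) loss between a student's and a teacher's feature batches. This is not the paper's implementation; the function name `rbf_mmd`, the single-bandwidth Gaussian kernel, and the feature shapes are illustrative assumptions only.

```python
# Minimal sketch of an RBF-kernel MMD^2 estimate between two feature batches,
# the kind of global feature-alignment term referenced in the abstract.
# All names and hyperparameters here are illustrative, not from the paper.
import torch

def rbf_mmd(x, y, sigma=1.0):
    """MMD^2 between feature batches x ([n, d]) and y ([m, d])
    using a single Gaussian (RBF) kernel of bandwidth sigma."""
    def gram(a, b):
        # Pairwise squared Euclidean distances, then the Gaussian kernel.
        d2 = torch.cdist(a, b, p=2).pow(2)
        return torch.exp(-d2 / (2.0 * sigma ** 2))
    k_xx = gram(x, x).mean()
    k_yy = gram(y, y).mean()
    k_xy = gram(x, y).mean()
    return k_xx + k_yy - 2.0 * k_xy

# Example: pull the student's features (mapped into a common feature space)
# toward one teacher's features for the same mini-batch.
student_feats = torch.randn(32, 128)
teacher_feats = torch.randn(32, 128)
alignment_loss = rbf_mmd(student_feats, teacher_feats)
```

In HKA-style training, one such term per teacher is typically added to the task loss; the paper's JGFA extends this global alignment to operate discriminatively at the category and group levels.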
- Subjects :
- *LIBRARY media specialists
- *AMALGAMATION
- *MACHINE learning
Details
- Language :
- English
- ISSN :
- 0020-0255
- Volume :
- 613
- Database :
- Academic Search Index
- Journal :
- Information Sciences
- Publication Type :
- Periodical
- Accession number :
- 159928208
- Full Text :
- https://doi.org/10.1016/j.ins.2022.09.031