
Some thoughts about transfer learning. What role for the source domain?

Authors :
Cornuéjols, A.
Source :
International Journal of Approximate Reasoning. Mar 2024, Vol. 166.
Publication Year :
2024

Abstract

Transfer learning is called for when the training and test data do not share the same input distributions (P_X^S ≠ P_X^T) and/or the same conditional distributions (P_{Y|X}^S ≠ P_{Y|X}^T). In the most general case, the input spaces and/or output spaces can also differ: X_S ≠ X_T and/or Y_S ≠ Y_T. However, most works assume that X_S = X_T. Furthermore, a commonly held assumption is that, in order to obtain a good (transferred) target hypothesis, the source hypothesis must perform well on the source training data and the "distance" between the source and target domains must be as small as possible. This paper revisits the reasons for these beliefs and discusses the relevance of these conditions. An algorithm is presented which can deal with transfer learning problems where X_S ≠ X_T, and which furthermore brings a fresh perspective on the role of the source hypothesis (it does not have to be good) and on what matters in the distance between the source and the target domains (translations between them should belong to a limited set). Experiments illustrate the properties of the method and confirm the theoretical analysis. Determining a relevant source hypothesis beforehand remains an open problem, but the vista provided here helps in understanding its role. [ABSTRACT FROM AUTHOR]
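The two kinds of distribution mismatch named in the abstract can be made concrete with a toy sketch (this is an illustration of the general setting only, not the paper's algorithm; the variable names, distributions, and thresholds are invented for the example). Covariate shift changes the input distribution P_X while keeping the labeling rule P(Y|X) fixed; conditional (concept) shift keeps P_X but changes the labeling rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Covariate shift: P_X^S != P_X^T, but P_{Y|X}^S == P_{Y|X}^T ---
def label(x):
    # Shared conditional: the same decision rule in both domains.
    return (x > 0.5).astype(int)

x_source = rng.normal(loc=0.0, scale=1.0, size=2000)  # P_X^S
x_target = rng.normal(loc=1.0, scale=1.0, size=2000)  # P_X^T: inputs shifted
y_source, y_target = label(x_source), label(x_target)
# The positive rate differs only because the inputs moved.

# --- Conditional shift: P_X^S == P_X^T, but P_{Y|X}^S != P_{Y|X}^T ---
def label_source(x):
    return (x > 0.5).astype(int)

def label_target(x):
    return (x > -0.5).astype(int)  # the decision threshold changed

x_shared = rng.normal(size=2000)   # same input distribution
y_s, y_t = label_source(x_shared), label_target(x_shared)

print(y_source.mean(), y_target.mean())  # covariate shift: rates differ
print(y_s.mean(), y_t.mean())            # conditional shift: rates differ
```

A source hypothesis trained on `x_source` faces a different obstacle in each case, which is why the abstract treats the two mismatches separately.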

Details

Language :
English
ISSN :
0888-613X
Volume :
166
Database :
Academic Search Index
Journal :
International Journal of Approximate Reasoning
Publication Type :
Periodical
Accession number :
175258357
Full Text :
https://doi.org/10.1016/j.ijar.2023.109107