Invariant representation learning via decoupling style and spurious features.
- Source :
- Machine Learning; Feb 2025, Vol. 114 Issue 2, p1-20, 20p
- Publication Year :
- 2025
Abstract
- This paper considers the out-of-distribution (OOD) generalization problem in a setting where both style distribution shift and spurious features are present and domain labels are missing. This setting frequently arises in real-world applications but has been overlooked, because previous approaches mainly handle only one of these two factors. The critical challenge is decoupling style and spurious features in the absence of domain labels. To address this challenge, we propose a structural causal model (SCM) of the image generation process that accounts for both style distribution shift and spurious features. The proposed SCM enables us to design a new framework, IRSS, which gradually separates style distribution and spurious features from images by introducing adversarial neural networks and multi-environment optimization, thus achieving OOD generalization. Moreover, it requires no supervision (e.g., domain labels) beyond the images and their corresponding labels. Experiments on benchmark datasets demonstrate that IRSS outperforms traditional OOD methods and alleviates the degradation of invariant risk minimization (IRM), enabling the extraction of invariant features under distribution shift. [ABSTRACT FROM AUTHOR]
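- Illustration (not from the paper): the abstract describes combining adversarial training with multi-environment optimization to suppress style and spurious features. The sketch below is a hypothetical PyTorch rendering of that idea under common assumptions, using a gradient-reversal style head and an IRMv1-style penalty over inferred environments; the names `IRSSLikeModel`, `envs`, and `style_targets`, and the specific loss weighting, are illustrative and are not the authors' released implementation.

```python
# Hypothetical sketch of an adversarial + multi-environment objective in the
# spirit of the abstract; not the authors' actual IRSS code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward
    pass so the encoder learns to *hide* style information from the style head."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None


def irm_penalty(logits, y):
    """IRMv1-style penalty: squared gradient of the per-environment risk with
    respect to a dummy classifier scale (assumption: IRSS uses a related
    multi-environment invariance term)."""
    scale = torch.ones(1, requires_grad=True, device=logits.device)
    loss = F.cross_entropy(logits * scale, y)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()


class IRSSLikeModel(nn.Module):
    """Encoder with a label head and an adversarial style head (names illustrative)."""

    def __init__(self, dim=512, n_classes=10, n_styles=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim), nn.ReLU())
        self.label_head = nn.Linear(dim, n_classes)
        self.style_head = nn.Linear(dim, n_styles)  # predicts pseudo style clusters

    def forward(self, x, lam=1.0):
        z = self.encoder(x)
        logits = self.label_head(z)
        style_logits = self.style_head(GradReverse.apply(z, lam))
        return logits, style_logits


def training_step(model, envs, style_targets, adv_w=0.1, irm_w=1.0):
    """`envs` is a list of (x, y) batches from inferred environments;
    `style_targets` are pseudo style labels (e.g., from clustering)."""
    erm, penalty, adv = 0.0, 0.0, 0.0
    for (x, y), s in zip(envs, style_targets):
        logits, style_logits = model(x)
        erm = erm + F.cross_entropy(logits, y)          # standard risk
        penalty = penalty + irm_penalty(logits, y)      # cross-environment invariance
        adv = adv + F.cross_entropy(style_logits, s)    # adversarial style removal
    n = len(envs)
    return (erm + irm_w * penalty + adv_w * adv) / n
```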
Details
- Language :
- English
- ISSN :
- 0885-6125
- Volume :
- 114
- Issue :
- 2
- Database :
- Complementary Index
- Journal :
- Machine Learning
- Publication Type :
- Academic Journal
- Accession number :
- 182539700
- Full Text :
- https://doi.org/10.1007/s10994-024-06730-9