
State-Aware Compositional Learning Toward Unbiased Training for Scene Graph Generation.

Authors :
He, Tao
Gao, Lianli
Song, Jingkuan
Li, Yuan-Fang
Source :
IEEE Transactions on Image Processing; 2023, Vol. 32, p43-56, 14p
Publication Year :
2023

Abstract

How to avoid biased predictions is an important and active research question in scene graph generation (SGG). Current state-of-the-art methods employ debiasing techniques such as resampling and causality analysis. However, the role of intrinsic cues in the features that cause biased training has remained under-explored. In this paper, for the first time, we make the surprising observation that object identity information, in the form of object label embeddings (e.g., GloVe), is principally responsible for biased predictions. We empirically observe that, even without any visual features, a number of recent SGG models can produce comparable or even better results solely from object label embeddings. Motivated by this insight, we propose to leverage a conditional variational auto-encoder to decouple the entangled visual features into two meaningful components: the object's intrinsic identity features and the extrinsic, relation-dependent state features. We further develop two compositional learning strategies, on the relation and object levels, to mitigate the data scarcity issue of rare relations. On two benchmark datasets, Visual Genome and GQA, we conduct extensive experiments in three scenarios: conventional, few-shot, and zero-shot SGG. Results consistently demonstrate that our proposed Decomposition and Composition (DeC) method effectively alleviates bias in relation prediction. Moreover, DeC is model-free, and it significantly improves the performance of recent SGG models, establishing new state-of-the-art performance.
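To make the core idea concrete, the sketch below shows one plausible way to decouple a visual feature into identity and state components with a conditional variational auto-encoder, in the general spirit the abstract describes. This is a minimal illustration under assumed names and dimensions (CVAEDecoupler, feat_dim, label_dim, latent_dim), not the authors' DeC implementation; the actual architecture and losses in the paper may differ.

```python
# Minimal sketch: a conditional VAE that encodes a visual feature,
# conditioned on an object label embedding (the "identity" cue),
# into a latent "state" feature. All names/dims are illustrative
# assumptions, not the paper's DeC implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAEDecoupler(nn.Module):
    def __init__(self, feat_dim=512, label_dim=300, latent_dim=128):
        super().__init__()
        # Encoder sees both the visual feature and the label embedding.
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim + label_dim, 256), nn.ReLU(),
        )
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        # Decoder reconstructs the visual feature from state + identity,
        # so the latent is pushed to carry the label-independent "state".
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + label_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, visual_feat, label_emb):
        h = self.encoder(torch.cat([visual_feat, label_emb], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        std = torch.exp(0.5 * logvar)
        state = mu + std * torch.randn_like(std)  # reparameterization trick
        recon = self.decoder(torch.cat([state, label_emb], dim=-1))
        return recon, state, mu, logvar

def cvae_loss(recon, target, mu, logvar, beta=1.0):
    # Standard CVAE objective: reconstruction term plus KL divergence
    # between the approximate posterior and a standard normal prior.
    rec = F.mse_loss(recon, target, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl
```

Under this framing, the decoupled state features could then be recombined with different identity embeddings to compose training examples for rare relations, which is the intuition behind the compositional learning strategies the abstract mentions.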

Details

Language :
English
ISSN :
1057-7149
Volume :
32
Database :
Complementary Index
Journal :
IEEE Transactions on Image Processing
Publication Type :
Academic Journal
Accession number :
160960775
Full Text :
https://doi.org/10.1109/TIP.2022.3224872