9 results for "relationship reasoning"
Search Results
2. A Graph Attention-Based Double-Branch Social Relationship Recognition Method.
- Author
-
Li Huan and Chen Niannian
- Abstract
Extracting social relationships between people from images plays an important role in criminal investigation, privacy protection, and other fields. Existing graph modeling approaches have achieved good results by creating interpersonal relationship graphs or constructing knowledge graphs to learn people's relationships. However, methods based on graph convolutional networks (GCNs) largely ignore the fact that different features matter to different degrees for specific relationships. To solve this problem, this paper proposed a graph attention-based double-branch social relationship recognition model (GAT-DBSR). The first branch extracted person regions as well as global image features as nodes, and updated these nodes through graph attention networks and gating mechanisms to learn feature representations of interpersonal relationships. The second branch extracted scene features with a convolutional neural network to enhance the recognition of relationships between people. Finally, the model fused and classified the features of the two branches to obtain all social relationships. The model achieves an mAP of 74.4% on the fine-grained relationship recognition task on the PISC dataset, a 1.2% improvement over the baseline model, and relationship recognition accuracy on the PIPA dataset also shows some improvement. The experimental results show the effectiveness of the model. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
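The core update described in the abstract above (attention-weighted aggregation over relationship-graph nodes, followed by a gating mechanism) can be sketched in plain NumPy. This is a minimal single-layer illustration of the general technique, not the GAT-DBSR implementation; the function and parameter names (`gat_update`, `gate_w`) are hypothetical.

```python
import numpy as np

def gat_update(nodes, W, a, gate_w):
    """One graph-attention update over a fully connected set of nodes.

    nodes:  (N, d) node features (e.g. person regions + a global node)
    W:      (d, d) shared projection matrix
    a:      (2d,) attention vector scoring each (i, j) node pair
    gate_w: (d,) weights of a sigmoid gate blending message and input
    """
    h = nodes @ W                                   # project node features
    N = h.shape[0]
    # attention logit for every ordered pair, LeakyReLU as in standard GAT
    logits = np.array([[np.concatenate([h[i], h[j]]) @ a for j in range(N)]
                       for i in range(N)])
    logits = np.where(logits > 0, logits, 0.2 * logits)
    # softmax over each node's neighbours
    alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    msg = alpha @ h                                 # attention-weighted aggregation
    gate = 1.0 / (1.0 + np.exp(-(msg * gate_w)))    # sigmoid gating mechanism
    return gate * msg + (1.0 - gate) * nodes        # gated residual update
```

Stacking a few such updates lets each person node absorb context from the others before the relationship classifier reads it.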
3. Exploring correlation of relationship reasoning for scene graph generation.
- Author
-
Tian, Peng, Mo, Hongwei, and Jiang, Laihao
- Abstract
Accurately reasoning about the relationships between objects plays a central role in scene understanding. Due to the complexity of modeling visual relationships and the unbalanced distribution of relationship types, the results obtained by existing methods are far from satisfying. In this work, we find that the interplay between the contextual information of object pairs and their relationships can effectively regularize the space of visual relationship types and improve the accuracy of relationship reasoning. To this end, we incorporate this interplay into deep neural networks to facilitate scene graph generation by developing a Relationship Reasoning Network (ReRN). Specifically, the model uses a feature-updating structure to mutually connect and iteratively update the semantic features of objects and relationships, exploring the contextual information between objects. A graph attention mechanism is then used to obtain the correlation information between object pairs and their relationships. Finally, our model adopts this correlation information to facilitate the recognition of interactions between objects, while leveraging the mutual connections and joint refinement of different semantic features to improve the accuracy of scene graph generation. Extensive experiments on the Visual Genome dataset demonstrate that our method outperforms other state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. Cross-modal independent matching network for image-text retrieval.
- Author
-
Ke, Xiao, Chen, Baitao, Yang, Xiong, Cai, Yuhang, Liu, Hao, and Guo, Wenzhong
- Subjects
GRAVITATION, NEIGHBORHOODS - Abstract
Image-text retrieval serves as a bridge connecting vision and language. Mainstream cross-modal matching methods can effectively perform cross-modal interactions and achieve high performance, but they are inefficient. Modality-independent matching methods exhibit superior efficiency but weaker performance. Achieving a balance between matching efficiency and performance is therefore a challenge in image-text retrieval. In this paper, we propose a new Cross-modal Independent Matching Network (CIMN) for image-text retrieval. Specifically, we first use the proposed Feature Relationship Reasoning (FRR) to infer neighborhood and potential relations among modal features. Then, we introduce Graph Pooling (GP), based on graph convolutional networks, to perform global semantic aggregation within each modality. Finally, we introduce the Gravitation Loss (GL), which incorporates sample mass into the learning process. This loss corrects the matching relationships between and within modalities, avoiding the traditional triplet loss's equal treatment of all samples. Extensive experiments on the Flickr30K and MSCOCO datasets demonstrate the superiority of the proposed method: it achieves a good balance between matching efficiency and performance, surpasses other independent matching methods, and obtains retrieval accuracy comparable to some mainstream cross-matching methods with an order of magnitude lower inference time. • NRR and PRR form FRR, enabling efficient global relationship reasoning at lower cost. • Graph Pooling uses graph structures for efficient global semantic aggregation. • Sample mass and Gravitation Loss improve diverse matching in image-text retrieval. • CIMN achieves competitive performance on MSCOCO and Flickr30K with high efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
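The abstract above contrasts the Gravitation Loss with the triplet loss's equal treatment of samples. The paper's exact formulation is not reproduced here; the sketch below merely weights a standard hardest-negative hinge triplet loss by a per-sample "mass", as an assumed interpretation, with hypothetical names (`mass_weighted_triplet`).

```python
import numpy as np

def mass_weighted_triplet(img, txt, mass, margin=0.2):
    """Hinge triplet loss over cosine similarities, scaled by sample mass.

    img, txt: (N, d) L2-normalised embeddings; row i of img matches row i of txt
    mass:     (N,) per-sample weights -- heavier samples exert a stronger pull
    """
    sim = img @ txt.T                        # cosine similarity matrix
    pos = np.diag(sim)                       # matched-pair similarities
    # hinge against the hardest negative in each retrieval direction
    cost_i2t = np.maximum(0.0, margin + sim - pos[:, None])
    cost_t2i = np.maximum(0.0, margin + sim - pos[None, :])
    np.fill_diagonal(cost_i2t, 0.0)
    np.fill_diagonal(cost_t2i, 0.0)
    per_sample = cost_i2t.max(axis=1) + cost_t2i.max(axis=0)
    return float((mass * per_sample).mean())
```

With `mass` set to all ones this reduces to the ordinary hardest-negative triplet loss; unequal masses let some pairs pull harder on the embedding space.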
5. Graph-Based Visual Manipulation Relationship Reasoning Network for Robotic Grasping
- Author
-
Guoyu Zuo, Jiayuan Tong, Hongxing Liu, Wenbai Chen, and Jianfeng Li
- Subjects
relationship reasoning, graph convolution network, grasping order, robotic manipulation, object-stacking scene, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571 - Abstract
To grasp a target object stably and in the correct order in object-stacking scenes, it is important for the robot to reason about the relationships between objects and derive an intelligent manipulation order, enabling more advanced interaction between the robot and the environment. This paper proposes a novel graph-based visual manipulation relationship reasoning network (GVMRN) that directly outputs object relationships and manipulation order. The GVMRN model first extracts features and detects objects from RGB images, and then adopts a graph convolutional network (GCN) to collect contextual information between objects. To improve the efficiency of relationship reasoning, a relationship filtering network is built to reduce the number of object pairs before reasoning. Experiments on the Visual Manipulation Relationship Dataset (VMRD) show that our model significantly outperforms previous methods at reasoning about object relationships in object-stacking scenes. The GVMRN model was also tested on images we collected and applied on a robot grasping platform. The results demonstrate the generalization and applicability of our method in real environments.
- Published
- 2021
- Full Text
- View/download PDF
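The relationship filtering step in the abstract above scores every candidate object pair and discards unpromising ones before the expensive reasoning stage. A minimal sketch of that pruning idea, assuming pair scores are already available; the function name and threshold are hypothetical, and the real network learns the scores:

```python
def filter_object_pairs(pair_scores, threshold=0.5):
    """Keep only ordered object pairs whose predicted 'has a relationship'
    probability clears the threshold, shrinking the O(N^2) pair set that
    the full relationship-reasoning stage must process.

    pair_scores: dict mapping (i, j) object-index pairs to probabilities
    """
    kept = [pair for pair, p in pair_scores.items() if p >= threshold]
    # hand the most confident pairs to the reasoner first
    return sorted(kept, key=lambda pair: -pair_scores[pair])
```

For N detected objects there are N·(N−1) ordered pairs, so even a modest filter pass can cut most of the reasoning work in cluttered stacks.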
6. Graph-Based Visual Manipulation Relationship Reasoning Network for Robotic Grasping.
- Author
-
Zuo, Guoyu, Tong, Jiayuan, Liu, Hongxing, Chen, Wenbai, and Li, Jianfeng
- Subjects
OBJECT manipulation, ROBOTICS, ROBOTS, GENERALIZATION - Abstract
To grasp a target object stably and in the correct order in object-stacking scenes, it is important for the robot to reason about the relationships between objects and derive an intelligent manipulation order, enabling more advanced interaction between the robot and the environment. This paper proposes a novel graph-based visual manipulation relationship reasoning network (GVMRN) that directly outputs object relationships and manipulation order. The GVMRN model first extracts features and detects objects from RGB images, and then adopts a graph convolutional network (GCN) to collect contextual information between objects. To improve the efficiency of relationship reasoning, a relationship filtering network is built to reduce the number of object pairs before reasoning. Experiments on the Visual Manipulation Relationship Dataset (VMRD) show that our model significantly outperforms previous methods at reasoning about object relationships in object-stacking scenes. The GVMRN model was also tested on images we collected and applied on a robot grasping platform. The results demonstrate the generalization and applicability of our method in real environments. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. Complex relationship graph abstraction for autonomous air combat collaboration: A learning and expert knowledge hybrid approach.
- Author
-
Piao, Haiyin, Han, Yue, Chen, Hechang, Peng, Xuanqi, Fan, Songyuan, Sun, Yang, Liang, Chen, Liu, Zhimin, Sun, Zhixiao, and Zhou, Deyun
- Subjects
REINFORCEMENT learning, DECISION making - Abstract
Large-scale air combat involves complex relationships among the participants, e.g., siege and support. These relationships are often numerous, multi-relational, and high-order. However, previous studies have encountered significant difficulties in dissecting large-scale air confrontations with such complex relationships. In view of this, a novel Multi-Agent Deep Reinforcement Learning (MADRL) and expert-knowledge hybrid algorithm named Transitive RelatIonShip graph reasOning for autoNomous aIr combat Collaboration (TRISONIC) is proposed, which solves the large-scale autonomous air combat problem with complex relationships. Specifically, TRISONIC creates a Graph Neural Network (GNN) and expert-knowledge composite approach to jointly reason out the key relationships into an Abstract Relationship Graph (ARG). After this relationship simplification, representative collaboration tactics emerge via subsequent intention communication and joint decision-making mechanisms. Empirically, we demonstrate that the proposed method outperforms state-of-the-art algorithms with a relative winning rate of at least 67.4% in a high-fidelity air combat simulation environment. • Solving the large-scale autonomous air combat problem with complex relationships. • Simplifying relationships via a GNN and expert-knowledge hybrid approach. • Collaboration tactics emergence via joint decision making. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
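The Abstract Relationship Graph construction in the abstract above fuses learned relationship scores with expert knowledge to keep only key edges. A minimal sketch of that fusion idea, with assumed data shapes and hypothetical names (`build_abstract_relationship_graph`, `tau`); the real method uses a GNN to produce the scores and richer expert rules:

```python
def build_abstract_relationship_graph(edge_scores, expert_allowed, tau=0.5):
    """Prune a multi-relational graph into an abstract relationship graph.

    An edge survives only if its relation type is permitted by the expert
    rules AND its learned score clears the threshold tau.

    edge_scores:    dict mapping (src, dst, relation) -> score in [0, 1]
    expert_allowed: set of relation types the expert rules consider key
    """
    return {edge: s for edge, s in edge_scores.items()
            if edge[2] in expert_allowed and s >= tau}
```

The pruned graph is what downstream intention communication and joint decision making operate on, instead of the full pairwise relation set.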
8. Supporting Entity-oriented Search with Fine-grained Information in Knowledge Graphs
- Author
-
Wiradee, Imrattanatrai, Yoshikawa, Masatoshi, Mori, Shinsuke, and Tajima, Keishi
- Subjects
Property Embeddings, Relationship Reasoning, Relationship Explanation, Knowledge Graphs, Property Identification, Entity-oriented Search, Entity Ranking - Published
- 2020
9. Supporting Entity-oriented Search with Fine-grained Information in Knowledge Graphs
- Author
-
Wiradee, Imrattanatrai
- Published
- 2020