4 results on '"Chen, Jintao"'
Search Results
2. Pursuit Winning Strategies for Reach-Avoid Games with Polygonal Obstacles
- Authors
- Yan, Rui; Mi, Shuai; Duan, Xiaoming; Chen, Jintao; and Ji, Xiangyang
- Subjects
- Electrical Engineering and Systems Science - Systems and Control; Computer Science - Computer Science and Game Theory
- Abstract
This paper studies a multiplayer reach-avoid differential game in the presence of general polygonal obstacles that block the players' motions. The pursuers cooperate to protect a convex region from the evaders, who try to reach the region. We propose a multiplayer onsite and close-to-goal (MOCG) pursuit strategy that establishes and attains an increasing lower bound on the number of evaders guaranteed to be defeated. This pursuit strategy fuses the subgame outcomes for multiple pursuers against one evader with hierarchical optimal task allocation in a receding-horizon manner. To determine the qualitative subgame outcomes, i.e., which side wins, we construct three pursuit winning regions and strategies under which the pursuers are guaranteed to win against the evader, regardless of the evader's unknown strategy. First, we utilize expanded Apollonius circles and propose an onsite pursuit winning strategy that achieves capture in finite time. Second, we introduce convex goal-covering polygons (GCPs) and propose a close-to-goal pursuit winning strategy for pursuers whose visibility region contains the whole protected region; this goal-visible property is preserved thereafter. Third, we employ Euclidean shortest paths (ESPs) and construct a pursuit winning region and strategy for non-goal-visible pursuers, who are first steered along ESPs to positions with goal visibility. In each horizon, the hierarchical optimal task allocation maximizes the number of defeated evaders and consists of four sequential matchings: capture, enhanced, non-dominated, and closest matchings. Numerical examples are presented to illustrate the results.
- Comment: 16 pages, 10 figures
- Published
- 2024
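The onsite pursuit winning in the abstract above rests on the Apollonius circle: for a pursuer P and a slower evader E with speed ratio alpha = v_E / v_P < 1, the points the evader can reach no later than the pursuer form a disc. A minimal sketch of that obstacle-free construction (the 2-D coordinates and function names are illustrative; the paper's expanded circles and polygonal-obstacle handling are not reproduced):

```python
import math

def apollonius_circle(pursuer, evader, alpha):
    """Center and radius of the Apollonius circle for speed ratio
    alpha = v_evader / v_pursuer < 1 (faster pursuer).

    The circle is the locus of points x with |x - E| = alpha * |x - P|;
    the evader can only win the race to points inside it.
    """
    px, py = pursuer
    ex, ey = evader
    a2 = alpha * alpha
    denom = 1.0 - a2
    cx = (ex - a2 * px) / denom
    cy = (ey - a2 * py) / denom
    r = alpha * math.hypot(ex - px, ey - py) / denom
    return (cx, cy), r

def evader_can_reach(point, pursuer, evader, alpha):
    # The evader reaches `point` before capture only if it lies
    # inside (or on) the Apollonius circle.
    (cx, cy), r = apollonius_circle(pursuer, evader, alpha)
    return math.hypot(point[0] - cx, point[1] - cy) <= r
```

If this circle is disjoint from the protected region, no target inside that region is reachable by the evader before capture, which is the obstacle-free core of the onsite winning condition.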
3. SimGRACE: A Simple Framework for Graph Contrastive Learning without Data Augmentation
- Authors
- Xia, Jun; Wu, Lirong; Chen, Jintao; Hu, Bozhen; and Li, Stan Z.
- Subjects
- Computer Science - Machine Learning; Computer Science - Social and Information Networks
- Abstract
Graph contrastive learning (GCL) has emerged as a dominant technique for graph representation learning; it maximizes the mutual information between paired graph augmentations that share the same semantics. Unfortunately, it is difficult to preserve semantics well during augmentation, given the diverse nature of graph data. Currently, data augmentations in GCL that are designed to preserve semantics broadly fall into three unsatisfactory categories. First, the augmentations can be picked manually per dataset by trial and error. Second, they can be selected via cumbersome search. Third, they can be obtained by introducing expensive domain-specific knowledge as guidance. All of these limit the efficiency and broader applicability of existing GCL methods. To circumvent these issues, we propose a Simple framework for GRAph Contrastive lEarning, SimGRACE for brevity, which does not require data augmentations. Specifically, we take the original graph as input and use a GNN model, together with its perturbed version, as two encoders to obtain two correlated views for contrast. SimGRACE is inspired by the observation that graph data can preserve their semantics well under encoder perturbations, while requiring no manual trial and error, cumbersome search, or expensive domain knowledge for augmentation selection. We also explain why SimGRACE succeeds. Furthermore, we devise an adversarial training scheme, dubbed AT-SimGRACE, to enhance the robustness of graph contrastive learning and theoretically explain the reasons. Albeit simple, SimGRACE yields competitive or better performance compared with state-of-the-art methods in terms of generalizability, transferability, and robustness, while enjoying an unprecedented degree of flexibility and efficiency.
- Comment: Accepted by The Web Conference 2022 (WWW 2022)
- Published
- 2022
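The encoder-perturbation idea in the SimGRACE abstract can be illustrated with a toy encoder: the original weights and a noise-perturbed copy embed the same input, and the two embeddings form the positive pair. This is a minimal stand-in only (the tiny linear "encoder", the noise scale eta, and the bare cosine objective are assumptions for illustration; the paper uses GNN encoders with layer-wise noise scaling and an NT-Xent loss):

```python
import math
import random

def perturb(weights, eta=0.1, rng=random):
    # SimGRACE-style perturbation: add Gaussian noise scaled by each
    # weight's magnitude (a simplified stand-in for layer-wise scaling).
    return [w + eta * abs(w) * rng.gauss(0.0, 1.0) for w in weights]

def encode(x, weights):
    # Toy element-wise linear "encoder"; a real GCL pipeline uses a GNN.
    return [w * xi for w, xi in zip(weights, x)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Two correlated views of the SAME input, with no data augmentation:
x = [1.0, -2.0, 0.5]
w = [0.3, 0.8, -0.4]
z1 = encode(x, w)                     # view from the original encoder
z2 = encode(x, perturb(w, eta=0.05))  # view from the perturbed encoder
# cosine(z1, z2) stays high: the pair agrees despite the perturbation
```

Training would pull `z1` and `z2` together while pushing embeddings of other inputs apart; the point is that the second view comes from perturbing the encoder, not the data.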
4. ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning
- Authors
- Xia, Jun; Wu, Lirong; Wang, Ge; Chen, Jintao; and Li, Stan Z.
- Subjects
- Computer Science - Artificial Intelligence
- Abstract
Contrastive learning (CL) has emerged as a dominant technique for unsupervised representation learning; it embeds augmented versions of the anchor close to each other (positive samples) and pushes the embeddings of other samples (negatives) apart. As revealed in recent studies, CL can benefit from hard negatives (negatives that are most similar to the anchor). However, we observe limited benefit when we adopt existing hard negative mining techniques from other domains in graph contrastive learning (GCL). We perform both experimental and theoretical analysis of this phenomenon and find it can be attributed to the message passing of graph neural networks (GNNs). Unlike CL in other domains, most hard negatives are potentially false negatives (negatives that share the same class as the anchor) if they are selected merely according to their similarities to the anchor, which undesirably pushes away samples of the same class. To remedy this deficiency, we propose an effective method, dubbed ProGCL, to estimate the probability of a negative being a true one, which, together with similarity, constitutes a more suitable measure of a negative's hardness. Additionally, we devise two schemes (ProGCL-weight and ProGCL-mix) to boost the performance of GCL. Extensive experiments demonstrate that ProGCL brings notable and consistent improvements over base GCL methods, yields multiple state-of-the-art results on several unsupervised benchmarks, and even exceeds the performance of supervised ones. Also, ProGCL is readily pluggable into various negatives-based GCL methods for performance improvement. We release the code at https://github.com/junxia97/ProGCL.
- Comment: Accepted at ICML 2022
- Published
- 2021
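The reweighting idea behind ProGCL-weight can be illustrated with a toy posterior: given a negative's similarity to the anchor, estimate the probability that it is a true negative and combine that with the similarity. Every parameter below (the two component means, the shared spread, the prior) is a made-up illustrative value; the paper instead fits a beta mixture to the observed similarity distribution:

```python
import math

def true_negative_prob(sim, mu_true=0.2, mu_false=0.8, sigma=0.15, prior_true=0.7):
    """Posterior probability that a negative with similarity `sim` is a
    true negative, under a toy two-component Gaussian model (a stand-in
    for ProGCL's beta mixture over similarities)."""
    def density(x, mu):
        return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
    p_true = prior_true * density(sim, mu_true)
    p_false = (1.0 - prior_true) * density(sim, mu_false)
    return p_true / (p_true + p_false)

def progcl_weight(sim):
    # ProGCL-weight idea: hardness combines similarity with the
    # probability of being a true negative, so highly similar but
    # probably-false negatives are down-weighted.
    return true_negative_prob(sim) * sim
```

Under this toy model, a negative with similarity 0.8 looks "hard" by similarity alone, but its low true-negative probability drives its weight below that of a moderately similar, confidently true negative, which is exactly the false-negative effect the abstract describes.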