2,593 results for '"Feature matching"'
Search Results
2. Fault Diagnosis of Mechanical Equipment Using a Distribution Guided Adversarial Transfer Network
- Author
-
Liu, Shaowei, Shen, Lianjie, Xu, Zeyu, Zhao, Junmin, Wu, Sijie, Huang, Yuyang, Ceccarelli, Marco, Series Editor, Corves, Burkhard, Advisory Editor, Glazunov, Victor, Advisory Editor, Hernández, Alfonso, Advisory Editor, Huang, Tian, Advisory Editor, Jauregui Correa, Juan Carlos, Advisory Editor, Takeda, Yukio, Advisory Editor, Agrawal, Sunil K., Advisory Editor, Wang, Zuolu, editor, Zhang, Kai, editor, Feng, Ke, editor, Xu, Yuandong, editor, and Yang, Wenxian, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Enhancing Semi-Dense Feature Matching Through Probabilistic Modeling of Cascaded Supervision and Consistency
- Author
-
Min, Hongchang, Tang, Yihong, Li, Qiankun, Wang, Zengfu, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Lin, Zhouchen, editor, Cheng, Ming-Ming, editor, He, Ran, editor, Ubul, Kurban, editor, Silamu, Wushouer, editor, Zha, Hongbin, editor, Zhou, Jie, editor, and Liu, Cheng-Lin, editor
- Published
- 2025
- Full Text
- View/download PDF
4. Raising the Ceiling: Conflict-Free Local Feature Matching with Dynamic View Switching
- Author
-
Lu, Xiaoyong, Du, Songlin, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
5. StereoGlue: Robust Estimation with Single-Point Solvers
- Author
-
Barath, Daniel, Mishkin, Dmytro, Cavalli, Luca, Sarlin, Paul-Edouard, Hruby, Petr, Pollefeys, Marc, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
6. A Demand‐Side Resource Selection Method for Feature Aggregation Based on Load Mapping.
- Author
-
Li, Bin, Tang, Tianyue, Wu, Dan, Tian, Shiming, Xu, Yuting, Shi, Shanshan, and Zhang, Kaiyu
- Subjects
- *
FEATURE extraction , *GAUSSIAN processes , *ELECTRICAL engineers , *AUTOMATION - Abstract
In order to improve the intuitiveness and automation of demand‐side resource selection, and in view of the increasing challenges of supply–demand balance in power networks and the rapid development of power demand‐side management technologies, a demand‐side resource selection method based on load mapping matching is proposed. First, a two‐dimensional load mapping of demand‐side resources is drawn, and the load mapping is processed with a difference‐of‐Gaussians convolution. Then, feature points are extracted and located for the target resource and for the loads of the other resources in the demand‐side resource pool, yielding pairs of similar feature key points between resources. Finally, the resources in the pool whose load characteristics are similar to those of the target resource are screened according to the number of similar key point pairs, and the most similar load resources are identified by dividing the selection into priority levels. The experimental results show that the method effectively extracts feature key points, represents the features of demand‐side load mappings clearly and intuitively, and matches load resources with similar characteristics, substantially changing the demand‐side resource selection mode. © 2024 Institute of Electrical Engineers of Japan and Wiley Periodicals LLC. [ABSTRACT FROM AUTHOR] (An illustrative code sketch of the keypoint extraction and pair-counting idea follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
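The entry above describes a pipeline of rendering a load profile as a two-dimensional map, filtering it with a difference of Gaussians, extracting feature key points, and ranking candidate resources by the number of similar key point pairs. The following is a minimal NumPy/SciPy sketch of that idea only; the function names, filter scales, and matching radius are illustrative assumptions rather than values from the paper, and the paper's descriptor-based pairing is reduced here to simple spatial proximity.

```python
# Hypothetical sketch: difference-of-Gaussians key points on a 2D "load map"
# and a crude similarity score based on matched key point pairs.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_keypoints(load_map, sigma1=1.0, sigma2=2.0, rel_thresh=0.1):
    """Return (row, col) key points as local maxima of a DoG response."""
    dog = gaussian_filter(load_map, sigma1) - gaussian_filter(load_map, sigma2)
    peaks = (dog == maximum_filter(dog, size=5)) & (dog > rel_thresh * dog.max())
    return np.argwhere(peaks)

def similar_pair_count(kps_a, kps_b, radius=3.0):
    """Count key points in A that have a neighbour in B within `radius` cells."""
    if len(kps_a) == 0 or len(kps_b) == 0:
        return 0
    d = np.linalg.norm(kps_a[:, None, :] - kps_b[None, :, :], axis=-1)
    return int((d.min(axis=1) < radius).sum())

# Rank candidate resources in a pool by similar-key-point count to the target.
rng = np.random.default_rng(0)
target_map = rng.random((64, 64))                        # synthetic load map
pool = {f"resource_{i}": rng.random((64, 64)) for i in range(5)}
kp_target = dog_keypoints(target_map)
scores = {name: similar_pair_count(kp_target, dog_keypoints(m))
          for name, m in pool.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))     # highest score first
```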
7. Comparison and evaluation of feature matching methods for multisource planetary remote sensing imagery.
- Author
-
Ye, Zhen, Zhou, Yingying, Xu, Yusheng, Huang, Rong, Wan, Genyi, Qian, Jia, Xie, Huan, and Tong, Xiaohua
- Subjects
- *
REMOTE sensing , *COMPUTER vision , *SURFACE morphology , *DETECTORS , *DATA quality , *DEEP learning , *IMAGE registration - Abstract
Feature‐based image matching is a critical technique in photogrammetry and computer vision, and various advanced image matching methods have recently been proposed. Their effectiveness is significantly challenged by multisource planetary images, which often depict unusual surface morphologies and are observed by different sensors under different illumination and viewing conditions. This study investigates and evaluates the performance of 13 feature detectors across diverse images of the Moon and Mars captured by different sensor systems under different radiometric and geometric conditions. The performance of 12 feature descriptors is also assessed, and a ranking of detector and descriptor combinations is determined. The results reveal that phase congruency‐based algorithms achieve favourable performance in both feature detection and description. On the other hand, deep learning‐based methods may obtain better results if high‐quality training data were available. Finally, we summarise the capabilities and limitations of multisource remote sensing image matching methods and provide discussions and prospects for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Hierarchical Graph Neural Network: A Lightweight Image Matching Model with Enhanced Message Passing of Local and Global Information in Hierarchical Graph Neural Networks.
- Author
-
Opanin Gyamfi, Enoch, Qin, Zhiguang, Mantebea Danso, Juliana, and Adu-Gyamfi, Daniel
- Subjects
- *
GRAPH neural networks , *COMPUTER vision , *IMAGE registration , *REPRESENTATIONS of graphs , *PRINCIPAL components analysis , *POSE estimation (Computer vision) - Abstract
Graph Neural Networks (GNNs) have gained popularity in image matching methods, proving useful for various computer vision tasks like Structure from Motion (SfM) and 3D reconstruction. A well-known example is SuperGlue. Lightweight variants, such as LightGlue, have been developed with a focus on stacking fewer GNN layers than SuperGlue. This paper proposes h-GNN, a lightweight image matching model with improvements in its two processing modules, the GNN module and the matching module. After image features are detected and described as keypoint nodes of a base graph, the GNN module, which primarily aims at increasing the h-GNN's depth, creates successive hierarchies of compressed-size graphs from the base graph through a clustering technique termed SC+PCA. SC+PCA combines Principal Component Analysis (PCA) with Spectral Clustering (SC) to enrich nodes with local and global information during graph clustering. A dual non-contrastive clustering loss is used to optimize graph clustering. Additionally, four message-passing mechanisms are proposed that either update node representations within a graph cluster at the same hierarchical level or update node representations across graph clusters at different hierarchical levels. The matching module performs iterative pairwise matching on the enriched node representations to obtain a score matrix whose entries indicate potential correct matches between the image keypoint nodes. The score matrix is refined with a 'dustbin' to further suppress unmatched features, and a reprojection loss is used to optimize keypoint match positions. The Sinkhorn algorithm generates a final partial assignment from the refined score matrix. Experimental results demonstrate the performance of the proposed h-GNN against competing state-of-the-art (SOTA) GNN-based methods on several image matching tasks, including homography estimation, indoor and outdoor camera pose estimation, and 3D reconstruction, on multiple datasets. Experiments also demonstrate improved computational memory and runtime, approximately 38.1% and 26.14% lower than SuperGlue, and on average about 6.8% and 7.1% lower than LightGlue. Future research will explore the effects of integrating more recent simplicial message-passing mechanisms, which concurrently update both node and edge representations, into our proposed model. [ABSTRACT FROM AUTHOR] (A minimal sketch of the dustbin-and-Sinkhorn assignment step follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
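The final step described above, a score matrix augmented with a 'dustbin' and converted into a partial assignment by the Sinkhorn algorithm, is the same device used by SuperGlue. The sketch below is a simplified NumPy illustration of that step only: the scores are synthetic, the dustbin logit and iteration count are assumptions, and the plain row/column normalisation ignores the unequal dustbin marginals used in the full formulation.

```python
# Simplified Sinkhorn partial assignment with a dustbin row/column (NumPy only).
import numpy as np

def sinkhorn_with_dustbin(scores, dustbin_logit=0.0, iters=100):
    """scores: (M, N) matching scores. Returns an (M+1, N+1) soft assignment."""
    M, N = scores.shape
    S = np.full((M + 1, N + 1), dustbin_logit, dtype=float)
    S[:M, :N] = scores
    P = np.exp(S)
    for _ in range(iters):                   # alternate row/column normalisation
        P /= P.sum(axis=1, keepdims=True)
        P /= P.sum(axis=0, keepdims=True)
    return P

scores = np.random.default_rng(1).normal(size=(4, 5))
P = sinkhorn_with_dustbin(scores)

# Keypoint i matches j if (i, j) is a mutual argmax outside the dustbin.
rows = P[:-1, :-1].argmax(axis=1)
cols = P[:-1, :-1].argmax(axis=0)
matches = [(i, j) for i, j in enumerate(rows) if cols[j] == i]
print(matches)
```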
9. Using scale-equivariant CNN to enhance scale robustness in feature matching.
- Author
-
Liao, Yun, Liu, Peiyu, Wu, Xuning, Pan, Zhixuan, Zhu, Kaijun, Zhou, Hao, Liu, Junhui, and Duan, Qing
- Subjects
- *
COMPUTER vision , *CONVOLUTIONAL neural networks , *TRANSFORMER models , *PROBLEM solving , *IMAGE registration - Abstract
Image matching is an important task in computer vision. The detector-free dense matching approach is an important research direction in image matching due to its high accuracy and robustness. Classical detector-free image matching methods use convolutional neural networks to extract features and then match them. Because CNNs lack scale equivariance, these methods often exhibit poor matching performance when the images to be matched undergo significant scale variations, yet large scale variations are very common in practical problems. To solve this problem, we propose SeLFM, a method that combines scale equivariance with the global modeling capability of the transformer. Its two main advantages are that the scale-equivariant CNN extracts scale-equivariant features, while the transformer contributes global modeling capability. Experiments show that this modification improves the matcher's performance on image pairs with large scale variations without affecting its general matching performance. The code will be open-sourced at this link: https://github.com/LiaoYun0x0/SeLFM/tree/main [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Object/Scene Recognition Based on a Directional Pixel Voting Descriptor.
- Author
-
Aguilar-González, Abiel, Medina Santiago, Alejandro, and Osuna-Coutiño, J. A. de Jesús
- Subjects
ARTIFICIAL intelligence ,FEATURE extraction ,CONVOLUTIONAL neural networks ,IMAGE processing ,VOTING - Abstract
Detecting objects in images is crucial for several applications, including surveillance, autonomous navigation, and augmented reality. Although AI-based approaches such as Convolutional Neural Networks (CNNs) have proven highly effective for object detection, it is difficult to generalize an AI model to scenarios where the objects to be recognized are unknown in advance. By contrast, feature-based approaches such as SIFT, SURF, and ORB can search for arbitrary objects but have limitations under complex visual variations. In this work, we introduce a novel edge-based object/scene recognition method. We propose that using feature edges, instead of feature points, offers high performance under complex visual variations. Our primary contribution is a directional pixel voting descriptor based on image segments. Experimental results are promising; compared to previous approaches, ours demonstrates superior performance under complex visual variations and high processing speed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. A Robust Search Method Using Features To Determine Combined Keywords On Cloud Encrypted Data.
- Author
-
Viswanadham, Y. K., Lakshmi, G. Naga, Kumar, G. Dinesh, Archana, B., and Sravanthi, B.
- Subjects
CLOUD storage ,INDEXING - Abstract
Users are more comfortable entrusting their sensitive information to the cloud as its security continues to improve. However, when there are many encrypted files, each with its own set of indexing keywords, the storage overhead grows exponentially and search efficiency suffers. This work therefore provides a technique for searching encrypted cloud data that uses features to match joint keywords (FMJK). Joint keywords are generated by randomly selecting a subset of the data owner's non-duplicated keywords from the keywords extracted from the documents; together, these joint keywords form a keyword dictionary. Each joint keyword is matched both against a document's features and against a query keyword, so the former result forms one dimension of the document's index and the latter one dimension of the query trapdoor. The BM25 method is then used to rank the top-k results by the inner product between the document index and the trapdoor. [ABSTRACT FROM AUTHOR] (A toy sketch of the inner-product ranking step follows this entry.)
- Published
- 2024
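The scheme above ultimately ranks documents by the inner product between a document index vector and a query trapdoor vector. Below is a toy NumPy sketch of that ranking step alone; the dimensions and values are synthetic, and the encryption, BM25 weighting, and trapdoor construction are deliberately omitted.

```python
# Toy top-k ranking by inner product between document index vectors and a
# query trapdoor vector; encryption and BM25 weighting are omitted.
import numpy as np

rng = np.random.default_rng(0)
doc_index = rng.random((100, 32))   # 100 documents, 32 joint-keyword dimensions
trapdoor = rng.random(32)           # query vector over the same dimensions

scores = doc_index @ trapdoor       # one inner product per document
k = 5
top_k = np.argsort(-scores)[:k]     # indices of the k best-matching documents
print(top_k, scores[top_k])
```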
12. Detecting change in graffiti using a hybrid framework.
- Author
-
Wild, Benjamin, Verhoeven, Geert, Muszyński, Rafał, and Pfeifer, Norbert
- Abstract
Graffiti, by their very nature, are ephemeral, sometimes even vanishing before creators finish them. This transience is part of graffiti's allure yet signifies the continuous loss of this often disputed form of cultural heritage. To counteract this, graffiti documentation efforts have steadily increased over the past decade. One of the primary challenges in any documentation endeavour is identifying and recording new creations. Image‐based change detection can greatly help in this process, effectuating more comprehensive documentation, less biased digital safeguarding and improved understanding of graffiti. This paper introduces a novel and largely automated image‐based graffiti change detection method. The methodology uses an incremental structure‐from‐motion approach and synthetic cameras to generate co‐registered graffiti images from different areas. These synthetic images are fed into a hybrid change detection pipeline combining a new pixel‐based change detection method with a feature‐based one. The approach was tested on a large and publicly available reference dataset captured along the Donaukanal (Eng. Danube Canal), one of Vienna's graffiti hotspots. With a precision of 87% and a recall of 77%, the results reveal that the proposed change detection workflow can indicate newly added graffiti in a monitored graffiti‐scape, thus supporting a more comprehensive graffiti documentation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. A Fast Feature Matching Algorithm for UAV Images Constrained by Vocabulary Tree Indexing (词汇树索引约束的无人机影像快速特征匹配算法).
- Author
-
姜 三, 江万寿, and 郭丙轩
- Subjects
- *
SUBJECT headings , *SEARCH algorithms , *IMAGE retrieval , *DRONE aircraft , *EUCLIDEAN distance , *IMAGE registration - Abstract
Objectives: Efficient match pair selection and image feature matching directly affect the efficiency of structure from motion (SfM)-based 3D reconstruction for unmanned aerial vehicle (UAV) images. This paper combines the inverted and direct index structures of the vocabulary tree to speed up match pair selection and feature matching for UAV images. Methods: First, for match pair selection, vocabulary tree-based image retrieval has been the commonly used technique. However, it depends on a fixed number or fixed ratio threshold for match pair selection, which may produce many redundant match pairs. An adaptive vocabulary tree-based retrieval algorithm is designed for match pair selection by using the word-image index structure and the spatial distribution of similarity scores, avoiding the drawback of depending on fixed thresholds. Second, for feature matching, the nearest neighbor searching method computes the Euclidean distance exhaustively between two sets of feature descriptors, which causes high computational costs and generates high outlier ratios. Thus, a guided feature matching (GFM) algorithm is presented that casts the explicit closest-descriptor search as a direct assignment using the image-word index structure of the vocabulary tree. Combining the match pair selection and GFM algorithms, an integrated workflow is finally presented to achieve feature matching of both ordered and unordered UAV images with high precision and efficiency. Results: The proposed workflow is verified using four UAV datasets and compared comprehensively with classical nearest neighbor searching algorithms and commercial software packages. Conclusions: The experimental results verify that the proposed method achieves efficient match pair selection and avoids retrieving too many or too few match pairs, a problem usually caused by traditional methods that use fixed threshold or fixed number strategies. Without sacrificing matching precision, the speedup ratio of direct-assignment-based feature matching ranges from 156 to 228, and competitive 3D reconstruction accuracy is also obtained compared with the nearest neighbor searching method. [ABSTRACT FROM AUTHOR] (A flat-vocabulary sketch of word-guided matching follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
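The guided feature matching idea above replaces exhaustive nearest-neighbour search with a direct assignment: only descriptors quantised to the same visual word are compared. The sketch below is a flat (non-tree) NumPy approximation of that step; the vocabulary, descriptors, and sizes are synthetic placeholders, and the paper's inverted/direct tree indices are not reproduced.

```python
# Word-guided descriptor matching: only descriptors assigned to the same
# visual word are compared. Flat vocabulary instead of a tree; data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
vocab = rng.random((50, 128))        # 50 visual words (cluster centres)
desc_a = rng.random((300, 128))      # descriptors of image A
desc_b = rng.random((300, 128))      # descriptors of image B

def assign_words(desc, vocab):
    """Index of the nearest visual word for every descriptor."""
    d = np.linalg.norm(desc[:, None, :] - vocab[None, :, :], axis=-1)
    return d.argmin(axis=1)

words_a, words_b = assign_words(desc_a, vocab), assign_words(desc_b, vocab)

matches = []
for w in np.intersect1d(words_a, words_b):      # words seen in both images
    ia, ib = np.where(words_a == w)[0], np.where(words_b == w)[0]
    d = np.linalg.norm(desc_a[ia][:, None, :] - desc_b[ib][None, :, :], axis=-1)
    matches += [(int(ia[r]), int(ib[c])) for r, c in enumerate(d.argmin(axis=1))]

print(len(matches), "word-guided candidate matches")
```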
14. A robust feature matching algorithm based on adaptive feature fusion combined with image superresolution reconstruction.
- Author
-
Huangfu, Wenjun, Ni, Cui, Wang, Peng, and Zhang, Yingying
- Subjects
CONVOLUTIONAL neural networks ,FEATURE extraction ,IMAGE reconstruction ,DEEP learning ,HIGH resolution imaging ,IMAGE reconstruction algorithms ,IMAGE registration - Abstract
With the development of image feature matching technology, feature matching algorithms based on deep learning have achieved excellent results, but in scenes with low texture or extreme perspective changes the matching accuracy is still difficult to guarantee. In this paper, building on LoFTR (local feature matching with transformers), a superresolution reconstruction method based on a Residual-ESPCN (efficient sub-pixel convolutional neural network) is proposed. The superresolution method is used to improve the interpolation used in ASFF (adaptive spatial feature fusion), increasing image resolution, enhancing image detail, and making the extracted features richer. ASFF is then introduced into the local feature extraction module of LoFTR, which alleviates the inconsistency of information transfer between different scales of the feature pyramid and reduces the information lost in passing from low- to high-resolution levels. Moreover, to improve the adaptability of the algorithm to different scenes, Otsu's method is introduced to adaptively compute the feature matching threshold. The experimental results show that in different indoor and outdoor scenes the proposed feature matching algorithm effectively improves the adaptability of feature matching and achieves good results in terms of area under the curve (AUC), accuracy, and recall. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Feature Description using Autoencoders for Fast 3D Ultrasound Tracking.
- Author
-
Wulff, Daniel and Ernst, Floris
- Subjects
ULTRASONIC imaging ,RADIOTHERAPY ,IMAGE processing ,DETECTORS ,INDUSTRIAL lasers - Abstract
3D ultrasound imaging is a promising modality for therapy guidance, e.g. in radiation therapy, as it can provide volumetric soft tissue images in real time. However, due to low image quality, a high noise ratio, and high data dimensionality, real-time-capable US image processing methods such as target tracking are challenging. In this study, a feature-based tracking approach is investigated. The FAST feature detector is used to detect local image features in 3D ultrasound images. Two different feature descriptors are tested and evaluated for target tracking: the BRIEF descriptor and a sliced-Wasserstein autoencoder. On the basis of a feature matching algorithm, tracking experiments are executed and evaluated using eight labeled 3D US sequences. The mean tracking error is 2.08 ± 1.50 mm using the autoencoder and 2.29 ± 1.59 mm using the BRIEF descriptor. The results indicate that using an autoencoder for feature description improves tracking performance compared to a binary descriptor. The proposed tracking method runs in 137 ms and 256 ms per image on average, making it real-time capable. [ABSTRACT FROM AUTHOR] (A 2D FAST-plus-BRIEF matching sketch follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
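The pipeline above pairs the FAST detector with a BRIEF descriptor (or an autoencoder) and a matching step. As a rough 2D analogue, OpenCV ships FAST and, in the opencv-contrib-python package, BRIEF; the sketch below shows that detection-and-matching step on two ordinary grayscale frames. The file names and the FAST threshold are placeholders, and nothing here reproduces the paper's 3D ultrasound handling.

```python
# 2D analogue of FAST detection + BRIEF description + Hamming matching.
# Requires opencv-contrib-python for BRIEF; file names are placeholders.
import cv2

img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

fast = cv2.FastFeatureDetector_create(threshold=20)
brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()

kp1 = fast.detect(img1, None)
kp2 = fast.detect(img2, None)
kp1, des1 = brief.compute(img1, kp1)
kp2, des2 = brief.compute(img2, kp2)

# Binary descriptors are compared with the Hamming norm; cross-checking keeps
# only matches that are mutual nearest neighbours.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "cross-checked FAST+BRIEF matches")
```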
16. Research on efficient matching method of coal gangue recognition image and sorting image
- Author
-
Zhang Ye, Ma Hongwei, Wang Peng, Zhou Wenjian, Cao Xiangang, and Zhang Mingzhen
- Subjects
Coal gangue image matching ,Feature matching ,Intelligent coal gangue sorting robot ,Medicine ,Science - Abstract
When the coal gangue sorting robot sorts coal gangue, the position of the target gangue changes because of belt slippage, deviation, and speed fluctuations of the belt conveyor, which can cause the robot to fail to grasp or to miss the grasp. We have developed a solution to this problem: a two-stage IMSSP-Net fast matching method for gangue images. The method reacquires the target gangue's position information and improves the robot's grasping precision and efficiency. In the first stage, we use SuperPoint to guarantee the scene adaptability and credibility of feature point extraction, and we further enhance SuperPoint's feature point detection with an improved Multi-Scale Retinex with Color Restoration enhancement algorithm. In the second stage, we introduce SuperGlue for feature matching to improve the robustness of the matching network, and we eliminate erroneous feature matching point pairs and improve image matching accuracy by adopting the PROSAC algorithm. We conducted image matching comparison experiments under different object distances, scales, rotation angles, and complex conditions. The experimental platform is the double-manipulator truss-type coal gangue sorting robot independently developed by the team. The matching precision, recall, and matching time of the method are 98.2%, 98.3%, and 84.6 ms, respectively. The method meets the requirements for efficient and accurate matching between coal gangue recognition images and sorting images. (A robust match-pruning sketch follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
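In the second stage above, putative SuperGlue matches are pruned with PROSAC before the gangue's new position is computed. The sketch below illustrates that pruning idea with OpenCV's RANSAC-based homography fit as a stand-in for PROSAC (recent OpenCV builds also expose a USAC_PROSAC flag); the point arrays, injected outliers, and reprojection threshold are synthetic assumptions.

```python
# Pruning putative matches with a robust homography fit (RANSAC here as a
# stand-in for PROSAC); the matched point arrays are synthetic placeholders.
import numpy as np
import cv2

rng = np.random.default_rng(0)
pts_recog = (rng.random((100, 2)) * 640).astype(np.float32)       # recognition image
H_true = np.array([[1, 0, 25], [0, 1, -10], [0, 0, 1]], dtype=np.float32)
pts_sort = cv2.perspectiveTransform(pts_recog[None], H_true)[0]   # sorting image
pts_sort[::10] += 80                                              # gross outliers

H, inlier_mask = cv2.findHomography(pts_recog, pts_sort, cv2.RANSAC, 3.0)
good = inlier_mask.ravel().astype(bool)
print(f"kept {good.sum()} / {len(good)} matches; "
      f"estimated shift ({H[0, 2]:.1f}, {H[1, 2]:.1f}) px")
```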
17. Automatic tracking of moving human body based on remote sensing spatial information.
- Author
-
Dong, Wei, Li, Jiayang, and Lv, Yongfei
- Abstract
To address the low tracking accuracy and long tracking times of traditional automatic human body tracking methods, this research proposes an automatic tracking method for the moving human body based on remote sensing spatial information. The proposed method first uses remote sensing technology to obtain the spatial information of the moving human body and then constructs a human body target motion model. By segmenting and fusing human body motion images, automatic matching and tracking of human motion is finally realized. The experimental results showed that the auto-tracking time for the moving human body was only 0.3 s, while the auto-tracking accuracy rate was as high as 98.29%. In summary, the method achieves strong human motion tracking and recognition performance. The study still has shortcomings; for example, how to maintain high-accuracy tracking in low light and bad weather conditions still needs to be studied. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. An Improved Ship Recognition Method Based on Bag-of-Visual-Words.
- Author
-
LI Lianmin, SUN Ligong, and SUN Shibao
- Subjects
COMPUTER vision ,SUPPORT vector machines ,SUPERVISED learning ,ENCYCLOPEDIAS & dictionaries ,SHIPS - Abstract
Ship identification plays a critical role in maritime trade and military activities. Current research largely relies on deep learning-based methods, which demand extensive datasets and high-end hardware, often necessitating GPUs; this requirement significantly limits their practical application. Addressing this challenge, this paper introduces an enhanced bag-of-visual-words (BoVW) model based on classical computer vision techniques for rapid ship identification. The proposed method first employs SIFT and SURF to extract local features from ship images, followed by rapid matching and fusion of these features. A graph-theoretic approach is then used to determine the regions of interest (ROI) within the image, reducing background interference. Subsequently, clustering algorithms transform features within the ROIs into visual words and construct a visual dictionary, and each image is described by a histogram of visual words. The method also employs a spatial pyramid kernel to represent spatial relationships between image features and uses support vector machines (SVM) for supervised classification. Key parameters of the model are the size of the visual dictionary and the resolution level, and extensive experiments were conducted to explore them. With the visual dictionary size set to 300 and the resolution level to 2, the model achieved accuracy and precision exceeding 96%, validating the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR] (A compressed BoVW sketch follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
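The entry above follows the classic bag-of-visual-words recipe: local descriptors, a clustered visual dictionary, per-image word histograms, and an SVM classifier. Below is a compressed OpenCV/scikit-learn sketch of that recipe under stated assumptions: the image paths, labels, and dictionary size are placeholders, the ROI extraction and spatial pyramid kernel from the paper are omitted, and in practice the dictionary would be trained on many more images than shown.

```python
# Minimal bag-of-visual-words classifier: SIFT -> k-means dictionary ->
# word histogram -> linear SVM. Paths, labels, and sizes are placeholders.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def sift_descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = cv2.SIFT_create().detectAndCompute(img, None)
    return des if des is not None else np.empty((0, 128), np.float32)

train_paths = ["ship_001.jpg", "ship_002.jpg", "boat_001.jpg"]   # hypothetical
train_labels = [0, 0, 1]
all_des = [sift_descriptors(p) for p in train_paths]

# Visual dictionary: 300 words, as in the entry above (needs >= 300 descriptors).
kmeans = KMeans(n_clusters=300, n_init=10).fit(np.vstack(all_des))

def bovw_histogram(des):
    words = kmeans.predict(des)
    hist, _ = np.histogram(words, bins=np.arange(301))
    return hist / max(hist.sum(), 1)

X = np.array([bovw_histogram(d) for d in all_des])
clf = SVC(kernel="linear").fit(X, train_labels)
print(clf.predict(X))
```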
19. Building Better Models: Benchmarking Feature Extraction and Matching for Structure from Motion at Construction Sites.
- Author
-
Cueto Zumaya, Carlos Roberto, Catalano, Iacopo, and Queralta, Jorge Peña
- Subjects
- *
BUILDING sites , *FEATURE extraction , *RESEARCH personnel , *EVALUATION methodology , *POPULARITY , *DEEP learning - Abstract
The popularity of Structure from Motion (SfM) techniques has significantly advanced 3D reconstruction in various domains, including construction site mapping. Central to SfM is the feature extraction and matching process, which identifies and correlates keypoints across images. Previous benchmarks have assessed traditional and learning-based methods for these tasks but have not specifically focused on construction sites, often evaluating isolated components of the SfM pipeline. This study provides a comprehensive evaluation of traditional methods (e.g., SIFT, AKAZE, ORB) and learning-based methods (e.g., D2-Net, DISK, R2D2, SuperPoint, SOSNet) within the SfM pipeline for construction site mapping. It also compares matching techniques, including SuperGlue and LightGlue, against traditional approaches such as nearest neighbor. Our findings demonstrate that deep learning-based methods such as DISK with LightGlue and SuperPoint with various matchers consistently outperform traditional methods like SIFT in both reconstruction quality and computational efficiency. Overall, the deep learning methods exhibited better adaptability to complex construction environments, leveraging modern hardware effectively, highlighting their potential for large-scale and real-time applications in construction site mapping. This benchmark aims to assist researchers in selecting the optimal combination of feature extraction and matching methods for SfM applications at construction sites. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Development of Physical Evidence Examination Paradigms (I): The Traditional Definite-Conclusion Paradigm (物证鉴定范式发展(一)：传统明确结论范式).
- Author
-
王桂强
- Subjects
EXPERT evidence ,FORENSIC sciences ,SCIENTIFIC method ,CRIME scenes ,DNA - Abstract
Copyright of Forensic Science & Technology is the property of Institute of Forensic Science, Ministry of Public Security and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
21. EMC+GD_C: circle-based enhanced motion consistency and guided diffusion feature matching for 3D reconstruction.
- Author
-
Cai, Zhenjiao, Zhang, Sulan, Zhang, Jifu, Li, Xiaoming, Hu, Lihua, and Cai, Jianghui
- Subjects
THREE-dimensional imaging ,NEIGHBORHOODS ,CIRCLE - Abstract
Robust matching, especially the number, precision, and distribution of matched feature points, directly affects the quality of 3D reconstruction. However, existing methods rarely consider these three aspects together to improve the quality of feature matching, which in turn limits the quality of 3D reconstruction. Therefore, to effectively improve 3D reconstruction quality, we propose a circle-based enhanced motion consistency and guided diffusion feature matching algorithm named EMC+GD_C. First, a circle-based neighborhood division method is proposed, which increases the number of initial matching points. Second, to improve the precision of feature point matching, we put forward the idea of enhanced motion consistency, reducing mismatches among highly similar feature points by strengthening the criteria that distinguish true from false matches; we also combine this with RANSAC optimization to filter out outliers and further improve matching precision. Finally, a novel guided diffusion idea combining guided matching and motion consistency is proposed, which expands the spatial distribution of matched feature points and improves the stability of the 3D models. Experiments on 8 sets of 908 image pairs from public 3D reconstruction datasets demonstrate that our method achieves better matching performance and stronger stability in 3D reconstruction. Specifically, EMC+GD_C achieves an average improvement in feature matching precision of 24.07% over the SIFT-based ratio test, 9.18% over GMS, and 1.94% over EMC+GD_G. [ABSTRACT FROM AUTHOR] (A crude motion-consistency filtering sketch follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
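The motion consistency idea above rests on the observation that correct matches between two views move coherently with their neighbours while mismatches do not. The sketch below is a crude NumPy illustration of such a filter only; the synthetic points, neighbourhood size, and threshold are assumptions, and the paper's circle-based neighbourhoods, guided diffusion, and RANSAC stage are not reproduced.

```python
# Crude motion-consistency filter: keep matches whose displacement agrees with
# the median displacement of their spatial neighbours. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
pts1 = rng.random((200, 2)) * 500                       # keypoints in image 1
pts2 = pts1 + np.array([12.0, -7.0])                    # consistent global motion
pts2[::7] += rng.normal(0, 60, size=pts2[::7].shape)    # corrupt some "matches"

motion = pts2 - pts1
keep = np.zeros(len(pts1), dtype=bool)
for i in range(len(pts1)):
    # 10 nearest neighbours of point i (excluding itself).
    nbrs = np.argsort(np.linalg.norm(pts1 - pts1[i], axis=1))[1:11]
    local = np.median(motion[nbrs], axis=0)
    keep[i] = np.linalg.norm(motion[i] - local) < 5.0   # consistency threshold
print(f"kept {keep.sum()} of {len(keep)} putative matches")
```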
22. 3D Target Reconstruction Technology Based on ISAR Image Sequences (基于ISAR图像序列的目标三维重构技术).
- Author
-
李敏敏, 杨利红, and 吴超
- Abstract
Copyright of Computer Measurement & Control is the property of Magazine Agency of Computer Measurement & Control and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
23. Image Registration Combining Cross-Scale Point Matching with Multi-Scale Feature Fusion (跨尺度点匹配结合多尺度特征融合的图像配准).
- Author
-
欧卓林, 吕晓琪, and 谷 宇
- Subjects
COMPUTER-aided diagnosis ,IMAGE registration ,MAGNETIC resonance ,BRAIN diseases ,DIAGNOSTIC imaging - Abstract
Copyright of Chinese Journal of Liquid Crystal & Displays is the property of Chinese Journal of Liquid Crystal & Displays and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
24. Incremental SFM 3D Reconstruction Based on Deep Learning.
- Author
-
Liu, Lei, Wang, Congzheng, Feng, Chuncheng, Gong, Wanqi, Zhang, Lingyi, Liao, Libin, and Feng, Chang
- Subjects
MACHINE learning ,POINT cloud ,COMPUTER vision ,DRONE aircraft ,DEEP learning - Abstract
In recent years, with the rapid development of unmanned aerial vehicle (UAV) technology, multi-view 3D reconstruction has once again become a hot spot in computer vision. Incremental Structure from Motion (SfM) is currently the most prevalent reconstruction pipeline, but it still faces challenges in reconstruction efficiency, accuracy, and feature matching. In this paper, we use deep learning algorithms for feature matching to obtain more accurate matching point pairs. Moreover, we adopt an improved Gauss-Newton (GN) method, which not only avoids numerical divergence but also accelerates bundle adjustment (BA). Then, the sparse point cloud reconstructed by SfM and the original images are used as input to a depth estimation network to predict a depth map for each image. Finally, the depth maps are fused to complete the reconstruction of dense point clouds. Experimental verification shows that the reconstructed dense point clouds have rich detail and clear textures, and that the integrity, overall accuracy, and reconstruction efficiency of the point clouds are improved. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. CDTracker: Coarse-to-Fine Feature Matching and Point Densification for 3D Single-Object Tracking.
- Author
-
Zhang, Yuan, Pu, Chenghan, Qi, Yu, Yang, Jianping, Wu, Xiang, Niu, Muyuan, and Wei, Mingqiang
- Subjects
- *
POINT cloud , *ARTIFICIAL satellite tracking - Abstract
Three-dimensional (3D) single-object tracking (3D SOT) is a fundamental yet not well-solved problem in 3D vision, where the complexity of feature matching and the sparsity of point clouds pose significant challenges. To handle abrupt changes in appearance features and sparse point clouds, we propose a novel 3D SOT network, dubbed CDTracker. It leverages both cosine similarity and an attention mechanism to enhance the robustness of feature matching. By combining similarity embedding and attention assignment, CDTracker performs template and search area feature matching in a coarse-to-fine manner. Additionally, CDTracker addresses the problem of sparse point clouds, which commonly leads to inaccurate tracking. It incorporates relatively dense sampling based on the concept of point cloud segmentation to retain more target points, leading to improved localization accuracy. Extensive experiments on both the KITTI and Waymo datasets demonstrate clear improvements in CDTracker over its competitors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. OAAFormer: Robust and Efficient Point Cloud Registration Through Overlapping-Aware Attention in Transformer.
- Author
-
Gao, Jun-Jie, Dong, Qiu-Jie, Wang, Rui-An, Chen, Shuang-Min, Xin, Shi-Qing, Tu, Chang-He, and Wang, Wenping
- Subjects
POINT cloud ,FEATURE extraction ,STATISTICAL sampling ,RECORDING & registration ,ALGORITHMS - Abstract
In the domain of point cloud registration, the coarse-to-fine feature matching paradigm has received significant attention due to its impressive performance. This paradigm involves a two-step process: first, the extraction of multilevel features, and subsequently, the propagation of correspondences from coarse to fine levels. However, this approach faces two notable limitations. Firstly, the use of the Dual Softmax operation may promote one-to-one correspondences between superpoints, inadvertently excluding valuable correspondences. Secondly, it is crucial to closely examine the overlapping areas between point clouds, as only correspondences within these regions decisively determine the actual transformation. Considering these issues, we propose OAAFormer to enhance correspondence quality. On the one hand, we introduce a soft matching mechanism to facilitate the propagation of potentially valuable correspondences from coarse to fine levels. On the other hand, we integrate an overlapping region detection module to minimize mismatches to the greatest extent possible. Furthermore, we introduce a region-wise attention module with linear complexity during the fine-level matching phase, designed to enhance the discriminative capabilities of the extracted features. Tests on the challenging 3DLoMatch benchmark demonstrate that our approach leads to a substantial increase of about 7% in the inlier ratio, as well as an enhancement of 2%-4% in registration recall. Finally, to accelerate the prediction process, we replace the conventional Random Sample Consensus (RANSAC) algorithm with the selection of a limited yet representative set of high-confidence correspondences, resulting in a 100-fold speedup while maintaining comparable registration performance. [ABSTRACT FROM AUTHOR] (A NumPy sketch of the Dual Softmax operation follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
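For reference, the Dual Softmax operation discussed above turns a coarse similarity matrix into match probabilities by multiplying a row-wise and a column-wise softmax and then keeping mutual nearest neighbours above a confidence threshold. A small NumPy sketch of that operation, with a synthetic similarity matrix and an assumed temperature and threshold:

```python
# Dual-softmax coarse matching: P = softmax_rows(S/t) * softmax_cols(S/t),
# followed by a mutual-nearest-neighbour check. Synthetic similarity matrix.
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

S = np.random.default_rng(0).normal(size=(6, 8))   # superpoint similarity scores
t = 0.1                                            # temperature (assumed)
P = softmax(S / t, axis=1) * softmax(S / t, axis=0)

rows, cols = P.argmax(axis=1), P.argmax(axis=0)
matches = [(i, j) for i, j in enumerate(rows) if cols[j] == i and P[i, j] > 0.2]
print(matches)
```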
27. Depth grid-based local description for 3D point clouds.
- Author
-
Sa, Jiming, Zhang, Xuecheng, Zhang, Chi, Song, Yuyan, Ding, Liwei, and Huang, Yechen
- Abstract
With the rapid development and extensive application of next-generation image processing technologies, the manufacturing industry is increasingly adopting intelligent equipment. To meet the demands of high-precision and high-efficiency production, there has been a growing focus on 3D point cloud processing methods that go beyond traditional approaches. A fundamental and crucial challenge in point cloud processing is establishing a point-to-point correspondence mapping between two point clouds, which relies on the local feature description information inherent in the point cloud. This paper investigates novel local description methods for point clouds and addresses the inadequate descriptive capability and robustness of existing local description methods. Specifically, this study explores the encoding of point information in the neighborhood space and multi-view projection mapping, and proposes a local point cloud description method based on depth grids. The method obtains a local reference frame through neighborhood projection and distance weighting and then establishes a depth grid; the contribution of neighboring points to the depth of each grid cell is calculated to obtain the feature values. To enhance efficiency, the calculation of the feature values incorporates normalization and multi-view projection techniques. The proposed method is compared and evaluated against various local description methods to verify its effectiveness and accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. SIFT and ORB performance assessment for object identification in different test cases.
- Author
-
Sabry, Eman S., Elagooz, Salah, El-Samie, Fathi E. Abd, El-Bahnasawy, Nirmeen A., and El-Banby, Ghada M.
- Abstract
Computer vision is a catch-all term for a variety of applications, which makes it a fertile research environment for new ideas and concepts. Feature extraction is considered an essential step in such applications. Several research studies have introduced the Scale-Invariant Feature Transform (SIFT) as a benchmark method for extracting visual features of objects inside images, which underscores the need for a deep study of SIFT in a variety of settings. Hence, this paper presents an assessment of SIFT from different perspectives that are not explicitly expressed in the literature. In addition, it illustrates most of the feature extraction characteristics of Oriented FAST and Rotated BRIEF (ORB) to facilitate the choice between SIFT and ORB. Several experimental cases are included, each of which evaluates the performance of these methods from distinct aspects. First, the paper presents an assessment of the methods for identifying objects inside geometrically affine-transformed images, by comparing how well the feature descriptors gathered from the images perform against one another. Second, the paper evaluates the invariance of these methods to changes in illumination. Furthermore, the computational and asymptotic complexity of the methods is investigated to examine its impact on the complexity of any feature-based system. Finally, the efficiency of the methods is verified by assessing their ability to support real-time applications, through the evaluation of their time and space complexity over all investigated test scenarios. [ABSTRACT FROM AUTHOR] (A tiny SIFT-versus-ORB comparison harness follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
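A tiny harness for the kind of SIFT-versus-ORB comparison described above, timing detection plus description and counting ratio-test matches with OpenCV. The image paths, feature counts, and the 0.75 ratio threshold are placeholder assumptions, not values from the paper.

```python
# Tiny SIFT vs. ORB comparison: detection/description time and ratio-test matches.
# Image paths and thresholds are placeholders following common defaults.
import time
import cv2

img1 = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)              # hypothetical images
img2 = cv2.imread("scene_transformed.jpg", cv2.IMREAD_GRAYSCALE)

for name, feat, norm in [("SIFT", cv2.SIFT_create(), cv2.NORM_L2),
                         ("ORB", cv2.ORB_create(nfeatures=2000), cv2.NORM_HAMMING)]:
    t0 = time.perf_counter()
    kp1, des1 = feat.detectAndCompute(img1, None)
    kp2, des2 = feat.detectAndCompute(img2, None)
    knn = cv2.BFMatcher(norm).knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in knn if len(p) == 2)
            if m.distance < 0.75 * n.distance]                     # Lowe ratio test
    dt = time.perf_counter() - t0
    print(f"{name}: {len(kp1)}/{len(kp2)} keypoints, {len(good)} matches, {dt:.3f} s")
```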
29. Advanced Image Stitching Method for Dual-Sensor Inspection.
- Author
-
Shahsavarani, Sara, Lopez, Fernando, Ibarra-Castanedo, Clemente, and Maldague, Xavier P. V.
- Subjects
- *
GRAPH neural networks , *INFRARED imaging , *CONVOLUTIONAL neural networks , *SURFACE defects - Abstract
Efficient image stitching plays a vital role in the Non-Destructive Evaluation (NDE) of infrastructures. An essential challenge in the NDE of infrastructures is precisely visualizing defects within large structures. The existing literature predominantly relies on high-resolution close-distance images to detect surface or subsurface defects. While the automatic detection of all defect types represents a significant advancement, understanding the location and continuity of defects is imperative. It is worth noting that some defects may be too small to capture from a considerable distance. Consequently, multiple image sequences are captured and processed using image stitching techniques. Additionally, visible and infrared data fusion strategies prove essential for acquiring comprehensive information to detect defects across vast structures. Hence, there is a need for an effective image stitching method appropriate for infrared and visible images of structures and industrial assets, facilitating enhanced visualization and automated inspection for structural maintenance. This paper proposes an advanced image stitching method appropriate for dual-sensor inspections. The proposed image stitching technique employs self-supervised feature detection to enhance the quality and quantity of feature detection. Subsequently, a graph neural network is employed for robust feature matching. Ultimately, the proposed method results in image stitching that effectively eliminates perspective distortion in both infrared and visible images, a prerequisite for subsequent multi-modal fusion strategies. Our results substantially enhance the visualization capabilities for infrastructure inspection. Comparative analysis with popular state-of-the-art methods confirms the effectiveness of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Research on Partial Overlapping Point Cloud Registration Algorithm for Matching Geometric Features.
- Author
-
HU Jianghao and WANG Feng
- Subjects
POINT cloud ,STANDARD deviations ,FEATURE extraction ,SINGULAR value decomposition ,ANGLES ,RECORDING & registration - Abstract
To solve the problems of outliers, redundant points, and fuzzy features in the registration of partially overlapping point clouds, a new registration algorithm that matches geometric features of partially overlapping point clouds is proposed. Feature interaction and a multilayer perceptron are used to calculate the overlap score and feature saliency value of each point to be registered, and the salient feature points in the overlapping area are extracted. Geometric features are captured according to lengths and angles, representative feature descriptors are extracted, a dedicated geometric feature matching network is designed, inliers and outliers among the key points are identified, and the outliers are filtered out. The registration result is then obtained using a weighted singular value decomposition. Experimental results show that on the ModelNet40 dataset, compared with the benchmark algorithm, the root mean square error and mean absolute error of the proposed algorithm in rotation and translation are reduced by 59%, 45%, 83%, and 66%, respectively. On the ShapeNetCore dataset, the four indicators are reduced by 63%, 32%, 78%, and 50%, respectively. [ABSTRACT FROM AUTHOR] (A weighted-SVD alignment sketch follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
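The closed-form step mentioned above, obtaining the registration from a weighted singular value decomposition over matched points, is the weighted Kabsch solve for rotation and translation. Below is a NumPy sketch of that step with synthetic correspondences; the weights stand in for per-match confidences, and nothing else from the paper's matching network is reproduced.

```python
# Weighted Kabsch/SVD rigid alignment from weighted point correspondences.
# Synthetic data; the weights mimic per-correspondence confidences.
import numpy as np

rng = np.random.default_rng(0)
src = rng.random((100, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.5, -0.2, 0.1])
w = rng.random(100)                                   # per-match confidence weights

mu_s = (w[:, None] * src).sum(axis=0) / w.sum()       # weighted centroids
mu_d = (w[:, None] * dst).sum(axis=0) / w.sum()
H = (src - mu_s).T @ np.diag(w) @ (dst - mu_d)        # weighted cross-covariance
U, _, Vt = np.linalg.svd(H)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflection
R = Vt.T @ D @ U.T                                    # rotation mapping src -> dst
t = mu_d - R @ mu_s
print(np.allclose(R, R_true, atol=1e-6), t)
```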
31. Accurate semantic segmentation of small-body craters for navigation.
- Author
-
Li, Shuai, Gu, Tianhao, Liu, Yanjie, and Shao, Wei
- Subjects
- *
FEATURE extraction , *DEEP learning , *IMAGE registration , *NAVIGATION , *AERONAUTICAL navigation - Abstract
Feature extraction and matching play integral roles in autonomous vision-based navigation. As a typical morphological feature, craters can be used as navigation landmarks. However, the spin of small bodies introduces substantial variations in illumination and viewing angle within the acquired images, posing challenges to feature extraction and matching algorithms. Addressing these challenges, a deep learning-based approach is pivotal in mitigating the limitations of existing navigation algorithms. This paper presents an algorithm designed to segment noteworthy craters and irregular regions on such small bodies, furnishing crucial autonomous navigation data for missions involving uncharted terrains. By seamlessly integrating deep learning techniques with conventional computer vision-based feature description methods, the algorithm achieves matching of segmentation results with noteworthy precision. The experimental results show that the algorithm outperforms existing mainstream segmentation networks and achieves accurate matching of the segmentation results. • Accurate segmentation of small-body craters via deep-learning algorithms. • Gradient descriptor construction method based on segmentation results. • The proposed method has viewpoint, illumination, scale, and rotation invariance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Research on Infrared and Visible Image Registration Based on a Composite 2S Network (基于复合型2S网络的红外与可见光图像配准研究).
- Author
-
郑博文, 王琢, and 曹昕宇
- Abstract
To address the poor performance of traditional image registration methods on infrared and visible image registration tasks, a feature matching method based on a SuperPoint + SuperGlue (2S) composite network was proposed for infrared and visible image registration. The method first used SuperPoint's feature extraction to fully extract the features common to infrared and visible images. Second, matching constraints and an attention mechanism were added in the SuperGlue feature matching stage to exploit the advantages of the neural network and improve matching efficiency. In the training phase, self-built datasets were used to improve the generalization and accuracy of the network. The results show that, for traditional registration methods, the repeatability and accuracy scores of feature point extraction on three sets of experimental images are (0.0067, 0.0061), (0.0010, 0.0008), and (0, 0), respectively, with 7, 1, and 0 correctly matched feature point pairs, on average fewer than the minimum of four matching point pairs required to estimate the transformation matrix. The scores of the SuperPoint + SuperGlue-based infrared and visible image registration method are (0.2402, 0.2625), (0.1939, 0.1722), and (0.2630, 0.2644), with 252, 165, and 252 correctly matched feature point pairs. The feature point extraction metrics and the number of correctly matched point pairs increase significantly compared with traditional methods, so the method completes the registration task better. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Overview of Essential Components in deep learning reference-based super resolution methods.
- Author
-
Xue, Jiayu, Liu, Junjie, and Shi, Yong
- Subjects
DEEP learning - Abstract
Reference-based super resolution (RefSR) aims to recover the lost details in a low-resolution image and generate a high-resolution result, guided by a high-resolution reference image with similar contents or textures. In contrast to traditional single-image super-resolution, which focuses on the intrinsic properties of the single low-resolution image, the challenge of RefSR lies in matching and aggregating highly related but misaligned reference images with low-resolution images. Several effective but complex designs have been proposed to address this challenge, which poses difficulties in implementing RefSR in real-world applications. In order to better understand the working mechanism of RefSR and design a more efficient and lightweight architecture, we provide a review of the essential components of existing deep learning-based RefSR methods. We decompose and classify the common pipeline into four submodules according to their functionalities. Then, we summarize and describe the implementation details of the commonly adopted approaches in each submodule. Finally, we discuss the challenges and promising research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Designing A Valid and Reliable AAC App Evaluation Tool: Differences Between Team and Novice Raters.
- Author
-
Boesch, Miriam C., Da Fonte, M. Alexandra, Cavagnini, Melissa J., Shaw, Kaitlyn R., Deneny, Keren E., and Davis, Margaret F.
- Subjects
MEANS of communication for people with disabilities ,MOBILE communication systems ,MOBILE apps ,TELECOMMUNICATION systems - Abstract
Students with complex communication needs have increasingly been using non-dedicated communication systems, such as mobile devices, to support their communication needs. This, in turn, has led to an increased use of augmentative and alternative communication (AAC) apps. The main challenge currently faced is the lack of empirically validated apps and of evaluation systems to assess the features of the apps. As a result, this study attempted to determine the reliability of an app evaluation tool grounded in the components of the feature matching model. The goal was also to identify whether the app evaluation tool could be used to evaluate various types of augmentative and alternative communication apps. Participants evaluated apps across the dimensions of usability, output, and display. Results suggest that expert raters were more reliable than novice raters across the various types of apps. Practical implications and future research directions are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. ISFM-SLAM: dynamic visual SLAM with instance segmentation and feature matching
- Author
-
Chao Li, Yang Hu, Jianqiang Liu, Jianhai Jin, and Jun Sun
- Subjects
simultaneous localization and mapping (SLAM) ,instance segmentation network ,dynamic environment ,motion consistency detection ,feature matching ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Introduction: Simultaneous Localization and Mapping (SLAM) is a technology used in intelligent systems such as robots and autonomous vehicles. Visual SLAM has become a popular type of SLAM due to its acceptable cost and good scalability when applied to robot positioning, navigation, and other functions. However, most visual SLAM algorithms assume a static environment, so when they are deployed in highly dynamic scenes, problems such as tracking failure and overlapped mapping are prone to occur. Methods: To deal with this issue, we propose ISFM-SLAM, a dynamic visual SLAM built upon the classic ORB-SLAM2, incorporating an improved instance segmentation network and enhanced feature matching. Based on YOLACT, the improved instance segmentation network uses the multi-scale residual network Res2Net as its backbone and CIoU_Loss in the bounding-box loss function to improve the detection accuracy of the segmentation network. To improve the matching rate and computational efficiency of the internal feature points, we fuse ORB keypoints with an efficient image descriptor to replace the traditional ORB feature matching of ORB-SLAM2. Moreover, a motion consistency detection algorithm based on external variance values is proposed and integrated into ISFM-SLAM to help the SLAM system cull dynamic feature points more effectively. Results and discussion: Simulation results on the TUM dataset show that the overall pose estimation accuracy of ISFM-SLAM is 97% better than that of ORB-SLAM2 and superior to other mainstream and state-of-the-art dynamic SLAM systems. Further real-world experiments validate the feasibility of the proposed SLAM system in practical applications.
- Published
- 2024
- Full Text
- View/download PDF
36. Global Modeling and Local Matching: A Dynamic Fusion Approach
- Author
-
Zeng, Youlong, Sun, Haiyan, Li, Xiaobin, Chen, Zhuoyi, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Tan, Kay Chen, Series Editor, Jia, Yingmin, editor, Zhang, Weicun, editor, Fu, Yongling, editor, and Yang, Huihua, editor
- Published
- 2024
- Full Text
- View/download PDF
37. Innovative Fusion of Transformer Models with SIFT for Superior Panorama Stitching
- Author
-
Xiang, Zheng, Fournier-Viger, Philippe, Series Editor, and Wang, Yulin, editor
- Published
- 2024
- Full Text
- View/download PDF
38. Keypoint Matching for Instrument-Free 3D Registration in Video-Based Surgical Navigation
- Author
-
Baptista, Tânia, Raposo, Carolina, Marques, Miguel, Antunes, Michel, Barreto, Joao P., Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Linguraru, Marius George, editor, Dou, Qi, editor, Feragen, Aasa, editor, Giannarou, Stamatia, editor, Glocker, Ben, editor, Lekadir, Karim, editor, and Schnabel, Julia A., editor
- Published
- 2024
- Full Text
- View/download PDF
39. PCB Large Color Variation Image Registration with Local Optimization LoFTR
- Author
-
Hou, Yingyan, Zhang, Yidan, Liu, Xiaoxuan, Wu, Hui, Jia, Jie, Li, Xiaohe, Liu, Shixiong, Wang, Lei, Zhao, Xinyu, Akan, Ozgur, Editorial Board Member, Bellavista, Paolo, Editorial Board Member, Cao, Jiannong, Editorial Board Member, Coulson, Geoffrey, Editorial Board Member, Dressler, Falko, Editorial Board Member, Ferrari, Domenico, Editorial Board Member, Gerla, Mario, Editorial Board Member, Kobayashi, Hisashi, Editorial Board Member, Palazzo, Sergio, Editorial Board Member, Sahni, Sartaj, Editorial Board Member, Shen, Xuemin, Editorial Board Member, Stan, Mircea, Editorial Board Member, Jia, Xiaohua, Editorial Board Member, Zomaya, Albert Y., Editorial Board Member, Yu, Weng, editor, and Xuan, Liu, editor
- Published
- 2024
- Full Text
- View/download PDF
40. Local Feature Descriptor Based on Directional Structure Map for Improving the Hotspot Detection in the Multispectral Aerial Image of a Large-Scale PV System
- Author
-
Tan, Li Ven, Jadin, Mohd Shawal, Osman, Muhammad Khusairi, Bakar, Mohd Shafie, Islam, Md. Imamul, Al Mansur, Ahmed, Ul Haq, Mohammad Asif, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Tan, Kay Chen, Series Editor, Md. Zain, Zainah, editor, Sulaiman, Norizam, editor, Mustafa, Mahfuzah, editor, Shakib, Mohammed Nazmus, editor, and A. Jabbar, Waheb, editor
- Published
- 2024
- Full Text
- View/download PDF
41. Loosening Bolt Detection of Sling Cars Based on Deep Learning and Feature Matching
- Author
-
Qiao, Kaifan, Feng, Guojin, Zhen, Dong, Liang, Xiaoxia, Meng, Zhaozong, Gu, Fengshou, Ceccarelli, Marco, Series Editor, Corves, Burkhard, Advisory Editor, Glazunov, Victor, Advisory Editor, Hernández, Alfonso, Advisory Editor, Huang, Tian, Advisory Editor, Jauregui Correa, Juan Carlos, Advisory Editor, Takeda, Yukio, Advisory Editor, Agrawal, Sunil K., Advisory Editor, Liu, Tongtong, editor, Zhang, Fan, editor, Huang, Shiqing, editor, Wang, Jingjing, editor, and Gu, Fengshou, editor
- Published
- 2024
- Full Text
- View/download PDF
42. On Authentication in Virtual Reality Environments for Rehabilitation and Psychotherapy Systems
- Author
-
Ungureanu, Florina, Bordea, Bianca Andreea, Lupu, Robert Gabriel, Vieriu, George, Magjarević, Ratko, Series Editor, Ładyżyński, Piotr, Associate Editor, Ibrahim, Fatimah, Associate Editor, Lackovic, Igor, Associate Editor, Rock, Emilio Sacristan, Associate Editor, Costin, Hariton-Nicolae, editor, and Petroiu, Gladiola Gabriela, editor
- Published
- 2024
- Full Text
- View/download PDF
43. SuperPoint and SuperGlue-Based-VINS-Fusion Model
- Author
-
Gao, Ming, Geng, Zhitao, Pan, Jingjing, Yan, Zhenghui, Zhang, Chen, Shi, Gongcheng, Fan, Haifeng, Zhang, Chuanlei, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Huang, De-Shuang, editor, Zhang, Chuanlei, editor, and Pan, Yijie, editor
- Published
- 2024
- Full Text
- View/download PDF
44. RoTIR: Rotation-Equivariant Network and Transformers for Zebrafish Scale Image Registration
- Author
-
Wang, Ruixiong, Achim, Alin, Raele-Rolfe, Renata, Tong, Qiao, Bergen, Dylan, Hammond, Chrissy, Cross, Stephen, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Yap, Moi Hoon, editor, Kendrick, Connah, editor, Behera, Ardhendu, editor, Cootes, Timothy, editor, and Zwiggelaar, Reyer, editor
- Published
- 2024
- Full Text
- View/download PDF
45. Cross-Domain Feature Extraction Using CycleGAN for Large FoV Thermal Image Creation
- Author
-
Rathore, Sudeep, Upadhyay, Avinash, Sharma, Manoj, Yadav, Ajay, Chand, G. Shyam, Singhal, Amit, Mukherjee, Prerana, Lall, Brejesh, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Kaur, Harkeerat, editor, Jakhetiya, Vinit, editor, Goyal, Puneet, editor, Khanna, Pritee, editor, Raman, Balasubramanian, editor, and Kumar, Sanjeev, editor
- Published
- 2024
- Full Text
- View/download PDF
46. Three-Dimensional Reconstruction Optimization Algorithm Under Large Viewpoint Variations Scenes
- Author
-
Gu, Yuntao, Yang, Zhile, Guo, Yuanjun, Ding, Wenjun, Cheng, Lan, Liu, Yu, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Tan, Kay Chen, Series Editor, Hu, Cungang, editor, and Cao, Wenping, editor
- Published
- 2024
- Full Text
- View/download PDF
47. Research on the Identification Method of Colour-Based Status Indicator Based on Feature Matching
- Author
-
Zhang, Guixin, Wan, Yi, Hou, Jiarui, Li, Yanan, Li, Jianwei, Ceccarelli, Marco, Series Editor, Corves, Burkhard, Advisory Editor, Glazunov, Victor, Advisory Editor, Hernández, Alfonso, Advisory Editor, Huang, Tian, Advisory Editor, Jauregui Correa, Juan Carlos, Advisory Editor, Takeda, Yukio, Advisory Editor, Agrawal, Sunil K., Advisory Editor, Tan, Jianrong, editor, Liu, Yu, editor, Huang, Hong-Zhong, editor, Yu, Jingjun, editor, and Wang, Zequn, editor
- Published
- 2024
- Full Text
- View/download PDF
48. CT-MVSNet: Efficient Multi-view Stereo with Cross-Scale Transformer
- Author
-
Wang, Sicheng, Jiang, Hao, Xiang, Lei, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Rudinac, Stevan, editor, Hanjalic, Alan, editor, Liem, Cynthia, editor, Worring, Marcel, editor, Jónsson, Björn Þór, editor, Liu, Bei, editor, and Yamakata, Yoko, editor
- Published
- 2024
- Full Text
- View/download PDF
49. Feature Matching in the Changed Environments for Visual Localization
- Author
-
Hu, Qian, Shen, Xuelun, Li, Zijun, Liu, Weiquan, Wang, Cheng, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Liu, Qingshan, editor, Wang, Hanzi, editor, Ma, Zhanyu, editor, Zheng, Weishi, editor, Zha, Hongbin, editor, Chen, Xilin, editor, Wang, Liang, editor, and Ji, Rongrong, editor
- Published
- 2024
- Full Text
- View/download PDF
50. An Iterative Selection Matching Algorithm Based on Fast Sample Consistency
- Author
-
Wang, Yanwei, Fu, Huaide, Cheng, Junting, Meng, Xianglin, Xie, Zeming, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Zhang, Min, editor, Xu, Bin, editor, Hu, Fuyuan, editor, Lin, Junyu, editor, Song, Xianhua, editor, and Lu, Zeguang, editor
- Published
- 2024
- Full Text
- View/download PDF