701 results for "visual guidance"
Search Results
2. The Visual Guidance of Scenic Windows in Spatial Sequences: A Case of New Garden of Qinghui Garden
- Author
-
Liu, Chen, Liang, Mingjie, Feng, Junxi, Zaphiris, Panayiotis, editor, Ioannou, Andri, editor, Sottilare, Robert A., editor, Schwarz, Jessica, editor, and Rauterberg, Matthias, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Weakly Supervised 3D Object Detection via Multi-level Visual Guidance
- Author
-
Huang, Kuan-Chih, Tsai, Yi-Hsuan, Yang, Ming-Hsuan, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
4. Guiding the Hand to an Invisible Target.
- Author
-
Furtak, Marcin and Brenner, Eli
- Subjects
- Low vision; Assistive technology; Free enterprise; People with visual disabilities
- Abstract
Numerous devices are being developed to assist visually impaired and blind individuals in performing everyday tasks such as reaching out to grasp objects. Considering that the size, weight, and cost of assistive devices significantly impact their acceptance, it would be useful to know how effective various types of guiding information can be. As an initial exploration of this issue, we conducted four studies in which participants with normal vision were visually guided toward targets. They were guided by information about the direction to the target, and either about the distance to the target or about the time required to reach the target. We compared participants’ performance when provided with different amounts of each of these kinds of information. We found that restricting information about the distance from the target or the time it would take to reach the target to only a few possible values does not affect performance substantially. Restricting information about the direction to the target to only a few possible values appears to be more detrimental, but the disadvantage of having few possible directions can be mitigated by combining values in multiple directions. These findings can help optimize haptic presentations in assistive technology. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
5. Six-Dimensional Pose Estimation of Molecular Sieve Drying Package Based on Red Green Blue–Depth Camera.
- Author
-
Chen, Yibing, Cao, Songxiao, Wang, Qixuan, Xu, Zhipeng, Song, Tao, and Jiang, Qing
- Subjects
- Object recognition (computer vision); Molecular sieves; Recognition (psychology); Principal components analysis; Point cloud
- Abstract
This paper aims to address the challenge of precise robotic grasping of molecular sieve drying bags during automated packaging by proposing a six-dimensional (6D) pose estimation method based on a red green blue-depth (RGB-D) camera. The method consists of three components: point cloud pre-segmentation, target extraction, and pose estimation. A minimum bounding box-based pre-segmentation method was designed to minimize the impact of packaging wrinkles and skirt curling. Orientation filtering combined with Euclidean clustering and Principal Component Analysis (PCA)-based iterative segmentation was employed to accurately extract the target body. Lastly, a multi-target feature fusion method was applied for pose estimation to compute an accurate grasping pose. To validate the effectiveness of the proposed method, 102 sets of experiments were conducted and compared with classical methods such as Fast Point Feature Histograms (FPFH) and Point Pair Features (PPF). The results showed that the proposed method achieved a recognition rate of 99.02%, a processing time of 2 s, a pose error rate of 1.31%, and a spatial position error of 3.278 mm, significantly outperforming the comparative methods. These findings demonstrated the effectiveness of the method in addressing the issue of accurate 6D pose estimation of molecular sieve drying bags, with potential for future application to other complex-shaped objects. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
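The PCA-based segmentation step in the abstract above rests on a standard computation: the principal axes of a point cloud are the eigenvectors of its covariance matrix. A minimal numpy sketch of that step (the synthetic cloud and the descending-variance ordering are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

def principal_axes(points):
    """Return the centroid and PCA axes (rows, ordered by decreasing
    variance) of an (N, 3) point cloud -- the core computation used
    when estimating the orientation of a roughly box-shaped target."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # reorder to descending
    return centroid, eigvecs[:, order].T

# Synthetic elongated cloud: long axis along x, rotated 30 deg about z.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(2000, 3)) * [10.0, 2.0, 1.0]
theta = np.radians(30)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
cloud = cloud @ Rz.T

_, axes = principal_axes(cloud)
# The first principal axis should recover the rotated x direction
# (up to sign), which is what a grasp-pose estimate would build on.
print(axes[0])
```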
6. The influence of paradigm interface guided by different visual types on MI-BCI performance.
- Author
-
Shao, Jiang, Bai, Yuxin, Yao, Jun, Zhang, Ying, Tian, Fangyuan, and Xue, Chengqi
- Subjects
- Brain-computer interfaces; Paradigms (social sciences)
- Abstract
Visual paradigms of Brain-Computer Interfaces (BCIs) for motor imagery (MI) tasks are the basis for communication through electroencephalogram (EEG) signals. This study analyzes and summarizes four different visual paradigms used during MI-BCI user training and compares their impact on the outcomes of MI-BCI training. The four visual paradigms are compared experimentally through classification outcomes and subjective evaluation. EEG features were extracted via Common Spatial Patterns (CSP) and passed to a Support Vector Machine (SVM) model for classification. The results show that all four types of visual paradigms have a significant impact on the outcomes of MI-BCI training, with Paradigm Set II having the greatest impact. This is because Paradigm Set II offers a paradigm interface with relatively low visual complexity on the basis of action observation, providing clearer visual guidance and more accurate EEG classification. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
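The CSP-then-classify pipeline named in the abstract above can be sketched compactly: CSP finds spatial filters that maximize variance for one class while minimizing it for the other, and log-variance of the filtered signals becomes the feature. The sketch below is numpy-only and uses a simple threshold in place of the SVM stage; the synthetic two-class "EEG" and all dimensions are assumptions, not the study's data:

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common Spatial Patterns via whitening + diagonalization.
    trials_*: (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whiten the composite covariance, then diagonalize class A in the
    # whitened space (equivalent to the generalized eigenproblem).
    d, U = np.linalg.eigh(Ca + Cb)
    P = U / np.sqrt(d)                    # whitening matrix
    d2, V = np.linalg.eigh(P.T @ Ca @ P)
    W = (P @ V).T                         # all CSP filters, rows
    # Keep filters at both ends of the eigenvalue spectrum.
    return np.vstack([W[:n_pairs], W[-n_pairs:]])

def log_var_features(trials, W):
    Z = np.einsum('fc,ncs->nfs', W, trials)
    v = Z.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))

# Synthetic two-class data: class A has extra power on channel 0,
# class B on channel 2.
rng = np.random.default_rng(1)
a = rng.normal(size=(30, 4, 200)); a[:, 0] *= 3.0
b = rng.normal(size=(30, 4, 200)); b[:, 2] *= 3.0
W = csp_filters(a, b)
fa, fb = log_var_features(a, W), log_var_features(b, W)
# Feature 0 comes from the filter minimizing class-A variance, so it
# is low for A and high for B; a threshold stands in for the SVM.
thresh = (fa[:, 0].mean() + fb[:, 0].mean()) / 2
acc = ((fa[:, 0] < thresh).mean() + (fb[:, 0] >= thresh).mean()) / 2
print(f"separation accuracy: {acc:.2f}")
```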
7. A precise coal gangue tracking method based on real-time visual guidance.
- Author
-
曹现刚, 王虎生, 王 鹏, 吴旭东, 向敬芳, and 李 虎
- Abstract
Copyright of Coal Science & Technology (0253-2336) is the property of Coal Science & Technology and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2025
- Full Text
- View/download PDF
8. Design of unmanned surface vehicle docking system based on multi-source observation
- Author
-
Dian KONG, Shaolong YANG, Lichun YANG, and Xianbo XIANG
- Subjects
unmanned surface vehicle (USV); docking recovery; visual guidance; field test of unmanned surface vehicle; naval architecture, shipbuilding, marine engineering
- Abstract
Objectives: This paper proposes a multi-source observation-based recovery bucket guidance docking strategy for the reliable recovery observation and motion goal tracking of unmanned surface vessels (USVs). Methods: During the entire docking process, the interference zone of the mother ship's wake is first avoided in order to complete the rough alignment of the USV route; the heading tracking guidance line is then maintained to prepare for terminal docking recovery adjustments; finally, the data obtained by the visual sensor and inertial navigation sensor is filtered and fused to calculate the recovery guidance line, which is then transmitted to the USV. The USV completes the tracking of the terminal guidance line and docking recovery task through its own guidance and control systems. A USV docking recovery system is simultaneously designed on the basis of visual and integrated navigation fusion, the hardware and software of the real boat is independently designed, and lake field tests are conducted to verify the feasibility of the system design and docking strategy. Results: The experimental results show that the success rate of the USV in performing autonomous docking tasks reaches 91.6%. The proposed docking strategy can meet the high-precision docking and recovery requirements of USVs. Conclusions: The findings of this study can provide critical technical support for USV recovery operations.
- Published
- 2024
- Full Text
- View/download PDF
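The abstract above describes filtering and fusing visual and inertial measurements into a recovery guidance line. A common minimal form of such fusion is a one-dimensional Kalman filter: integrate a drifting inertial rate, correct with absolute but noisy camera bearings. The sketch below assumes a constant true bearing and made-up noise levels; it is not the paper's system:

```python
import numpy as np

def fuse_heading(gyro_rates, vision_bearings, dt, q=0.01, r=4.0):
    """1-D Kalman filter: predict with an INS yaw rate (deg/s, drifts),
    correct with absolute but noisy camera bearings (deg).
    q, r are assumed process/measurement noise variances."""
    x, p = vision_bearings[0], r        # initialize from first fix
    out = []
    for rate, z in zip(gyro_rates, vision_bearings):
        x, p = x + rate * dt, p + q     # predict with gyro
        k = p / (p + r)                 # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p
        out.append(x)
    return np.array(out)

# Truth: constant 45 deg bearing to the recovery point.
rng = np.random.default_rng(2)
n, dt = 500, 0.1
true = np.full(n, 45.0)
gyro = rng.normal(0.0, 0.2, n) + 0.5    # rate noise plus 0.5 deg/s bias
cam = true + rng.normal(0.0, 2.0, n)    # noisy absolute bearing
est = fuse_heading(gyro, cam, dt)
print(f"fused RMS error: {np.sqrt(np.mean((est - true) ** 2)):.2f} deg")
```

The fused estimate should beat the raw camera bearing because the gyro smooths measurement noise while the camera bounds the gyro's drift.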
9. Angle-of-approach and reversal-movement effects in lateral manual interception.
- Author
-
Ledouit, Simon, Borooghani, Danial, Casanova, Remy, Benguigui, Nicolas, Zaal, Frank T. J. M., and Bootsma, Reinoud J.
- Subjects
Information resources management; Velocity; Angles
- Abstract
The present study sought to replicate two non-intuitive effects reported in the literature on lateral manual interception of uniformly moving targets, the angle-of-approach (AoA) effect and the reversal-movement (RM) effect. Both entail an influence of the target trajectory's incidence angle on the observed interceptive hand movements along the interception axis; they differ in the interception location considered. The AoA effect concerns all trajectory conditions requiring hand movement to allow successful interception, while the RM effect concerns the particular condition where the target will in fact arrive at the hand's initial position and no hand movement is therefore required but nevertheless regularly produced. Whereas the AoA effect has been systematically replicated, the RM effect has not. To determine whether the RM effect is in fact a reproducible phenomenon, we deployed a procedure enhancing the uncertainty about the target's future arrival locations with respect to the hand's initial position and included low-to-high target motion speeds. Results demonstrated the presence of both the AoA effect and the RM effect. The AoA effect was observed for all relevant interception locations, with the effect being stronger for the farther interception locations and the lower target speeds. The RM effect, with the hand first moving away from its initial position, in the direction of the target, before reversing direction, was observed in a higher proportion of trials for target trajectories with larger incidence angles and lower speeds. Earlier initiation gave rise to reversal movements of larger amplitude. Both effects point to visual guidance of hand movement partially based in reliance on information with respect to current lateral ball position. 
We conclude that the information used in lateral manual interception is of an intermediate order, which can be conceived as resulting from a partial combination of target position and velocity information or information in the form of a fractional order derivative. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. The improvement effect of proactive early-warning prevention and control strategies on bridge operation risk in foggy weather.
- Author
-
戴义博, 赵晓华, 边扬, and 张建华
- Abstract
Copyright of Journal of South China University of Technology (Natural Science Edition) is the property of South China University of Technology. Users should refer to the original published version of the material for the full abstract.
- Published
- 2024
- Full Text
- View/download PDF
11. Impact of leading line composition on visual cognition: An eye-tracking study
- Author
-
Hsien-Chih Chuang, Han-Yi Tseng, and Chia-Yun Chiang
- Subjects
photography; composition; leading lines; eye tracking; visual guidance; human anatomy
- Abstract
Leading lines, a fundamental composition technique in photography, are crucial to guiding the viewer’s visual attention. Leading line composition is an effective visual strategy for influencing viewers’ cognitive processes. However, in-depth research on the impact of leading line composition on cognitive psychology is lacking. This study investigated the cognitive effects of leading line composition on perception and behavior. The eye movement behaviors of 34 participants while they viewed photographic works with leading lines were monitored through eye-tracking experiments. Additionally, subjective assessments were conducted to collect the participants’ perceptions of the images in terms of aesthetics, complexity, and directional sense. The results revealed that leading lines significantly influenced the participants’ attention to key elements of the work, particularly when prominent subject elements were present. This led to greater engagement, longer viewing times, and enhanced ratings on aesthetics and directional sense. These findings suggest that skilled photographers can employ leading lines to guide the viewer’s gaze and create visually compelling and aesthetically pleasing works. This research offers specific compositional strategies for photography applications and underscores the importance of leading lines and subject elements in enhancing visual impact and artistic expression.
- Published
- 2024
- Full Text
- View/download PDF
12. An externally guided spatial augmented reality assembly assistance system in the aviation manufacturing industry.
- Author
-
Wang, Jiarui, Cui, Haihua, Cheng, Changzhi, Zhao, Xifu, Yang, Renchuan, and Yang, Feng
- Subjects
- Augmented reality; Manufacturing processes; Manufacturing industries; Projectors; Cameras
- Abstract
As an augmented reality paradigm, spatial augmented reality (SAR) holds significant application value in assembly operations. In current research, however, the camera is fixed to the projector, which limits the projection angle and reduces flexibility, making SAR difficult to apply to the assembly of cabin-type components in the aviation industry. In this paper, we present an externally guided spatial augmented reality (EGSAR) system that allows the projector to move freely. First, the proposed system framework, especially the related transformations, is introduced; under this framework, the implementation is reduced to a pose measurement problem. To track the moving projector, a dodecahedral transfer position target is designed. Because the EGSAR is a combined system, we then propose a calibration process for the target that eliminates its manufacturing errors. Based on the calibrated cameras, projector, and target, the pose measurement problem is solved. The proposed EGSAR method is tested in a factory-like scenario, where an operator performs assembly tasks under its guidance, and the feasibility and accuracy of the prototype are verified. In conclusion, the proposed EGSAR framework improves flexibility and can be a valid solution for extending the uses of SAR systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Monocular Vision Guidance for Unmanned Surface Vehicle Recovery.
- Author
-
Li, Zhongguo, Xi, Qian, Shi, Zhou, and Wang, Qi
- Subjects
Autonomous vehicles; Monocular vision; Binocular vision; Monoculars; Mothers
- Abstract
The positioning error of the GPS method at close distances is relatively large, rendering it incapable of accurately guiding unmanned surface vehicles (USVs) back to the mother ship. Therefore, this study proposes a near-distance recovery method for USVs based on monocular vision. By deploying a monocular camera on the USV to identify artificial markers on the mother ship and subsequently leveraging the geometric relationships among these markers, precise distance and angle information can be extracted. This enables effective guidance for the USVs to return to the mother ship. The experimental results validate the effectiveness of this approach, with positioning distance errors of less than 40 mm within a 10 m range and positioning angle errors of less than 5 degrees within a range of ±60 degrees. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
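The geometric relationship the abstract above exploits is the pinhole model: a marker of known physical width yields range from its apparent pixel width, and bearing from its offset from the image center. A minimal sketch (the focal length, principal point, and marker values are hypothetical, not the paper's calibration):

```python
import math

def marker_range_bearing(f_px, cx, marker_width_m, u_left, u_right):
    """Range from the marker's apparent width (Z = f * W / w) and
    bearing from its center offset. f_px: focal length in pixels;
    cx: principal point x; u_left/u_right: marker edge columns."""
    w_px = u_right - u_left
    distance = f_px * marker_width_m / w_px
    bearing = math.degrees(math.atan((0.5 * (u_left + u_right) - cx) / f_px))
    return distance, bearing

# Hypothetical camera: 800 px focal length, principal point at x = 640.
# A 0.50 m wide marker appears 40 px wide, centered 60 px right of the
# image center.
d, b = marker_range_bearing(800.0, 640.0, 0.50, 680.0, 720.0)
print(f"range {d:.1f} m, bearing {b:.1f} deg")
```

With these numbers the marker sits 10 m away at about a 4.3 deg bearing, the same order of range and angle the paper reports errors against.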
14. Intelligent In situ Printing of Multimaterial Bioinks for First‐Aid Wound Care Guided by Eye‐In‐Hand Robot Technology.
- Author
-
Jeong, Seol‐Ha, Kim, Jihyun, Thibault, Brendan Craig, Soto, Javier Alejandro Lozano, Tourk, Fatima, Steakelum, Joshua, Azuela, Diego, Carvalho, Violeta, Quiroga‐Ocaña, Guillermo, Zhuang, Weida, Cham‐Pérez, Mei Li L., Huang, Lucia L., Li, Zhuqing, Valsami, Eleftheria‐Angeliki, Wang, Enya, Rodrigues, Nelson, Teixeira, Senhorinha F.C.F., Lee, Yuhan, Seo, Jungmok, and Veves, Aristidis
- Subjects
- Wound care; Image recognition (computer vision); Robots; Rheology; Compression bandages; Emergency medical services; Valves; 3-D printers; Hydrocolloid surgical dressings
- Abstract
We present INSIGHT (INtelligent in situ printing Guided by Eye-in-Hand robot Technology), a computer vision-enabled system that combines a depth camera with a 6-degree-of-freedom robot arm, empowering it to identify arbitrary areas at various angles through real-time adjustments and to perform volumetric printing driven by dynamic image recognition based on color and contour differences. Continuous targeting of multiple wounds at different locations is achieved. The optimized pneumatic valve synchronized with INSIGHT can print multiple inks with diverse rheological properties to fabricate scaffolds and bandages capable of treating various types of wounds. The design of dual printing modes, extrusion and spray, can significantly decrease printing time for large-scale wounds on an ex vivo porcine model. INSIGHT demonstrates its ability to treat diabetic wounds using a microgel-based ink whose inherent porous microstructure facilitates cell infiltration. In vivo verification highlights its adaptability to enable customized care for rapid emergency treatment of trauma patients. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Autonomous Underwater Vehicle Cruise Positioning and Docking Guidance Scheme.
- Author
-
Zhang, Zhuoyu, Ding, Wangjie, Wu, Rundong, Lin, Mingwei, Li, Dejun, and Lin, Ri
- Subjects
Autonomous underwater vehicles; Electric power; Submersibles
- Abstract
The Autonomous Underwater Vehicle (AUV) is capable of autonomously conducting underwater cruising tasks. When combined with docking operations, the AUV can replenish its electric power after long-distance travel, enabling it to achieve long-range autonomous monitoring. This paper proposes a positioning method for the cruising and docking stages of AUVs. Firstly, a vision guidance algorithm based on monocular vision and threshold segmentation is studied to address the issue of regional noise that commonly occurs during underwater docking. A solution for regional noise based on threshold segmentation and proportional circle selection is proposed. Secondly, in order to enhance the positioning accuracy during the cruising stage, a fusion positioning algorithm based on particle filtering is presented, incorporating the Doppler Velocity Log (DVL) and GPS carried by the AUV. In simulation, this algorithm improves positioning accuracy by over 56.0% compared to using individual sensors alone. Finally, experiments for cruising and docking were conducted in Qingjiang, Hubei, China. The effectiveness of both methods is demonstrated, with successful docking achieved in four out of five attempts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
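The particle-filter fusion in the abstract above (DVL velocity plus intermittent GPS fixes) can be illustrated with a toy one-dimensional version: propagate particles with the noisy velocity, reweight and resample whenever a position fix arrives. All noise levels, rates, and the 1-D scenario are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def particle_filter(vel_meas, gps_meas, dt, n_p=500,
                    vel_sigma=0.3, gps_sigma=3.0):
    """Toy 1-D particle filter: DVL-style velocities drive the predict
    step; GPS-style fixes (or None when unavailable) drive the update."""
    rng = np.random.default_rng(3)
    parts = rng.normal(0.0, 1.0, n_p)
    est = []
    for v, z in zip(vel_meas, gps_meas):
        parts = parts + (v + rng.normal(0.0, vel_sigma, n_p)) * dt
        if z is not None:                 # GPS fix: reweight + resample
            w = np.exp(-0.5 * ((z - parts) / gps_sigma) ** 2)
            w /= w.sum()
            parts = rng.choice(parts, size=n_p, p=w)
        est.append(parts.mean())
    return np.array(est)

# Truth: vehicle cruising at 1 m/s; a GPS fix on every 10th step only.
rng = np.random.default_rng(4)
n, dt = 200, 1.0
true = np.arange(n, dtype=float) * dt * 1.0
dvl = 1.0 + rng.normal(0.0, 0.3, n)
gps = [true[i] + rng.normal(0.0, 3.0) if i % 10 == 0 else None
       for i in range(n)]
est = particle_filter(dvl, gps, dt)
print(f"fused RMS error: {np.sqrt(np.mean((est - true) ** 2)):.2f} m")
```

Between fixes the error grows like dead reckoning; each fix pulls the particle cloud back toward the absolute position, which is the mechanism behind the accuracy gain the paper reports.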
16. AdaPIP: Adaptive picture-in-picture guidance for 360° film watching.
- Author
-
Li, Yi-Xiao, Luo, Guan, Xu, Yi-Ke, He, Yu, Zhang, Fang-Lue, and Zhang, Song-Hai
- Subjects
Shared virtual environments; Head-mounted displays
- Abstract
360° videos enable viewers to watch freely from different directions but inevitably prevent them from perceiving all the helpful information. To mitigate this problem, picture-in-picture (PIP) guidance was proposed using preview windows to show regions of interest (ROIs) outside the current view range. We identify several drawbacks of this representation and propose a new method for 360° film watching called AdaPIP. AdaPIP enhances traditional PIP by adaptively arranging preview windows with changeable view ranges and sizes. In addition, AdaPIP incorporates the advantage of arrow-based guidance by presenting circular windows with arrows attached to them to help users locate the corresponding ROIs more efficiently. We also adapted AdaPIP and Outside-In to HMD-based immersive virtual reality environments to demonstrate the usability of PIP-guided approaches beyond 2D screens. Comprehensive user experiments on 2D screens, as well as in VR environments, indicate that AdaPIP is superior to alternative methods in terms of visual experiences while maintaining a comparable degree of immersion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Research and application of laser splicing welding equipment for door rings based on an intelligent vision splicing system
- Author
-
Wang, Yuqiang, Wang, Rulei, Miao, Jinzhong, Feng, Anzhu, Zhang, Yisheng, editor, and Ma, Mingtu, editor
- Published
- 2024
- Full Text
- View/download PDF
18. Angle-of-approach and reversal-movement effects in lateral manual interception
- Author
-
Simon Ledouit, Danial Borooghani, Remy Casanova, Nicolas Benguigui, Frank T. J. M. Zaal, and Reinoud J. Bootsma
- Subjects
perceptuomotor; control; interception; timing; information; visual guidance; psychology
- Abstract
The present study sought to replicate two non-intuitive effects reported in the literature on lateral manual interception of uniformly moving targets, the angle-of-approach (AoA) effect and the reversal-movement (RM) effect. Both entail an influence of the target trajectory’s incidence angle on the observed interceptive hand movements along the interception axis; they differ in the interception location considered. The AoA effect concerns all trajectory conditions requiring hand movement to allow successful interception, while the RM effect concerns the particular condition where the target will in fact arrive at the hand’s initial position and no hand movement is therefore required but nevertheless regularly produced. Whereas the AoA effect has been systematically replicated, the RM effect has not. To determine whether the RM effect is in fact a reproducible phenomenon, we deployed a procedure enhancing the uncertainty about the target’s future arrival locations with respect to the hand’s initial position and included low-to-high target motion speeds. Results demonstrated the presence of both the AoA effect and the RM effect. The AoA effect was observed for all relevant interception locations, with the effect being stronger for the farther interception locations and the lower target speeds. The RM effect, with the hand first moving away from its initial position, in the direction of the target, before reversing direction, was observed in a higher proportion of trials for target trajectories with larger incidence angles and lower speeds. Earlier initiation gave rise to reversal movements of larger amplitude. Both effects point to visual guidance of hand movement partially based in reliance on information with respect to current lateral ball position. 
We conclude that the information used in lateral manual interception is of an intermediate order, which can be conceived as resulting from a partial combination of target position and velocity information or information in the form of a fractional order derivative.
- Published
- 2024
- Full Text
- View/download PDF
19. Design of a neural network-based visual grasping system for industrial robots.
- Author
-
燕硕, 李建松, and 唐昌松
- Abstract
Copyright of Computer Measurement & Control is the property of Magazine Agency of Computer Measurement & Control. Users should refer to the original published version of the material for the full abstract.
- Published
- 2024
- Full Text
- View/download PDF
20. A stereo visual navigation method for docking autonomous underwater vehicles.
- Author
-
Xu, Shuo, Jiang, Yanqing, Li, Ye, Wang, Bo, Xie, Tianqi, Li, Shuchang, Qi, Haodong, Li, Ao, and Cao, Jian
- Abstract
Recovering and recharging autonomous underwater vehicles (AUVs) on a regular basis allows for long-term underwater activities. This research provides a vision-based navigation strategy for AUVs to independently identify and reconstruct the docking station (DS). The proposed framework includes a light beacon detection approach, a filter-based light beacon matching method, and a fusion pose estimation method for DS positioning. Four green LED light beacons are mounted symmetrically on the docking ring, enabling the stereo camera to observe them from close range. A method for detecting light beacons is proposed that ensures detection accuracy by rejecting false positives. On a single frame from the stereo camera, filter-based matching tracks the light beacons precisely. In addition, we construct a pose estimation method that significantly improves accuracy and efficiency by combining analysis and iteration. A series of virtual and real-world experiments demonstrate that our methodology can provide AUVs with reliable docking navigation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
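Once the beacons in the abstract above are matched across the stereo pair, their 3D positions follow from standard rectified-stereo geometry: depth from disparity, then back-projection. A minimal sketch with an idealized calibrated pair (all camera numbers are hypothetical, not the paper's rig):

```python
def triangulate_beacon(f_px, baseline_m, cx, cy, uv_left, uv_right):
    """Rectified stereo: depth Z = f * B / disparity, then
    back-project the left-image pixel into camera coordinates."""
    (ul, vl), (ur, _) = uv_left, uv_right
    disparity = ul - ur                   # pixels, left minus right
    z = f_px * baseline_m / disparity
    x = (ul - cx) * z / f_px
    y = (vl - cy) * z / f_px
    return x, y, z

# Hypothetical pair: f = 700 px, baseline 0.12 m, principal point
# (320, 240). A beacon seen at (356, 260) left and (342, 260) right.
x, y, z = triangulate_beacon(700.0, 0.12, 320.0, 240.0,
                             (356.0, 260.0), (342.0, 260.0))
print(f"beacon at ({x:.2f}, {y:.2f}, {z:.2f}) m")
```

Repeating this for all four ring beacons gives the point set from which a docking-station pose can be estimated.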
21. Study on vision-guided 3D tracking control for UUV docking
- Author
-
Youwang LU, Yingkai XIA, Guohua XU, Jiawei LI, Gen XU, and Zixuan HE
- Subjects
UUV; underwater docking; visual guidance; 3D trajectory tracking; tank test; naval architecture, shipbuilding, marine engineering
- Abstract
Objective: Autonomous docking is the key to the cooperative operation of unmanned underwater vehicles (UUVs). However, due to environmental complexity and object characteristics, it is very difficult to achieve precise guidance and docking. In order to improve the accuracy and robustness of underwater docking, this study proposes a vision-guided docking scheme which encompasses vision processing and 3D trajectory tracking control. Methods: First, the overall vision-guided docking scheme is designed in combination with an analysis of task and object characteristics. Second, the YOLOv5 neural network is designed to complete the target detection of the underwater docking station, and the online measurement of the relative position and attitude relationship between the docking station and UUV is realized by an efficient perspective-n-point (EPnP) algorithm. Next, combined with the visual measurement results, an effective 3D robust trajectory tracking controller is designed on the basis of the 3D LOS guidance law, radial basis function neural network (RBFNN) and terminal sliding mode control (TSMC). Finally, the validity of the proposed scheme is verified through numerical simulation and a tank test. Results: In the tank test, the proposed vision-guided control algorithm can effectively complete the online detection and relative positioning of the underwater docking station, thereby achieving precise underwater docking. Conclusion: The results of this study show that the proposed vision-guided 3D trajectory tracking control scheme is reasonable and efficient, and can lay a good foundation for UUV docking.
- Published
- 2024
- Full Text
- View/download PDF
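The line-of-sight (LOS) guidance law named in the abstract above maps tracking errors to desired attitude angles through a lookahead distance. A stripped-down sketch of that idea (the lookahead value and sign conventions are assumptions; the paper's 3D LOS law, RBFNN and TSMC terms are not reproduced here):

```python
import math

def los_guidance(cross_track_e, vertical_e, lookahead=5.0):
    """Convert cross-track and vertical errors (m) to desired heading
    and pitch corrections (deg) via a lookahead distance: steer so the
    velocity vector points at a spot `lookahead` meters down the path."""
    psi_d = math.degrees(math.atan2(-cross_track_e, lookahead))
    theta_d = math.degrees(math.atan2(vertical_e, lookahead))
    return psi_d, theta_d

# UUV 2 m to the right of and 1 m below the docking axis.
psi, theta = los_guidance(2.0, 1.0)
print(f"heading correction {psi:.1f} deg, pitch correction {theta:.1f} deg")
```

A larger lookahead gives gentler corrections; a smaller one converges faster but risks oscillation, which is why such schemes are usually paired with a robust tracking controller.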
22. Exploration of visual variable guidance in outdoor augmented reality geovisualization
- Author
-
Guoyong Zhang, Jun Sun, Jianhua Gong, Dong Zhang, Shui Li, Weidong Hu, and Yi Li
- Subjects
visual guidance ,visual variable ,augmented reality visualization ,virtual geographic environment ,Mathematical geography. Cartography ,GA1-1776 - Abstract
The visual perception of augmented reality (AR) geovisualization is significantly different from traditional controllable 2D and 3D visualization. In this study, we extended the rendering styles of color variables to include natural material color (NMC) and illuminating material color (IMC) and extended the size to include linear size (LS) and angular size (AS). Outdoor AR geovisualization user experiments were conducted, examining the guidance characteristics of five static variables (NMC, IMC, shape, AS, LS) and two dynamic variables (vibration, flicker). The results showed that the two dynamic variables provided the highest guidance, and among all the static variables, the order of guidance was shape, IMC, AS, NMC, and finally LS. This is a new finding that is different from the color, size, and shape guidance order in 2D visualization and the color, shape, and size order in 3D visualization. The results could be utilized to guide the selection of visual variables for symbol design in AR geovisualization.
- Published
- 2023
- Full Text
- View/download PDF
23. Detection Method of Autonomous Landing Marker for UAV Based on Deep Learning
- Author
-
Li Dan, Deng Fei, Zhao Liangyu, Liu Fuxiang
- Subjects
UAV; visual guidance; autonomous landing; marker detection; deep learning; motor vehicles, aeronautics, astronautics
- Abstract
To improve the real-time performance and accuracy of autonomous UAV landing, a landing marker detection method based on deep learning is proposed. Firstly, the lightweight network MobileNetv2 is used as the backbone network for feature extraction. Secondly, drawing on the network structure of YOLOv4, depthwise separable convolution is introduced to reduce the number of parameters without affecting model performance. Then, a feature pyramid module based on skip connection structures is proposed; with this module, the feature maps output by the backbone can be stitched together, and detail information and semantic information can be fused to obtain features with stronger characterization capability. Finally, the detection head is optimized with depthwise separable convolution to complete the target detection task. Experiments are conducted on the Pascal VOC dataset and the landing marker dataset. The results show that the improved detection algorithm effectively reduces the computational and parameter complexity of the model, improves the detection speed, and can meet the accuracy requirements of autonomous UAV landing.
- Published
- 2023
- Full Text
- View/download PDF
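The parameter reduction from depthwise separable convolution, which the abstract above leans on, is easy to quantify: a standard k x k convolution needs k*k*Cin*Cout weights, while the depthwise-plus-pointwise factorization needs k*k*Cin + Cin*Cout. A small arithmetic sketch (the 128-to-256 layer is an illustrative example, not a layer from the paper's network):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def dw_separable_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then 1 x 1 pointwise
    convolution to mix channels."""
    return k * k * c_in + c_in * c_out

# Example layer: 3 x 3 kernel, 128 input channels, 256 output channels.
std = conv_params(3, 128, 256)
sep = dw_separable_params(3, 128, 256)
print(std, sep, f"reduction: {std / sep:.1f}x")
```

For this layer the factorization cuts 294,912 weights to 33,920, roughly an 8.7x reduction, which is where the speedup claimed in the abstract comes from.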
24. FiMa-Reader: A Cost-Effective Fiducial Marker Reader System for Autonomous Mobile Robot Docking in Manufacturing Environments.
- Author
-
Bian, Xu, Chen, Wenzhao, Ran, Donglai, Liang, Zhimou, and Mei, Xuesong
- Subjects
Industrial robots; Mobile robots; Autonomous robots; Infrared cameras
- Abstract
Featured Application: A fiducial marker reader system, featuring a novel marker design, has been proposed to guide mobile robots in accomplishing high-precision docking missions within smart factories. The implementation of this technology can provide a cost-effective and readily deployable sensor solution for automated docking in various scenarios. Accurately docking mobile robots to various workstations on the factory floor is a common and essential task. The existing docking methods face three major challenges: intricate deployment procedures, susceptibility to ambient lighting, and incapacity to recognize product information during the docking process. This paper devises a novel approach that combines the features of ArUco and Data Matrix to form a composite marker termed "DataMatrix-ArUco-Hybrid" (DAH). The DAH pattern serves as a fiducial marker capable of concurrently providing both target pose information and product information. Detection of the DAH pattern is conducted by a cost-effective fiducial marker reader system, called "FiMa-Reader", which comprises an embedded processing unit and an infrared camera equipped with a 940 nm fill-light to overcome lighting issues. The FiMa-Reader system effectively detects the DAH pattern under both well-lit and dimly lit conditions. Additionally, the implementation of the FiMa-Reader system leads to significant improvements in positioning accuracy, including an 86.42% improvement on the x-axis, a 44.7% improvement on the y-axis, and an 84.21% improvement in angular orientation when compared to traditional navigation methods. The utilization of FiMa-Reader presents an economically viable system capable of guiding mobile robots' positioning with high precision in various indoor lighting conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. Vision-Guided Mobile Robot System for the Assembly of Long Beams on Aircraft Skin
- Author
-
Zheng, Lei, Liu, Huaying, Zhu, Hongsheng, Zhao, Xingwei, Tao, Bo, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Yang, Huayong, editor, Liu, Honghai, editor, Zou, Jun, editor, Yin, Zhouping, editor, Liu, Lianqing, editor, Yang, Geng, editor, Ouyang, Xiaoping, editor, and Wang, Zhiyong, editor
- Published
- 2023
- Full Text
- View/download PDF
26. Visual Guidance in Game Level Design
- Author
-
Guo, Gang, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, and Fang, Xiaowen, editor
- Published
- 2023
- Full Text
- View/download PDF
27. Underwater Visual Guidance Deep Learning Detection Method for AUV
- Author
-
Ping AN, Tingting WANG, Yuan ZHAO, and Ning HU
- Subjects
autonomous undersea vehicle(auv) ,underwater autonomous docking ,visual guidance ,image processing ,deep learning ,Naval architecture. Shipbuilding. Marine engineering ,VM1-989 - Abstract
The autonomous docking and recovery of autonomous undersea vehicle (AUV) technology mainly realizes the autonomous homing, approaching, docking, and locking of the AUV and the docking device by means of guidance and positioning. To satisfy the requirements of real-time performance, high accuracy, and robustness in the process of AUV underwater autonomous docking, an underwater visual guidance detection method based on deep learning is proposed. To address the poor detection performance of traditional image processing methods in complex underwater scenes, the guiding light source and docking device are detected by employing a deep learning visual guidance detection method based on the YOLO (you only look once)v5 model. First, the object images are sent to the YOLOv5 model for iterative training, and the optimal model parameters obtained from training are saved for subsequent real-time detection. Subsequently, in the underwater autonomous docking process, the AUV utilizes the robot operating system (ROS) platform to read the underwater data and call the YOLO service to detect the underwater image in real time, thereby outputting the location information of the guidance light source and the docking device. Based on position calculation, the detected center coordinates are transformed into the AUV camera coordinate system. Finally, the relative positions of the AUV with respect to the docking device and the navigation directions of the AUV are calculated continuously and fed back to the AUV, which provides real-time guidance information until the docking process is completed. In the sea trial, the actual accuracy of underwater visual guidance detection was 97.9%, and the detection time of a single frame was 45 ms. The test results demonstrate that this method meets the real-time and accuracy requirements of autonomous docking and recovery technology for underwater docking, and has practical application value.
- Published
- 2023
- Full Text
- View/download PDF
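The position-calculation step this abstract describes (converting a detected center pixel into the AUV camera coordinate system) is the standard pinhole back-projection; a minimal sketch, with illustrative intrinsics not taken from the paper:

```python
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with a known depth (metres) into
    camera-frame coordinates using the pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth

# Illustrative intrinsics (focal lengths and principal point, in pixels).
fx = fy = 800.0
cx, cy = 320.0, 240.0

# A guidance light detected at the image centre, 5 m away, lies on the optical axis.
x, y, z = pixel_to_camera(320.0, 240.0, 5.0, fx, fy, cx, cy)
```

In the paper's pipeline the depth would come from the known geometry of the light-source array rather than being supplied directly.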
28. Monocular Vision Guidance for Unmanned Surface Vehicle Recovery
- Author
-
Zhongguo Li, Qian Xi, Zhou Shi, and Qi Wang
- Subjects
unmanned surface vehicle ,perspective-n-point ,artificial markers ,visual guidance ,Technology ,Engineering (General). Civil engineering (General) ,TA1-2040 ,Biology (General) ,QH301-705.5 ,Physics ,QC1-999 ,Chemistry ,QD1-999 - Abstract
The positioning error of the GPS method at close distances is relatively large, rendering it incapable of accurately guiding unmanned surface vehicles (USVs) back to the mother ship. Therefore, this study proposes a near-distance recovery method for USVs based on monocular vision. By deploying a monocular camera on the USV to identify artificial markers on the mother ship and subsequently leveraging the geometric relationships among these markers, precise distance and angle information can be extracted. This enables effective guidance for the USVs to return to the mother ship. The experimental results validate the effectiveness of this approach, with positioning distance errors of less than 40 mm within a 10 m range and positioning angle errors of less than 5 degrees within a range of ±60 degrees.
- Published
- 2024
- Full Text
- View/download PDF
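The geometric relationship this abstract exploits (a marker of known physical size plus a calibrated camera yields range and bearing) can be sketched in its simplest similar-triangles form; the marker size, focal length, and pixel values below are hypothetical, and the paper's full perspective-n-point solve would refine this:

```python
import math

def range_and_bearing(marker_width_m, pixel_width, u_center, fx, cx):
    """Estimate distance and horizontal bearing to a planar marker of known
    physical width from its apparent width in the image (similar-triangles
    pinhole approximation)."""
    distance = fx * marker_width_m / pixel_width           # metres
    bearing = math.degrees(math.atan2(u_center - cx, fx))  # degrees, + = starboard
    return distance, bearing

# A 0.5 m wide marker on the mother ship, appearing 50 px wide and centred:
d, b = range_and_bearing(0.5, 50.0, 320.0, 800.0, 320.0)
```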
29. Autonomous Underwater Vehicle Cruise Positioning and Docking Guidance Scheme
- Author
-
Zhuoyu Zhang, Wangjie Ding, Rundong Wu, Mingwei Lin, Dejun Li, and Ri Lin
- Subjects
autonomous underwater vehicle ,docking ,visual guidance ,Naval architecture. Shipbuilding. Marine engineering ,VM1-989 ,Oceanography ,GC1-1581 - Abstract
The Autonomous Underwater Vehicle (AUV) is capable of autonomously conducting underwater cruising tasks. When combined with docking operations, the AUV can replenish its electric power after long-distance travel, enabling it to achieve long-range autonomous monitoring. This paper proposes a positioning method for the cruising and docking stages of AUVs. Firstly, a vision guidance algorithm based on monocular vision and threshold segmentation is studied to address the issue of regional noise that commonly occurs during underwater docking. A solution for regional noise based on threshold segmentation and proportional circle selection is proposed. Secondly, in order to enhance the positioning accuracy during the cruising stage, a fusion positioning algorithm based on particle filtering is presented, incorporating the Doppler Velocity Log (DVL) and GPS carried by the AUV. In simulation, this algorithm improves positioning accuracy by over 56.0% compared to using individual sensors alone. Finally, experiments for cruising and docking were conducted in Qingjiang, Hubei, China. The effectiveness of both methods is demonstrated, with successful docking achieved in four out of five attempts.
- Published
- 2024
- Full Text
- View/download PDF
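The fusion step described above (particle filtering over DVL velocities and GPS fixes) can be sketched in one dimension; the noise parameters, particle count, and measurements here are illustrative, not from the paper:

```python
import math
import random

def particle_filter_1d(velocities, gps_fixes, dt=1.0, n=500,
                       motion_std=0.2, meas_std=1.0, seed=0):
    """Minimal 1-D particle filter: propagate particles with DVL-style
    velocity measurements, weight them by the likelihood of GPS-style
    position fixes, resample, and return the final position estimate."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    for v, z in zip(velocities, gps_fixes):
        # Predict: dead-reckon each particle with the measured velocity.
        particles = [p + v * dt + rng.gauss(0.0, motion_std) for p in particles]
        # Update: Gaussian likelihood of the position fix.
        weights = [math.exp(-0.5 * ((z - p) / meas_std) ** 2) for p in particles]
        # Resample proportionally to weight (multinomial, for brevity).
        particles = rng.choices(particles, weights=weights, k=n)
    return sum(particles) / n

# Vehicle cruising at 1 m/s for 10 s with fixes at the true positions:
est = particle_filter_1d([1.0] * 10, [float(i + 1) for i in range(10)])
```

A production filter would use systematic resampling and handle intermittent GPS dropouts, but the predict-weight-resample loop is the same.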
30. Vision-Guided Hierarchical Control and Autonomous Positioning for Aerial Manipulator.
- Author
-
Ye, Xia, Cui, Haohao, Wang, Lidong, Xie, Shangjun, and Ni, Hong
- Subjects
MOTION capture (Human mechanics) ,POSE estimation (Computer vision) ,UNITS of measurement - Abstract
Aerial manipulator systems possess active operational capability, and by incorporating various sensors, the systems' autonomy is further enhanced. In this paper, we address the challenge of accurate positioning between an aerial manipulator and the operational targets during tasks such as grasping and delivery in the absence of motion capture systems indoors. We propose a vision-guided aerial manipulator system comprising a quad-rotor UAV and a single-degree-of-freedom manipulator. First, the overall structure of the aerial manipulator is designed, and a hierarchical control system is established. We employ the fusion of LiDAR-based SLAM (simultaneous localization and mapping) and IMU (inertial measurement unit) to enhance the positioning accuracy of the aerial manipulator. Real-time target detection and recognition are achieved by combining a depth camera and laser sensor for distance measurements, enabling adjustment of the grasping pose of the aerial manipulator. Finally, we employ a segmented grasping strategy to position and grasp the target object precisely. Experimental results demonstrate that the designed aerial manipulator system maintains a stable orientation within a certain range of ±5° during operation; its position movement is independent of orientation changes. The successful autonomous grasping of lightweight cylindrical objects in real-world scenarios verifies the effectiveness and rationality of the proposed system, ensuring high operational efficiency and robust disturbance resistance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
31. A Deep Learning-Based Detection Method for UAV Autonomous Landing Markers.
- Author
-
李丹, 邓飞, 赵良玉, and 刘福祥
- Abstract
Copyright of Aero Weaponry is the property of Aero Weaponry Editorial Office and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
32. A Palletizing System for Microchannel Heat Exchangers Based on 3D Visual Guidance
- Author
-
Jiaze Chen, Songxiao Cao, Zhipeng Xu, Tao Song, and Qing Jiang
- Subjects
Palletizing system ,point cloud segmentation ,automatic calibration ,visual guidance ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Aiming at the problems of slow manual palletizing, high risk, and easy damage to products after the assembly of microchannel heat exchangers, a palletizing system based on 3D visual guidance is presented in this paper. Firstly, an automatic calibration method for the RGB-D camera's extrinsic parameters was adopted: the random sample consensus (RANSAC) algorithm was used for plane fitting, with the pallet idealized as a plane, and the camera extrinsic parameters were calculated from the rotation relationship between the plane normal vector and the camera coordinate system. During detection, depth data were converted into point cloud data and preprocessed, including coordinate transformation, downsampling, etc. After that, according to the characteristics of the stacking, a point cloud segmentation algorithm based on depth and the number of points was used, which could not only segment the pallet and sponge strips but also effectively detect obstacles. For the different segmented parts, the pallet was positioned by fitting the minimum bounding rectangle, and the stacking depths were obtained by calculating the centroids of the sponge strips. Finally, a series of constraints was used to determine whether the unloading conditions were met. The experimental results show that the automatic calibration is effective, taking about 0.43 s, with a pallet depth range of 11 mm after calibration. The algorithm can accurately identify the various situations above the stack, taking about 1.63 s.
- Published
- 2023
- Full Text
- View/download PDF
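The calibration step described above (RANSAC plane fitting with the pallet idealized as a plane) can be sketched in pure Python; the synthetic pallet points below are illustrative:

```python
import random

def fit_plane_ransac(points, iters=200, thresh=0.05, seed=0):
    """RANSAC plane fit: repeatedly take 3 random points, form the plane
    through them, and keep the plane with the most inliers (points whose
    distance to the plane is below `thresh`). Returns (normal, d) with
    normal . p + d = 0 and |normal| = 1."""
    rng = random.Random(seed)

    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    best, best_count = None, -1
    for _ in range(iters):
        p0, p1, p2 = rng.sample(points, 3)
        u = tuple(p1[i] - p0[i] for i in range(3))
        v = tuple(p2[i] - p0[i] for i in range(3))
        n = cross(u, v)
        norm = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n = tuple(c / norm for c in n)
        d = -sum(n[i] * p0[i] for i in range(3))
        count = sum(1 for p in points
                    if abs(sum(n[i] * p[i] for i in range(3)) + d) < thresh)
        if count > best_count:
            best, best_count = (n, d), count
    return best

# Synthetic pallet plane z = 0 with a few off-plane "obstacle" points:
pts = [(x * 0.1, y * 0.1, 0.0) for x in range(10) for y in range(10)]
pts += [(0.5, 0.5, 0.7), (0.2, 0.8, 1.1), (0.9, 0.1, 0.4)]
normal, d = fit_plane_ransac(pts)
```

The camera extrinsics would then follow from the rotation aligning this normal with the camera's z-axis, as the abstract describes.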
33. Research on Robot Monocular Vision-Based 6DOF Object Positioning and Grasping Approach Combined With Image Generation Technology
- Author
-
Guoyang Wan, Jincheng Chen, Jian Zhang, Binyou Liu, Hong Zhang, and Xiuwen Tao
- Subjects
Machine vision ,industrial robot ,data augmentation ,object detection ,visual guidance ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
To address the challenges related to poor positioning accuracy and high usage cost of 6DOF visual measurement systems in industrial settings, this paper presents a monocular vision-based robot vision guidance approach. The goal is to address the issues of expensive 6DOF pose measurement and limited measurement robustness when robots need to manipulate metal objects in industrial environments. The proposed approach enables precise and robust measurement of the 6DOF pose of the target workpiece. The approach integrates two main algorithms: a virtual reality-based image data enhancement algorithm and a 6DOF pose measurement algorithm that combines a multi-keypoint detection model and the Efficient Perspective-n-Points (EPnP) algorithm. The image data enhancement algorithm enhances the data of small-sample industrial objects using image enhancement techniques. This improves the robustness of the detection model by mitigating the challenges of high-cost image acquisition and long acquisition time associated with industrial objects. On the other hand, the 6DOF pose measurement algorithm performs the pose measurement of the target workpiece using a single image, enabling cost-effective 6DOF pose measurement by utilizing only a monocular camera. Experimental results demonstrate that the proposed method achieves measurement errors of 4.21% in the X direction, 2.94% in the Y direction, and 0.39% in the Z direction of the target workpiece. These results highlight the effectiveness of the proposed approach in achieving accurate and reliable pose measurement.
- Published
- 2023
- Full Text
- View/download PDF
34. Visual Guidance Algorithm for AUV Recovery Based on CNN Object Tracking
- Author
-
Ze-kai HAN, Xing-hua ZHU, Xiao-jun HAN, Kai SUN, and Xiao-yu LIU
- Subjects
autonomous undersea vehicle ,underwater recovery ,visual guidance ,convolutional neural network ,position and attitude estimation ,Naval architecture. Shipbuilding. Marine engineering ,VM1-989 - Abstract
The development of autonomous undersea vehicle recovery technology is the main approach to solve problems pertaining to energy and information transmission and to enhance the underwater detection and concealment capabilities of unmanned systems. In this study, an underwater visual guidance scheme is designed for recovery with funnel-shaped docking stations in an actual environment. Additionally, an improved detect-by-tracking algorithm based on a convolutional neural network(CNN) is proposed. First, the CNN is trained using a docking station dataset to detect the target. Next, the improved tracking algorithm is combined with the position and attitude spatial information to achieve robust tracking. Finally, based on an improved PnP-P3P position and attitude estimation framework, the problem of insufficient observable beacons under a large offset is solved, and the underwater visual guidance workspace is effectively expanded. The beacon array design and algorithm are validated via workspace simulation, and relevant effective workspace indexes are proposed. An optical guidance experiment is performed in a pool, and acousto–optic joint guidance is performed based on an ultrashort baseline in an actual lake test. The feasibility of the proposed framework for engineering is confirmed by the results obtained.
- Published
- 2022
- Full Text
- View/download PDF
35. Analysis of influencing factors of urban road traffic safety and countermeasures.
- Author
-
Xiong, H. H., Jiang, J. S., Chen, J. J., and He, C.
- Abstract
The paper analyzes the characteristics of the factors influencing urban road traffic safety and designs an evaluation method for them based on visual guidance and hazard assessment. A three-dimensional evaluation system covering drivers, vehicles, and roads was established from psychological and physiological factors, driving behavior factors, climate factors, and road facilities. By analyzing the feature norm set that affects driver decision-making, dynamic evaluation of urban road traffic safety under different risk levels is achieved. Taking into account traffic flow, vehicle speed, and fog frequency, a visual guidance model for driving optical flow on dangerous sections of urban roads was constructed based on visual guidance and visual psychological feedback models. Based on the dynamic distribution of the impacts on urban road traffic safety, measures such as acceleration and deceleration, parking, and distance keeping were formulated. The experimental results indicate that the model can accurately achieve graded evaluation of urban road traffic safety and effectively reduce the probability of accidents. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
36. Fast Grasping Technique for Differentiated Mobile Phone Frame Based on Visual Guidance.
- Author
-
Zhao, Rongli, Bao, Zeren, Xiao, Wanyu, Zou, Shangwen, Zou, Guangxin, Xie, Yuan, and Leng, Jiewu
- Subjects
INDUSTRIAL robots ,LOADING & unloading ,ASSEMBLY line methods ,ONLINE education ,ROBOT motion ,CELL phones - Abstract
With the increasing automation of mobile phone assembly, industrial robots are gradually being used in production lines for loading and unloading operations. At present, industrial robots are mainly used in an online teaching mode, in which the robot's movement and path are set by teaching in advance and the point-to-point operation is then repeated. This mode of operation is inflexible and demands a high level of professionalism in teaching and offline programming. When positioning and grasping different materials, the adjustment time is long, which reduces the efficiency of production changeover. To solve the problem of the poor adaptability of loading robots to differentiated products on mobile phone automatic assembly lines, the positioning and grasping of different models of mobile phone middle frames must be adjusted quickly. Therefore, this paper proposes a highly adaptive, vision-guided grasping and positioning method for right-angle robots. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
37. Research on a Self-Tapping Locking Bolt and Its Assembly Process.
- Author
-
刘金锋, 柴之龙, 王斯博, 苍衍, 邓洋, and 田博
- Subjects
AUTOMOTIVE engineering ,AUTOMOBILE industry ,ELECTRIC welding ,AUTOMOBILES - Abstract
Copyright of Automobile Technology & Material is the property of Automobile Technology & Material Editorial Office and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
38. A Novel Dual-Robot Coordinated CMT Arc Welding Workstation with a Positioner.
- Author
-
潘福禄, 唐广辉, 刘泽博, 李磊, 王彦涛, and 吕春龙
- Subjects
TECHNOLOGICAL innovations ,ELECTRIC welding ,WELDING ,PROBLEM solving ,QUALITY control ,RESEARCH & development - Abstract
Copyright of Automobile Technology & Material is the property of Automobile Technology & Material Editorial Office and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
39. Emerging Automated Technologies on Tractors
- Author
-
Zhao, Jianzhu, Mao, Enrong, Zhang, Qin, Series Editor, Ma, Shaochun, editor, Lin, Tao, editor, Mao, Enrong, editor, Song, Zhenghe, editor, and Ting, Kuan-Chong, editor
- Published
- 2022
- Full Text
- View/download PDF
40. Research and Application of an Automated Process for Tightening Body-Side Engine Mounts in the Automotive Final Assembly Shop.
- Author
-
陈典汉, 廖伟, and 陈明全
- Abstract
Copyright of Automobile Technology & Material is the property of Automobile Technology & Material Editorial Office and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
41. FiMa-Reader: A Cost-Effective Fiducial Marker Reader System for Autonomous Mobile Robot Docking in Manufacturing Environments
- Author
-
Xu Bian, Wenzhao Chen, Donglai Ran, Zhimou Liang, and Xuesong Mei
- Subjects
mobile robot ,smart factory ,target docking ,fiducial markers ,visual guidance ,Technology ,Engineering (General). Civil engineering (General) ,TA1-2040 ,Biology (General) ,QH301-705.5 ,Physics ,QC1-999 ,Chemistry ,QD1-999 - Abstract
Accurately docking mobile robots to various workstations on the factory floor is a common and essential task. The existing docking methods face three major challenges: intricate deployment procedures, susceptibility to ambient lighting, and incapacity to recognize product information during the docking process. This paper devises a novel approach that combines the features of ArUco and Data Matrix to form a composite marker termed “DataMatrix-ArUco-Hybrid” (DAH). The DAH pattern serves as a fiducial marker capable of concurrently providing both target pose information and product information. Detection of the DAH pattern is conducted by a cost-effective fiducial marker reader system, called “FiMa-Reader”, which comprises an embedded processing unit and an infrared camera equipped with a 940 nm fill-light to overcome lighting issues. The FiMa-Reader system effectively detects the DAH pattern under both well-lit and dimly lit conditions. Additionally, the implementation of the FiMa-Reader system leads to significant improvements in positioning accuracy, including an 86.42% improvement on the x-axis, a 44.7% improvement on the y-axis, and an 84.21% improvement in angular orientation when compared to traditional navigation methods. The utilization of FiMa-Reader presents an economically viable system capable of guiding mobile robots’ positioning with high precision in various indoor lighting conditions.
- Published
- 2023
- Full Text
- View/download PDF
42. Exploration of visual variable guidance in outdoor augmented reality geovisualization.
- Author
-
Zhang, Guoyong, Sun, Jun, Gong, Jianhua, Zhang, Dong, Li, Shui, Hu, Weidong, and Li, Yi
- Subjects
VISUAL perception ,AUGMENTED reality ,DATA visualization - Abstract
The visual perception of augmented reality (AR) geovisualization is significantly different from traditional controllable 2D and 3D visualization. In this study, we extended the rendering styles of color variables to include natural material color (NMC) and illuminating material color (IMC) and extended the size to include linear size (LS) and angular size (AS). Outdoor AR geovisualization user experiments were conducted, examining the guidance characteristics of five static variables (NMC, IMC, shape, AS, LS) and two dynamic variables (vibration, flicker). The results showed that the two dynamic variables provided the highest guidance, and among all the static variables, the order of guidance was shape, IMC, AS, NMC, and finally LS. This is a new finding that is different from the color, size, and shape guidance order in 2D visualization and the color, shape, and size order in 3D visualization. The results could be utilized to guide the selection of visual variables for symbol design in AR geovisualization. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. Large Scale Ultrafast Manufacturing of Wireless Soft Bioelectronics Enabled by Autonomous Robot Arm Printing Assisted by a Computer Vision-Enabled Guidance System for Personalized Wound Healing.
- Author
-
Kim J, Jeong SH, Thibault BC, Soto JAL, Tetsuka H, Devaraj SV, Riestra E, Jang Y, Seo JW, Rodríguez RAC, Huang LL, Lee Y, Preda I, Sonkusale S, Fiondella L, Seo J, Pirrami L, and Shin SR
- Subjects
- Humans, Wireless Technology, Human Umbilical Vein Endothelial Cells, Printing, Three-Dimensional, Precision Medicine methods, Fibroblasts cytology, Wound Healing, Robotics instrumentation
- Abstract
A Customized wound patch for Advanced tissue Regeneration with Electric field (CARE), featuring an autonomous robot arm printing system guided by a computer vision-enabled guidance system for fast image recognition, is introduced. CARE addresses the growing demand for flexible, stretchable, and wireless adhesive bioelectronics tailored for electrotherapy, which is suitable for rapid adaptation to individual patients and practical implementation in a comfortable design. The visual guidance system integrating a 6-axis robot arm enables scans from multiple angles to provide a 3D map of complex and curved wounds. The size of the electrodes and the geometry of the power-receiving coil are essential components of the CARE and are determined by a MATLAB simulation, ensuring efficient wireless power transfer. Three heterogeneous inks possessing different rheological behaviors can be extruded and printed sequentially on the flexible substrates, supporting fast manufacturing of large customized bioelectronic patches. CARE can stimulate wounds up to 10 mm in depth with an electric field strength of 88.8 mV mm⁻¹. In vitro studies reveal the ability to accelerate cell migration by a factor of 1.6 and 1.9 for human dermal fibroblasts and human umbilical vein endothelial cells, respectively. This study highlights the potential of CARE as a clinical wound therapy method to accelerate healing. (© 2024 Wiley-VCH GmbH.)
- Published
- 2025
- Full Text
- View/download PDF
44. Visual guidance of target-oriented flight behaviours in birds
- Author
-
Walker, James and Taylor, Graham
- Subjects
598.157 ,Visual guidance ,Aerial views ,Birds - Abstract
Birds are highly dependent on vision to guide their flight during goal-directed tasks. They shift their visual attention primarily through head reorientation, with limited eye movement; however, measuring head orientation in free-flying birds is a significant technical challenge. While the gaze strategies of some birds have been studied in considerable detail in laboratory environments, studies of visual guidance in natural environments have largely been limited to inferences based on GPS-derived flight trajectories. This thesis aims to advance our understanding of how birds use their visual system to guide target-directed flight, using novel instrumentation to measure the gaze strategy of homing pigeons (Columba livia) and peregrine falcons (Falco peregrinus) flying in their natural environments. We use GPS-derived flight tracks to explore the visual mechanisms available to pigeons to compensate for wind drift, finding that they partially compensate for lateral displacement caused by the wind performing better than a naive strategy requiring no knowledge of the wind. We then describe the development of a custom-built sensor, incorporating a GPS receiver and head-mounted inertial measurement unit (IMU), which measures bird position and head orientation. Using this instrumentation, we find that pigeons coordinate angular head saccades with their wingbeats. Our results also reveal that vertical head stabilisation is enhanced when flying with flock companions, largely via increased wingbeat frequency. The focus of pigeons' visual attention during homing flight is measured with the sensor, allowing specific points of interest to be identified in the landscape that do not lie on the bird's track. Finally, we find that in the closing phases of predatory pursuit, peregrine falcons continuously track their target position using either their frontally or laterally facing fovea. 
Refined iterations of the exploratory technique developed here have the potential to revolutionise our understanding of large-scale spatial cognition and short-range guidance in birds and may, in turn, lead to applications in the design of visually-guided unmanned aerial systems.
- Published
- 2018
45. Vision-Guided Hierarchical Control and Autonomous Positioning for Aerial Manipulator
- Author
-
Xia Ye, Haohao Cui, Lidong Wang, Shangjun Xie, and Hong Ni
- Subjects
aerial manipulator system ,visual guidance ,SLAM algorithm ,aerial grasping ,depth camera ,LiDAR ,Technology ,Engineering (General). Civil engineering (General) ,TA1-2040 ,Biology (General) ,QH301-705.5 ,Physics ,QC1-999 ,Chemistry ,QD1-999 - Abstract
Aerial manipulator systems possess active operational capability, and by incorporating various sensors, the systems’ autonomy is further enhanced. In this paper, we address the challenge of accurate positioning between an aerial manipulator and the operational targets during tasks such as grasping and delivery in the absence of motion capture systems indoors. We propose a vision-guided aerial manipulator system comprising a quad-rotor UAV and a single-degree-of-freedom manipulator. First, the overall structure of the aerial manipulator is designed, and a hierarchical control system is established. We employ the fusion of LiDAR-based SLAM (simultaneous localization and mapping) and IMU (inertial measurement unit) to enhance the positioning accuracy of the aerial manipulator. Real-time target detection and recognition are achieved by combining a depth camera and laser sensor for distance measurements, enabling adjustment of the grasping pose of the aerial manipulator. Finally, we employ a segmented grasping strategy to position and grasp the target object precisely. Experimental results demonstrate that the designed aerial manipulator system maintains a stable orientation within a certain range of ±5° during operation; its position movement is independent of orientation changes. The successful autonomous grasping of lightweight cylindrical objects in real-world scenarios verifies the effectiveness and rationality of the proposed system, ensuring high operational efficiency and robust disturbance resistance.
- Published
- 2023
- Full Text
- View/download PDF
46. Exploring the Visual Guidance of Motor Imagery in Sustainable Brain–Computer Interfaces.
- Author
-
Yang, Cheng, Kong, Lei, Zhang, Zhichao, Tao, Ye, and Chen, Xiaoyu
- Abstract
Motor imagery brain–computer interface (MI-BCI) systems hold the possibility of restoring motor function and also offer the possibility of sustainable autonomous living for individuals with various motor and sensory impairments. When utilizing an MI-BCI, the user's performance impacts the system's overall accuracy, and attending to the user's mental load enables a better evaluation of the system's overall performance. This study investigates how different levels of abstraction in the visual guidance used for motor imagery (MI) mental training affect the user. We proposed hypotheses about the effects of visually guided abstraction on brain activity, mental load, and MI-BCI performance; we then used the event-related desynchronization (ERD) value to measure the user's brain activity, extracted the brain power spectral density (PSD) to measure mental load, and finally classified left- and right-hand MI with a support vector machine (SVM) classifier. The results showed that visual guidance with a low level of abstraction helped users achieve the highest brain activity and the lowest mental load, and the highest MI classification accuracy was 97.14%. The findings imply that to improve brain–computer interaction and enable those less capable to regain their mobility, visual guidance with a low level of abstraction should be employed when training brain–computer interface users. We anticipate that the results of this study will have considerable implications for human–computer interaction research in BCI. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
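The PSD feature extraction underlying the classification step above can be sketched as a direct DFT band-power computation; the sampling rate, band edges, and synthetic signal are illustrative, not from the study:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` in the band [f_lo, f_hi] Hz via a direct DFT
    (O(N^2), fine for a sketch). In an MI-BCI pipeline, mu/beta-band
    power of this kind is a typical feature fed to the SVM classifier."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

# Synthetic 1 s "EEG" epoch at 128 Hz: a strong 10 Hz mu rhythm plus a weak 25 Hz component.
fs, n = 128, 128
sig = [math.sin(2 * math.pi * 10 * t / fs) + 0.3 * math.sin(2 * math.pi * 25 * t / fs)
       for t in range(n)]
mu = band_power(sig, fs, 8, 12)     # dominated by the 10 Hz rhythm
beta = band_power(sig, fs, 18, 30)  # only the weaker 25 Hz contribution
```

Event-related desynchronization would then be quantified as the relative drop in such band power during imagery versus a rest baseline.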
47. Performance evaluation of a visual guidance patient-controlled respiratory gating system for respiratory-gated magnetic-resonance image-guided radiation therapy.
- Author
-
Choun, Hyung Jin, Kim, Jung-in, Choi, Chang Heon, Jung, Seongmoon, Jin, Hyeongmin, Wu, Hong-Gyun, Chie, Eui Kyu, and Park, Jong Min
- Abstract
The performance of a visual guidance patient-controlled (VG-PC) respiratory gating system for magnetic-resonance (MR) image-guided radiation therapy (MR-IGRT) was evaluated through a clinical trial of patients with either lung or liver cancer. Patients can voluntarily control their respiration utilizing the VG-PC respiratory gating system. The system enables patients to view near-real-time cine planar MR images projected inside the bore of MR-IGRT systems or an external screen. Twenty patients who had received stereotactic ablative radiotherapy (SABR) for lung or liver cancer were prospectively selected for this study. Before the first treatment, comprehensive instruction on the VG-PC respiratory gating system was provided to the patients. Respiratory-gated MR-IGRT was performed for each patient with it in the first fraction and then without it in the second fraction. For both the fractions, the total treatment time, beam-off time owing to the respiratory gating, and number of beam-off events were analyzed. The average total treatment time, beam-off time, and number of beam-off events with the system were 1507.3 s, 679.5 s, and 185, respectively, and those without the system were 2023.7 s (p < 0.001), 1195.0 s (p < 0.001), and 380 times (p < 0.001), respectively. The VG-PC respiratory gating system improved treatment efficiency through a reduction in the beam-off time, the number of beam-off events, and consequently the total treatment time when performing respiratory-gated MR-IGRT for lung and liver SABR. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
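From the averages quoted verbatim in the gating-study abstract above, the relative savings of the VG-PC system can be computed directly:

```python
def percent_reduction(without_system, with_system):
    """Relative saving of the VG-PC system versus gating without it."""
    return (without_system - with_system) / without_system * 100.0

# (without system, with system), taken from the reported averages
metrics = {
    "total treatment time (s)": (2023.7, 1507.3),
    "beam-off time (s)": (1195.0, 679.5),
    "beam-off events": (380, 185),
}
for name, (wo, wi) in metrics.items():
    print(f"{name}: {percent_reduction(wo, wi):.1f}% reduction")
# total treatment time (s): 25.5% reduction
# beam-off time (s): 43.1% reduction
# beam-off events: 51.3% reduction
```

The largest relative gain is in beam-off events, roughly halved with visual guidance.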
48. Research on the Visual Guidance System of Zoning Casting Grinding Based on Feature Points.
- Author
-
Zhu, Minjian, Shang, Tao, Jin, Zelin, Liu, Chunshan, Deng, Wenbin, and Chen, Yanli
- Subjects
MANUFACTURING process automation ,POINT cloud ,ZONING ,STATISTICAL sampling - Abstract
In traditional rough casting grinding (RCG), individual castings differ considerably from one another, which makes it difficult to automate the grinding process; at present, the primary method is manual grinding. In this study, a zoned casting grinding system based on feature points is adopted to achieve personalized grinding of castings and to improve grinding efficiency and the automation level of the manufacturing process. After preprocessing the point cloud, the fast point feature histogram (FPFH) descriptor is used to describe the features of each region and to construct a local template. The position of each local region is obtained by template matching. The random sample consensus (RANSAC) algorithm is used to fit planes to the point cloud and obtain the contact-point trajectory of the grinding head. Then, according to the different polishing methods, different polishing poses are generated. Simulation results show that the system adapts well to differentiated castings and yields good consistency in the finished products. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
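The RANSAC plane-fitting step named in the grinding abstract can be sketched in pure Python: sample three points, build a plane, and keep the plane with the most inliers. The `iters`, `thresh`, and `seed` parameters are hypothetical; a production system would use PCL's `SACSegmentation` or Open3D's `segment_plane` instead:

```python
import random

def plane_from_points(p1, p2, p3):
    """Unit normal and offset of the plane through three 3D points, or None if degenerate."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],      # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0.0:                      # collinear sample
        return None
    n = [c / norm for c in n]
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def ransac_plane(points, iters=200, thresh=0.05, seed=0):
    """Return (plane, inliers) for the plane supported by the most points."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < thresh]
        if len(inliers) > len(best_inliers):
            best, best_inliers = plane, inliers
    return best, best_inliers
```

In the paper's pipeline, FPFH template matching would first localize each grinding region; only the subsequent plane fit is sketched here.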
49. Bi-directional information interaction for multi-modal 3D object detection in real-world traffic scenes.
- Author
-
Wang, Yadong, Zhang, Shuqin, Deng, Yongqiang, Li, Juanjuan, Yang, Yanlong, and Wang, Kunfeng
- Subjects
- *
OBJECT recognition (Computer vision) , *POINT cloud , *TRAFFIC monitoring , *DENSITY - Abstract
Multimodal 3D object detection methods are poorly adapted to real-world traffic scenes because point clouds are sparsely distributed and multimodal data become misaligned during actual collection. Existing methods focus on high-quality open-source datasets, with performance relying on the accurate structural representation of point clouds and the precise mapping relationship between point clouds and images. To address these challenges, this paper proposes a multimodal feature-level fusion method based on bi-directional interaction between image and point cloud. To overcome the sparsity issue in asynchronous multimodal data, a point cloud densification scheme based on visual guidance and point cloud density guidance is proposed. This scheme can generate object-level virtual point clouds even when the point cloud and image are misaligned. To deal with the misalignment between point cloud and image, a bi-directional interaction module is proposed, based on image-guided interaction with key points of the point cloud and point cloud-guided interaction with image context information. It achieves effective feature fusion even when the point cloud and image are misaligned. Experiments on the VANJEE and KITTI datasets demonstrated the effectiveness of the proposed method, with average precision improvements of 6.20% and 1.54% over the baseline. • Object-level virtual point clouds are generated to overcome point cloud sparsity. • Image-guided interaction with point cloud key points addresses the misalignment issue. • Point cloud-guided interaction with image context information addresses the misalignment issue. • Accuracy is improved on both real-world and open-source traffic datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
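The abstract above does not spell out how the visually guided densification works, so the following is only an illustrative sketch of the general idea: generate object-level virtual points inside a 2D detection box, borrowing depth from the sparse lidar returns that project into it. The function name, grid sampling, and median-depth choice are all assumptions, not the paper's method:

```python
def densify_from_box(box, sparse_depths, grid=4):
    """Sample a grid x grid lattice of virtual points inside a 2D detection
    box, each assigned the median depth of the sparse returns in the box.

    box: (x0, y0, x1, y1) in image coordinates.
    sparse_depths: iterable of (u, v, depth) projected lidar returns.
    Returns [] when no return falls inside the box.
    """
    x0, y0, x1, y1 = box
    inside = sorted(d for (u, v, d) in sparse_depths
                    if x0 <= u <= x1 and y0 <= v <= y1)
    if not inside:
        return []
    depth = inside[len(inside) // 2]  # median depth of in-box returns
    pts = []
    for i in range(grid):
        for j in range(grid):
            u = x0 + (x1 - x0) * (i + 0.5) / grid
            v = y0 + (y1 - y0) * (j + 0.5) / grid
            pts.append((u, v, depth))
    return pts
```

Because the virtual points are anchored to the image box rather than to exact point-image correspondences, a scheme of this shape tolerates some misalignment between the two sensors, which is the property the abstract emphasizes.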
50. Fast Grasping Technique for Differentiated Mobile Phone Frame Based on Visual Guidance
- Author
-
Rongli Zhao, Zeren Bao, Wanyu Xiao, Shangwen Zou, Guangxin Zou, Yuan Xie, and Jiewu Leng
- Subjects
visual guidance ,hand–eye calibration ,relative posture ,template matching ,robot grasping ,mobile phone frame ,Mechanical engineering and machinery ,TJ1-1570 - Abstract
With the increasing automation of mobile phone assembly, industrial robots are gradually being used in production lines for loading and unloading operations. At present, industrial robots are mainly used in an online teaching mode, in which the robot's movements and paths are set by teaching in advance and then repeated as point-to-point operations. This mode of operation is inflexible and demands a high level of expertise in teaching and offline programming. When positioning and grasping different materials, the adjustment time is long, which reduces the efficiency of production changeover. To solve the problem of the poor adaptability of loading robots to differentiated products in mobile phone automatic assembly lines, the positioning and grasping of different models of mobile phone middle frames must be adjustable quickly. Therefore, this paper proposes a highly adaptive, vision-guided grasping and positioning method for right-angle (Cartesian) robots.
- Published
- 2023
- Full Text
- View/download PDF
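The hand–eye calibration listed in the record's subjects reduces, at grasp time, to applying a fixed camera-to-base transform to the pose detected by template matching. A minimal 2D homogeneous-transform sketch with hypothetical calibration values (a real setup would use a full 4×4 transform estimated from calibration targets, e.g. with OpenCV's `calibrateHandEye`):

```python
import math

def make_transform(theta, tx, ty):
    """2D homogeneous transform: rotation by theta, then translation (tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx],
            [s,  c, ty],
            [0,  0,  1]]

def apply(T, p):
    """Map a 2D point through a homogeneous transform."""
    x, y = p
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])

# Hypothetical camera-to-base calibration result.
T_cam2base = make_transform(math.pi / 2, 100.0, 50.0)

# Frame center detected in camera coordinates -> grasp target in base frame.
grasp_base = apply(T_cam2base, (20.0, 10.0))
print(grasp_base)  # ~ (90.0, 70.0)
```

Once the transform is calibrated, switching to a new middle-frame model only requires a new template for detection, not re-teaching the robot, which is the flexibility gain the abstract describes.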