30 results for "Shen, Shaojie"
Search Results
2. Characteristics of Lateral Aerodynamic Force Variation of 160 km/h Train Passing through Tunnels
- Author
- BEN Xiaodong, FANG Ming, XU Chengzhou, YUE Wenzhi, SHEN Shaojie, and BI Haiquan
- Subjects
- bullet train, tail car lateral sway, tunnel, aerodynamic load, main frequency of lateral force, vortex shedding, Transportation engineering, TA1001-1280
- Abstract
Objective At a speed of 160 km/h, a certain EMU in China exhibits good stability when running on open tracks. However, when passing through single-track tunnels, the tail car experiences periodic lateral swaying. It is therefore necessary to study in depth the aerodynamic characteristics of the train as it passes through tunnels. Method Using numerical simulation, a three-dimensional compressible transient model of the EMU with bogies is established based on the RNG k-epsilon turbulence model and sliding-mesh technology. The study focuses on the variation of surface pressure on the tail car as the train passes through tunnels, and the transient lateral aerodynamic force is analyzed in both the time and frequency domains. Result & Conclusion The results indicate that, owing to the restriction of airflow by the tunnel walls and the effect of the train's wake flow, a pressure difference with alternating characteristics exists between the two sides of the train body during tunnel passage. The alternation of lateral forces on the tail car is particularly pronounced. The lateral forces exhibit low-frequency periodic alternation, with a main frequency of approximately 2.2 Hz. The aerodynamic effect is strongest just as the tail car enters the tunnel, producing the largest fluctuation in lateral-force amplitude. The variation in train lateral force can be used as a boundary condition for analyzing vehicle lateral stability.
- Published
- 2024
- Full Text
- View/download PDF
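The ~2.2 Hz main frequency reported in the abstract above is the kind of quantity typically read off the spectrum of the simulated lateral-force history. A minimal sketch with a synthetic signal (the sampling rate, amplitude, and noise level are made up, not the paper's data):

```python
import numpy as np

np.random.seed(0)
fs = 100.0                              # assumed sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)      # 10 s of simulated time
# Synthetic lateral-force history: a 2.2 Hz oscillation plus mild noise.
f_lat = 5.0 * np.sin(2.0 * np.pi * 2.2 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(f_lat - f_lat.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
main_freq = freqs[np.argmax(spectrum)]  # dominant lateral-force frequency
```

With a 10 s window, the frequency resolution is 0.1 Hz, which is enough to resolve a 2.2 Hz peak.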
3. MetaFollower: Adaptable personalized autonomous car following
- Author
- Chen, Xianda, Chen, Kehua, Zhu, Meixin, Yang, Hao (Frank), Shen, Shaojie, Wang, Xuesong, and Wang, Yinhai
- Published
- 2024
- Full Text
- View/download PDF
4. Edge alignment-based visual–inertial fusion for tracking of aggressive motions
- Author
- Ling, Yonggen, Kuse, Manohar, and Shen, Shaojie
- Published
- 2018
- Full Text
- View/download PDF
5. GVINS: Tightly Coupled GNSS–Visual–Inertial Fusion for Smooth and Consistent State Estimation.
- Author
- Cao, Shaozu, Lu, Xiuyuan, and Shen, Shaojie
- Subjects
- GLOBAL Positioning System, DOPPLER effect
- Abstract
Visual–inertial odometry (VIO) is known to suffer from drift, especially over long-term runs. In this article, we present GVINS, a nonlinear optimization-based system that tightly fuses global navigation satellite system (GNSS) raw measurements with visual and inertial information for real-time and drift-free state estimation. Our system aims to provide accurate global six-degree-of-freedom estimation in complex indoor–outdoor environments, where GNSS signals may be intermittent or even inaccessible. To establish the connection between global measurements and local states, a coarse-to-fine initialization procedure is proposed to efficiently calibrate the transformation online and initialize GNSS states from only a short window of measurements. The GNSS code pseudorange and Doppler shift measurements, along with visual and inertial information, are then modeled and used to constrain the system states in a factor graph framework. For complex and GNSS-unfriendly areas, the degenerate cases are discussed and carefully handled to ensure robustness. Thanks to the tightly coupled multisensor approach and system design, our system fully exploits the merits of all three types of sensors and seamlessly copes with the transition between indoor and outdoor environments, where satellites are lost and reacquired. We extensively evaluate the proposed system in both simulation and real-world experiments, and the results demonstrate that our system substantially suppresses the drift of the VIO and preserves local accuracy in spite of noisy GNSS measurements. The versatility and robustness of the system are verified on large-scale data collected in challenging environments. In addition, experiments show that our system can still benefit from the presence of a single satellite, whereas conventional GNSS approaches require at least four. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
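The GNSS code pseudorange factors mentioned in the abstract above constrain the receiver position together with the clock error. A minimal sketch of one such residual (names and numbers are illustrative, not the GVINS code; atmospheric delay terms are omitted):

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def pseudorange_residual(p_recv, dt_recv, p_sat, dt_sat, rho_meas):
    """Residual of one GNSS code pseudorange measurement.

    p_recv, p_sat -- receiver/satellite positions (3-vectors, ECEF, meters)
    dt_recv, dt_sat -- receiver/satellite clock errors (seconds)
    rho_meas -- measured code pseudorange (meters)
    Ionospheric/tropospheric delay terms are omitted in this sketch.
    """
    rho_pred = np.linalg.norm(p_sat - p_recv) + C * (dt_recv - dt_sat)
    return rho_meas - rho_pred

# Consistency check against a noise-free synthetic measurement:
p_sat = np.array([15600e3, 7540e3, 20140e3])
p_recv = np.array([-2694e3, -4293e3, 3857e3])
rho = np.linalg.norm(p_sat - p_recv) + C * (1e-3 - 2e-6)
r = pseudorange_residual(p_recv, 1e-3, p_sat, 2e-6, rho)   # ~0 by construction
```

In a factor graph, one such residual would be added per visible satellite, alongside the visual and inertial factors.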
6. EPSILON: An Efficient Planning System for Automated Vehicles in Highly Interactive Environments.
- Author
- Ding, Wenchao, Zhang, Lu, Chen, Jing, and Shen, Shaojie
- Subjects
- PARTIALLY observable Markov decision processes, AUTONOMOUS vehicles, AUTOMATED planning & scheduling, CITY traffic
- Abstract
In this article, we present an efficient planning system for automated vehicles in highly interactive environments (EPSILON). EPSILON is an efficient interaction-aware planning system for automated driving and has been extensively validated in both simulation and real-world dense city traffic. It follows a hierarchical structure with an interactive behavior planning layer and an optimization-based motion planning layer. The behavior planning is formulated as a partially observable Markov decision process (POMDP), but is solved much more efficiently than by naively applying a POMDP to the decision-making problem. The key to efficiency is guided branching in both the action space and observation space, which decomposes the original problem into a limited number of closed-loop policy evaluations. Moreover, we introduce a new driver model with a safety mechanism to overcome the risk induced by potentially imperfect prior knowledge. For motion planning, we employ a spatio-temporal semantic corridor (SSC) to model the constraints posed by complex driving environments in a unified way. Based on the SSC, a safe and smooth trajectory is optimized that complies with the decision provided by the behavior planner. We validate our planning system in both simulation and real-world dense traffic, and the experimental results show that EPSILON achieves human-like driving behavior in highly interactive traffic flow, smoothly and safely, without being over-conservative compared to existing planning methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
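The guided-branching idea above — evaluating a small set of semantic driving policies closed-loop instead of expanding a full POMDP tree — can be sketched in a toy 1-D car-following world (all dynamics and costs are made up for illustration; this is not the EPSILON implementation):

```python
def simulate(accel, v0=10.0, gap0=30.0, lead_v=8.0, dt=0.5, steps=10):
    """Closed-loop rollout of one semantic action (a constant acceleration)
    in a 1-D car-following world; returns the accumulated cost."""
    v, gap, cost = v0, gap0, 0.0
    for _ in range(steps):
        v = max(0.0, v + accel * dt)
        gap += (lead_v - v) * dt          # distance to the lead vehicle
        if gap < 2.0:                     # collision-like event: prune branch
            return float("inf")
        cost += (v - 10.0) ** 2 * dt      # deviation from the desired speed
        cost += (1.0 / gap) * dt          # mild penalty for closing the gap
    return cost

# Guided branching: only a handful of semantic actions, each evaluated
# closed-loop, instead of expanding a full decision tree.
actions = {"decelerate": -1.0, "maintain": 0.0, "accelerate": 1.0}
costs = {name: simulate(a) for name, a in actions.items()}
best = min(costs, key=costs.get)          # "maintain" wins in this toy setup
```

Here only three branches are rolled out, and any branch producing a collision-like event is pruned with an infinite cost.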
7. Geometric Calibration for Cameras with Inconsistent Imaging Capabilities.
- Author
- Wang, Ke, Liu, Chuhao, and Shen, Shaojie
- Subjects
- CAMERA calibration, MAXIMUM likelihood statistics, CAMERAS, TELEOLOGY, GEOMETRIC modeling
- Abstract
Traditional calibration methods rely on the accurate localization of the chessboard points in images and their maximum likelihood estimation (MLE)-based optimization models implicitly require all detected points to have an identical uncertainty. The uncertainties of the detected control points are mainly determined by camera pose, the slant of the chessboard and the inconsistent imaging capabilities of the camera. The negative influence of the uncertainties that are induced by the two former factors can be eliminated by adequate data sampling. However, the last factor leads to the detected control points from some sensor areas having larger uncertainties than those from other sensor areas. This causes the final calibrated parameters to overfit the control points that are located at the poorer sensor areas. In this paper, we present a method for measuring the uncertainties of the detected control points and incorporating these measured uncertainties into the optimization model of the geometric calibration. The new model suppresses the influence from the control points with large uncertainties while amplifying the contributions from points with small uncertainties for the final convergence. We demonstrate the usability of the proposed method by first using eight cameras to collect a calibration dataset and then comparing our method to other recent works and the calibration module in OpenCV using that dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
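The core idea above — down-weighting control points with large detection uncertainty instead of assuming identical uncertainty for all points — is ordinary weighted least squares. A toy 1-D illustration (a line fit stands in for the full camera-calibration optimization; the "poor sensor area" split is invented):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
sigma = np.where(x > 0.5, 0.5, 0.01)     # invented "poor sensor area" at x > 0.5
y = 2.0 * x + 1.0 + rng.normal(0.0, sigma)

A = np.stack([x, np.ones_like(x)], axis=1)

# Plain MLE fit: implicitly assumes every point has identical uncertainty,
# so the noisy points drag the solution around.
coef_plain, *_ = np.linalg.lstsq(A, y, rcond=None)

# Uncertainty-weighted fit: rows scaled by 1/sigma, i.e. residuals weighted
# by inverse variance, suppressing the uncertain points.
w = 1.0 / sigma
coef_w, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
```

The weighted fit recovers the true slope and intercept (2.0 and 1.0) essentially from the precise half of the data, which is the effect the paper seeks for control points from poor sensor areas.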
8. RAPTOR: Robust and Perception-Aware Trajectory Replanning for Quadrotor Fast Flight.
- Author
- Zhou, Boyu, Pan, Jie, Gao, Fei, and Shen, Shaojie
- Subjects
- TRAJECTORY optimization, FLIGHT, TEST methods
- Abstract
Recent advances in trajectory replanning have enabled quadrotors to navigate autonomously in unknown environments. However, high-speed navigation remains a significant challenge. Given very limited time, existing methods have no strong guarantee of the feasibility or quality of their solutions. Moreover, most methods do not consider environment perception, which is the key bottleneck to fast flight. In this article, we present RAPTOR, a robust and perception-aware replanning framework that supports fast and safe flight and addresses these issues systematically. A path-guided optimization approach that incorporates multiple topological paths is devised to ensure finding feasible and high-quality trajectories in very limited time. We also introduce two perception-aware planning approaches to actively observe and avoid unknown obstacles. A risk-aware trajectory refinement ensures that unknown obstacles which may endanger the quadrotor can be observed earlier and avoided in time. The yaw-angle motion is planned to actively explore the surrounding space that is relevant to safe navigation. The proposed methods are tested extensively through benchmark comparisons and challenging indoor and outdoor aggressive flights. We release our implementation as an open-source package for the community. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
9. Event-Based Stereo Visual Odometry.
- Author
- Zhou, Yi, Gallego, Guillermo, and Shen, Shaojie
- Subjects
- VISUAL odometry, IMAGE sensors, STEREOSCOPIC cameras, HIGH dynamic range imaging, ROBOT vision
- Abstract
Event-based cameras are bioinspired vision sensors whose pixels work independently from each other and respond asynchronously to brightness changes, with microsecond resolution. Their advantages make it possible to tackle challenging scenarios in robotics, such as high-speed and high-dynamic-range scenes. We present a solution to the problem of visual odometry from the data acquired by a stereo event-based camera rig. Our system follows a parallel tracking-and-mapping approach, where novel solutions to each subproblem (three-dimensional (3-D) reconstruction and camera pose estimation) are developed with two objectives in mind: being principled and efficient, for real-time operation with commodity hardware. To this end, we seek to maximize the spatio-temporal consistency of stereo event-based data while using a simple and efficient representation. Specifically, the mapping module builds a semidense 3-D map of the scene by fusing depth estimates from multiple viewpoints (obtained by spatio-temporal consistency) in a probabilistic fashion. The tracking module recovers the pose of the stereo rig by solving a registration problem that naturally arises from the chosen map and event data representation. Experiments on publicly available datasets and on our own recordings demonstrate the versatility of the proposed method in natural scenes with general 6-DoF motion. The system successfully leverages the advantages of event-based cameras to perform visual odometry in challenging illumination conditions, such as low light and high dynamic range, while running in real time on a standard CPU. We release the software and dataset under an open-source license to foster research in the emerging topic of event-based simultaneous localization and mapping. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
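For readers unfamiliar with the data model above: an event camera outputs per-pixel (x, y, timestamp, polarity) tuples rather than frames. One common derived representation is a "time surface" that stores, per pixel, the timestamp of the most recent event — a generic sketch for illustration, not this paper's specific map or tracker:

```python
import numpy as np

# Events are (x, y, t, polarity) tuples from an asynchronous sensor; the
# time surface keeps only the latest timestamp seen at each pixel.
H, W = 4, 6
events = [(1, 2, 0.10, +1), (3, 2, 0.12, -1), (1, 2, 0.20, +1), (5, 0, 0.25, -1)]

time_surface = np.zeros((H, W))
for x, y, t, p in events:
    time_surface[y, x] = t   # newer events overwrite older ones
```

Representations like this let event streams be compared for spatio-temporal consistency with simple array operations.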
10. The competition and cooperativity of hydrogen/halogen bond and π‐hole bond involving the heteronuclear ethylene analogues.
- Author
- Shen, Shaojie, Jing, Xinyue, Zhang, Xueying, Li, Xiaoyan, and Zeng, Yanli
- Subjects
- HALOGENS, PERTURBATION theory, LEWIS bases, HYDROGEN, ETHYLENE
- Abstract
The noncovalent interactions involving heteronuclear ethylene analogues H2CEH2 (E = Si, Ge and Sn) have been studied by Møller–Plesset perturbation theory to investigate the competition and cooperativity between the hydrogen/halogen bond and the π‐hole bond. H2CEH2 plays a dual role as Lewis base and Lewis acid, with a region of π‐electron accumulation above the carbon atom and a region of π‐electron depletion (π‐hole) above the E atom, allowing it to participate in the NCX···CE (X = H and Cl) hydrogen/halogen bond and the CE···NCY (Y = H, Cl, Li and Na) π‐hole bond, respectively. When HCN/ClCN interacts with H2CEH2 at both sites, the hydrogen/halogen bond is stronger than the π‐hole bond. The π‐hole bond becomes markedly stronger when the metal-substituted YCN (Y = Li and Na) interacts with H2CEH2, showing partially covalent character; its strength is then much greater than that of the hydrogen/halogen bond. In the ternary complexes, both the hydrogen/halogen bond and the π‐hole bond are strengthened simultaneously compared with the binary complexes, especially in the systems containing alkali metals. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
11. Real-Time Temporal and Rotational Calibration of Heterogeneous Sensors Using Motion Correlation Analysis.
- Author
- Qiu, Kejie, Qin, Tong, Pan, Jie, Liu, Siqi, and Shen, Shaojie
- Subjects
- MOTION detectors, MOTION analysis, STATISTICAL correlation, CALIBRATION, ROTATIONAL motion
- Abstract
Accurate and robust calibration is crucial to a multisensor fusion-based system. The calibration of heterogeneous sensors is particularly challenging because of the large differences between the captured sensor data. On the other hand, many calibration approaches ignore temporal calibration, which is in fact as important as spatial calibration. In this article, we focus on the temporal calibration of heterogeneous sensors, and the corresponding extrinsic rotation is also derived. Most existing methods are specialized for a certain sensor combination, such as an inertial measurement unit (IMU)–camera or a camera–LiDAR system. However, heterogeneous multisensor fusion is a trend in robotics, so a unified calibration method is desired. To this end, we leverage the 3-D rotational motion feature for calibration, and auxiliary calibration boards are not needed since multiple odometry methods are available to capture 3-D sensor motion. Using a high-frequency IMU as the calibration reference, an IMU-centric scheme is designed to achieve a unified framework that adapts to various target sensors that can independently estimate 3-D rotational motion. By combining independent IMU-centric calibration pairs, an arbitrary pair of sensors can also be calibrated using the same reference IMU. Owing to a novel 3-D motion correlation quantification and analysis mechanism, the temporal offset can first be estimated in real time. Given temporally aligned sensor motion, the extrinsic rotation can then be derived in closed form within the same 3-D motion correlation mechanism. Experimental results on several sensor combinations show the accuracy and robustness of the proposed method through comparison with state-of-the-art calibration approaches, and the calibration result of a heterogeneous multisensor set demonstrates the scalability and versatility of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
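The temporal-calibration idea above — correlating rotational motion estimated independently by two sensors — reduces, in its simplest form, to finding the lag that maximizes the cross-correlation of the two rate signals. A toy sketch (a seeded white-noise signal stands in for gyro-rate norms to keep the correlation peak sharp; the 100 Hz rate and 5-sample offset are invented):

```python
import numpy as np

def estimate_delay(w_ref, w_target, dt, max_lag=50):
    """Seconds by which w_target lags w_ref: scan integer lags and pick the
    one maximizing the mean product over the overlapping samples."""
    n = len(w_ref)
    best_lag, best_score = 0, -np.inf
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            a, b = w_target[k:], w_ref[: n - k]
        else:
            a, b = w_target[: n + k], w_ref[-k:]
        score = np.dot(a, b) / len(a)   # normalize by overlap length
        if score > best_score:
            best_lag, best_score = k, score
    return best_lag * dt

# Synthetic check: the target stream is the reference delayed by 5 samples.
rng = np.random.default_rng(1)
rate = rng.standard_normal(500)         # stand-in for gyro-rate norms
w_ref, w_target = rate[5:], rate[:-5]
offset = estimate_delay(w_ref, w_target, dt=0.01)   # expect 0.05 s
```

A real system would refine this integer-lag estimate to sub-sample precision and run it online, as the abstract describes.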
12. Teach-Repeat-Replan: A Complete and Robust System for Aggressive Flight in Complex Environments.
- Author
- Gao, Fei, Wang, Luqi, Zhou, Boyu, Zhou, Xin, Pan, Jie, and Shen, Shaojie
- Subjects
- FLIGHT, AERIAL spraying & dusting in agriculture
- Abstract
In this article, we propose a complete and robust system for the aggressive flight of autonomous quadrotors. The proposed system is built upon the classical teach-and-repeat framework, which is widely adopted in infrastructure inspection, aerial transportation, and search-and-rescue. For these applications, a human's intention is essential for deciding the topological structure of the flight trajectory of the drone. However, poor teaching trajectories and changing environments prevent a simple teach-and-repeat system from being applied flexibly and robustly. In this article, instead of commanding the drone to precisely follow a teaching trajectory, we propose a method to automatically convert a human-piloted trajectory, which can be arbitrarily jerky, to a topologically equivalent one. The generated trajectory is guaranteed to be smooth, safe, and dynamically feasible, with a human-preferred aggressiveness. Also, to avoid unmapped or moving obstacles during flight, a fast local perception method and a sliding-window replanning method are integrated into our system to generate safe and dynamically feasible local trajectories onboard. We name our system teach–repeat–replan. It can capture a user's intention for a flight mission, convert an arbitrarily jerky teaching path into a smooth repeating trajectory, and generate safe local replans to avoid unexpected collisions. The proposed planning system is integrated into a complete autonomous quadrotor with global and local perception and localization submodules. Our system is validated by performing aggressive flights in challenging indoor/outdoor environments. We release all components of our quadrotor system as open-source ROS packages. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
13. Autonomous aerial robot using dual‐fisheye cameras.
- Author
- Gao, Wenliang, Wang, Kaixuan, Ding, Wenchao, Gao, Fei, Qin, Tong, and Shen, Shaojie
- Subjects
- AUTONOMOUS robots, DEPTH perception, SPRAYING & dusting in agriculture, HETEROGENEOUS computing, CAMERAS, STEREO vision (Computer science), MOBILE robots
- Abstract
Safety is undoubtedly the most fundamental requirement for any aerial robotic application. It is essential to equip aerial robots with omnidirectional perception coverage to ensure safe navigation in complex environments. In this paper, we present a light‐weight and low‐cost omnidirectional perception system, which consists of two ultrawide field‐of‐view (FOV) fisheye cameras and a low‐cost inertial measurement unit (IMU). The goal of the system is to achieve spherical omnidirectional sensing coverage with the minimum sensor suite. The two fisheye cameras are mounted rigidly facing upward and downward directions and provide omnidirectional perception coverage: 360° FOV horizontally, 50° FOV vertically for stereo, and whole spherical for monocular. We present a novel optimization‐based dual‐fisheye visual‐inertial state estimator to provide highly accurate state‐estimation. Real‐time omnidirectional three‐dimensional (3D) mapping is combined with stereo‐based depth perception for the horizontal direction and monocular depth perception for upward and downward directions. The omnidirectional perception system is integrated with online trajectory planners to achieve closed‐loop, fully autonomous navigation. All computations are done onboard on a heterogeneous computing suite. Extensive experimental results are presented to validate individual modules as well as the overall system in both indoor and outdoor environments. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
14. An Efficient B-Spline-Based Kinodynamic Replanning Framework for Quadrotors.
- Author
- Ding, Wenchao, Gao, Wenliang, Wang, Kaixuan, and Shen, Shaojie
- Subjects
- TRAJECTORY optimization, ROBOT motion, SEARCH algorithms
- Abstract
Trajectory replanning for quadrotors is essential to enable fully autonomous flight in unknown environments. Hierarchical motion planning frameworks, which combine path planning with path parameterization, are popular due to their time efficiency. However, the path planning cannot properly deal with nonstatic initial states of the quadrotor, which may result in nonsmooth or even dynamically infeasible trajectories. In this article, we present an efficient kinodynamic replanning framework by exploiting the advantageous properties of the B-spline, which facilitates dealing with the nonstatic state and guarantees safety and dynamical feasibility. Our framework starts with an efficient B-spline-based kinodynamic (EBK) search algorithm, which finds a feasible trajectory with minimum control effort and time. To compensate for the discretization induced by the EBK search, an elastic optimization approach is proposed to refine the control point placement to the optimal location. Systematic comparisons against the state-of-the-art are conducted to validate the performance. Comprehensive onboard experiments using two different vision-based quadrotors are carried out showing the general applicability of the framework. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
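A property that makes B-splines attractive for the replanning framework above is that each segment is a convex combination of a few control points, so bounding the control points bounds the trajectory. A minimal sketch of evaluating one uniform cubic B-spline segment in matrix form (a generic textbook construction, not the EBK code):

```python
import numpy as np

# Basis matrix of a uniform cubic B-spline segment (textbook construction).
M = np.array([[ 1.0,  4.0,  1.0, 0.0],
              [-3.0,  0.0,  3.0, 0.0],
              [ 3.0, -6.0,  3.0, 0.0],
              [-1.0,  3.0, -3.0, 1.0]]) / 6.0

def bspline_point(ctrl4, u):
    """Point on the segment defined by 4 consecutive control points, u in [0, 1]."""
    U = np.array([1.0, u, u * u, u ** 3])
    return U @ M @ np.asarray(ctrl4, dtype=float)

ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])
pts = np.array([bspline_point(ctrl, u) for u in np.linspace(0.0, 1.0, 21)])

# Convex-hull property: the basis weights are nonnegative and sum to one, so
# the segment stays inside the bounding box of its control points.
inside = bool(np.all((pts >= ctrl.min(0) - 1e-9) & (pts <= ctrl.max(0) + 1e-9)))
```

Because the trajectory inherits the bounds of its control points, safety and dynamical-feasibility constraints can be enforced on the control points alone.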
15. Tracking 3-D Motion of Dynamic Objects Using Monocular Visual-Inertial Sensing.
- Author
- Qiu, Kejie, Qin, Tong, Gao, Wenliang, and Shen, Shaojie
- Subjects
- OBJECT tracking (Computer vision), AUGMENTED reality, STEREOSCOPIC cameras, LIDAR, OPTICAL head-mounted displays, CELL phones, UNITS of measurement, MOTION
- Abstract
Six degree-of-freedom (6-DoF) visual tracking of dynamic objects is fundamental to a large variety of robotics and augmented reality (AR) applications. A key to this problem is accurate distance measurement of dynamic objects, which is usually obtained via stereo cameras, RGB-D sensors, or LiDARs. In this paper, however, we address the problem using only a monocular camera rigidly mounted with a low-cost inertial measurement unit. This is a light-weight, small-size, and low-cost solution, which is particularly suitable for tracking dynamic objects on drones or on mobile phones. Starting from a generic image-based two-dimensional tracker, we propose a novel method to resolve the object scale ambiguity in monocular vision in a geometric manner based on correlation analysis. This enables accurate metric three-dimensional tracking of arbitrary objects without requiring any prior knowledge about the object shape or size. We discuss the applicability by analyzing the observability condition and degenerated cases for object scale recovery. Simulation and real-world experimental results with ground truth comparison, along with AR application examples, demonstrate the feasibility of the proposed 6-DoF tracking method. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
16. Real‐time dense mapping for online processing and navigation.
- Author
- Ling, Yonggen and Shen, Shaojie
- Subjects
- AUTONOMOUS robots, IMAGE processing, COLLEGE campuses, ROBOTICS
- Abstract
Autonomous robots require accurate localizations and dense mappings for motion planning. We consider the navigation scenario where the dense representation of the robot surrounding must be immediately available, and require that the system is capable of an instantaneous map correction if a loop closure is detected by the localization module. To satisfy the real‐time processing requirement of online robotics applications, our presented system bounds the algorithmic complexity of the localization pipeline by restricting the number of variables to be optimized at each time instant. A dense map representation along with a local dense map reconstruction strategy is also proposed. Despite the limits that are imposed by the real‐time requirement and planning safety, the mapping quality of our method is comparable to other competitive methods. For implementations, we additionally introduce a few engineering considerations, such as the system architecture, the variable initialization, the memory management, the image processing, and so forth, to improve the system performance. Extensive experimental validations of our presented system are performed on the KITTI and NewCollege datasets, and through an online experiment around the Hong Kong University of Science and Technology (HKUST) university campus. We release our implementation as open‐source robot operating system (ROS) packages for the benefit of the community. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
17. Flying on point clouds: Online trajectory generation and autonomous navigation for quadrotors in cluttered environments.
- Author
- Gao, Fei, Wu, William, Gao, Wenliang, and Shen, Shaojie
- Subjects
- POINT cloud, ROBOTIC trajectory control, AUTONOMOUS vehicles, MICRO air vehicles, EMERGENCY management, QUADROTOR helicopters, NAVIGATION (Astronautics), CLUTTER (Radar)
- Abstract
Micro aerial vehicles (MAVs), especially quadrotors, have been widely used in field applications, such as disaster response, field surveillance, and search-and-rescue. For accomplishing such missions in challenging environments, the capability of navigating with full autonomy while avoiding unexpected obstacles is the most crucial requirement. In this paper, we present a framework for online generation of safe and dynamically feasible trajectories directly on the point cloud, which is the lowest-level representation of range measurements and is applicable to different sensor types. We develop a quadrotor platform equipped with a three-dimensional (3D) light detection and ranging (LiDAR) sensor and an inertial measurement unit (IMU) for simultaneously estimating the state of the vehicle and building point cloud maps of the environment. Based on the incrementally registered point clouds, we generate and refine a flight corridor online, which represents the free space in which the trajectory of the quadrotor should lie. We represent the trajectory as piecewise Bézier curves using the Bernstein polynomial basis and formulate the trajectory generation problem as a convex program. By using Bézier curves, we can constrain the position and kinodynamics of the trajectory to remain entirely within the flight corridor and the given physical limits. The proposed approach runs onboard in real time and is integrated into an autonomous quadrotor platform. We demonstrate fully autonomous quadrotor flights in unknown, complex environments to validate the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
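The Bézier/Bernstein construction above has two properties the corridor-based formulation relies on: the curve lies in the convex hull of its control points, and its derivative is again a Bézier curve with control points n·(P[i+1] − P[i]), so position and velocity limits become linear constraints on the control points. A small sketch (the control points and corridor box are invented):

```python
import numpy as np
from math import comb

def bezier(ctrl, u):
    """Evaluate a Bezier curve at u in [0, 1] via the Bernstein basis."""
    n = len(ctrl) - 1
    basis = np.array([comb(n, i) * u**i * (1.0 - u)**(n - i) for i in range(n + 1)])
    return basis @ np.asarray(ctrl, dtype=float)

# Invented control points; think of the box [0,3] x [0,1.5] as one corridor cell.
ctrl = np.array([[0.0, 0.0], [1.0, 1.5], [2.0, 1.5], [3.0, 0.0]])
samples = np.array([bezier(ctrl, u) for u in np.linspace(0.0, 1.0, 21)])

# Convex-hull property: the whole curve stays inside the control points' box.
in_box = bool(np.all((samples >= ctrl.min(0) - 1e-9) & (samples <= ctrl.max(0) + 1e-9)))

# The derivative is a Bezier curve with control points n*(P[i+1] - P[i]),
# so per-axis velocity limits are linear constraints on the control points.
d_ctrl = 3 * np.diff(ctrl, axis=0)
v_bound = np.abs(d_ctrl).max()   # conservative per-axis derivative bound
```

The curve also interpolates its first and last control points, which is convenient for stitching consecutive corridor segments.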
18. Simulation and flight experiments of a quadrotor tail-sitter vertical take-off and landing unmanned aerial vehicle with wide flight envelope.
- Author
- Lyu, Ximin, Gu, Haowei, Zhou, Jinni, Li, Zexiang, Shen, Shaojie, and Zhang, Fu
- Subjects
- DRONE aircraft, QUADROTOR helicopters, AUTONOMOUS vehicles, FLYING machines, X-ray diffraction
- Abstract
This paper presents the modeling, simulation, and control of a small-scale electric-powered quadrotor tail-sitter vertical take-off and landing unmanned aerial vehicle. In the modeling part, a full-attitude wind tunnel test is performed on the full-scale unmanned aerial vehicle to capture its aerodynamics over the flight envelope. To accurately capture the degradation of motor thrust and torque in the presence of forward speed, a wind tunnel test on the motor and propeller is also carried out. The extensive wind tunnel tests, when combined with the unmanned aerial vehicle kinematics model, dynamics model, and other practical constraints such as motor saturation and delay, lead to a complete flight simulator that can accurately reveal the actual aircraft dynamics, as verified by actual flight experiments. Based on the developed model, a unified attitude controller and a stable transition controller are designed and verified. Both simulation and experiments show that the developed attitude controller can stabilize the unmanned aerial vehicle's attitude over the entire flight envelope, and the transition controller can successfully transition the unmanned aerial vehicle from vertical flight to level flight with negligible altitude drop, a common and fundamental challenge for tail-sitter vertical take-off and landing aircraft. Finally, when supplied with the designed controller, the tail-sitter unmanned aerial vehicle can achieve a wide flight-speed envelope ranging from stationary hovering to fast level flight. This feature dramatically distinguishes our aircraft from conventional fixed-wing airplanes. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
19. A Survey on Aerial Swarm Robotics.
- Author
- Chung, Soon-Jo, Paranjape, Aditya Avinash, Dames, Philip, Shen, Shaojie, and Kumar, Vijay
- Subjects
- ROBOTICS, DRONE aircraft, TRAJECTORIES (Mechanics), ALGORITHMS, DEGREES of freedom, HARDWARE
- Abstract
The use of aerial swarms to solve real-world problems has been increasing steadily, accompanied by falling prices and improving performance of communication, sensing, and processing hardware. The commoditization of hardware has reduced unit costs, thereby lowering the barriers to entry to the field of aerial swarm robotics. A key enabling technology for swarms is the family of algorithms that allow the individual members of the swarm to communicate and allocate tasks amongst themselves, plan their trajectories, and coordinate their flight in such a way that the overall objectives of the swarm are achieved efficiently. These algorithms, often organized in a hierarchical fashion, endow the swarm with autonomy at every level, and the role of a human operator can be reduced, in principle, to interactions at a higher level without direct intervention. This technology depends on the clever and innovative application of theoretical tools from control and estimation. This paper reviews the state of the art of these theoretical tools, specifically focusing on how they have been developed for, and applied to, aerial swarms. Aerial swarms differ from swarms of ground-based vehicles in two respects: they operate in a three-dimensional space and the dynamics of individual vehicles adds an extra layer of complexity. We review dynamic modeling and conditions for stability and controllability that are essential in order to achieve cooperative flight and distributed sensing. The main sections of this paper focus on major results covering trajectory generation, task allocation, adversarial control, distributed sensing, monitoring, and mapping. Wherever possible, we indicate how the physics and subsystem technologies of aerial robots are brought to bear on these individual areas. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
20. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator.
- Author
- Qin, Tong, Li, Peiliang, and Shen, Shaojie
- Subjects
- SLAM (Robotics), DEGREES of freedom, ALGORITHMS, MICRO air vehicles, VIRTUAL reality, AUGMENTED reality
- Abstract
One camera and one low-cost inertial measurement unit (IMU) form a monocular visual-inertial system (VINS), which is the minimum sensor suite (in size, weight, and power) for metric six-degrees-of-freedom (DOF) state estimation. In this paper, we present VINS-Mono: a robust and versatile monocular visual-inertial state estimator. Our approach starts with a robust procedure for estimator initialization. A tightly coupled, nonlinear optimization-based method is used to obtain highly accurate visual-inertial odometry by fusing preintegrated IMU measurements and feature observations. A loop detection module, in combination with our tightly coupled formulation, enables relocalization with minimal computation. We additionally perform 4-DOF pose graph optimization to enforce global consistency. Furthermore, the proposed system can reuse a map by saving and loading it in an efficient way. The current and previous maps can be merged together by the global pose graph optimization. We validate the performance of our system on public datasets and in real-world experiments and compare against other state-of-the-art algorithms. We also perform an onboard closed-loop autonomous flight on a micro-aerial-vehicle platform and port the algorithm to an iOS-based demonstration. We highlight that the proposed work is a reliable, complete, and versatile system applicable to different applications that require high-accuracy localization. We open-source our implementations for both PCs (https://github.com/HKUST-Aerial-Robotics/VINS-Mono) and iOS mobile devices (https://github.com/HKUST-Aerial-Robotics/VINS-Mobile). [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
21. Insight into the π‐hole···π‐electrons tetrel bonds between F2ZO (Z = C, Si, Ge) and unsaturated hydrocarbons.
- Author
- Shen, Shaojie, Zeng, Yanli, Li, Xiaoyan, Meng, Lingpeng, and Zhang, Xueying
- Subjects
- *INTERMOLECULAR interactions, *HYDROCARBONS, *ACETYLENE, *CHEMICAL decomposition, *ELECTRON density - Abstract
The intermolecular π‐hole···π‐electrons interactions between F2ZO (Z = C, Si, Ge) molecules and unsaturated hydrocarbons, including acetylene, ethylene, 1,3‐butadiene, and benzene, were constructed to reveal the differences between tetrel bonds formed by carbon and those formed by heavier tetrel atoms. Ab initio computation, in association with topological analysis of electron density, natural bond orbital analysis, and energy decomposition analysis, demonstrates that the Si···π and Ge···π tetrel bonds are much stronger than the C···π tetrel bonds. The Si···π and Ge···π tetrel bonds exhibit a covalent or partially covalent interaction nature, while the weak C···π tetrel bonds display the hallmarks of noncovalent interaction, with electrostatic interaction as the primary influencing factor. The Si···π and Ge···π interactions are determined by both the σ‐ and π‐electron densities, while the C···π interactions are dominated mainly by the π‐electron densities. The π‐hole···π‐electrons tetrel bonds are dominated by electrostatic interaction, and polarization makes a comparable contribution in the Si···π and Ge···π tetrel bonds. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
22. Autonomous aerial navigation using monocular visual-inertial fusion.
- Author
- Lin, Yi, Gao, Fei, Qin, Tong, Gao, Wenliang, Liu, Tianbo, Wu, William, Yang, Zhenfei, and Shen, Shaojie
- Subjects
MICRO air vehicles ,DRONE aircraft ,GRAPHICS processing units ,INERTIAL navigation systems ,QUADROTOR helicopters ,NAVIGATION - Abstract
Autonomous micro aerial vehicles (MAVs) have cost and mobility benefits, making them ideal robotic platforms for applications including aerial photography, surveillance, and search and rescue. As the platform scales down, MAVs become more capable of operating in confined environments, but it also introduces significant size and payload constraints. A monocular visual-inertial navigation system (VINS), consisting only of an inertial measurement unit (IMU) and a camera, becomes the most suitable sensor suite in this case, thanks to its light weight and small footprint. In fact, it is the minimum sensor suite allowing autonomous flight with sufficient environmental awareness. In this paper, we show that it is possible to achieve reliable online autonomous navigation using monocular VINS. Our system is built on a customized quadrotor testbed equipped with a fisheye camera, a low-cost IMU, and heterogeneous onboard computing resources. The backbone of our system is a highly accurate optimization-based monocular visual-inertial state estimator with online initialization and self-extrinsic calibration. An onboard GPU-based monocular dense mapping module that conditions on the estimated pose provides wide-angle situational awareness. Finally, an online trajectory planner that operates directly on the incrementally built three-dimensional map guarantees safe navigation through cluttered environments. Extensive experimental results are provided to validate individual system modules as well as the overall performance in both indoor and outdoor environments. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
23. Monocular Visual–Inertial State Estimation With Online Initialization and Camera–IMU Extrinsic Calibration.
- Author
- Yang, Zhenfei and Shen, Shaojie
- Subjects
- *CAMERAS, *ROBOT control systems, *CALIBRATION, *PROBLEM solving, *COMPUTER vision - Abstract
There have been increasing demands for developing microaerial vehicles with vision-based autonomy for search and rescue missions in complex environments. In particular, the monocular visual–inertial system (VINS), which consists of only an inertial measurement unit (IMU) and a camera, forms a lightweight sensor suite due to its low weight and small footprint. In this paper, we address two challenges for rapid deployment of monocular VINS: 1) the initialization problem and 2) the calibration problem. We propose a methodology that is able to initialize velocity, gravity, visual scale, and camera–IMU extrinsic calibration on the fly. Our approach operates in natural environments and does not use any artificial markers. It also does not require any prior knowledge about the mechanical configuration of the system. It is a significant step toward plug-and-play and highly customizable visual navigation for mobile robots. We show through online experiments that our method leads to accurate calibration of camera–IMU transformation, with errors less than 0.02 m in translation and 1° in rotation. We compare our method with a state-of-the-art marker-based offline calibration method and show superior results. We also demonstrate the performance of the proposed approach in large-scale indoor and outdoor experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
24. Obtaining Liftoff Indoors: Autonomous Navigation in Confined Indoor Environments.
- Author
- Shen, Shaojie, Michael, Nathan, and Kumar, Vijay
- Subjects
ROBOTICS ,MICRO air vehicles ,EXPERIMENTAL design ,AIRPLANE lighting ,PERFORMANCE evaluation ,THREE-dimensional display systems - Abstract
In this article, we consider the problem of autonomous navigation with a microaerial vehicle (MAV) in three-dimensional (3-D) confined indoor environments with multiple floors. We present experimental results with ground truth comparisons and performance analysis. We also highlight field experiments in multiple environments. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
25. Stochastic differential equation-based exploration algorithm for autonomous indoor 3D exploration with a micro-aerial vehicle.
- Author
- Shen, Shaojie, Michael, Nathan, and Kumar, Vijay
- Subjects
- *STOCHASTIC difference equations, *ALGORITHMS, *ROBOTICS, *CARTOGRAPHY, *THREE-dimensional display systems, *DRONE aircraft - Abstract
In this paper, we propose a stochastic differential equation-based exploration algorithm to enable exploration in three-dimensional indoor environments with a payload-constrained micro-aerial vehicle (MAV). We are able to address computation, memory, and sensor limitations by using a map representation which is dense for the known occupied space but sparse for the free space. We determine regions for further exploration based on the evolution of a stochastic differential equation that simulates the expansion of a system of particles with Newtonian dynamics. The regions of most significant particle expansion correlate to unexplored space. After identifying and processing these regions, the autonomous MAV navigates to these locations to enable fully autonomous exploration. The performance of the approach is demonstrated through numerical simulations and experimental results in single- and multi-floor indoor experiments. [ABSTRACT FROM PUBLISHER]
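The particle-expansion step described in this abstract can be caricatured in a few lines. The sketch below is a toy 2-D version on a boolean occupancy grid with assumed parameters (particle count, step size, Brownian forcing strength `sigma`), not the paper's 3-D implementation:

```python
import numpy as np

def particle_expansion(free_mask, n_particles=500, steps=50, dt=0.1,
                       sigma=0.5, seed=0):
    """Spread particles with Newtonian dynamics and Brownian forcing
    through the free space of a 2-D grid; cells where particles pile up
    far from the start hint at directions worth exploring."""
    rng = np.random.default_rng(seed)
    h, w = free_mask.shape
    pos = np.full((n_particles, 2), [h / 2.0, w / 2.0])
    vel = np.zeros_like(pos)
    for _ in range(steps):
        # Euler-Maruyama step: dv = sigma * dW,  dx = v * dt
        vel += sigma * np.sqrt(dt) * rng.standard_normal(pos.shape)
        nxt = pos + vel * dt
        ij = nxt.astype(int)
        oob = ((ij[:, 0] < 0) | (ij[:, 0] >= h)
               | (ij[:, 1] < 0) | (ij[:, 1] >= w))
        ij_c = np.clip(ij, 0, [h - 1, w - 1])
        blocked = oob | ~free_mask[ij_c[:, 0], ij_c[:, 1]]
        vel[blocked] *= -1.0            # bounce off walls / occupied cells
        pos[~blocked] = nxt[~blocked]
    counts = np.zeros((h, w))
    ij = np.clip(pos.astype(int), 0, [h - 1, w - 1])
    np.add.at(counts, (ij[:, 0], ij[:, 1]), 1.0)
    return counts                       # particle density per grid cell
```

In the paper's setting, the regions of most significant expansion are then extracted from this density and handed to the planner as exploration goals.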
- Published
- 2012
- Full Text
- View/download PDF
26. Collaborative mapping of an earthquake-damaged building via ground and aerial robots.
- Author
- Michael, Nathan, Shen, Shaojie, Mohta, Kartik, Mulgaonkar, Yash, Kumar, Vijay, Nagatani, Keiji, Okada, Yoshito, Kiribayashi, Seiga, Otake, Kazuki, Yoshida, Kazuya, Ohno, Kazunori, Takeuchi, Eijiro, and Tadokoro, Satoshi
- Subjects
EARTHQUAKE damage ,ROBOTS ,SENDAI Earthquake, Japan, 2011 ,QUADROTOR helicopters ,REMOTE control - Abstract
We report recent results from field experiments conducted with a team of ground and aerial robots engaged in the collaborative mapping of an earthquake-damaged building. The goal of the experimental exercise is the generation of three-dimensional maps that capture the layout of a multifloor environment. The experiments took place in the top three floors of a structurally compromised building at Tohoku University in Sendai, Japan that was damaged during the 2011 Tohoku earthquake. We provide details of the approach to the collaborative mapping and report results from the experiments in the form of maps generated by the individual robots and as a team. We conclude by discussing observations from the experiments and future research topics. © 2012 Wiley Periodicals, Inc. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
27. Learning whole-image descriptors for real-time loop detection and kidnap recovery under large viewpoint difference.
- Author
- Kuse, Manohar and Shen, Shaojie
- Subjects
- *KIDNAPPING, *STEREOPHONIC sound systems, *MAGNITUDE (Mathematics), *DESCRIPTOR systems, *STEREO vision (Computer science) - Abstract
We present a real-time stereo visual-inertial SLAM system which is able to recover online, in real time, from complicated kidnap scenarios and failures. We propose to learn the whole-image descriptor in a weakly supervised manner based on NetVLAD and decoupled convolutions. We analyze the training difficulties of standard loss formulations, propose an all-pair loss, and show its effect through extensive experiments. Compared to standard NetVLAD, our network takes an order of magnitude fewer computations and model parameters and, as a result, runs about three times faster. We evaluate the representation power of our descriptor on standard datasets with precision–recall. Unlike previous loop detection methods, which have been evaluated only on fronto-parallel revisits, we evaluate the performance of our method against competing methods on scenarios involving large viewpoint difference. Finally, we present the fully functional system, with relative computation and handling of multiple world coordinate systems, which is able to reduce odometry drift and recover from complicated kidnap scenarios and random odometry failures. We open source our fully functional system as an add-on for the popular VINS-Fusion. [ABSTRACT FROM AUTHOR]
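Once a whole-image descriptor exists, the loop-detection query itself is simple. The sketch below uses generic cosine similarity with illustrative threshold and recency-exclusion values, not the paper's learned descriptor or tuning:

```python
import numpy as np

def detect_loop(db, query, thresh=0.9, exclude_recent=30):
    """Compare the query whole-image descriptor against all sufficiently
    old keyframe descriptors by cosine similarity; return the index of
    the best match above the threshold, or None."""
    if len(db) <= exclude_recent:       # too few old frames to match against
        return None
    cand = np.asarray(db[: len(db) - exclude_recent])
    sims = cand @ query / (np.linalg.norm(cand, axis=1)
                           * np.linalg.norm(query) + 1e-12)
    best = int(np.argmax(sims))
    return best if sims[best] >= thresh else None
```

Excluding the most recent keyframes prevents the trivial self-match against frames the odometry already links to; a detected loop then triggers relocalization or pose-graph correction downstream.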
- Published
- 2021
- Full Text
- View/download PDF
28. Metric3D v2: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation.
- Author
- Hu M, Yin W, Zhang C, Cai Z, Long X, Chen H, Wang K, Yu G, Shen C, and Shen S
- Abstract
We introduce Metric3D v2, a geometric foundation model for zero-shot metric depth and surface normal estimation from a single image, which is crucial for metric 3D recovery. While depth and normal are geometrically related and highly complementary, they present distinct challenges. State-of-the-art (SoTA) monocular depth methods achieve zero-shot generalization by learning affine-invariant depths, which cannot recover real-world metrics. Meanwhile, SoTA normal estimation methods have limited zero-shot performance due to the lack of large-scale labeled data. To tackle these issues, we propose solutions for both metric depth estimation and surface normal estimation. For metric depth estimation, we show that the key to a zero-shot single-view model lies in resolving the metric ambiguity from various camera models and large-scale data training. We propose a canonical camera space transformation module, which explicitly addresses the ambiguity problem and can be effortlessly plugged into existing monocular models. For surface normal estimation, we propose a joint depth-normal optimization module to distill diverse data knowledge from metric depth, enabling normal estimators to learn beyond normal labels. Equipped with these modules, our depth-normal models can be stably trained with over 16 million images from thousands of camera models with different types of annotations, resulting in zero-shot generalization to in-the-wild images with unseen camera settings. Our method currently ranks first on various zero-shot and non-zero-shot benchmarks for metric depth, affine-invariant depth, and surface-normal prediction, as shown in Fig. 1. Notably, we surpassed the recent MarigoldDepth and DepthAnything on various depth benchmarks including NYUv2 and KITTI. Our method enables the accurate recovery of metric 3D structures on randomly collected internet images, paving the way for plausible single-image metrology.
The potential benefits extend to downstream tasks, which can be significantly improved by simply plugging in our model. For example, our model relieves the scale drift issues of monocular-SLAM (Fig. 3), leading to high-quality metric scale dense mapping. These applications highlight the versatility of Metric3D v2 models as geometric foundation models. Our project page is at https://JUGGHM.github.io/Metric3Dv2.
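The canonical camera space idea mentioned above can be illustrated with the focal-length rescaling it rests on. The sketch below is an illustration of the general transform only; the canonical focal length here is an arbitrary choice, not the paper's constant, and the module in the paper operates inside the network rather than as standalone functions:

```python
def to_canonical_depth(depth, focal, canonical_focal=1000.0):
    """Rescale metric depth into a canonical camera space: scaling by
    f_c / f makes labels from cameras with different focal lengths
    metrically consistent during training."""
    return depth * (canonical_focal / focal)

def from_canonical_depth(depth_c, focal, canonical_focal=1000.0):
    """Inverse transform: map a canonical-space prediction back to the
    query camera's metric depth."""
    return depth_c * (focal / canonical_focal)
```

The round trip is exact, which is why the transform can be plugged around an existing monocular model without changing its training target semantics.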
- Published
- 2024
- Full Text
- View/download PDF
29. Microsaccade-inspired event camera for robotics.
- Author
- He B, Wang Z, Zhou Y, Chen J, Singh CD, Li H, Gao Y, Shen S, Wang K, Cao Y, Xu C, Aloimonos Y, Gao F, and Fermüller C
- Subjects
- Humans, Motion, Software, Reaction Time physiology, Biomimetics instrumentation, Fixation, Ocular physiology, Eye Movements physiology, Vision, Ocular physiology, Robotics instrumentation, Saccades physiology, Algorithms, Equipment Design, Visual Perception physiology
- Abstract
Neuromorphic vision sensors or event cameras have made the visual perception of extremely low reaction time possible, opening new avenues for high-dynamic robotics applications. These event cameras' output is dependent on both motion and texture. However, the event camera fails to capture object edges that are parallel to the camera motion. This is a problem intrinsic to the sensor and therefore challenging to solve algorithmically. Human vision deals with perceptual fading using the active mechanism of small involuntary eye movements, the most prominent ones called microsaccades. By moving the eyes constantly and slightly during fixation, microsaccades can substantially maintain texture stability and persistence. Inspired by microsaccades, we designed an event-based perception system capable of simultaneously maintaining low reaction time and stable texture. In this design, a rotating wedge prism was mounted in front of the aperture of an event camera to redirect light and trigger events. The geometrical optics of the rotating wedge prism allows for algorithmic compensation of the additional rotational motion, resulting in a stable texture appearance and high informational output independent of external motion. The hardware device and software solution are integrated into a system, which we call artificial microsaccade-enhanced event camera (AMI-EV). Benchmark comparisons validated the superior data quality of AMI-EV recordings in scenarios where both standard cameras and event cameras fail to deliver. Various real-world experiments demonstrated the potential of the system to facilitate robotics perception both for low-level and high-level vision tasks.
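The algorithmic compensation mentioned in this abstract can be sketched under a simplifying assumption: if the rotating wedge prism sweeps the image along a circle of known radius at a known angular rate (both hypothetical calibration values here, not the paper's actual optical model), subtracting that offset at each event's timestamp recovers a motion-stabilized position:

```python
import math

def compensate_events(events, radius, omega):
    """Undo a circular image shift induced by a rotating prism: for each
    event (x, y, t), subtract the assumed prism offset at time t.
    radius and omega are hypothetical calibration parameters."""
    stabilized = []
    for x, y, t in events:
        ox = radius * math.cos(omega * t)   # prism-induced offset at time t
        oy = radius * math.sin(omega * t)
        stabilized.append((x - ox, y - oy, t))
    return stabilized
```

Under this model, a static scene point generates events along a circle, and the compensation collapses them back to a single stable location, which is the texture-persistence effect the system is after.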
- Published
- 2024
- Full Text
- View/download PDF
30. Event-Based Motion Segmentation With Spatio-Temporal Graph Cuts.
- Author
- Zhou Y, Gallego G, Lu X, Liu S, and Shen S
- Abstract
Identifying independently moving objects is an essential task for dynamic scene understanding. However, traditional cameras used in dynamic scenes may suffer from motion blur or exposure artifacts due to their sampling principle. By contrast, event-based cameras are novel bio-inspired sensors that offer advantages to overcome such limitations. They report pixel-wise intensity changes asynchronously, which enables them to acquire visual information at exactly the same rate as the scene dynamics. We develop a method to identify independently moving objects acquired with an event-based camera, that is, to solve the event-based motion segmentation problem. We cast the problem as an energy minimization one involving the fitting of multiple motion models. We jointly solve two sub-problems, namely event-cluster assignment (labeling) and motion model fitting, in an iterative manner by exploiting the structure of the input event data in the form of a spatio-temporal graph. Experiments on available datasets demonstrate the versatility of the method in scenes with different motion patterns and number of moving objects. The evaluation shows state-of-the-art results without having to predetermine the number of expected moving objects. We release the software and dataset under an open source license to foster research in the emerging topic of event-based motion segmentation.
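The alternation between event-cluster assignment and motion model fitting described above can be imitated with a k-means-like stand-in: the graph-cut labeling is replaced here by a plain per-event argmin, and the motion models are simplified to constant image velocities, so this is a caricature of the structure of the method, not the paper's algorithm:

```python
import numpy as np

def segment_events(xy, t, n_models=2, iters=20, init_labels=None, seed=0):
    """Alternate two steps: (1) refit each model x(t) = c_k + v_k * t by
    least squares on its assigned events; (2) reassign every event to the
    model that predicts its position best."""
    n = len(t)
    rng = np.random.default_rng(seed)
    labels = (rng.integers(n_models, size=n) if init_labels is None
              else np.asarray(init_labels))
    A = np.column_stack([np.ones(n), t])        # design matrix [1, t]
    params = np.zeros((n_models, 2, 2))         # per model: rows c_k, v_k
    for _ in range(iters):
        for k in range(n_models):
            m = labels == k
            if m.sum() >= 2:                    # refit model k on its events
                params[k], *_ = np.linalg.lstsq(A[m], xy[m], rcond=None)
        # reassign each event to the model with the smallest residual
        res = np.stack([np.linalg.norm(xy - A @ params[k], axis=1)
                        for k in range(n_models)])
        labels = res.argmin(axis=0)
    return labels, params
```

The paper's contribution is precisely what this sketch omits: encoding the labeling step as a spatio-temporal graph cut, which enforces coherence between neighboring events instead of deciding each one independently.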
- Published
- 2023
- Full Text
- View/download PDF