50 results for "Fisheye camera"
Search Results
2. High-accuracy people counting in large spaces using overhead fisheye cameras
- Author
- Konrad, Janusz, Cokbas, Mertcan, Ishwar, Prakash, Little, Thomas D.C., and Gevelber, Michael
- Published
- 2024
- Full Text
- View/download PDF
3. A reliable NLOS error identification method based on LightGBM driven by multiple features of GNSS signals
- Author
- Xiaohong Zhang, Xinyu Wang, Wanke Liu, Xianlu Tao, Yupeng Gu, Hailu Jia, and Chuanming Zhang
- Subjects
- Urban environment, GNSS signal feature, Non-line-of-sight identification, LightGBM, Fisheye camera, Technology (General), T1-995
- Abstract
In complicated urban environments, Global Navigation Satellite System (GNSS) signals are frequently affected by building reflection or refraction, resulting in Non-Line-of-Sight (NLOS) errors. In severe cases, NLOS errors can cause a ranging error of hundreds of meters, which has a substantial impact on the precision and dependability of GNSS positioning. To address this problem, we propose a reliable NLOS error identification method based on the Light Gradient Boosting Machine (LightGBM), which is driven by multiple features of GNSS signals. The sample data are first labeled using a fisheye camera to classify the signals from visible satellites as Line-of-Sight (LOS) or NLOS signals. We then analyzed the sample data to determine the correlation among multiple features, such as the signal-to-noise ratio, elevation angle, pseudorange consistency, phase consistency, Code Minus Carrier, and Multi-Path combined observations. Finally, we introduce the LightGBM model to establish an effective correlation between signal features and satellite visibility and adopt a multifeature-driven scheme to achieve reliable identification of NLOSs. The test results show that the proposed method is superior to other methods such as Extreme Gradient Boosting (XGBoost), in terms of accuracy and usability. The model demonstrates a potential classification accuracy of approximately 90% with minimal time consumption. Furthermore, the Standard Point Positioning results after excluding NLOSs show the Root Mean Squares are improved by 47.82%, 56.68%, and 36.68% in the east, north, and up directions, respectively, and the overall positioning performance is significantly improved.
- Published
- 2024
- Full Text
- View/download PDF
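To make the classification pipeline described in entries 3 and 4 concrete, here is a minimal sketch of a multi-feature LOS/NLOS classifier built with LightGBM. The feature list follows the abstract; the data and labels are synthetic stand-ins (the paper labels its samples with a fisheye camera), so treat this as an illustration rather than the authors' implementation.

```python
# Synthetic stand-in for the multi-feature LOS/NLOS classifier of entries 3-4.
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(45, 5, n),    # signal-to-noise ratio (dB-Hz)
    rng.uniform(5, 90, n),   # elevation angle (deg)
    rng.normal(0, 1, n),     # pseudorange consistency
    rng.normal(0, 1, n),     # phase consistency
    rng.normal(0, 1, n),     # code-minus-carrier
    rng.normal(0, 1, n),     # multipath combination
])
# Toy rule standing in for the fisheye-camera labels: weak, low-elevation
# signals are marked NLOS (1), the rest LOS (0).
y = ((X[:, 0] < 43) & (X[:, 1] < 30)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X_tr, y_tr)
print("holdout accuracy:", clf.score(X_te, y_te))
```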
4. A reliable NLOS error identification method based on LightGBM driven by multiple features of GNSS signals.
- Author
- Zhang, Xiaohong, Wang, Xinyu, Liu, Wanke, Tao, Xianlu, Gu, Yupeng, Jia, Hailu, and Zhang, Chuanming
- Subjects
- GLOBAL Positioning System, SIGNAL-to-noise ratio, CAMERAS
- Abstract
In complicated urban environments, Global Navigation Satellite System (GNSS) signals are frequently affected by building reflection or refraction, resulting in Non-Line-of-Sight (NLOS) errors. In severe cases, NLOS errors can cause a ranging error of hundreds of meters, which has a substantial impact on the precision and dependability of GNSS positioning. To address this problem, we propose a reliable NLOS error identification method based on the Light Gradient Boosting Machine (LightGBM), which is driven by multiple features of GNSS signals. The sample data are first labeled using a fisheye camera to classify the signals from visible satellites as Line-of-Sight (LOS) or NLOS signals. We then analyzed the sample data to determine the correlation among multiple features, such as the signal-to-noise ratio, elevation angle, pseudorange consistency, phase consistency, Code Minus Carrier, and Multi-Path combined observations. Finally, we introduce the LightGBM model to establish an effective correlation between signal features and satellite visibility and adopt a multifeature-driven scheme to achieve reliable identification of NLOSs. The test results show that the proposed method is superior to other methods such as Extreme Gradient Boosting (XGBoost), in terms of accuracy and usability. The model demonstrates a potential classification accuracy of approximately 90% with minimal time consumption. Furthermore, the Standard Point Positioning results after excluding NLOSs show the Root Mean Squares are improved by 47.82%, 56.68%, and 36.68% in the east, north, and up directions, respectively, and the overall positioning performance is significantly improved. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Neural Radiance Fields for Fisheye Driving Scenes Using Edge-Aware Integrated Depth Supervision.
- Author
- Choi, Jiho and Lee, Sang Jun
- Subjects
- PINHOLE cameras, RADIANCE, CAMERAS, LIDAR, DETECTORS
- Abstract
Neural radiance fields (NeRF) have become an effective method for encoding scenes into neural representations, allowing for the synthesis of photorealistic renderings of unseen viewpoints from given input images. However, the applicability of traditional NeRF is significantly limited by its assumption that images are captured for object-centric scenes with a pinhole camera. Expanding these boundaries, we focus on driving scenarios using a fisheye camera, which offers the advantage of capturing visual information from a wide field of view. To address the challenges due to the unbounded and distorted characteristics of fisheye images, we propose an edge-aware integration loss function. This approach leverages sparse LiDAR projections and dense depth maps estimated from a learning-based depth model. The proposed algorithm assigns larger weights to neighboring points that have depth values similar to the sensor data. Experiments were conducted on the KITTI-360 and JBNU-Depth360 datasets, which are public and real-world datasets of driving scenarios using fisheye cameras. Experimental results demonstrated that the proposed method is effective in synthesizing novel view images, outperforming existing approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
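The abstract of entry 5 states only that neighboring points with depth values similar to the sensor data receive larger weights; the sketch below is one hypothetical PyTorch rendering of that idea. The Gaussian similarity weighting is an assumption, not the paper's formulation.

```python
import torch

def edge_aware_depth_loss(pred_depth, lidar_depth, neighbor_pred, sigma=0.5):
    """pred_depth, lidar_depth: (N,) depths at sparse LiDAR rays;
    neighbor_pred: (N, K) predicted depths of K neighbouring rays."""
    # Larger weight for neighbours whose estimated depth agrees with the
    # LiDAR depth; neighbours across a depth edge are down-weighted.
    w = torch.exp(-(neighbor_pred - lidar_depth[:, None]) ** 2 / (2 * sigma ** 2))
    w = w / (w.sum(dim=1, keepdim=True) + 1e-8)
    neigh_term = (w * (neighbor_pred - lidar_depth[:, None]) ** 2).sum(dim=1)
    direct_term = (pred_depth - lidar_depth) ** 2
    return (direct_term + neigh_term).mean()
```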
6. GAM360: sensing gaze activities of multi-persons in 360 degrees
- Author
- Cai, Zhuojiang, Wang, Haofei, Niu, Yuhao, and Lu, Feng
- Published
- 2025
- Full Text
- View/download PDF
7. A Leader-follower formation control of mobile robots by position-based visual servo method using fisheye camera
- Author
- Shinsuke Oh-hara and Atsushi Fujimori
- Subjects
- Fisheye camera, Formation control, Disturbance observer, Position-based method, Technology, Mechanical engineering and machinery, TJ1-1570, Control engineering systems. Automatic machinery (General), TJ212-225, Machine design and drawing, TJ227-240, Technology (General), T1-995, Industrial engineering. Management engineering, T55.4-60.8, Automation, T59.5, Information technology, T58.5-58.64
- Abstract
This paper presents a leader-follower formation control of multiple mobile robots by a position-based method using a fisheye camera. A fisheye camera has a wide field of view and recognizes a wide range of objects. In this paper, the fisheye camera is first modeled on spherical coordinates, and then a position estimation technique is proposed using an AR marker based on the spherical model. This paper furthermore presents a method for estimating the velocity of a leader robot based on a disturbance observer using the obtained position information. The proposed techniques are combined with a formation control based on the virtual structure. In this paper, the formation controller and velocity estimator can be designed independently, and the stability analysis of the total system is performed using the Lyapunov theorem. The effectiveness of the proposed method is demonstrated by simulation and experiments using two real mobile robots.
- Published
- 2023
- Full Text
- View/download PDF
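Entry 7 models the fisheye camera on spherical coordinates. As a rough illustration, the sketch below back-projects a pixel to a bearing vector on the unit sphere under an assumed equidistant projection (r = f·θ); the paper's calibrated model may differ.

```python
import numpy as np

def pixel_to_bearing(u, v, cx, cy, f):
    """Back-project pixel (u, v) to a unit bearing vector, equidistant model."""
    du, dv = u - cx, v - cy
    r = np.hypot(du, dv)        # radial distance from the principal point
    theta = r / f               # equidistant model: angle off the optical axis
    phi = np.arctan2(dv, du)    # azimuth in the image plane
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])  # unit vector toward the scene point
```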
8. Omni-OTPE: Omnidirectional Optimal Real-Time Ground Target Position Estimation System for Moving Lightweight Unmanned Aerial Vehicle.
- Author
- Ding, Yi, Che, Jiaxing, Zhou, Zhiming, and Bian, Jingyuan
- Subjects
- OMNIRANGE system, AERIAL surveillance, RECONNAISSANCE operations, POINT cloud, IMAGE reconstruction algorithms, LIDAR, DESIGN software
- Abstract
Ground target detection and positioning systems based on lightweight unmanned aerial vehicles (UAVs) are increasing in value for aerial reconnaissance and surveillance. However, the current method for estimating the target's position is limited by the field of view angle, rendering it challenging to fulfill the demands of a real-time omnidirectional reconnaissance operation. To address this issue, we propose an Omnidirectional Optimal Real-Time Ground Target Position Estimation System (Omni-OTPE) that utilizes a fisheye camera and LiDAR sensors. The object of interest is first identified in the fisheye image, and the image-based target position is then obtained by solving the fisheye projection model and applying a target center extraction algorithm based on the detected edge information. Next, the LiDAR's real-time point cloud data are filtered based on position–direction constraints using the image-based target position information. This step allows for the determination of point cloud clusters that are relevant to the characterization of the target's position information. Finally, the target positions obtained from the two methods are fused using an optimal Kalman fuser to obtain the optimal target position information. In order to evaluate the positioning accuracy, we designed a hardware and software setup, mounted on a lightweight UAV, and tested it in a real scenario. The experimental results validate that our method exhibits significant advantages over traditional methods and achieves real-time, high-performance ground target position estimation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
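As a stand-in for the "optimal Kalman fuser" of entry 8, the following shows a single static fusion step that combines the camera-based and LiDAR-based target positions by inverse-covariance weighting; the paper's fuser is presumably recursive, so this is only the core idea.

```python
import numpy as np

def fuse(x_cam, P_cam, x_lidar, P_lidar):
    """x_*: (3,) position estimates; P_*: (3, 3) covariances."""
    I_cam, I_lidar = np.linalg.inv(P_cam), np.linalg.inv(P_lidar)
    P = np.linalg.inv(I_cam + I_lidar)          # fused covariance
    x = P @ (I_cam @ x_cam + I_lidar @ x_lidar)  # precision-weighted mean
    return x, P
```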
9. LVIF: a lightweight tightly coupled stereo-inertial SLAM with fisheye camera.
- Author
- Zhu, Hongwei, Zhang, Guobao, Ye, Zhiqi, and Zhou, Hongyi
- Subjects
- STEREO vision (Computer science), CAMERAS, UNITS of measurement, MONOCULARS
- Abstract
To enhance the real-time performance and reduce CPU usage in feature-based visual SLAM, this paper introduces a lightweight tightly coupled stereo-inertial SLAM with fisheye cameras, incorporating several key innovations. First, the stereo-fisheye camera is treated as two independent monocular cameras, and the SE(3) transformation is computed between them to minimize the CPU burden during stereo feature matching and eliminate the need for camera rectification. Another important innovation is the application of maximum-a-posteriori (MAP) estimation for the inertial measurement unit (IMU), which effectively reduces inertial bias and noise in a short time frame. By optimizing the system's parameters, the constant-velocity model is replaced from the beginning, resulting in improved tracking efficiency. Furthermore, the system incorporates the inertial data in the loop closure thread. The IMU data are employed to determine the translation direction relative to world coordinates and utilized as a necessary condition for loop detection. Experimental results demonstrate that the proposed system achieves superior real-time performance and lower CPU usage compared to the majority of other SLAM systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
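Entries 9 and 10 treat the stereo-fisheye rig as two independent monocular cameras linked by a fixed SE(3) transform. A minimal sketch of the resulting prediction step follows; the names and frame conventions are assumptions.

```python
import numpy as np

def predict_in_right(p_left, R_rl, t_rl):
    """Map a 3D landmark from the left-camera frame into the right-camera
    frame via the fixed left-to-right SE(3) extrinsics (R_rl, t_rl)."""
    return R_rl @ p_left + t_rl

# The predicted point would then be projected through the right camera's own
# fisheye model to guide feature matching, avoiding stereo rectification.
```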
10. LVIF: a lightweight tightly coupled stereo-inertial SLAM with fisheye camera
- Author
- Hongwei Zhu, Guobao Zhang, Zhiqi Ye, and Hongyi Zhou
- Subjects
- SLAM, State estimation, Graph optimization, Fisheye camera, Electronic computers. Computer science, QA75.5-76.95, Information technology, T58.5-58.64
- Abstract
To enhance the real-time performance and reduce CPU usage in feature-based visual SLAM, this paper introduces a lightweight tightly coupled stereo-inertial SLAM with fisheye cameras, incorporating several key innovations. First, the stereo-fisheye camera is treated as two independent monocular cameras, and the SE(3) transformation is computed between them to minimize the CPU burden during stereo feature matching and eliminate the need for camera rectification. Another important innovation is the application of maximum-a-posteriori (MAP) estimation for the inertial measurement unit (IMU), which effectively reduces inertial bias and noise in a short time frame. By optimizing the system’s parameters, the constant-velocity model is replaced from the beginning, resulting in improved tracking efficiency. Furthermore, the system incorporates the inertial data in the loop closure thread. The IMU data are employed to determine the translation direction relative to world coordinates and utilized as a necessary condition for loop detection. Experimental results demonstrate that the proposed system achieves superior real-time performance and lower CPU usage compared to the majority of other SLAM systems.
- Published
- 2023
- Full Text
- View/download PDF
11. A Leader-follower formation control of mobile robots by position-based visual servo method using fisheye camera.
- Author
- Oh-hara, Shinsuke and Fujimori, Atsushi
- Subjects
- MOBILE robots, ROBOT control systems, CAMERAS, SPHERICAL coordinates
- Abstract
This paper presents a leader-follower formation control of multiple mobile robots by a position-based method using a fisheye camera. A fisheye camera has a wide field of view and recognizes a wide range of objects. In this paper, the fisheye camera is first modeled on spherical coordinates, and then a position estimation technique is proposed using an AR marker based on the spherical model. This paper furthermore presents a method for estimating the velocity of a leader robot based on a disturbance observer using the obtained position information. The proposed techniques are combined with a formation control based on the virtual structure. In this paper, the formation controller and velocity estimator can be designed independently, and the stability analysis of the total system is performed using the Lyapunov theorem. The effectiveness of the proposed method is demonstrated by simulation and experiments using two real mobile robots. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. Monocular Depth Estimation from a Fisheye Camera Based on Knowledge Distillation.
- Author
- Son, Eunjin, Choi, Jiho, Song, Jimin, Jin, Yongsik, and Lee, Sang Jun
- Subjects
- PINHOLE cameras, MONOCULARS, CAMERAS, LASER based sensors, COLLISION broadening, POINT cloud, INFORMATION networks
- Abstract
Monocular depth estimation is a task aimed at predicting pixel-level distances from a single RGB image. This task holds significance in various applications including autonomous driving and robotics. In particular, the recognition of surrounding environments is important to avoid collisions during autonomous parking. Fisheye cameras are adequate to acquire visual information from a wide field of view, reducing blind spots and preventing potential collisions. While there have been increasing demands for fisheye cameras in visual-recognition systems, existing research on depth estimation has primarily focused on pinhole camera images. Moreover, depth estimation from fisheye images poses additional challenges due to strong distortion and the lack of public datasets. In this work, we propose a novel underground parking lot dataset called JBNU-Depth360, which consists of fisheye camera images and their corresponding LiDAR projections. Our proposed dataset was composed of 4221 pairs of fisheye images and their corresponding LiDAR point clouds, which were obtained from six driving sequences. Furthermore, we employed a knowledge-distillation technique to improve the performance of the state-of-the-art depth-estimation models. The teacher–student learning framework allows the neural network to leverage the information in dense depth predictions and sparse LiDAR projections. Experiments were conducted on the KITTI-360 and JBNU-Depth360 datasets for analyzing the performance of existing depth-estimation models on fisheye camera images. By utilizing the self-distillation technique, the AbsRel and SILog error metrics were reduced by 1.81% and 1.55% on the JBNU-Depth360 dataset. The experimental results demonstrated that the self-distillation technique is beneficial to improve the performance of depth-estimation models. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
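A hedged sketch of the teacher-student idea in entry 12: the student depth network is supervised by sparse LiDAR where available and by the teacher's dense prediction elsewhere. The loss form and weight below are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def distill_depth_loss(student, teacher, lidar, lidar_mask, alpha=0.5):
    """student, teacher, lidar: (B, H, W) depth maps; lidar_mask: (B, H, W) bool."""
    sup = torch.abs(student - lidar)[lidar_mask].mean()   # sparse LiDAR supervision
    soft = torch.abs(student - teacher.detach()).mean()   # dense teacher signal
    return sup + alpha * soft
```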
13. Complete solution for vehicle Re-ID in surround-view camera system.
- Author
- Wu, Zizhang, Xu, Tianhao, Wang, Fan, Wang, Xiaoquan, and Song, Jing
- Abstract
Vehicle re-identification (Re-ID) is a critical component of the autonomous driving perception system, and research in this area has accelerated in recent years. However, there is yet no perfect solution to the vehicle re-identification issue associated with the car's surround-view camera system. Our analysis identifies two significant issues in the aforementioned scenario: (1) It is difficult to identify the same vehicle in many picture frames due to the unique construction of the fisheye camera. (2) The appearance of the same vehicle when seen via the surround vision system's several cameras is rather different. To overcome these issues, we suggest an integrative vehicle Re-ID solution method. On the one hand, we provide a technique for determining the consistency of the tracking box drift with respect to the target. On the other hand, we combine a Re-ID network based on the attention mechanism with spatial limitations to increase performance in situations involving multiple cameras. Finally, our approach combines state-of-the-art accuracy with real-time performance. We will soon make the source code and annotated fisheye dataset available. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. Orthorectification of Fisheye Image under Equidistant Projection Model.
- Author
- Zhou, Guoqing, Li, Huanxu, Song, Ruhao, Wang, Qingyang, Xu, Jiasheng, and Song, Bo
- Subjects
- STANDARD deviations, CAMERA calibration, CORRECTION factors, BACKLUND transformations, DIGITAL maps
- Abstract
The fisheye camera, with its large viewing angle, can acquire more spatial information in one shot and is widely used in many fields. However, a fisheye image contains large distortion, so many scholars have investigated the accuracy of its orthorectification, i.e., the generation of a digital orthophoto map (DOM). This paper presents an orthorectification method, which first determines the transformation relationship between the fisheye image points and the perspective projection points according to the equidistant projection model, i.e., determines the spherical distortion of the fisheye image; it then introduces this transformation relationship and the fisheye camera distortion model into the collinearity equation to derive the fisheye image orthorectification model. To verify the proposed method, a high-accuracy 3D calibration field for the fisheye camera is established to obtain the interior and exterior orientation parameters (IOPs/EOPs) and distortion parameters of the fisheye lens. Three experiments are used to verify the proposed orthorectification method. The root mean square errors (RMSEs) of the three DOMs average 0.003 m, 0.29 m, and 0.61 m, respectively. The experimental results demonstrate that the proposed method is correct and effective. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
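The geometric core behind entry 14's orthorectification is the equidistant model: a fisheye radius r_f = f·θ corresponds to a perspective radius r_p = f·tan(θ). A minimal sketch of that radial remapping, with the paper's lens distortion terms omitted:

```python
import numpy as np

def fisheye_to_perspective_radius(r_f, f):
    theta = r_f / f            # incidence angle from the equidistant model
    return f * np.tan(theta)   # radius of the same ray in a pinhole image
```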
15. Near-Field Perception for Low-Speed Vehicle Automation Using Surround-View Fisheye Cameras.
- Author
- Eising, Ciaran, Horgan, Jonathan, and Yogamani, Senthil
- Abstract
Cameras are the primary sensor in automated driving systems. They provide high information density and are optimal for detecting road infrastructure cues laid out for human vision. Surround-view camera systems typically comprise four fisheye cameras with a 190°+ field of view covering the entire 360° around the vehicle, focused on near-field sensing. They are the principal sensors for low-speed, high-accuracy, and close-range sensing applications, such as automated parking, traffic jam assistance, and low-speed emergency braking. In this work, we provide a detailed survey of such vision systems, setting up the survey in the context of an architecture that can be decomposed into four modular components, namely Recognition, Reconstruction, Relocalization, and Reorganization. We jointly call this the 4R Architecture. We discuss how each component accomplishes a specific aspect and provide a positional argument that they can be synergized to form a complete perception system for low-speed automation. We support this argument by presenting results from previous works and by presenting architecture proposals for such a system. Qualitative results are presented in the video at https://youtu.be/ae8bCOF77uY. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
16. Localization of Aerial Robot Based on Fisheye Cameras in a Virtual Lab
- Author
- MohammadAli Amiri Atashgah and Seyyed Mohammad-Jafar Tabib
- Subjects
- virtual environment, visual navigation, fisheye calibration, fisheye camera, perspective-n-point, aerial robot, Technology, Astronomy, QB1-991
- Abstract
This research presents the localization of an aerial robot using fisheye cameras on walls in a simulation environment. The virtual testbed in this work is a quadrotor that is simulated in MATLAB Simulink. Subsequently, the simulation outputs, as flight records, are used in a virtual lab developed in 3DsMAX. Then, the virtual fisheye cameras (here, two) are installed at different points on the walls, and the related images from the cameras are received offline. The gathered images are processed with OpenCV in a C++ environment. For external calibration, each fisheye camera takes an image of a known pattern consisting of lights placed in the virtual lab. We execute the Perspective-n-Point method on the images to obtain the precise direction/position of each camera. The aerial robot is then localized by computing the nearest point between two lines of sight. In brief, the outcomes exhibit an accuracy of 4 cm in the center of the virtual room.
- Published
- 2021
- Full Text
- View/download PDF
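Entry 16 localizes the robot at the nearest point between two lines of sight. A self-contained sketch of that step, where p1, p2 are the camera centres and d1, d2 the unit ray directions obtained from the PnP/fisheye step:

```python
import numpy as np

def closest_point_between_rays(p1, d1, p2, d2):
    """Return the midpoint of the shortest segment between two 3D rays."""
    # Solve for t1, t2 minimizing |(p1 + t1*d1) - (p2 + t2*d2)|.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    q1, q2 = p1 + t1 * d1, p2 + t2 * d2
    return 0.5 * (q1 + q2)
```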
17. Neural-Network-Based Model-Free Calibration Method for Stereo Fisheye Camera
- Author
- Yuwei Cao, Hui Wang, Han Zhao, and Xu Yang
- Subjects
- fisheye camera, stereo calibration, phase unwrapping, neural-network, large field of view, Biotechnology, TP248.13-248.65
- Abstract
The fisheye camera has a field of view (FOV) of over 180°, which has advantages in the fields of medicine and precision measurement. Ordinary pinhole models have difficulty in fitting the severe barrel distortion of the fisheye camera. Therefore, it is necessary to apply a nonlinear geometric model to model this distortion in measurement applications, but the process is computationally complex. To solve this problem, this paper proposes a model-free stereo calibration method for binocular fisheye cameras based on a neural network. The neural network can implicitly describe the nonlinear mapping relationship between image and spatial coordinates in the scene. We use a feature extraction method based on the three-step phase-shift method. Compared with conventional stereo calibration of fisheye cameras, our method does not require image correction and matching. The spatial coordinates of the points in the common field of view of the binocular fisheye camera can all be calculated by the generalized fitting capability of the neural network. Our method preserves the advantage of the broad field of view of the fisheye camera. The experimental results show that our method is more suitable for fisheye cameras with significant distortion.
- Published
- 2022
- Full Text
- View/download PDF
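In the spirit of entry 17's model-free calibration, a small network can learn the mapping from a binocular pixel pair directly to metric 3D coordinates, sidestepping an explicit distortion model. The architecture below is purely illustrative; the paper's network and phase-shift feature extraction are not reproduced here.

```python
import torch
import torch.nn as nn

# Input: matched pixel pair (u_l, v_l, u_r, v_r); output: scene point (X, Y, Z).
net = nn.Sequential(
    nn.Linear(4, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 3),
)
# Training pairs would come from features on a calibration target, e.g.:
# loss = nn.functional.mse_loss(net(pixels), xyz_targets)
```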
18. OmniPD: One-Step Person Detection in Top-View Omnidirectional Indoor Scenes
- Author
- Yu Jingrui, Seidel Roman, and Hirtz Gangolf
- Subjects
- convolutional neural networks (cnns), transfer learning, omnidirectional images, fisheye camera, object detection, active assisted living (aal), Medicine
- Abstract
We propose a one-step person detector for top-view omnidirectional indoor scenes based on convolutional neural networks (CNNs). While state-of-the-art person detectors reach competitive results on perspective images, the lack of CNN architectures and of training data that follow the distortion of omnidirectional images makes current approaches inapplicable to our data. The method predicts bounding boxes of multiple persons directly in omnidirectional images without perspective transformation, which reduces the overhead of pre- and post-processing and enables real-time performance. The basic idea is to utilize transfer learning to fine-tune CNNs trained on perspective images, with data augmentation techniques, for detection in omnidirectional images. We fine-tune two variants of Single Shot MultiBox Detectors (SSDs). The first one uses MobileNet v1 FPN as feature extractor (moSSD). The second one uses ResNet50 v1 FPN (resSSD). Both models are pre-trained on the Microsoft Common Objects in Context (COCO) dataset. We fine-tune both models on the PASCAL VOC07 and VOC12 datasets, specifically on the class person. Random 90-degree rotation and random vertical flipping are used for data augmentation, in addition to the methods proposed for the original SSD. We reach an average precision (AP) of 67.3% with moSSD and 74.9% with resSSD on the evaluation dataset. To enhance the fine-tuning process, we add a subset of the HDA Person dataset and a subset of the PIROPO database and reduce the number of perspective images to PASCAL VOC07. The AP rises to 83.2% for moSSD and 86.3% for resSSD, respectively. The average inference speed is 28 ms per image for moSSD and 38 ms per image for resSSD using an Nvidia Quadro P6000. Our method is applicable to other CNN-based object detectors and can potentially generalize to detecting other objects in omnidirectional images.
- Published
- 2019
- Full Text
- View/download PDF
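The augmentations named in entry 18 (random 90-degree rotation and random vertical flip, both of which preserve the top-view fisheye distortion pattern) can be sketched as follows; bounding boxes would need the matching coordinate transform.

```python
import numpy as np

def augment(img, rng):
    """img: (H, W, C) top-view omnidirectional image."""
    img = np.rot90(img, k=rng.integers(0, 4))  # random multiple of 90 degrees
    if rng.random() < 0.5:
        img = img[::-1]                        # random vertical flip
    return img

rng = np.random.default_rng(0)
sample = augment(np.zeros((640, 640, 3)), rng)
```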
19. Efficient Face Detection in the Fisheye Image Domain.
- Author
- Yang, Cheng-Yun and Chen, Homer H.
- Subjects
- DETECTORS, FEATURE extraction
- Abstract
Significant progress has been made for face detection from normal images in recent years; however, accurate and fast face detection from fisheye images remains a challenging issue because of serious fisheye distortion in the peripheral region of the image. To improve face detection accuracy, we propose a lightweight location-aware network to distinguish the peripheral region from the central region in the feature learning stage. To match the face detector, the shape and scale of the anchor (bounding box) are made location dependent. The overall face detection system performs directly in the fisheye image domain without rectification and calibration and hence is agnostic of the fisheye projection parameters. Experiments on Wider-360 and real-world fisheye images using a single CPU core show that our method is superior to the state-of-the-art real-time face detector RFB Net. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
20. Deep Face Rectification for 360° Dual-Fisheye Cameras.
- Author
- Li, Yi-Hsin, Lo, I-Chan, and Chen, Homer H.
- Subjects
- HUMAN facial recognition software, CAMERAS, IMAGE reconstruction, FEATURE extraction, IMAGE recognition (Computer vision)
- Abstract
Rectilinear face recognition models suffer from severe performance degradation when applied to fisheye images captured by 360° back-to-back dual fisheye cameras. We propose a novel face rectification method to combat the effect of fisheye image distortion on face recognition. The method consists of a classification network and a restoration network specifically designed to handle the non-linear property of fisheye projection. The classification network classifies an input fisheye image according to its distortion level. The restoration network takes a distorted image as input and restores the rectilinear geometric structure of the face. The performance of the proposed method is tested on an end-to-end face recognition system constructed by integrating the proposed rectification method with a conventional rectilinear face recognition system. The face verification accuracy of the integrated system is 99.18% when tested on images in the synthetic Labeled Faces in the Wild (LFW) dataset and 95.70% for images in a real image dataset, resulting in an average accuracy improvement of 6.57% over the conventional face recognition system. For face identification, the average improvement over the conventional face recognition system is 4.51%. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
21. Human position and head direction tracking in fisheye camera using randomized ferns and fisheye histograms of oriented gradients.
- Author
- Srisamosorn, Veerachart, Kuwahara, Noriaki, Yamashita, Atsushi, Ogata, Taiki, Shirafuji, Shouhei, and Ota, Jun
- Subjects
- HUMAN body, FERNS, HEAD, VIDEO surveillance, HISTOGRAMS
- Abstract
This paper proposes a system for tracking human position and head direction using a fisheye camera mounted on the ceiling. This is believed to be the first system to estimate head direction from a ceiling-mounted fisheye camera. A fisheye histograms of oriented gradients descriptor is developed as a substitute for the histograms of oriented gradients descriptor, which has been widely used for human detection in perspective cameras. The human body and head are detected by the proposed descriptor and tracked to extract the head area for direction estimation. Direction estimation using randomized ferns is adapted to work with fisheye images by using the proposed descriptor, guided by the direction of movement. In experiments on an available dataset and a new dataset with ground truth, the direction can be estimated with an average error below 40°, with head position error within half of the head size. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
22. Panoramic SLAM from a multiple fisheye camera rig.
- Author
- Ji, Shunping, Qin, Zijie, Shan, Jie, and Lu, Meng
- Subjects
- PANORAMIC cameras, CAMERA calibration, ERROR functions, BACK propagation, SOURCE code, CAMERAS, PANORAMIC radiography
- Abstract
This paper presents a feature-based simultaneous localization and mapping (SLAM) system for panoramic image sequences obtained from a multiple fisheye camera rig in a wide baseline mobile mapping system (MMS). First, the developed fisheye camera calibration method combines an equidistance projection model and trigonometric polynomial to achieve high-accuracy calibration from the fisheye camera to an equivalent ideal frame camera, which warrants an accurate transform from the fisheye images to a corresponding panoramic image. Second, we developed a panoramic camera model, a corresponding bundle adjustment with a specific back-propagation error function, and a linear pose initialization algorithm. Third, the implemented feature-based SLAM pipeline consists of several specific strategies and algorithms for initialization, feature matching, frame tracking, and loop closing to overcome the difficulties of tracking wide baseline panoramic image sequences. We conducted experiments on large-scale MMS datasets of more than 15 km of trajectories and 14,000 panoramic images, as well as small-scale public video datasets. Our results show that the developed panoramic SLAM system, PAN-SLAM, can achieve fully automatic camera localization and sparse map reconstruction in both small-scale indoor and large-scale outdoor environments, including challenging scenes (e.g., a dark tunnel), without the aid of any other sensors. The localization accuracy, measured by the absolute trajectory error (ATE), was close to the high-accuracy GNSS/INS reference, at 0.1 m. PAN-SLAM also outperformed several feature-based fisheye and monocular SLAM systems in robustness across various environments. The system could be considered a robust complementary solution and an alternative to expensive commercial navigation systems, especially in urban environments where signal obstruction and multipath interference are common. Source code and a demo are available at http://study.rsgis.whu.edu.cn/pages/download/. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
23. OmniPD: One-Step Person Detection in Top-View Omnidirectional Indoor Scenes.
- Author
- Jingrui Yu, Seidel, Roman, and Hirtz, Gangolf
- Published
- 2019
- Full Text
- View/download PDF
24. On the design and implementation of a dual fisheye camera-based surveillance vision system.
- Author
- Al-Harasis, Raghad and Sababha, Belal H.
- Subjects
- DIGITAL image processing, STEREO vision (Computer science), INDUSTRY 4.0, COMPUTER vision, IMAGE processing, LAPTOP computers, STEREOPHONIC sound systems
- Abstract
Image processing and computer vision have been a focus of researchers for decades in various application domains. This research continues to grow with the rise of Artificial Intelligence in the fourth industrial revolution. One of the important digital image processing applications is producing panorama images. The wide view a panorama image provides can be used in a wide range of applications, including surveillance and remote robot operations. A panorama image is a combination of several individual natural-looking images into a composite one providing a wide field of view that may reach 360 degrees horizontally without any distortion. Wide-angle lenses provide a wide field of view, but using them alone does not necessarily make a panorama image. In this work, the design and implementation of a wide-angle stereo vision system that suits many real-time applications is proposed. The system makes use of two wide-angle fisheye cameras, each covering around a 170-degree field of view. The horizontal angle between the cameras is 140 degrees. The cameras acquire the instantaneous overlapping images continuously and transmit them to a base station via a communication link. The base station calibrates, corrects, correlates, and stitches the non-overlapping corrected images into a composite one. The resulting final image covers a 310-degree field of view. The system is of low computational complexity compared with previously implemented systems. It is tested on a laptop and on a standalone embedded computing device. The processing speed for the panorama image stitching, including the correction of the fisheye barrel distortion, is 11 fps on the laptop computer and 6 fps on the embedded computer. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
25. Motion Estimation for Fisheye Video With an Application to Temporal Resolution Enhancement.
- Author
- Eichenseer, Andrea, Batz, Michel, and Kaup, Andre
- Subjects
- TRANSLATIONAL motion, MOTION, VIDEO processing, PINHOLE cameras, IMAGE processing, VIDEOS
- Abstract
Surveying wide areas with only one camera is a typical scenario in surveillance and automotive applications. Ultra wide-angle fisheye cameras employed to that end produce video data with characteristics that differ significantly from conventional rectilinear imagery as obtained by perspective pinhole cameras. Those characteristics are not considered in typical image and video processing algorithms such as motion estimation, where translation is assumed to be the predominant kind of motion. This contribution introduces an adapted technique for use in block-based motion estimation that takes into account the projection function of fisheye cameras and thus compensates for the non-perspective properties of fisheye videos. By including suitable projections, the translational motion model that would otherwise only hold for perspective material is exploited, leading to improved motion estimation results without altering the source material. In addition, we discuss extensions that allow for a better prediction of the peripheral image areas, where motion estimation falters due to spatial constraints, and further include calibration information to account for lens properties deviating from the theoretical function. Simulations and experiments are conducted on synthetic as well as real-world fisheye video sequences that are part of a data set created in the context of this paper. Average synthetic and real-world gains of 1.45 and 1.51 dB in luminance PSNR are achieved compared against conventional block matching. Furthermore, the proposed fisheye motion estimation method is successfully applied to motion-compensated temporal resolution enhancement, where average gains amount to 0.79 and 0.76 dB. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
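The gist of entry 25's adaptation: map fisheye coordinates into a perspective domain where purely translational block matching is valid, apply the candidate displacement, and map back. A sketch under an assumed equidistant/pinhole model pair; the paper additionally uses calibration data for real lenses.

```python
import numpy as np

def fisheye_to_perspective(pt, c, f):
    d = pt - c
    r_f = np.hypot(d[0], d[1])
    if r_f == 0.0:
        return pt.copy()
    r_p = f * np.tan(r_f / f)         # r = f*theta  ->  r = f*tan(theta)
    return c + d * (r_p / r_f)

def perspective_to_fisheye(pt, c, f):
    d = pt - c
    r_p = np.hypot(d[0], d[1])
    if r_p == 0.0:
        return pt.copy()
    r_f = f * np.arctan(r_p / f)
    return c + d * (r_f / r_p)

# A candidate motion vector mv is applied in the perspective domain:
c, f = np.array([512.0, 512.0]), 300.0
p = np.array([700.0, 400.0])
mv = np.array([4.0, 1.0])
q = perspective_to_fisheye(fisheye_to_perspective(p, c, f) + mv, c, f)
```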
26. Park marking-based vehicle self-localization with a fisheye topview system.
- Author
- Houben, Sebastian, Neuhausen, Marcel, Michael, Matthias, Kesten, Robert, Mickler, Florian, and Schuller, Florian
- Abstract
Accurately self-localizing a vehicle is of high importance, as it allows to robustify nearly all modern driver assistance functionality, e.g., lane keeping and coordinated autonomous driving maneuvers. We examine vehicle self-localization relying only on video sensors, in particular, a system of four fisheye cameras providing a view surrounding the car, a setup currently growing popular in upper-class cars. The presented work aims at an autonomous parking scenario. The method is based on park markings as orientation marks, since they can be found in nearly every parking deck and require only little additional preparation. Our contribution is twofold: (1) we present a new real-time capable image processing pipeline for topview systems extracting park markings and show how to obtain a reliable and accurate ego pose and ego motion estimation given a coarse pose as a starting point. (2) The aptitude of this often neglected sensor array for vehicle self-localization is demonstrated. Experimental evaluation yields a precision of 0.15 ± 0.18 m and 2.01° ± 1.91°. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
27. FSD-BRIEF: A Distorted BRIEF Descriptor for Fisheye Image Based on Spherical Perspective Model
- Author
- Yutong Zhang, Jianmei Song, Yan Ding, Yating Yuan, and Hua-Liang Wei
- Subjects
- fisheye camera, spherical perspective model, distorted BRIEF descriptor, feature point attitude matrix, Chemical technology, TP1-1185
- Abstract
Fisheye images with a far larger Field of View (FOV) have severe radial distortion, with the result that the associated image feature matching process cannot achieve the best performance if the traditional feature descriptors are used. To address this challenge, this paper reports a novel distorted Binary Robust Independent Elementary Feature (BRIEF) descriptor for fisheye images based on a spherical perspective model. Firstly, the 3D gray centroid of feature points is designed, and the position and direction of the feature points on the spherical image are described by a constructed feature point attitude matrix. Then, based on the attitude matrix of feature points, the coordinate mapping relationship between the BRIEF descriptor template and the fisheye image is established to realize the computation associated with the distorted BRIEF descriptor. Four experiments are provided to test and verify the invariance and matching performance of the proposed descriptor for a fisheye image. The experimental results show that the proposed descriptor works well for distortion invariance and can significantly improve the matching performance in fisheye images.
- Published
- 2021
- Full Text
- View/download PDF
28. Object detection and localization in 3D environment by fusing raw fisheye image and attitude data.
- Author
- Zhu, Jun, Zhu, Jiangcheng, Wan, Xudong, Wu, Chao, and Xu, Chao
- Subjects
- DATA fusion (Statistics), SOFTWARE architecture
- Abstract
Highlights: (1) use a single fisheye camera to cover a hemispherical FOV of an MAV; (2) implement object detection on original fisheye images; (3) propose a detector that is more accurate and faster than baselines on TX2; (4) fuse the fisheye model, detection results, attitude, and height to localize objects. In robotic systems, the fisheye camera can provide a large field of view (FOV). Since fisheye images are distorted, traditional restoring algorithms are usually needed; these are computationally heavy and introduce noise into the original data. In this paper, we propose a framework to detect objects in raw fisheye images without restoration, and then locate the objects in real-world coordinates by fusing attitude information. A deep neural network architecture based on MobileNet and a feature pyramid structure is designed to detect targets directly on raw fisheye images. Then, a target can be located based on the fisheye visual model and the attitude of the camera. Compared to traditional approaches, this approach has advantages in computational efficiency and accuracy. The approach is validated by experiments with a fisheye camera and an onboard computer on a micro-aerial vehicle (MAV). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
29. A ship attitude determination algorithm based on sea horizon observation.
- Author
- 蒲俊宇, 李崇辉, 郑勇, 龙俊宇, 詹银虎, and 陈张雷
- Subjects
- EULERIAN graphs, NAUTICAL astronomy, ALGORITHMS, HORIZON, INFORMATION storage & retrieval systems
- Published
- 2019
- Full Text
- View/download PDF
30. Astronomical Vessel Heading Determination based on Simultaneously Imaging the Moon and the Horizon.
- Author
- Pu, Jun-Yu, Li, Chong-Hui, Zheng, Yong, and Zhan, Yin-Hu
- Subjects
- HEMISPHERICAL photography, ELECTROMAGNETISM, MOON, GEOMAGNETISM, GLOBAL Positioning System, GYRO compass
- Abstract
Heading angle is a vital parameter in maintaining a vessel's track along a planned course and should be guaranteed in a stable and reliable way. An innovative method of heading determination based on a fisheye camera, which is almost totally unaffected by electromagnetism and geomagnetism, is proposed in this paper. In addition, unlike traditional astronomical methods, it also has a certain degree of adaptability to cloudy weather. Utilising the super wide Field Of View (FOV) of the camera, it is able to simultaneously image the Moon and the horizon. The Moon is treated as the observed celestial body and the horizon works as the horizontal datum. Two experiments were conducted at sea, successfully proving the feasibility of this method. The proposed heading determination system has the merits of automation, resistance to interference and could be miniaturised, making application viable. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
31. UAV Capability to Detect and Interpret Solar Radiation as a Potential Replacement Method to Hemispherical Photography.
- Author
- Abdollahnejad, Azadeh, Panagiotidis, Dimitrios, Surový, Peter, and Ulbrichová, Iva
- Subjects
- HEMISPHERICAL photography, SOLAR radiation, REGENERATION (Biology), DRONE aircraft, DATA encryption
- Abstract
Solar radiation is one of the most significant environmental factors that regulates the rate of photosynthesis, and consequently, growth. Light intensity in the forest can vary both spatially and temporally, so precise assessment of canopy and potential solar radiation can significantly influence the success of forest management actions, for example, the establishment of natural regeneration. In this case study, we investigated the possibilities and perspectives of close-range photogrammetric approaches for modeling the amount of potential direct and diffuse solar radiation during the growing seasons (spring-summer), by comparing the performance of low-cost Unmanned Aerial Vehicle (UAV) RGB imagery vs. Hemispherical Photography (HP). Characterization of the solar environment based on hemispherical photography has already been widely used in botany and ecology for a few decades, while the UAV method is relatively new. Also, we compared the importance of several components of potential solar irradiation and their impact on the regeneration of Pinus sylvestris L. For this purpose, a circular fisheye objective was used to obtain hemispherical images to assess sky openness and direct/diffuse photosynthetically active flux density under canopy average for the growing season. Concerning the UAV, a Canopy Height Model (CHM) was constructed based on Structure from Motion (SfM) algorithms using Photoscan professional. Different layers such as potential direct and diffuse radiation, direct duration, etc., were extracted from CHM using ArcGIS 10.3.1 (Esri: California, CA, USA). A zonal statistics tool was used in order to extract the digital data in tree positions and, subsequently, the correlation between potential solar radiation layers and the number of seedlings was evaluated. The results of this study showed that there is a high relation between the two used approaches (HP and UAV) with R² = 0.74. Finally, potential diffuse solar radiation derived from both methods had the highest significant relation (−8.06% bias) and highest impact in the modeling of pine regeneration. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
32. 3D visual perception for self-driving cars using a multi-camera system: Calibration, mapping, localization, and obstacle detection.
- Author
- Häne, Christian, Heng, Lionel, Lee, Gim Hee, Fraundorfer, Friedrich, Furgale, Paul, Sattler, Torsten, and Pollefeys, Marc
- Subjects
- DRIVERLESS cars, VISUAL perception, CAMERAS, LOCALIZATION problems (Robotics), IMAGE processing, HEMISPHERICAL photography
- Abstract
Cameras are a crucial exteroceptive sensor for self-driving cars as they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes such as visual navigation and obstacle detection. We can use a surround multi-camera system to cover the full 360-degree field-of-view around the car. In this way, we avoid blind spots which can otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we utilize fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of the availability of multiple cameras rather than treat each camera individually. In addition, processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project. This project seeks to enable automated valet parking for self-driving cars. Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, as well as detect obstacles based on real-time depth map extraction. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
33. Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction.
- Author
- Berveglieri, Adilson, Tommaselli, Antonio M. G., Xinlian Liang, and Honkavaara, Eija
- Subjects
- OPTICAL scanners, OPTICAL instruments, PHOTOGRAMMETRY, REMOTE sensing, THREE-dimensional modeling, OPTICAL radar
- Abstract
This paper presents a practical application of a technique that uses vertical optical scanning with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders fitted to the photogrammetric cloud and those fitted to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities obtained with vertical optical scanning were 1/3 less than those obtained with TLS. However, the point density can be improved by using higher-resolution cameras. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
34. Crowd motion estimation using a fisheye camera.
- Author
- 胡学敏, 郑宏, 郭琳, and 熊饶饶
- Abstract
Crowd motion estimation is an important step in crowd behavior analysis. Crowd motion analysis and monitoring of specific scenes is a necessary measure for maintaining public safety and social stability, and is also a research challenge in the field of video surveillance. Exploiting the large field of view and absence of blind spots of the fisheye camera, this paper proposes a crowd motion estimation method based on the optical flow of feature points. First, a Gaussian mixture background subtraction method with an area feedback mechanism is used to preprocess the raw video frames, and the region of interest is obtained by circle fitting. Second, to improve the real-time performance of the algorithm while still describing crowd targets accurately, a crowd feature point extraction method based on non-uniform edge-density sampling is proposed to describe moving crowd targets, and the optical flow field is computed with the Lucas & Kanade method. Finally, to address the inconsistent sizes of near and far crowds and the distortion of the fisheye camera, a perspective weighting model of the fisheye camera is employed to compute a weighted statistical histogram of crowd motion, yielding the global motion direction and speed of the crowd in the fisheye image. Experimental results show that, for dense crowds, the method can effectively estimate the motion direction and speed of crowds in real time, providing a solid basis for crowd behavior analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
35. Using vanishing points to estimate parameters of fisheye camera
- Author
- Haijiang Zhu, Xiaobo Xu, Jinglin Zhou, and Xuejing Wang
- Subjects
- vanishing points, parameter estimation, fisheye camera, mutually orthogonal parallel lines, single image, constraint equations, Computer applications to medicine. Medical informatics, R858-859.7, Computer software, QA76.75-76.765
- Abstract
This study presents an approach for estimating the fisheye camera parameters using three vanishing points corresponding to three sets of mutually orthogonal parallel lines in a single image. The authors first derive three constraint equations on the elements of the rotation matrix, in proportion to the coordinates of the vanishing points. From these constraints, the rotation matrix is calculated under the assumption that the image centre is known. The experimental results with synthetic images and real fisheye images validate this method. In contrast to existing methods, the authors' method needs less image information and does not require the three-dimensional reference point coordinates.
- Published
- 2013
- Full Text
- View/download PDF
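The idea behind entry 35: after back-projecting each vanishing point through the fisheye model (cf. the bearing sketch after entry 7), the three mutually orthogonal direction vectors give, up to sign and ordering, the columns of the rotation matrix. A hedged sketch with numerical re-orthogonalization; the paper's exact constraint equations are not reproduced here.

```python
import numpy as np

def rotation_from_vanishing_dirs(v1, v2, v3):
    """v1, v2, v3: bearing vectors of three orthogonal vanishing points."""
    r1 = v1 / np.linalg.norm(v1)
    r2 = v2 - (v2 @ r1) * r1      # enforce orthogonality against r1
    r2 /= np.linalg.norm(r2)
    r3 = np.cross(r1, r2)         # right-handed third axis (v3 as a check)
    return np.column_stack([r1, r2, r3])
```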
36. Activity Recognition Using Gazed Text and Viewpoint Information for User Support Systems
- Author
- Shun Chiba, Tomo Miyazaki, Yoshihiro Sugaya, and Shinichiro Omachi
- Subjects
- activity recognition, eye tracker, fisheye camera, viewpoint information, Technology
- Abstract
The development of information technology has added many conveniences to our lives. On the other hand, however, we have to deal with various kinds of information, which can be a difficult task for elderly people or those who are not familiar with information devices. A technology to recognize each person’s activity and providing appropriate support based on that activity could be useful for such people. In this paper, we propose a novel fine-grained activity recognition method for user support systems that focuses on identifying the text at which a user is gazing, based on the idea that the content of the text is related to the activity of the user. It is necessary to keep in mind that the meaning of the text depends on its location. To tackle this problem, we propose the simultaneous use of a wearable device and fixed camera. To obtain the global location of the text, we perform image matching using the local features of the images obtained by these two devices. Then, we generate a feature vector based on this information and the content of the text. To show the effectiveness of the proposed approach, we performed activity recognition experiments with six subjects in a laboratory environment.
- Published
- 2018
- Full Text
- View/download PDF
37. A master-slave object surveillance system combining a fisheye camera and a PTZ camera.
- Author
- 吴健辉, 商橙, 张国云, and 李交杰
- Abstract
We propose a master-slave object surveillance system based on a fisheye camera and a PTZ camera, covering a hemispherical field of view. The fisheye camera, with a very wide field of view of about 180 degrees, works as the master camera, and the PTZ camera works as the slave camera, capturing high-resolution images of objects under the control of PTZ parameters. First, we use a moving-blobs model to detect a moving object in the fisheye image and calculate the object parameters of azimuth angle P′, elevation angle T′, and distance Z′ in the fisheye image space. The P′T′Z′ parameters are then mapped from the fisheye image space to the PTZ image space and output to control the PTZ camera. The P and T parameters can be calculated with the help of the distortion coefficient of the fisheye lens, and the Z parameter can be calculated from the relative size of the object in the fisheye image and the PTZ image. Test results for the PTZ parameters show that the relative errors meet the requirements of the system. In real outdoor experiments, the PTZ camera can point at the moving object and capture images stably wherever the object is located in the fisheye image, with the object at an appropriate size and a high resolution. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
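A rough illustration of entry 37's P′/T′ mapping, assuming an equidistant fisheye model with the optical axis pointing at the zenith: a target pixel maps directly to pan (azimuth) and tilt (elevation) commands for the slave PTZ camera. The paper's distortion-coefficient handling and Z′ (distance) estimation are omitted.

```python
import numpy as np

def pixel_to_pan_tilt(u, v, cx, cy, f):
    du, dv = u - cx, v - cy
    pan = np.degrees(np.arctan2(dv, du))   # azimuth around the optical axis
    theta = np.hypot(du, dv) / f           # off-axis angle (equidistant model)
    tilt = 90.0 - np.degrees(theta)        # elevation above the horizon
    return pan, tilt
```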
38. A new calibrator providing easy detection of feature points for calibrating fisheye cameras in vehicle AVM systems.
- Author
- Ha, Soo-Young, Jeong, Uitae, Choi, Il, and Sohng, Kyu-Ik
- Subjects
- CAMERA calibration, HEMISPHERICAL photography, THREE-dimensional display systems, RADIAL distortion, CURVE fitting
- Abstract
This paper proposes a new planar calibrator, which is suitable for calibrating the fisheye camera in a vehicle around view monitoring system. To facilitate easy feature point detection, the proposed calibrator is designed so that, in its image acquired by the fisheye camera, the feature shapes are identical squares and the distances between adjacent feature points are equal, unlike those of the conventional chessboard calibrator. Further, the optimum number and locations of feature points on the proposed calibrator are experimentally determined to ensure the best camera calibration performance. Compared with the conventional chessboard calibrator and the H-pattern calibrator, the proposed calibrator shows superior performance in terms of feature point detection and camera calibration. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
39. Visual mapping for natural gas pipe inspection.
- Author
- Hansen, Peter, Alismail, Hatem, Rander, Peter, and Browning, Brett
- Subjects
- NATURAL gas pipelines, NATURAL gas production, MOBILE robots, AUTOMATIC cameras, ROBOTICS, MOTION detectors
- Abstract
Validating the integrity of pipes is an important task for safe natural gas production and many other operations (e.g., refineries, sewers, etc.). Indeed, there is a growing industry of actuated, actively driven mobile robots that are used to inspect pipes. Many rely on a remote operator viewing data from a fisheye camera to perform manual inspection, and provide no localization or mapping capability. In this work, we introduce a visual odometry-based system using calibrated fisheye imagery and sparse structured lighting to produce high-resolution 3D textured surface models of the inner pipe wall. Our work extends state-of-the-art visual odometry and mapping for fisheye systems to incorporate weak geometric constraints, based on prior knowledge of the pipe components, into a sparse bundle adjustment framework. These constraints prove essential for obtaining high-accuracy solutions given the limited spatial resolution of the fisheye system and challenging raw imagery. We show that sub-millimeter resolution modeling is viable even in pipes which are 400 mm (16 in) in diameter, and that sparse range measurements from a structured lighting solution can be used to avoid the inevitable monocular scale drift. Our results show that practical, high-accuracy pipe mapping from a single fisheye camera is within reach. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
40. A monocular wide-field speed sensor inspired by the crabs' visual system for traffic analysis.
- Author
-
Guimaraynz HD, Arroyo SI, Ibáñez SA, and Oliva DE
- Subjects
- Animals, Vision, Ocular, Vision, Monocular, Movement, Brachyura, Optical Devices
- Abstract
The development of visual sensors for traffic analysis can benefit from mimicking two fundamental aspects of the visual system of crabs: their panoramic vision and their visual processing strategy adapted to a flat world. First, the use of omnidirectional cameras in urban environments allows for analyzing the simultaneous movement of many objects of interest over broad areas. This reduces the costs and complications associated with infrastructure: installation, synchronization, maintenance, and operation of traditional vision systems that use multiple cameras with a limited field of view. Second, in urban traffic analysis, the objects of interest (e.g. vehicles and pedestrians) move on the ground surface. This constraint allows the calculation of the 3D trajectory of the vehicles using a single camera, without the need for binocular vision techniques. The main contribution of this work is to show that the strategy used by crabs to visually analyze their habitat (monocular omnidirectional vision with the assumption of a flat world) is useful for developing a simple and effective method to estimate the speed of vehicles on long trajectories in urban environments. It is shown that the proposed method estimates the speed with a root mean squared error of 2.7 km h⁻¹. (© 2023 IOP Publishing Ltd.)
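A minimal numeric sketch of the flat-world idea: if the camera height and the viewing ray of the vehicle are known, the 3D position follows from intersecting the ray with the ground plane, and speed is a finite difference along the recovered trajectory. The equidistant fisheye model, camera height, and frame rate below are assumed for illustration only.

```python
import numpy as np

H_CAM = 6.0                            # hypothetical camera height (m)
F_PIX, CX, CY = 320.0, 640.0, 640.0    # assumed equidistant intrinsics
FPS = 25.0                             # assumed frame rate

def pixel_to_ground(u, v):
    """Intersect a downward-looking fisheye pixel's viewing ray with the
    ground plane (flat-world assumption, camera at height H_CAM)."""
    dx, dy = u - CX, v - CY
    theta = np.hypot(dx, dy) / F_PIX   # angle from the vertical optical axis
    rho = H_CAM * np.tan(theta)        # ground range from the camera foot point
    phi = np.arctan2(dy, dx)
    return np.array([rho * np.cos(phi), rho * np.sin(phi)])

def speed_kmh(track_px):
    """Mean speed from per-frame pixel positions of one tracked vehicle."""
    pts = np.array([pixel_to_ground(u, v) for u, v in track_px])
    step = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # metres per frame
    return step.mean() * FPS * 3.6

print(speed_kmh([(700, 640), (704, 640), (708, 640)]))    # ~7 km/h
```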
- Published
- 2023
- Full Text
- View/download PDF
41. Astronomical Vessel Position Determination Utilizing the Optical Super Wide Angle Lens Camera.
- Author
-
Li, Chong-hui, Zheng, Yong, Zhang, Chao, Yuan, Yu-Lei, Lian, Yue-Yong, and Zhou, Pei-Yuan
- Subjects
- *
ASTRONOMICAL observations , *CAMERAS , *OPTICAL instruments , *NAUTICAL astronomy , *GLOBAL Positioning System , *SCIENTIFIC observation , *MINIATURE electronic equipment - Abstract
Celestial navigation is an important type of autonomous navigation technology that can serve as an alternative to Global Navigation Satellite Systems (GNSS) when a vessel is at sea. After several centuries of development, a variety of astronomical vessel position (AVP) determination methods have been invented, but all are based on angular observations with a device such as a sextant, which has disadvantages including low accuracy, manual operation, and a limited period of observation. This paper proposes a new method that utilizes a fisheye camera to image the celestial bodies and the horizon simultaneously. We first calculate the obliquity of the fisheye camera's principal optical axis from the image coordinates of the horizon, then calculate the altitudes of the celestial bodies from their image coordinates and the obliquity. Finally, the AVP is determined from the altitudes using a robust estimation method. Experimental results indicate that this method could not only realize automation and miniaturization of the AVP determination system but also greatly improve the efficiency of celestial navigation. [ABSTRACT FROM AUTHOR]
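The altitude step admits a short sketch: under an equidistant model, a pixel maps to a ray direction in the camera frame; given the obliquity of the principal optical axis estimated from the imaged horizon, the altitude is 90° minus the angle between the body's ray and the local vertical. The intrinsics and tilt values below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

F_PIX, CX, CY = 300.0, 512.0, 512.0    # assumed equidistant intrinsics

def pixel_to_dir(u, v):
    """Unit direction of the incoming ray in the camera frame
    (equidistant model, optical axis along +z)."""
    dx, dy = u - CX, v - CY
    theta = np.hypot(dx, dy) / F_PIX
    phi = np.arctan2(dy, dx)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def altitude_deg(u, v, tilt_deg, tilt_azimuth_deg=0.0):
    """Altitude of a celestial body, given the obliquity of the principal
    optical axis (tilt from the zenith, estimated from the horizon)."""
    t, a = np.radians(tilt_deg), np.radians(tilt_azimuth_deg)
    # Local vertical (zenith direction) expressed in the camera frame.
    zenith = np.array([np.sin(t) * np.cos(a), np.sin(t) * np.sin(a), np.cos(t)])
    zenith_angle = np.arccos(np.clip(pixel_to_dir(u, v) @ zenith, -1.0, 1.0))
    return 90.0 - np.degrees(zenith_angle)

print(altitude_deg(600.0, 512.0, tilt_deg=2.0))   # altitude in degrees
```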
- Published
- 2014
- Full Text
- View/download PDF
42. Multi-person 3D pose estimation from a single image captured by a fisheye camera.
- Author
-
Zhang, Yahui, You, Shaodi, Karaoglu, Sezer, and Gevers, Theo
- Subjects
VIDEO surveillance ,CAMERAS ,POSE estimation (Computer vision) ,THREE-dimensional imaging - Abstract
Multi-person 3D pose estimation with absolute depths from a fisheye camera is a challenging task with valuable applications in daily life, especially in video surveillance. However, to the best of our knowledge, this problem has not been explored so far, leaving a gap in practical applications. In this work, we propose the first method for multi-person 3D pose estimation from a single image taken by a fisheye camera. Our method consists of two branches that estimate absolute 3D human poses: (1) a 2D-to-3D lifting module that predicts root-relative 3D human poses (HPoseNet); and (2) a root regression module that estimates absolute root locations in the camera coordinate frame (HRootNet). Finally, we propose a fisheye re-projection module that connects the two branches without using ground-truth camera parameters, alleviating the impact of image distortions on 3D pose estimation and further regularizing the predicted absolute 3D poses. Experimental results demonstrate that our method achieves state-of-the-art performance on two public multi-person 3D pose datasets with synthetic fisheye images and on our newly collected dataset with real fisheye images. The code and the new dataset will be made publicly available. • We propose a novel method for multi-person 3D pose estimation from a fisheye image. • A re-projection module is introduced to alleviate the negative impact of distortions. • Absolute 3D poses are obtained by our method without using ground-truth camera parameters. • We collect a new dataset taken by fisheye cameras for multi-person 3D pose estimation. [ABSTRACT FROM AUTHOR]
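The re-projection idea can be illustrated independently of the two network branches: project the predicted absolute 3D joints back into the fisheye image with a distortion model and penalize the gap to the 2D evidence; because no ground-truth camera parameters are assumed, camera unknowns such as the focal length can be recovered from the same objective. The equidistant model below is a hypothetical stand-in for whatever the module learns.

```python
import numpy as np
from scipy.optimize import minimize_scalar

CX = CY = 512.0   # assumed principal point

def project_equidistant(joints_3d, f_pix):
    """Project Nx3 camera-frame joints with the equidistant model r = f*theta."""
    x, y, z = joints_3d.T
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    phi = np.arctan2(y, x)
    r = f_pix * theta
    return np.column_stack([CX + r * np.cos(phi), CY + r * np.sin(phi)])

def reprojection_loss(f_pix, joints_3d, joints_2d):
    diff = project_equidistant(joints_3d, f_pix) - joints_2d
    return np.mean(np.sum(diff**2, axis=1))

# Toy data: 2D "detections" generated with an unknown focal of 320 px.
rng = np.random.default_rng(1)
j3d = rng.uniform([-1.0, -1.0, 2.0], [1.0, 1.0, 4.0], (17, 3))
j2d = project_equidistant(j3d, 320.0)
res = minimize_scalar(lambda f: reprojection_loss(f, j3d, j2d),
                      bounds=(100.0, 800.0), method="bounded")
print(res.x)   # ~320, recovered without ground-truth camera parameters
```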
- Published
- 2022
- Full Text
- View/download PDF
43. Using vanishing points to estimate parameters of fisheye camera.
- Author
-
Zhu, Haijiang, Xu, Xiaobo, Zhou, Jinglin, and Wang, Xuejing
- Subjects
ORTHOGONALIZATION ,MATRICES (Mathematics) ,IMAGE processing ,CAMERAS ,ROTATIONAL motion ,PARAMETER estimation - Abstract
This study presents an approach for estimating the fisheye camera parameters using three vanishing points corresponding to three sets of mutually orthogonal parallel lines in a single image. The authors first derive three constraint equations relating the elements of the rotation matrix to the coordinates of the vanishing points. From these constraints, the rotation matrix is calculated under the assumption that the image centre is known. Experimental results with synthetic images and real fisheye images validate the method. In contrast to existing methods, the authors' method needs less image information and does not require the three-dimensional reference point coordinates. [ABSTRACT FROM AUTHOR]
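The underlying algebra admits a compact sketch: each vanishing point, lifted to a unit direction on the viewing sphere, is proportional to one column of the rotation matrix, and with three mutually orthogonal directions the nearest rotation follows by orthonormalization (here via SVD). The lifting assumes an equidistant fisheye model with a known image centre, consistent with the paper's assumption; the intrinsics and pixel coordinates are illustrative.

```python
import numpy as np

F_PIX, CX, CY = 300.0, 512.0, 512.0   # assumed intrinsics, centre known

def lift(u, v):
    """Lift a fisheye vanishing point to a unit direction (equidistant)."""
    dx, dy = u - CX, v - CY
    theta = np.hypot(dx, dy) / F_PIX
    phi = np.arctan2(dy, dx)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi), np.cos(theta)])

def rotation_from_vps(vps_px):
    """Three vanishing points of mutually orthogonal line sets give the
    three columns of R up to scale; SVD finds the nearest rotation."""
    M = np.column_stack([lift(u, v) for u, v in vps_px])
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:           # enforce a proper rotation
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R

R = rotation_from_vps([(812, 512), (512, 212), (540, 530)])
print(np.round(R.T @ R, 6))            # ~identity, so R is orthonormal
```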
- Published
- 2013
- Full Text
- View/download PDF
44. Estimating fisheye camera parameters from homography.
- Author
-
Zhu, HaiJiang, Yang, Ping, and Li, ShiGang
- Abstract
This paper presents a method to linearly estimate the fisheye camera model parameters from the homography induced by a space plane between two fisheye images. The homography is first calculated using four feature points in the fisheye image, rather than three points on the same line. A constraint on the model parameters of the fisheye camera is then derived from the homography under the assumption that the fisheye camera follows a polynomial model, and the model parameters are computed for polynomials of different orders. The proposed technique requires only multiple fisheye images containing a planar scene and no a priori knowledge of the 3D coordinates of the planar scene. Experimental results with synthetic data and real fisheye images demonstrate the validity of our method. The method can also be extended to fisheye images of other planar scenes beyond the planar calibration object. [ABSTRACT FROM AUTHOR]
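The first step, estimating a plane-induced homography from four feature points, is the standard direct linear transform; a minimal sketch follows (the paper's actual contribution, deriving polynomial fisheye-model constraints from this homography, is not reproduced here).

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H (up to scale) from four point pairs, dst ~ H @ src,
    via the null space of the stacked DLT equations."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)        # smallest singular vector

src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 12), (110, 15), (105, 118), (8, 112)]
H = homography_dlt(src, dst)
p = H @ np.array([1.0, 0.0, 1.0])
print(p[:2] / p[2])                    # ~ (110, 15), reproducing dst[1]
```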
- Published
- 2012
- Full Text
- View/download PDF
45. Evaluation of the Unified Model of the Sphere for Fisheye Cameras in Robotic Applications.
- Author
-
Courbon, Jonathan, Mezouar, Youcef, and Martinet, Philippe
- Subjects
- *
UNIFIED modeling language , *ROBOTICS , *CAMERAS , *IMAGE converters , *CATADIOPTRIC systems , *APPROXIMATION theory , *MATHEMATICAL models - Abstract
A wide field of view is required for many robotic vision tasks. Such a field of view can be provided by a fisheye camera, which yields a full image compared with catadioptric visual sensors and does not increase the size or fragility of the imaging system with respect to perspective cameras. While a unified model exists for all central catadioptric systems, many different models, approximating the radial distortions, exist for fisheye cameras. This paper shows that the unified projection model proposed for central catadioptric cameras is also valid for fisheye cameras in the context of robotic applications. The model consists of a projection onto a virtual unit sphere followed by a perspective projection onto an image plane, and it is shown to be equivalent to almost all existing fisheye models. Calibration with four cameras and partial Euclidean reconstruction are carried out using this model and lead to convincing results. Finally, an application to a mobile robot navigation task is proposed and correctly executed along a 200-m trajectory. [ABSTRACT FROM PUBLISHER]
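The unified model being evaluated reduces to two steps: project the 3D point onto a virtual unit sphere, then apply a perspective projection from a centre shifted by ξ along the optical axis. A minimal sketch with hypothetical parameter values:

```python
import numpy as np

def unified_project(X, xi, gamma, cx, cy):
    """Unified sphere model: projection onto a virtual unit sphere, then a
    perspective projection from a centre shifted by xi along the axis.

    xi = 0 reduces to a pinhole; fisheye lenses fit with xi near 1."""
    Xs = X / np.linalg.norm(X)      # step 1: onto the unit sphere
    denom = Xs[2] + xi              # step 2: shifted perspective division
    u = gamma * Xs[0] / denom + cx
    v = gamma * Xs[1] / denom + cy
    return u, v

# Hypothetical parameters (gamma is a generalised focal length in pixels).
print(unified_project(np.array([0.5, 0.2, 1.0]),
                      xi=1.0, gamma=600.0, cx=512.0, cy=512.0))
```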
- Published
- 2012
- Full Text
- View/download PDF
46. Easy Calibration of a Blind-Spot-Free Fisheye Camera System Using a Scene of a Parking Space.
- Author
-
Li, Shigang and Hai, Ying
- Abstract
Mounting three fisheye cameras on the sides and rear of a vehicle can help a driver maneuver the vehicle in restricted environments by providing a top view generated from these three cameras. To generate the top view, the pose of each fisheye camera must first be calibrated. In this paper, we propose an easy method for calibrating such a fisheye camera system by observing the scene of a parking space. First, each camera's pose relative to the ground is estimated from the typical line pattern of a parking space. Then, the relative pose among the three cameras is refined using the overlapping region of the ground between neighboring cameras. Finally, if necessary, any small deviation of the pose of the camera system can be adjusted manually through an interactive interface. Since the calibration can be performed without preparing a specific calibration pattern beforehand, the proposed method reduces the workload on the user. Experimental results reveal the effectiveness of the proposed method. [ABSTRACT FROM PUBLISHER]
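The first step, each camera's pose relative to the ground, can be sketched as a plane-to-image homography decomposition once the fisheye distortion has been removed: for points on the ground plane, H ≈ K[r1 r2 t]. The intrinsics and the synthetic homography below are illustrative, not the paper's procedure in detail.

```python
import numpy as np

def pose_from_ground_homography(H, K):
    """Decompose a ground-plane-to-image homography H ~ K [r1 r2 t]
    into the camera pose relative to the ground (scale from ||r1|| = 1)."""
    M = np.linalg.inv(K) @ H
    s = np.linalg.norm(M[:, 0])
    r1, r2, t = M[:, 0] / s, M[:, 1] / s, M[:, 2] / s
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)     # re-orthonormalize against noise
    return U @ Vt, t

# Synthetic check: build H from a known pose and recover it.
K = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 320.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.2, 1.5, 3.0])
H = K @ np.column_stack([np.eye(3)[:, 0], np.eye(3)[:, 1], t_true])
R, t = pose_from_ground_homography(2.5 * H, K)   # scale of H is irrelevant
print(np.round(t, 3))                            # ~ [0.2, 1.5, 3.0]
```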
- Published
- 2011
- Full Text
- View/download PDF
47. Monitoring Around a Vehicle by a Spherical Image Sensor.
- Author
-
Shigang Li
- Abstract
This paper describes a prototype of a full-view spherical image sensor, gives a method for sensor calibration, and discusses display modalities of the captured full-view image for monitoring around a vehicle. To monitor the entire surroundings of a dynamic environment with a single camera, a spherical field of view (FOV) is divided into two hemispherical views, each imaged by a fisheye lens; the two hemispherical views are fused by a mirror so that they are acquired on a single image plane. To calibrate the sensor, a three-dimensional (3-D) calibration pattern is used to compute the internal parameters of each fisheye lens and their relative orientation based upon a spherical camera model. Finally, several display modalities are discussed for showing drivers the relevant spherical image information on planar displays for monitoring around a vehicle. [ABSTRACT FROM PUBLISHER]
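The two-hemisphere geometry can be sketched as a pixel-to-ray lookup: each fisheye covers one hemispherical FOV under (say) an equidistant model, and both rays are expressed in a common frame through the relative orientation found in calibration. The intrinsics and the simple 180° flip below are illustrative simplifications of the mirror-fused prototype.

```python
import numpy as np

F_PIX, CX, CY = 256.0, 512.0, 512.0   # assumed equidistant intrinsics

# Relative orientation from calibration: lens B looks backwards.
R_B = np.diag([1.0, -1.0, -1.0])      # 180-degree flip about the x-axis

def ray(u, v, lens):
    """Unit viewing ray in the common sensor frame for lens 'A' or 'B'."""
    dx, dy = u - CX, v - CY
    theta = np.hypot(dx, dy) / F_PIX   # equidistant: r = f * theta
    phi = np.arctan2(dy, dx)
    d = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi), np.cos(theta)])
    return d if lens == "A" else R_B @ d

print(ray(512, 512, "A"), ray(512, 512, "B"))  # forward and backward rays
```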
- Published
- 2006
- Full Text
- View/download PDF
48. FSD-BRIEF: A Distorted BRIEF Descriptor for Fisheye Image Based on Spherical Perspective Model.
- Author
-
Zhang, Yutong, Song, Jianmei, Ding, Yan, Yuan, Yating, Wei, Hua-Liang, and Ait Aider, Omar
- Subjects
IMAGE registration ,CENTROID ,IMAGE quality analysis - Abstract
Fisheye images, with their far larger Field of View (FOV), suffer from severe radial distortion, so the associated image feature matching cannot achieve its best performance when traditional feature descriptors are used. To address this challenge, this paper reports a novel distorted Binary Robust Independent Elementary Feature (BRIEF) descriptor for fisheye images based on a spherical perspective model. First, the 3D gray centroid of feature points is designed, and the position and direction of the feature points on the spherical image are described by a constructed feature point attitude matrix. Then, based on the attitude matrix of feature points, the coordinate mapping between the BRIEF descriptor template and the fisheye image is established to compute the distorted BRIEF descriptor. Four experiments are provided to test and verify the invariance and matching performance of the proposed descriptor on fisheye images. The experimental results show that the proposed descriptor achieves distortion invariance and can significantly improve matching performance on fisheye images. [ABSTRACT FROM AUTHOR]
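The core mechanism, carrying the BRIEF sampling template through the spherical model so the descriptor follows the distortion, can be sketched as: express each template offset in the feature point's tangent frame (its attitude matrix), map it to a direction on the viewing sphere, and project that direction into the fisheye image before comparing intensities. The equidistant projection, template scale, and random test image below are illustrative stand-ins for the paper's full construction.

```python
import numpy as np

F_PIX, CX, CY = 300.0, 512.0, 512.0    # assumed equidistant intrinsics

def sphere_to_pixel(d):
    """Project a unit direction back into the fisheye image (r = f*theta)."""
    theta = np.arccos(np.clip(d[2], -1.0, 1.0))
    phi = np.arctan2(d[1], d[0])
    r = F_PIX * theta
    return int(round(CX + r * np.cos(phi))), int(round(CY + r * np.sin(phi)))

def distorted_brief(img, attitude, pairs, scale=0.01):
    """BRIEF bits from a template laid out in the feature's tangent plane.

    attitude: 3x3 matrix whose first two columns span the tangent plane
    and whose third column is the feature direction on the sphere."""
    bits = []
    for (ax, ay), (bx, by) in pairs:
        da = attitude @ np.array([ax * scale, ay * scale, 1.0])
        db = attitude @ np.array([bx * scale, by * scale, 1.0])
        ua, va = sphere_to_pixel(da / np.linalg.norm(da))
        ub, vb = sphere_to_pixel(db / np.linalg.norm(db))
        bits.append(int(img[va, ua] < img[vb, ub]))   # row = v, column = u
    return bits

rng = np.random.default_rng(2)
img = rng.integers(0, 255, (1024, 1024)).astype(np.uint8)
pairs = rng.normal(size=(32, 2, 2))    # 32 random point pairs
print(distorted_brief(img, np.eye(3), pairs))
```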
- Published
- 2021
- Full Text
- View/download PDF
49. Intelligent video analysis: A Pedestrian trajectory extraction method for the whole indoor space without blind areas.
- Author
-
Yang, Lie, Hu, Guanghua, Song, Yonghao, Li, Guofeng, and Xie, Longhan
- Subjects
PEDESTRIANS ,CAMERAS ,CONVOLUTIONAL neural networks ,BEHAVIORAL assessment ,PEDESTRIAN areas ,KALMAN filtering - Abstract
Pedestrian trajectory extraction is an important part of intelligent monitoring and is of great significance to many fields, such as statistics on pedestrian flow and density, population behavior analysis, and abnormal behavior detection. However, it is quite challenging to extract pedestrian trajectories without blind areas over a whole space due to the limited view angle of ordinary cameras, and so far no efficient method has been proposed to deal with this problem. In this paper, we propose a pedestrian trajectory extraction method based on a single fisheye camera, which can extract pedestrian trajectories over the whole interior space without blind areas. First, a fisheye camera with a view angle of 180° is adopted, which enables monitoring of the entire space without blind areas and avoids object matching among multiple cameras. Then, a deep convolutional neural network, the Kalman Filter algorithm, and the Hungarian algorithm are combined for pedestrian head detection and tracking. In order to calculate the coordinates of the trajectory points from the obtained head position, we propose a novel pedestrian height estimation method for fisheye cameras. Finally, the pedestrian trajectory points are calculated from the detected head position and the estimated height. The performance of the proposed method has been evaluated in a variety of experiments, which show that the trajectories of multiple pedestrians can be extracted simultaneously and that the average error of the trajectory points is less than 5.07 pixels in 512 × 512 images. • A pedestrian trajectory extraction method based on a single fisheye camera is proposed, which can extract pedestrian trajectories over the whole interior space without blind areas. • An efficient pedestrian height estimation method based on the mathematical model of fisheye cameras is proposed; based on the estimated height, the trajectory points of a pedestrian can be calculated from the detected head position. • Calculation methods for pedestrian trajectory points and bounding boxes for fisheye cameras are proposed, from which pedestrian trajectories are obtained using the head detection and height estimation results. • The pedestrian head detection dataset for fisheye cameras (PHDF), created specifically for the overhead fisheye scene, is made available to all researchers. • Head detection, instead of body detection, is adopted for pedestrian detection, and a head detection and tracking method based on the SORT algorithm is adopted for the overhead fisheye scene. [ABSTRACT FROM AUTHOR]
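The geometry behind the trajectory-point computation can be sketched for an ideal, straight-down fisheye: the head pixel gives the ray angle θ from the vertical, and the horizontal position follows from the camera height minus the estimated person height. The equidistant model and the numbers below are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

H_CAM = 3.2                            # hypothetical ceiling camera height (m)
F_PIX, CX, CY = 160.0, 256.0, 256.0    # assumed intrinsics, 512x512 image

def trajectory_point(u_head, v_head, person_height):
    """Floor coordinates of a pedestrian from the detected head pixel and
    the estimated height (camera looking straight down, head above feet)."""
    dx, dy = u_head - CX, v_head - CY
    theta = np.hypot(dx, dy) / F_PIX                  # ray angle from vertical
    rho = (H_CAM - person_height) * np.tan(theta)     # horizontal head range
    phi = np.arctan2(dy, dx)
    return rho * np.cos(phi), rho * np.sin(phi)

print(trajectory_point(330.0, 256.0, person_height=1.72))   # metres
```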
- Published
- 2020
- Full Text
- View/download PDF
50. Activity Recognition Using Gazed Text and Viewpoint Information for User Support Systems.
- Author
-
Chiba, Shun, Miyazaki, Tomo, Sugaya, Yoshihiro, and Omachi, Shinichiro
- Subjects
HUMAN activity recognition ,INFORMATION technology ,ELDER care ,WEARABLE technology ,IMAGE processing - Abstract
The development of information technology has added many conveniences to our lives. However, we have to deal with various kinds of information, which can be a difficult task for elderly people or those who are not familiar with information devices. A technology that recognizes each person's activity and provides appropriate support based on it could be useful for such people. In this paper, we propose a novel fine-grained activity recognition method for user support systems that focuses on identifying the text at which a user is gazing, based on the idea that the content of the text is related to the activity of the user. Since the meaning of the text depends on its location, we propose the simultaneous use of a wearable device and a fixed camera. To obtain the global location of the text, we perform image matching using the local features of the images obtained by these two devices. Then, we generate a feature vector based on this information and the content of the text. To show the effectiveness of the proposed approach, we performed activity recognition experiments with six subjects in a laboratory environment. [ABSTRACT FROM AUTHOR]
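The cross-view matching step can be sketched with standard local features; the code below uses ORB with a ratio test in OpenCV as a generic stand-in for whichever local features the authors used, and the file names are placeholders.

```python
import cv2

def match_views(wearable_path, fixed_path, ratio=0.75):
    """Match local features between a wearable-camera frame and the fixed
    camera to localise the gazed text in the global (room) view."""
    img1 = cv2.imread(wearable_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(fixed_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < ratio * n.distance]
    # Matched keypoint pairs: (wearable pixel, fixed-camera pixel).
    return [(k1[m.queryIdx].pt, k2[m.trainIdx].pt) for m in good]

# Usage (paths are placeholders):
# pairs = match_views("wearable_frame.png", "fixed_camera.png")
```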
- Published
- 2018
- Full Text
- View/download PDF