2,641 results on '"Video camera"'
Search Results
2. Aerial Robotics for Precision Agriculture: Weeds Detection Through UAV and Machine Vision
- Author
-
Menshchikov, Alexander, Somov, Andrey, and Sergiyenko, Oleg, editor
- Published
- 2022
3. Parking detection method using quadtree decomposition analysis
- Author
-
Houweida Tounsi and Khaled Shaaban
- Subjects
Quadtree decomposition ,business.industry ,Computer science ,Transportation ,Video camera ,Space (mathematics) ,Grayscale ,law.invention ,Cost savings ,law ,Computer vision ,Artificial intelligence ,Detection rate ,business ,SIMPLE algorithm ,Civil and Structural Engineering - Abstract
Searching for an available parking space can be a frustrating experience for drivers, who must circle until they find a vacant spot. This study proposes a new method to automatically detect available parking spaces. The proposed system identifies empty parking spaces using grayscale images obtained from any type of video camera. The method successfully identified parking availability under different conditions and scenarios. It was tested using real-life data and achieved a detection rate of 99.7%. The method can be applied in real time to monitor parking availability and guide drivers to empty spaces. Its advantages include simple algorithms, the use of low-quality black-and-white images, and simple configuration. Therefore, the system can provide enormous cost savings for locations with existing black-and-white surveillance cameras, since there is no need to replace the existing cameras with new high-quality ones.
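For illustration, the quadtree idea behind such a detector can be sketched in a few lines: an empty stall is mostly uniform pavement and decomposes into few blocks, while a parked car forces many splits. The block size, variance threshold and decision rule below are assumptions, not the paper's values.

```python
import numpy as np

def quadtree_blocks(patch, var_thresh=60.0, min_size=8):
    """Recursively split a grayscale patch into quadrants until each block
    is nearly uniform; return the number of leaf blocks."""
    h, w = patch.shape
    if patch.var() <= var_thresh or min(h, w) <= min_size:
        return 1
    h2, w2 = h // 2, w // 2
    return (quadtree_blocks(patch[:h2, :w2], var_thresh, min_size) +
            quadtree_blocks(patch[:h2, w2:], var_thresh, min_size) +
            quadtree_blocks(patch[h2:, :w2], var_thresh, min_size) +
            quadtree_blocks(patch[h2:, w2:], var_thresh, min_size))

def space_is_empty(patch, max_blocks=10):
    # Few leaf blocks -> nearly uniform pavement -> the stall is probably empty.
    return quadtree_blocks(patch.astype(float)) <= max_blocks

# Toy check: a flat "pavement" patch vs. a high-variance "car" patch.
pavement = np.full((64, 64), 120, dtype=np.uint8)
car = np.random.default_rng(0).integers(0, 255, (64, 64)).astype(np.uint8)
print(space_is_empty(pavement), space_is_empty(car))   # True False
```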
- Published
- 2022
4. Accurate Identification of the Trabecular Meshwork under Gonioscopic View in Real Time Using Deep Learning
- Author
-
Gregor Urban, Michael C. Yang, Da-Wen Lu, Lung-Chi Lee, Pierre Baldi, Wallace L.M. Alward, and Ken Y. Lin
- Subjects
Adult ,medicine.diagnostic_test ,business.industry ,Deep learning ,Gonioscopy ,Video camera ,General Medicine ,Frame rate ,Trabeculotomy ,law.invention ,Data set ,Cross-Sectional Studies ,Deep Learning ,Trabecular Meshwork ,law ,Test set ,Humans ,Medicine ,Trabectome ,Computer vision ,Artificial intelligence ,business ,Intraocular Pressure - Abstract
Objective: Accurate identification of iridocorneal structures on gonioscopy is difficult to master, and errors can lead to grave surgical complications. This study aimed to develop and train convolutional neural networks (CNNs) to accurately identify the trabecular meshwork (TM) in gonioscopic videos in real time for eventual clinical integration. Design: Cross-sectional study. Subjects, Participants, and/or Controls: Adult patients with open angles were identified in academic glaucoma clinics in Taipei, Taiwan and Irvine, California, USA. Methods: Neural encoder-decoder CNNs (U-Nets) were trained to predict a curve marking the TM using an expert-annotated data set of 378 gonioscopy images. The model was trained and evaluated with stratified cross-validation, grouped by patients to ensure uncorrelated training and testing sets, as well as on a separate test set and three intraoperative gonioscopic videos of ab interno trabeculotomy with Trabectome (totaling 90 seconds, 30 frames per second). We also evaluated the model's performance by comparing its accuracy against ophthalmologists. Main Outcome Measures: Successful development of real-time-capable CNNs that accurately predict and mark the trabecular meshwork's position in video frames of gonioscopic views. Models were evaluated against human expert annotations of static images and video data. Results: The best CNN model produced test set predictions with a median deviation of 0.8% of the video frame's height (15.25 microns) from the human experts' annotations. This error is less than the average vertical height of the TM. The worst test frame prediction of this model had an average deviation of 4% of the frame height (76.28 microns), which is still considered a successful prediction. When challenged with unseen images, the CNN model scored greater than two standard deviations above the mean performance of the surveyed general ophthalmologists. Conclusion: Our CNN model can identify the TM in gonioscopy videos in real time with remarkable accuracy, allowing it to be used in connection with a video camera intraoperatively. This model can have applications in surgical training, automated screening, and intraoperative guidance. The dataset developed in this study is the first publicly available gonioscopy image bank and may encourage future investigations of this topic.
- Published
- 2022
5. RFID-Pose: Vision-Aided Three-Dimensional Human Pose Estimation With Radio-Frequency Identification
- Author
-
Chao Yang, Xuyu Wang, and Shiwen Mao
- Subjects
021103 operations research ,Artificial neural network ,business.industry ,Computer science ,Phase distortion ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,0211 other engineering and technologies ,Wearable computer ,Video camera ,02 engineering and technology ,law.invention ,Identification (information) ,law ,Benchmark (computing) ,Radio-frequency identification ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,Safety, Risk, Reliability and Quality ,business ,Pose - Abstract
In recent years, human pose tracking has become an important topic in computer vision (CV). To improve the privacy of human pose tracking, there is considerable interest in techniques without using a video camera. To this end, radio-frequency identification (RFID) tags, as a low-cost wearable sensor, provide an effective solution for 3-D human pose tracking. In this article, we propose RFID-Pose, a vision-aided realtime 3-D human pose estimation system, which is based on deep learning assisted by CV. The RFID phase data are calibrated to effectively mitigate the severe phase distortion, and high accuracy low rank tensor completion is employed to impute the missing RFID data. The system then estimates the spatial rotation angle of each human limb, and utilizes the rotation angles to reconstruct human pose in realtime with the forward kinematic technique. A prototype is developed with commodity RFID devices. High pose estimation accuracy and realtime operation of RFID-Pose are demonstrated in our experiments using Kinect 2.0 as a benchmark.
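The forward-kinematic reconstruction step mentioned above can be illustrated with a toy two-segment limb: given per-joint rotation angles, chain the rotations to place the elbow and wrist in 3D. The joint hierarchy, segment lengths and planar rotation used here are illustrative assumptions, not the RFID-Pose implementation.

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis, standing in for an estimated limb rotation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def forward_kinematics(shoulder, upper_len, lower_len, theta_shoulder, theta_elbow):
    """Chain per-joint rotations to obtain elbow and wrist positions."""
    r_shoulder = rot_z(theta_shoulder)
    elbow = shoulder + r_shoulder @ np.array([upper_len, 0.0, 0.0])
    r_elbow = r_shoulder @ rot_z(theta_elbow)   # the child frame inherits the parent rotation
    wrist = elbow + r_elbow @ np.array([lower_len, 0.0, 0.0])
    return elbow, wrist

elbow, wrist = forward_kinematics(np.zeros(3), 0.30, 0.25,
                                  np.deg2rad(45), np.deg2rad(30))
print(elbow.round(3), wrist.round(3))
```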
- Published
- 2021
6. VFDHSOG: Copy-Move Video Forgery Detection Using Histogram of Second Order Gradients
- Author
-
Punam Sunil Raskar and Sanjeevani K. Shah
- Subjects
Authentication ,business.industry ,Computer science ,Binary image ,Frame (networking) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Video camera ,Intra-frame ,Computer Science Applications ,law.invention ,Feature (computer vision) ,law ,Histogram ,Benchmark (computing) ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business - Abstract
Digital video is widely accepted as visual evidence in areas such as politics, criminal litigation, journalism and military intelligence. Multi-camera smartphones with multi-megapixel resolution are now ubiquitous hand-held devices, which has made video recording very easy. At the same time, the variety of applications available on smartphones has made this indispensable source of information vulnerable to deliberate manipulation. Hence, content authentication of video evidence becomes essential. Copy-move (copy-paste) forgery is a consequential type of forgery used to change the basic understanding of a scene. Removal or addition of frames in a video clip can also be accomplished with advanced apps on smartphones. In surveillance footage, the video camera and the background are static, which makes forgery easy to perform and hard to perceive. Therefore, accurate video forgery detection is crucial. This paper proposes an efficient method, VFDHSOG, based on histograms of second-order gradients (HSOG) to locate 'suspicious' frames and then localize the copy-move forgery within the frame. A 'suspicious' frame is located by computing correlation coefficients of the HSOG feature after obtaining a binary image of a frame. Performance evaluation is done using the benchmark datasets Surrey University Library for Forensic Analysis (SULFA), the Video Tampering Dataset (VTD) and the SYSU-OBJFORGED dataset. SULFA has video files of different quality levels such as q10 and q20, which represent high compression. The VTD dataset provides both inter-frame and intra-frame forgery. The SYSU dataset covers different attacks such as scaling and rotation. An overall accuracy of 92.26% is achieved, with the capability to identify attacks such as scale up/down and rotation.
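One plausible reading of the 'correlation of second-order-gradient histograms' step is sketched below: differentiate each frame twice, bin the resulting magnitudes, and flag frames whose histogram suddenly stops correlating with the previous one. The histogram size, the use of Sobel filters and the anomaly rule are assumptions, not the published VFDHSOG parameters.

```python
import cv2
import numpy as np

def hsog_feature(gray, bins=64):
    """Histogram of second-order gradients: differentiate twice, then bin magnitudes."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    gxx = cv2.Sobel(gx, cv2.CV_32F, 1, 0)
    gyy = cv2.Sobel(gy, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gxx, gyy)
    hist, _ = np.histogram(mag, bins=bins, range=(0.0, float(mag.max()) + 1e-6))
    return hist.astype(np.float64) / (hist.sum() + 1e-12)

def suspicious_frames(video_path, drop=0.15):
    """Flag frames whose HSOG feature correlates unusually poorly with the previous frame."""
    cap = cv2.VideoCapture(video_path)
    prev, flags, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        feat = hsog_feature(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        if prev is not None and np.corrcoef(prev, feat)[0, 1] < 1.0 - drop:
            flags.append(idx)          # sudden drop in similarity -> candidate tampering point
        prev, idx = feat, idx + 1
    cap.release()
    return flags
```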
- Published
- 2021
7. Robust Foot Motion Recognition Using Stride Detection and Weak Supervision-Based Fast Labeling
- Author
-
Rui Hua and Ya Wang
- Subjects
Foot (prosody) ,Ground truth ,Computer science ,business.industry ,Feature extraction ,STRIDE ,Video camera ,Accelerometer ,law.invention ,law ,Computer vision ,Artificial intelligence ,Road map ,Electrical and Electronic Engineering ,business ,Instrumentation ,Video game - Abstract
Foot motion recognition in daily life faces two challenges imposed by traditional machine learning frameworks: how to robustly recognize various foot motions from continuous movements in uncontrolled environments, and how to accurately extract ground truths. To address these challenges, in this paper, we propose a stride detection method to robustly identify each stride (over 99% accuracy). We then investigate two weak supervision-based fast labeling frameworks to automatically label the stride segmentations. Finally, we use these two frameworks to identify foot motions from continuous movements integrated on a route map. The route map can be replaced by a virtual-reality video game played in daily life so that the user's long-term foot functionality can be profiled and evaluated. We test our proposed approaches using a smart insole with twenty-two subjects whose movement data are collected through the route map setting, while video camera recordings serve as the ground truth. The route map integrates seven foot motions in one complete play, including three motions resulting from continuous full-body movements (referred to as continuous motions) and four intermittent foot motions that are produced repetitively (referred to as repetitive motions). Compared to the best traditional machine learning methods, our proposed approach improves the leave-one-subject-out cross-validation accuracy over all subjects by 6.12% for the three continuous motions, 2.71% for the four repetitive motions and 4.90% for the total of seven foot motions. In addition, our proposed method saves 25% to 50% of the time spent on data labeling.
- Published
- 2021
8. A Three-Range Panoramic Catadioptric Navigation Video Camera System for Unmanned Miniature Drones
- Author
-
V. S. Efremov and M. P. Egorenko
- Subjects
Atmospheric Science ,business.industry ,Computer science ,Video camera ,Oceanography ,Atomic and Molecular Physics, and Optics ,Drone ,law.invention ,Lens (optics) ,Catadioptric system ,law ,Range (aeronautics) ,Computer vision ,Artificial intelligence ,business ,Earth-Surface Processes - Abstract
Computer simulation of a panoramic three-range catadioptric system (lens) of a navigation video camera for unmanned miniature drones is performed. The optical system uses a three-channel radiation receiver sensitive in three spectral regions.
- Published
- 2021
9. Algorithmic support of the system of external observation and routing of autonomous mobile robots
- Author
-
M. V. Egortsev, S. A. K. Diane, and N. D. Kaz
- Subjects
autonomous mobile robots ,Information theory ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,A* search algorithm ,Image processing ,Video camera ,02 engineering and technology ,a-star ,01 natural sciences ,law.invention ,law ,0103 physical sciences ,Computer vision ,Q350-390 ,General Environmental Science ,010302 applied physics ,surf detection ,business.industry ,Frame (networking) ,Mobile robot ,021001 nanoscience & nanotechnology ,video monitoring ,udp protocol ,routing ,Path (graph theory) ,General Earth and Planetary Sciences ,Robot ,Artificial intelligence ,0210 nano-technology ,business ,Dijkstra's algorithm - Abstract
This article presents the algorithmic support for an external monitoring and routing system for autonomous mobile robots. In practice, the use of mobile robots often depends on solving navigation problems; in particular, the position of ground robots can be determined with the help of unmanned aerial vehicles. In the proposed approach, the locations of both robots and nearby obstacles are recognized in the video image obtained from an external video camera located above the robots' working area. The optimal route to the target point of the selected robot is built, and changes in its working area are monitored. Information about the allowed routes of the robot is transmitted to third-party applications via network communication channels. Primary processing of the camera image includes distortion correction, contouring and binarization, which makes it possible to separate image fragments containing robots and obstacles from background surfaces and objects. Recognition of robots in a video frame is based on a SURF detector, which extracts key points in the video frame and compares them with key points of reference images of the robots. Trajectory planning is implemented using Dijkstra's algorithm, and the discreteness of the trajectories obtained by the graph path search can be compensated on board the autonomous mobile robots by spline approximation. Experimental studies have confirmed the efficiency of the proposed approach both for recognition and localization of mobile robots and for planning safe trajectories.
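Of the two stages described (SURF-based recognition and Dijkstra-based planning), the planning step is easy to sketch on an occupancy grid derived from the overhead camera. The grid, unit step costs and 4-connectivity below are illustrative assumptions.

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt], prev[nxt] = nd, cell
                    heapq.heappush(pq, (nd, nxt))
    path, cell = [], goal               # walk back from the goal to recover the route
    while cell in prev or cell == start:
        path.append(cell)
        if cell == start:
            break
        cell = prev[cell]
    return path[::-1]

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(dijkstra_grid(grid, (0, 0), (2, 0)))
```

The waypoints returned this way are discrete grid cells, which is why the article mentions spline approximation on board the robot to smooth the trajectory.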
- Published
- 2021
10. Method for Automatic Determination of a 3D Trajectory of Vehicles in a Video Image
- Author
-
I. G. Zubov and N. A. Obukhova
- Subjects
TK7800-8360 ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Video camera ,02 engineering and technology ,Convolutional neural network ,law.invention ,law ,convolutional neural networks ,0202 electrical engineering, electronic engineering, information engineering ,Segmentation ,Computer vision ,Pose ,image segmentation ,Artificial neural network ,business.industry ,020206 networking & telecommunications ,Image segmentation ,analysis of activation maps ,pattern matching ,detection of key points ,Scalability ,Trajectory ,020201 artificial intelligence & image processing ,Artificial intelligence ,Electronics ,business - Abstract
Introduction. An important part of an automotive unmanned vehicle (UV) control system is the environment analysis module. This module is based on various types of sensors, e.g. video cameras, lidars and radars. The development of computer and video technologies makes it possible to implement an environment analysis module using a single video camera as a sensor. This approach is expected to reduce the cost of the entire module. The main task in video image processing is to analyse the environment as a 3D scene. The 3D trajectory of an object, which takes into account its dimensions, angle of view and movement vector, as well as the vehicle pose in a video image, provides sufficient information for assessing the real interaction of objects. A basis for constructing a 3D trajectory is vehicle pose estimation. Aim. To develop an automatic method for estimating vehicle pose based on video data analysis from a single video camera. Materials and methods. An automatic method for vehicle pose estimation from a video image was proposed based on a cascade approach. The method includes vehicle detection, key point determination, segmentation and vehicle pose estimation. Vehicle detection and determination of its key points were resolved via a neural network. The segmentation of a vehicle video image and its mask preparation were implemented by transforming it into a polar coordinate system and searching for the outer contour using graph theory. Results. The estimation of vehicle pose was implemented by matching the Fourier image of vehicle mask signatures with templates obtained from 3D models. The correctness of the obtained vehicle pose and angle-of-view estimation was confirmed by experiments based on the proposed method. The vehicle pose estimation had an accuracy of 89% on the open Carvana image dataset. Conclusion. A new approach for vehicle pose estimation was proposed, involving the transition from end-to-end learning of neural networks that resolve several problems at once (localization, classification, segmentation, and angle of view) towards cascade analysis of information. The accuracy of end-to-end learning requires large sets of representative data, which complicates the scalability of solutions for road environments in Russia. The proposed method makes it possible to estimate the vehicle pose with high accuracy while avoiding large costs for manual data annotation and training.
- Published
- 2021
11. TagAttention: Mobile Object Tracing With Zero Appearance Knowledge by Vision-RFID Fusion
- Author
-
Minmei Wang, Chen Qian, Baiwen Huang, Haofan Cai, Xiaofeng Shi, Junjie Xie, and Ge Wang
- Subjects
Computer Networks and Communications ,business.industry ,Computer science ,Cognitive neuroscience of visual object recognition ,020206 networking & telecommunications ,Video camera ,02 engineering and technology ,Tracing ,Object (computer science) ,Computer Science Applications ,Visualization ,law.invention ,law ,0202 electrical engineering, electronic engineering, information engineering ,Trajectory ,Wireless ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Software ,Multipath propagation - Abstract
We propose to study mobile object tracing, which allows a mobile system to report the shape, location, and trajectory of the mobile objects appearing in a video camera and identifies each of them with its cyber-identity (ID), even if the appearances of the objects are not known to the system. Existing tracking methods either cannot match objects with their cyber-IDs or rely on complex vision modules pre-learned from vast and well-annotated datasets including the appearances of the target objects, which may not exist in practice. We design and implement TagAttention, a vision-RFID fusion system that achieves mobile object tracing without the knowledge of the target object appearances and hence can be used in many applications that need to track arbitrary un-registered objects. TagAttention adopts the visual attention mechanism, through which RF signals can direct the visual system to detect and track target objects with unknown appearances. Experiments show TagAttention can actively discover, identify, and track the target objects while matching them with their cyber-IDs by using commercial sensing devices in complex environments with various multipath reflectors. It only requires around one second to detect and localize a new mobile target appearing in the video and keeps tracking it accurately over time.
- Published
- 2021
12. DETERMINING THE LOCATION OF AN UNMANNED AERIAL VEHICLE BASED ON VIDEO CAMERA IMAGES
- Author
-
Elkhan Sabziev
- Subjects
linear functional mapping ,bijective mapping ,Information theory ,orientation angles ,business.industry ,Computer science ,General Engineering ,mercator projection ,Video camera ,law.invention ,Computer Science::Robotics ,QA76.75-76.765 ,law ,unmanned aerial vehicle ,Computer vision ,Artificial intelligence ,Computer software ,Mercator projection ,Q350-390 ,Geographic coordinate system ,business ,identification problem ,geographical coordinates - Abstract
The paper deals with the problem of determining the location of an unmanned aerial vehicle from video and photo images taken by surveillance cameras. Since the observation area is sufficiently limited, it can be treated as part of a plane. The solution to the problem is based on the construction of a bijective mapping between the known geographic coordinates of three different objects recognized in the images and their coordinates relative to the monitor plane. To this end, the geographical coordinates of the objects (latitude and longitude) are first converted to the Mercator projection, and a bijective mapping is built between the coordinates of the objects calculated in the Mercator projection and the coordinates relative to the camera monitor plane. Then, based on the current orientation angles of the unmanned aerial vehicle, the coordinates of the projection of its position on the monitor plane are calculated, and the geographical coordinates are found by applying the inverse of the constructed bijective mapping.
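The coordinate chain described here (latitude/longitude to Mercator, then a bijective linear map to the monitor plane, then its inverse) can be sketched as follows. The three reference objects, their pixel positions and the use of a plain affine fit are made-up illustrative assumptions.

```python
import numpy as np

R_EARTH = 6378137.0  # WGS-84 equatorial radius, metres

def to_mercator(lat_deg, lon_deg):
    """Spherical Mercator projection of geographic coordinates."""
    lam, phi = np.radians(lon_deg), np.radians(lat_deg)
    return np.array([R_EARTH * lam, R_EARTH * np.log(np.tan(np.pi / 4 + phi / 2))])

def fit_affine(src, dst):
    """3x2 coefficient matrix such that [x, y, 1] @ coef = (u, v), from three correspondences."""
    A = np.hstack([np.asarray(src, dtype=float), np.ones((3, 1))])
    return np.linalg.solve(A, np.asarray(dst, dtype=float))

def apply_affine(coef, point):
    return np.append(point, 1.0) @ coef

# Three recognized reference objects: geographic positions and their monitor-plane pixels (made up).
geo = [to_mercator(40.372, 49.853), to_mercator(40.375, 49.861), to_mercator(40.369, 49.866)]
pix = [(120.0, 540.0), (640.0, 500.0), (980.0, 710.0)]

pix_to_merc = fit_affine(pix, geo)    # inverse mapping: image point -> Mercator coordinates
print(apply_affine(pix_to_merc, np.array([640.0, 500.0])))   # recovers the 2nd object's position
```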
- Published
- 2021
13. C3D data based on 2-dimensional images from video camera
- Author
-
Mehdi Shafieian, Iman Sahafnejad-Mohammadi, Mina Abdollahzadekan, and Ali Sharifnezhad
- Subjects
law ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Computer vision ,Video camera ,Artificial intelligence ,business ,law.invention - Abstract
The human three-dimensional (3D) musculoskeletal model is based on motion analysis methods and can be obtained with specialized motion capture systems that export 3D data in the coordinate 3D (C3D) format. Dedicated cameras and specific software are essential for analyzing the data; this equipment is quite expensive, and using it is time-consuming. To address these problems, this research uses ordinary video cameras and open-source systems to obtain 3D data and create the C3D format. By capturing movements with two video cameras, marker coordinates can be obtained using Skill-Spector. MATLAB functions were used to create C3D data from the 3D coordinates of the body points. The subject was captured simultaneously with both the Cortex system and the two video cameras during each validation test. The mean correlation coefficient of the datasets is 0.7. Given this detailed comparison, the method can be used as an alternative for motion analysis. The C3D data collection presented in this research is more accessible and cost-efficient than other systems, and only two cameras are used.
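The core step, recovering a 3D marker position from two synchronized video cameras, can be sketched with OpenCV's DLT triangulation. The projection matrices below are toy values, and the final export to C3D (done with MATLAB functions in the paper) is not shown.

```python
import cv2
import numpy as np

# Toy projection matrices P = K [R | t] for two calibrated cameras (assumed values).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera 1 at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])   # camera 2 shifted 0.5 m

# A marker at (0.1, 0.2, 3.0) m, projected into both images.
X = np.array([0.1, 0.2, 3.0, 1.0])
x1 = (P1 @ X)[:2] / (P1 @ X)[2]
x2 = (P2 @ X)[:2] / (P2 @ X)[2]

# Triangulate the 3D position back from the two pixel observations.
Xh = cv2.triangulatePoints(P1, P2, x1.reshape(2, 1), x2.reshape(2, 1))
print((Xh[:3] / Xh[3]).ravel())   # ~ [0.1, 0.2, 3.0]
```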
- Published
- 2021
14. INVESTIGATION OF EFFICIENCY OF DETECTION AND RECOGNITION OF DRONE IMAGES FROM VIDEO STREAM OF STATIONARY VIDEO CAMERA
- Author
-
Oleh Zubkov, V. M. Oleynikov, S. O. Sheiko, V. M. Kartashov, and S. I. Babkin
- Subjects
Background subtraction ,law ,business.industry ,Computer science ,Pattern recognition (psychology) ,Video camera ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Drone ,law.invention - Published
- 2021
15. Video Camera Latency Analysis and Measurement
- Author
-
Sven Ubik and Jiri Pospisilik
- Subjects
Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Latency (audio) ,Video camera ,02 engineering and technology ,Signal ,law.invention ,Transmission (telecommunications) ,Timecode ,law ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Waveform ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,Latency (engineering) ,business - Abstract
All modern video cameras exhibit some latency between the scene being captured and the output video signal. This camera latency contributes significantly to the overall latency of a video network transmission chain. Some real-time applications based on video sharing require very low latency, and selecting the right camera is then crucial. We examined how video cameras behave with respect to latency and proposed three methods to measure it: the timecode view, waveform shift, and screen photodetector methods. All methods can achieve subframe resolution and a precision of approximately 1 ms, although with varying levels of automation, convenience and suitability for particular cameras. We discuss arrangements that affect measurement precision. We applied the proposed methods to a sample camera, and we show measurement results for several other cameras. The results showed that the fastest cameras provide latencies lower than 5 ms, which should be fast enough even for demanding real-time applications. However, most cameras still exhibit latencies in the range of 1–3 video frames.
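Of the three methods, the waveform-shift idea is the simplest to sketch: cross-correlate the brightness waveform shown on a reference display with the waveform recorded through the camera and read off the lag. The signals below are synthetic, and the sampling rate and delay are assumptions.

```python
import numpy as np

fs = 1000.0                                   # samples per second of the reference waveform (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)
reference = 0.5 * (1 + np.sin(2 * np.pi * 3.0 * t))     # brightness pattern shown on the screen

true_delay_ms = 42.0                          # pretend camera latency
shift = int(true_delay_ms / 1000.0 * fs)
recorded = np.roll(reference, shift) + 0.02 * np.random.default_rng(1).standard_normal(t.size)

# Cross-correlate the two waveforms and take the lag with the highest correlation.
corr = np.correlate(recorded - recorded.mean(), reference - reference.mean(), mode="full")
lag = corr.argmax() - (reference.size - 1)
print(f"estimated latency: {1000.0 * lag / fs:.1f} ms")
```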
- Published
- 2021
16. Driver drowsiness detection system using hybrid approach of convolutional neural network and bidirectional long short term memory (CNN_BILSTM)
- Author
-
S. P. Rajamohana, S. Priya, S. Sangeetha, and E. G. Radhika
- Subjects
010302 applied physics ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Video camera ,02 engineering and technology ,021001 nanoscience & nanotechnology ,Hybrid approach ,01 natural sciences ,Convolutional neural network ,Term (time) ,law.invention ,Closed state ,Long short term memory ,law ,Face (geometry) ,0103 physical sciences ,Computer vision ,Artificial intelligence ,State (computer science) ,0210 nano-technology ,business - Abstract
In today's world, driver drowsiness is a major cause of fatal road-vehicle accidents. Developing an automated, real-time drowsiness detection system is essential to provide accurate and timely alerts to the driver. The proposed system uses a hybrid approach of a CNN (convolutional neural network) and a BiLSTM (bidirectional long short-term memory network) to detect the driver's drowsiness. A video camera is used to track the driver's facial image and eye blinks. The proposed system works in three main phases. In the first phase, the driver's face is identified and observed using a web camera. In the second phase, the eye image features are extracted using the Euclidean algorithm. During the third phase, the eye blinks are continually monitored. The final stage decides whether the eye region is in a closed or open state. When the driver falls asleep, a warning message alerts the driver to prevent road accidents.
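A minimal Keras sketch of the hybrid architecture named in the title, i.e. per-frame convolutional features followed by a bidirectional LSTM over a sequence of eye-region images, is shown below. The input size, layer widths and binary drowsy/alert output are assumptions, not the authors' configuration.

```python
from tensorflow.keras import layers, models

SEQ_LEN, H, W = 20, 24, 24          # 20 consecutive eye-region frames (sizes are assumptions)

# Per-frame CNN feature extractor, applied to every frame via TimeDistributed.
frame_in = layers.Input(shape=(H, W, 1))
x = layers.Conv2D(16, 3, activation="relu")(frame_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
frame_cnn = models.Model(frame_in, layers.GlobalAveragePooling2D()(x))

seq_in = layers.Input(shape=(SEQ_LEN, H, W, 1))
y = layers.TimeDistributed(frame_cnn)(seq_in)      # CNN features for each frame
y = layers.Bidirectional(layers.LSTM(32))(y)       # temporal context over the blink sequence
out = layers.Dense(1, activation="sigmoid")(y)     # drowsy vs. alert

model = models.Model(seq_in, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```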
- Published
- 2021
17. Evaluating tool point dynamics using smartphone-based visual vibrometry
- Author
-
Pulkit Gupta and Mohit Law
- Subjects
Pixel ,business.industry ,Computer science ,Logarithmic decrement ,Modal analysis ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Video camera ,Image processing ,Frame rate ,law.invention ,law ,General Earth and Planetary Sciences ,Computer vision ,Point (geometry) ,Artificial intelligence ,Time series ,business ,ComputingMethodologies_COMPUTERGRAPHICS ,General Environmental Science - Abstract
Modern smartphones can record high megapixel videos at high frame rates. This paper leverages these capabilities and presents a new method to evaluate dynamics of a representative grooving tool using a smartphone’s video camera. Pixels within images are treated as vibration sensors measuring response to impact-based excitation. Motion is detected and tracked using image processing techniques to give pixel-displacement time series data. Spectra of these data give natural frequencies, and damping is estimated using the logarithmic decrement method. These parameters compare well with those extracted using experimental modal analysis procedures. Unscaled tool vibration shapes are visualized using motion magnification techniques.
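The two estimation steps described (peak of the displacement spectrum for the natural frequency, logarithmic decrement for damping) can be reproduced on a synthetic pixel-displacement trace; the frame rate and modal parameters below are invented.

```python
import numpy as np
from scipy.signal import find_peaks

fps = 960.0                                   # assumed high-speed smartphone frame rate
t = np.arange(0.0, 1.0, 1.0 / fps)
fn, zeta = 120.0, 0.02                        # "true" natural frequency (Hz) and damping ratio
x = np.exp(-zeta * 2 * np.pi * fn * t) * np.sin(2 * np.pi * fn * np.sqrt(1 - zeta**2) * t)

# Natural frequency from the peak of the displacement spectrum.
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(x.size, 1.0 / fps)
f_est = freqs[spec.argmax()]

# Damping from the logarithmic decrement between successive response peaks.
peaks, _ = find_peaks(x)
delta = np.mean(np.log(x[peaks][:-1] / x[peaks][1:]))
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)
print(f"f_n ~ {f_est:.1f} Hz, zeta ~ {zeta_est:.3f}")
```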
- Published
- 2021
18. Robot Position Control Using Force Information for Cooperative Work in Remote Robot Systems with Force Feedback
- Author
-
Satoru Ishikawa, Pingguo Huang, Yutaka Ishibashi, and Yuichiro Tateiwa
- Subjects
Computer science ,business.industry ,Work (physics) ,Control (management) ,Video camera ,Object (computer science) ,law.invention ,law ,Position (vector) ,Robot ,Computer vision ,Artificial intelligence ,business ,Robotic arm ,Haptic technology - Abstract
This paper proposes robot position control using force information for cooperative work between two remote robot systems with force feedback, in each of which a user operates a remote robot with a haptic interface device while observing the robot's work with a video camera. We also investigate the effect of the proposed control by experiment. As cooperative work, we deal with a task in which two robots carry an object together. The robot position control using force information finely adjusts the position of the robot arm to reduce the force applied to the object; the purpose of the control is thus to avoid large forces so that the object is not broken. In our experiment, we compare the following three cases in order to clarify how to carry out the control effectively. In the first case, the two robots are operated manually by a user with both hands. In the second case, one robot is operated manually by a user, and the other robot is moved automatically under the proposed control. In the last case, the object is carried directly by a human instead of the robot operated by the user in the second case. Experimental results demonstrate that the control can help each system operated manually by the user to carry the object smoothly.
- Published
- 2021
19. Investigation of saturation flow rate using video camera at signalized intersections in Jordan
- Author
-
Mohamad S. Al Zoubi, Bara’ W. Al-Mistarehi, and Ahmad Alomari
- Subjects
Environmental Engineering ,saturation flow rate ,0211 other engineering and technologies ,Aerospace Engineering ,Video camera ,02 engineering and technology ,traffic signal ,law.invention ,Traffic signal ,Intersection ,law ,0502 economics and business ,General Materials Science ,Computer vision ,Electrical and Electronic Engineering ,Civil and Structural Engineering ,050210 logistics & transportation ,intersection ,business.industry ,Mechanical Engineering ,05 social sciences ,turning movements ,021107 urban & regional planning ,Saturation flow rate ,Engineering (General). Civil engineering (General) ,Environmental science ,Artificial intelligence ,TA1-2040 ,business - Abstract
This study aimed to investigate a potential list of variables that may have an impact on the saturation flow rate (SFR) associated with different turning movements at signalized intersections in Jordan. Direct visits to locations were conducted, and a video camera was used. The Highway Capacity Manual standard procedure was followed to collect the necessary traffic data. Multiple linear regression was performed to identify the factors that impact the SFR and to find the optimal model for predicting the SFR. Results showed that turning radius, presence of camera enforcement, and speed limit are the significant factors that influence the SFR for shared left- and U-turning movements (LUTM), with R2 = 76.9%. Furthermore, presence of camera enforcement, number of lanes, speed limit, city, traffic volume, and area type are the factors that impact the SFR for through movements only (THMO), with R2 = 69.6%. It was also found that the SFR for LUTM is 1611 vehicles per hour per lane (VPHPL), which is less than the SFR for THMO of 1840 VPHPL. Calibration and validation of the SFR based on local conditions can improve the efficiency of infrastructure operation and planning activities, because vehicle characteristics and driver behavior change over time.
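The regression step can be sketched with scikit-learn on a hypothetical table of intersection observations; the predictor columns and numbers below are placeholders, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical observations: [turning radius (m), camera enforcement (0/1), speed limit (km/h)]
X = np.array([[15, 1, 60], [20, 0, 50], [12, 1, 40], [25, 0, 70],
              [18, 1, 50], [22, 0, 60], [14, 0, 40], [28, 1, 70]], dtype=float)
y = np.array([1650, 1580, 1520, 1720, 1630, 1690, 1490, 1760], dtype=float)  # SFR in VPHPL

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_.round(1), "intercept:", round(model.intercept_, 1))
print("R^2 on the fitted data:", round(model.score(X, y), 3))

# Predict the SFR for a new approach: 17 m radius, enforcement camera present, 50 km/h limit.
print("predicted SFR:", model.predict([[17.0, 1.0, 50.0]]).round(0))
```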
- Published
- 2020
20. Video Camera-Based Vibration Measurement for Civil Infrastructure Applications.
- Author
-
Chen, Justin G., Davis, Abe, Wadhwa, Neal, Durand, Frédo, Freeman, William T., and Büyüköztürk, Oral
- Subjects
VIBRATION measurements ,CAMCORDERS - Abstract
Visual testing, as one of the oldest methods for nondestructive testing (NDT), plays a large role in the inspection of civil infrastructure. As NDT has evolved, more quantitative techniques have emerged, such as vibration analysis. New computer vision techniques for analyzing the small motions in videos, collectively called motion magnification, have recently been developed, allowing quantitative measurement of the vibration behavior of structures from videos. Video cameras offer the benefit of long-range measurement and can collect a large amount of data at once, because each pixel is effectively a sensor. This paper presents a video camera-based vibration measurement methodology for civil infrastructure. As a proof of concept, measurements are made of an antenna tower on top of the Green Building on the campus of the Massachusetts Institute of Technology (MIT) from a distance of over 175 m, and the resonant frequency of the antenna tower on the roof is identified with an amplitude of 0.21 mm, which was less than 1/170th of a pixel. Methods for improving the noise floor of the measurement are discussed, especially motion compensation and the effects of video downsampling, and suggestions are given for implementing the methodology into a structural health monitoring (SHM) scheme for existing and new structures.
- Published
- 2017
21. Automatic quality evaluation of parade by variance of postures of a platoon on single video camera logs
- Author
-
Hiroshi Sato, Yohei Okugawa, and Masao Kubo
- Subjects
Computer science ,business.industry ,media_common.quotation_subject ,Deep learning ,Swarm behaviour ,Robotics ,Video camera ,GeneralLiterature_MISCELLANEOUS ,General Biochemistry, Genetics and Molecular Biology ,law.invention ,Artificial Intelligence ,law ,Parade ,Quality (business) ,Platoon ,Computer vision ,Artificial intelligence ,business ,Pose ,media_common - Abstract
A highly synchronized parade can impress audiences with how strong the troops appear, but training for a good parade is difficult because of its complex collective behavior. However, there has been no scientific research on which factors are important for training and producing a good parade. One of the bottlenecks for a scientific approach is the difficulty of measuring the quality of a group, as in other swarm research. In this paper, we measured the posture data of members of a parade with OpenPose, a cutting-edge deep-learning pose estimation technology. Based on this measurement, we propose a numerical evaluation of the quality of the parade, which is confirmed by a questionnaire. In conclusion, our evaluation method is applicable for quantitative evaluation, and the results suggest that the variation level of the arm swing angles is related to the quality of the parade. This paper is based on the paper presented at the proceedings of the 3rd International Symposium on Swarm Behavior and Bio-Inspired Robotics [1].
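The proposed score, the variation of arm-swing angles across platoon members, can be sketched from OpenPose-style keypoints. The keypoint layout and the use of the shoulder-to-elbow angle are illustrative assumptions.

```python
import numpy as np

def arm_swing_angle(shoulder, elbow):
    """Angle of the upper arm relative to vertical, from 2D keypoints in image coordinates."""
    dx, dy = elbow[0] - shoulder[0], elbow[1] - shoulder[1]
    return np.degrees(np.arctan2(dx, dy))     # 0 deg = arm hanging straight down (y grows downward)

def parade_quality(members):
    """Lower variance of arm angles across members = better-synchronized platoon."""
    angles = [arm_swing_angle(m["shoulder"], m["elbow"]) for m in members]
    return float(np.var(angles))

# One video frame, three members (made-up pixel coordinates).
frame = [
    {"shoulder": (100, 200), "elbow": (112, 238)},
    {"shoulder": (220, 202), "elbow": (233, 240)},
    {"shoulder": (340, 199), "elbow": (349, 241)},
]
print(f"arm-angle variance: {parade_quality(frame):.2f} deg^2")
```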
- Published
- 2020
22. Vision-Based Moving Mass Detection by Time-Varying Structure Vibration Monitoring
- Author
-
Qingbo He, Zhike Peng, Wen-Ming Zhang, Zhen Liu, and Zhanwei Li
- Subjects
Computer science ,business.industry ,010401 analytical chemistry ,Video camera ,Fault (power engineering) ,01 natural sciences ,Automation ,Finite element method ,0104 chemical sciences ,law.invention ,Vibration ,law ,Vibration measurement ,Computer vision ,Structural health monitoring ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Focus (optics) ,Instrumentation ,Beam (structure) - Abstract
In recent years, vision-based vibration measurement methods have been widely used in the field of mechanical engineering for structural health monitoring and fault diagnosis. However, these current methods mainly focus on monitoring the stationary vibrations of time-invariant systems or structures and cannot deal with time-varying conditions. In this paper, a video camera is used to monitor the non-stationary vibrations of a time-varying structure for an unexplored usage of moving mass detection. In the study, the laboratory time-varying structure is a clamped beam with one or more masses sliding on it. The general parameterized time-frequency transform is first introduced to extract the time-dependent instantaneous frequencies (IFs) from the video motions. Then a parameterized mathematical model is proposed to estimate the weight of the moving mass based on the extracted IFs. By optimizing the parameters in this model, the moving mass can be estimated with high precision. In addition, multiple moving masses can also be detected for abnormal mass classification and identification. Both the finite element method (FEM) and experiments are used to demonstrate the performance of the proposed vision-based moving mass detection (VMMD) technique. The VMMD technique has valuable potential for online detection, localization and classification of inferior products with abnormal mass in industrial automation.
- Published
- 2020
23. Person Tracking and Reidentification for Multicamera Indoor Video Surveillance Systems
- Author
-
Shiping Ye, I. Yu. Zakharava, Rykhard Bohush, Huafeng Chen, and Sergey Ablameyko
- Subjects
Channel (digital image) ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Video camera ,02 engineering and technology ,01 natural sciences ,Computer Graphics and Computer-Aided Design ,Convolutional neural network ,law.invention ,010309 optics ,law ,Spatial reference system ,Histogram ,Face (geometry) ,0103 physical sciences ,Pattern recognition (psychology) ,0202 electrical engineering, electronic engineering, information engineering ,Human multitasking ,020201 artificial intelligence & image processing ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business - Abstract
The practical relevance of indoor surveillance with multiple cameras for tracking the movement of people and reidentifying them in video sequences is constantly increasing. This is a complex task due to the effects of uneven illumination, background inhomogeneity, overlap, uncertainty of people's trajectories, and the similarity of their visual features. The article presents an approach to track people in video sequences and reidentify them in multicamera video surveillance systems used indoors. In the first step, people are detected using a YOLO v4 convolutional neural network (CNN) and described by a rectangular area. Next, the face area is located and its features are computed; in the developed method, these are used when tracking a person within a video sequence and during intercamera reidentification. This approach improves tracking accuracy under complex movement trajectories and multiple intersections of people with similar characteristics. Faces are searched for within the detected areas using the multitask MTCNN, and the MobileFaceNetwork model is used to form the face feature vector. Human features are generated using a modified CNN based on ResNet34 and an HSV color tone channel histogram. The correspondence between people in different frames is established based on the analysis of the spatial coordinates of faces and people, as well as their CNN features, using the Hungarian algorithm. To ensure the accuracy of intercamera tracking, reidentification is performed based on the facial features. Five test video sequences with different numbers of people, captured indoors with a fixed video camera, were used to test and compare different approaches. The experimental results confirmed the effectiveness of the proposed approach.
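The assignment step, matching detections between frames by combining spatial and appearance distances and solving with the Hungarian algorithm, can be sketched with SciPy. The equal weighting of the two distances is an assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_detections(prev_boxes, prev_feats, cur_boxes, cur_feats, w_spatial=0.5):
    """Return (previous index, current index) pairs that minimize a combined cost."""
    spatial = cdist(prev_boxes, cur_boxes)
    spatial = spatial / (spatial.max() + 1e-9)                 # normalize pixel distances to [0, 1]
    appearance = cdist(prev_feats, cur_feats, metric="cosine")
    cost = w_spatial * spatial + (1.0 - w_spatial) * appearance
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

rng = np.random.default_rng(0)
prev_boxes = np.array([[100.0, 200.0], [400.0, 210.0]])        # box centres in the previous frame
cur_boxes = np.array([[405.0, 215.0], [103.0, 198.0]])         # the same people in shuffled order
prev_feats = rng.normal(size=(2, 128))
cur_feats = prev_feats[[1, 0]] + 0.01 * rng.normal(size=(2, 128))
print(match_detections(prev_boxes, prev_feats, cur_boxes, cur_feats))   # [(0, 1), (1, 0)]
```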
- Published
- 2020
24. Signal processing for direction finding and range determining to small UAVs in the optical and infrared ranges
- Subjects
business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Video camera ,General Medicine ,Video processing ,Tracking (particle physics) ,law.invention ,Azimuth ,Stereopsis ,Approximation error ,law ,Calibration ,Computer vision ,Artificial intelligence ,business - Abstract
Detection and estimation of UAV coordinates are critical for protecting against their unauthorized use in protected areas. The paper considers the problem of choosing the algorithm and parameters of stereo-pair video processing in the visible, near-infrared and far-infrared ranges for reliable determination of small UAV coordinates, their subsequent tracking, and evaluation of UAV motion parameters. A theoretical analysis of the capabilities of the optical method for two-channel stereo-video observation is carried out. The paper presents the results of field experiments aimed at determining the coordinates of a small UAV (DJI Phantom 4) using a stereo-video observation system based on IP cameras. The external and internal parameters of the stereo-video observation system were calibrated taking into account the nonlinear distortions of the lenses. The cameras were calibrated in OpenCV using a function based on the Zhang and Bouguet methods. The theoretical and practical errors in measuring the range to test objects at different positions were determined. An algorithm for processing the images of the stereo-video observation system for detection, recognition and measurement of UAV coordinates is described. The results of measurements of UAV coordinates for two test flights are presented. The true coordinates of the UAV were measured using the data of the onboard GPS receiver. The azimuth and elevation of the UAV measured by the stereo-video observation system agreed well with the GPS data, which can be explained by the high resolution of the cameras and the precise calibration of their internal parameters. The root-mean-square relative error in measuring the range was about 10%. Ways of improving the accuracy of UAV stereo-video observation systems are shown.
- Published
- 2020
25. VR‐based dataset for autonomous‐driving system
- Author
-
Yu Wang, Shouwen Yao, and Jiahao Zhang
- Subjects
0209 industrial biotechnology ,Pixel ,Positioning system ,Computer science ,business.industry ,Feature extraction ,General Engineering ,Driving simulator ,Energy Engineering and Power Technology ,Video camera ,02 engineering and technology ,Virtual reality ,Object (computer science) ,law.invention ,020901 industrial engineering & automation ,law ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,Software ,Block (data storage) - Abstract
At present, visual recognition systems are widely employed in the autonomous-driving area. The lack of fully featured benchmarks that mimic the scenarios faced by autonomous-driving systems is the core factor limiting the visual understanding of complex urban traffic scenes. However, establishing a dataset that adequately captures the complexity of real-world urban traffic consumes considerable time and effort. To overcome these difficulties, the authors use virtual reality to develop a large-scale dataset for training and testing approaches for autonomous-driving vehicles. Using the object labels in the virtual scenes, the coordinate transformation of a 3D object to the 2D image plane is calculated, which makes the label of the pixel block corresponding to the object in the 2D plane accessible. The recording platform is equipped with video camera models, a LiDAR model and a positioning system. By using a pilot-in-the-loop method with driving simulator hardware and VR devices, the authors acquire and establish a large, diverse dataset comprising stereo video sequences recorded on streets and mountain roads in several different environments. This pioneering use of VR technology significantly reduces the cost of acquiring training data. Crucially, the effort exceeds previous attempts in terms of dataset size, scene variability and complexity.
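The 3D-to-2D coordinate transformation mentioned above follows the standard pinhole model, p ~ K [R | t] P. The intrinsics and pose below are arbitrary example values, not the simulator's camera model.

```python
import numpy as np

K = np.array([[1000.0, 0.0, 960.0],    # fx, 0, cx  (assumed intrinsics for a 1920x1080 image)
              [0.0, 1000.0, 540.0],    # 0, fy, cy
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # camera aligned with the world axes
t = np.zeros(3)

def project(point_world):
    """World point (metres) -> pixel coordinates via the pinhole model."""
    p_cam = R @ point_world + t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# A labeled object 20 m in front of the camera, offset from the optical axis.
print(project(np.array([-2.0, 1.0, 20.0])))    # -> pixel (u, v) ~ (860, 590)
```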
- Published
- 2020
26. Solution of problem of returning to the starting point of autonomously flying UAV by visual navigation
- Author
-
R. S. Zhuk, B. A. Zalesky, and Ph. S. Trotski
- Subjects
0209 industrial biotechnology ,Quadcopter ,Computer science ,autonomous visual navigation ,navigation signals ,Video camera ,02 engineering and technology ,Simultaneous localization and mapping ,law.invention ,03 medical and health sciences ,020901 industrial engineering & automation ,0302 clinical medicine ,visual odometry ,law ,Computer vision ,Visual odometry ,return to the starting point ,Inertial navigation system ,business.industry ,QA75.5-76.95 ,Electronic computers. Computer science ,030220 oncology & carcinogenesis ,Global Positioning System ,GLONASS ,Artificial intelligence ,unmanned aerial vehicles ,business ,Geographic coordinate system - Abstract
An autonomous visual navigation algorithm is considered, designed to return an unmanned aerial vehicle (UAV) equipped with an on-board video camera and on-board computer "home" in the absence of GPS and GLONASS navigation signals. The proposed algorithm is similar to well-known visual navigation algorithms such as V-SLAM (simultaneous localization and mapping) and visual odometry; however, it differs in that the mapping and localization processes are implemented separately. It calculates the geographical coordinates of the features in the frames taken by the on-board video camera during the flight from the starting point until the moment GPS and GLONASS signals are lost. After the signal is lost, the return mission is launched, which estimates the position of the UAV relative to the map created from the previously found features. The proposed approach does not require calculations as complex as those of V-SLAM and does not accumulate errors over time, in contrast to visual odometry and traditional inertial navigation methods. The algorithm was implemented and tested using a DJI Phantom 3 Pro quadcopter.
- Published
- 2020
27. Non-invasive high-speed blinking kinematics characterization
- Author
-
Santiago García-Lázaro, Cristian Talens-Estarelles, Álvaro Pons, José J. Esteve-Taboada, and Vicent Sanchis-Jurado
- Subjects
genetic structures ,Image processing ,Video camera ,Kinematics ,law.invention ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,0302 clinical medicine ,law ,Image Processing, Computer-Assisted ,Humans ,Computer vision ,Closing (morphology) ,Mathematics ,Blinking ,business.industry ,Non invasive ,Frame rate ,Sensory Systems ,Biomechanical Phenomena ,Ophthalmology ,Research Design ,Fixation (visual) ,030221 ophthalmology & optometry ,Artificial intelligence ,business ,030217 neurology & neurosurgery - Abstract
The objective of this study was to analyze the differences in blinking kinematics of spontaneous and voluntary blinks using for the first time a self-developed, non-invasive, and image processing–based method. The blinks of 30 subjects were recorded for 1 min with the support of an eye-tracking device based on a high-speed infrared video camera, working at 250 frames per second, under two different experimental conditions. For the first condition, subjects were ordered to look in the straightforward position at a fixation target placed 1 m in front of them, with no further instructions. For the second, subjects were additionally asked to blink only following a sound signal every 6 s. Mean complete blinks increased by a factor of 1.7 from the spontaneous to the voluntary condition while mean incomplete blinks reduced significantly by a factor of 0.4. In both conditions, closing mean and peak velocities were always significantly greater and durations significantly lower than opening ones. When comparing the values for each condition, velocities and amplitudes for the voluntary condition were always greater than the corresponding values for spontaneous. Voluntary blinks revealed significant kinematic differences compared to spontaneous, thus supporting a different supranuclear pathway organization. This study presents a new method, based on image analysis, for the non-invasive kinematic characterization of blinking.
- Published
- 2020
28. Simultaneous Utilization of Inertial and Video Sensing for Action Detection and Recognition in Continuous Action Streams
- Author
-
Nasser Kehtarnavaz and Haoran Wei
- Subjects
Modality (human–computer interaction) ,Inertial frame of reference ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Wearable computer ,Video camera ,Image segmentation ,Convolutional neural network ,law.invention ,Action (philosophy) ,law ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Instrumentation ,Gesture - Abstract
This paper describes the simultaneous utilization of inertial and video sensing for the purpose of achieving human action detection and recognition in continuous action streams. Continuous action streams mean that actions of interest are performed randomly among actions of non-interest in a continuous manner. The inertial and video data are captured simultaneously via a wearable inertial sensor and a video camera, which are turned into 2D and 3D images. These images are then fed into a 2D and a 3D convolutional neural network with their decisions fused in order to detect and recognize a specified set of actions of interest from continuous action streams. The developed fusion approach is applied to two sets of actions of interest consisting of smart TV gestures and sports actions. The results obtained indicate the fusion approach is more effective than when each sensing modality is used individually. The average accuracy of the fusion approach is found to be 5.8% above inertial and 14.3% above video for the TV gesture actions of interest, and 23.2% above inertial and 1.9% above video for the sports actions of interest.
- Published
- 2020
29. Determination of coordinates of terrestrial targets with the use of small UAVs on the basis of an improved pseudolongdimensional method
- Author
-
М. V. Burdeinyi, S. I. Stehura, О. V. Maistrenko, and S. V. Stetsiv
- Subjects
Basis (linear algebra) ,business.industry ,Computer science ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Video camera ,Terrain ,Least squares ,law.invention ,law ,Range (statistics) ,General Materials Science ,Point (geometry) ,Satellite navigation ,Computer vision ,Artificial intelligence ,business ,Scale (map) - Abstract
This article justifies the feasibility of using an improved pseudolongdimensional method to determine the coordinates of terrestrial targets with small-scale unmanned aerial vehicles (UAVs); the method identifies the coordinates of an object on the ground in real time without requiring powerful computing equipment. Existing methods for determining the coordinates of terrestrial objects using unmanned aerial vehicles are analyzed, and an algorithm for determining them based on an improved pseudolongdimensional method using small-scale unmanned aerial vehicles is proposed. Existing unmanned aerial vehicles determine the coordinates of terrestrial objects by comparing the image obtained from a video camera mounted on the vehicle with an electronic map of the territory under observation. The coordinates of a point on the terrain are determined once the maximum coincidence of the specified images is reached. The accuracy and timing of determining the coordinates of the point depend on the quality of the image, the number of images, the viewing angle and other factors. Based on an analysis of how unmanned aerial vehicles identify the coordinates of a point on the terrain, an algorithm for determining the coordinates of terrestrial targets using small-scale unmanned aerial vehicles based on an improved pseudolongdimensional method was proposed. The pseudolongdimensional method is used in modern satellite navigation receivers and is solved using methods such as least squares and successive approximations. The accuracy analysis shows that the errors in determining the coordinates of the target by the proposed method depend only on the accuracy of determining the current coordinates of the small-scale unmanned aerial vehicle at the points where the pictures were taken, and on the accuracy of measuring the corresponding ranges to the target. Determining the coordinates of a target does not require large amounts of memory or powerful computing resources.
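The least-squares step, estimating a ground target's position from the UAV's known positions at several shots and the measured ranges to the target, can be sketched with SciPy. The positions, the flat-ground (z = 0) assumption and the noise level below are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
target = np.array([250.0, 180.0, 0.0])                       # unknown ground target (local metres)
uav_positions = np.array([[0.0, 0.0, 120.0], [150.0, 40.0, 130.0],
                          [300.0, -30.0, 125.0], [200.0, 220.0, 118.0]])
ranges = np.linalg.norm(uav_positions - target, axis=1) + rng.normal(0.0, 2.0, 4)   # noisy ranges

def residuals(p):
    """Measured ranges minus the ranges implied by candidate position p (z fixed at 0)."""
    cand = np.array([p[0], p[1], 0.0])
    return np.linalg.norm(uav_positions - cand, axis=1) - ranges

sol = least_squares(residuals, x0=[0.0, 0.0])
print("estimated target position:", sol.x.round(1))           # ~ [250. 180.]
```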
- Published
- 2020
30. Object tracking algorithm by moving video camera
- Subjects
Rest (physics) ,Pixel ,business.industry ,BitTorrent tracker ,Computer science ,Frame (networking) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020206 networking & telecommunications ,Video camera ,02 engineering and technology ,Object (computer science) ,law.invention ,Data set ,Parallelepiped ,law ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business - Abstract
The ACT (Adaptive Color Tracker) algorithm for tracking objects with a moving video camera is presented. One of the features of the algorithm is the adaptation of the feature set of the tracked object to the background of the current frame. At each step, the algorithm extracts from the object features those that are most specific to the object and at the same time least specific to the current frame background, since the remaining object features not only do not contribute to separating the tracked object from the background, but also impede its correct detection. The features of the object and background are formed from color representations of the scenes and can be computed in two ways. The first uses 3D color vectors of the clustered images of the object and the background, obtained with a fast version of the well-known k-means algorithm. The second is a simpler and faster partitioning of the RGB color space into 3D parallelepipeds, with the color of each pixel then replaced by the average of all colors belonging to the same parallelepiped as the pixel's color. Another characteristic of the algorithm is its simplicity, which allows it to be used on small mobile computers, such as the Jetson TX1 or TX2. The algorithm was tested on video sequences captured by various camcorders, as well as on the well-known TV77 data set, containing 77 different tagged video sequences. The tests have shown the efficiency of the algorithm. On the test images, its accuracy and speed exceed those of the trackers implemented in the computer vision library OpenCV 4.1.
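The second, faster feature-building option described (partitioning the RGB space into parallelepipeds and keeping colours that are frequent in the object but rare in the background) can be sketched as below. The bin size and selection rule are assumptions, not the ACT parameters.

```python
import numpy as np

def quantize(rgb_pixels, step=32):
    """Map each pixel to the index of its RGB parallelepiped (uniform bins of size `step`)."""
    bins = rgb_pixels // step
    n = 256 // step
    return bins[:, 0] * n * n + bins[:, 1] * n + bins[:, 2]

def discriminative_cells(object_pixels, background_pixels, step=32, margin=2.0):
    """Keep colour cells at least `margin` times more frequent in the object than in the background."""
    n_cells = (256 // step) ** 3
    h_obj = np.bincount(quantize(object_pixels, step), minlength=n_cells).astype(float)
    h_bg = np.bincount(quantize(background_pixels, step), minlength=n_cells).astype(float)
    h_obj /= h_obj.sum()
    h_bg /= h_bg.sum()
    return np.flatnonzero(h_obj > margin * (h_bg + 1e-6))

rng = np.random.default_rng(3)
obj = rng.integers(180, 255, size=(500, 3))     # pixels of a bright object
bg = rng.integers(0, 120, size=(5000, 3))       # pixels of a darker background
print("discriminative colour cells:", discriminative_cells(obj, bg))
```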
- Published
- 2020
31. Automation of the Timed-Up-and-Go Test Using a Conventional Video Camera
- Author
-
Mary E. Kaye, Erik Scheme, Patrick Savoie, and James A. D. Cameron
- Subjects
Adult ,Male ,Adolescent ,Heuristic (computer science) ,Computer science ,Video Recording ,Video camera ,Image processing ,02 engineering and technology ,01 natural sciences ,law.invention ,Automation ,Young Adult ,Deep Learning ,Health Information Management ,Match moving ,law ,Image Processing, Computer-Assisted ,0202 electrical engineering, electronic engineering, information engineering ,Humans ,Computer vision ,Electrical and Electronic Engineering ,Ground truth ,business.industry ,Deep learning ,010401 analytical chemistry ,0104 chemical sciences ,Computer Science Applications ,Exercise Test ,RGB color model ,Female ,020201 artificial intelligence & image processing ,Artificial intelligence ,Gait Analysis ,business ,Algorithms ,Biotechnology - Abstract
The Timed-Up-and-Go (TUG) test is a simple clinical tool commonly used to quickly assess the mobility of patients. Researchers have endeavored to automate the test using sensors or motion tracking systems to improve its accuracy and to extract more resolved information about its sub-phases. While some approaches have shown promise, they often require the donning of sensors or the use of specialized hardware, such as the now discontinued Microsoft Kinect, which combines video information with depth sensors (RGBD). In this work, we leverage recent advances in computer vision to automate the TUG test using a regular RGB video camera without the need for custom hardware or additional depth sensors. Thirty healthy participants were recorded using a Kinect V2 and a standard video feed while performing multiple trials of 3 and 1.5 meter versions of the TUG test. A Mask Regional Convolutional Neural Net (R-CNN) algorithm and a Deep Multitask Architecture for Human Sensing (DMHS) were then used together to extract global 3D poses of the participants. The timing of transitions between the six key movement phases of the TUG test were then extracted using heuristic features extracted from the time series of these 3D poses. The proposed video-based vTUG system yielded the same error as the standard Kinect-based system for all six key transitions points, and average errors of less than 0.15 seconds from a multi-observer hand labeled ground truth. This work describes a novel method of video-based automation of the TUG test using a single standard camera, removing the need for specialized equipment and facilitating the extraction of additional meaningful information for clinical use.
- Published
- 2020
- Full Text
- View/download PDF
32. Subpixel Motion Detection Using a Commercial Video Camera
- Author
-
Minas Benyamin and Geoffrey H. Goldman
- Subjects
Pixel ,Physics::Instrumentation and Detectors ,Bar (music) ,Computer science ,business.industry ,Monte Carlo method ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Video camera ,Motion detection ,Subpixel rendering ,Constant false alarm rate ,law.invention ,law ,Position (vector) ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Instrumentation - Abstract
Performance metrics for detecting subpixel motion are calculated for a commercial video camera using image difference data processed with three Neyman–Pearson-based algorithms. High-signal-to-noise-ratio data are collected and analyzed for a thin black bar that slowly oscillates against a white background. The position and velocity of the bar are estimated using Fourier-based processing techniques. The probability of detecting subpixel motion as a function of false alarm rate, number of pixels tested, subpixel shift, and detection algorithm are calculated with Monte Carlo simulations using the experimental data. The results characterize the best performance curves for detecting subpixel motion for most commercial video cameras and targets.
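A minimal sketch of a Neyman–Pearson-style test on frame-difference data, assuming zero-mean Gaussian pixel noise with known standard deviation and a Bonferroni split of the false alarm budget over the tested pixels; this is a generic formulation, not one of the three algorithms evaluated in the paper.

```python
# Generic Neyman-Pearson-style motion test on an image difference: threshold set
# from the noise statistics to meet a per-frame false alarm budget.
import numpy as np
from scipy.stats import norm

def detect_motion(frame_a: np.ndarray, frame_b: np.ndarray,
                  noise_sigma: float, frame_pfa: float = 1e-3) -> bool:
    diff = frame_b.astype(np.float64) - frame_a.astype(np.float64)
    per_pixel_pfa = frame_pfa / diff.size              # split budget over tested pixels
    # Difference of two noisy pixels has standard deviation sqrt(2) * sigma.
    threshold = norm.ppf(1.0 - per_pixel_pfa / 2.0) * np.sqrt(2.0) * noise_sigma
    return bool(np.any(np.abs(diff) > threshold))

rng = np.random.default_rng(0)
a = rng.normal(100.0, 2.0, size=(64, 64))
b = a + rng.normal(0.0, 2.0, size=(64, 64))
print(detect_motion(a, b, noise_sigma=2.0))            # noise only -> almost surely False
b[30:34, 10:20] += 25.0                                # simulate a moved edge
print(detect_motion(a, b, noise_sigma=2.0))            # -> True
```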
- Published
- 2020
- Full Text
- View/download PDF
33. Improved traffic sign recognition for in-car cameras
- Author
-
Van Luan Tran, Jian-He Shi, Huei-Yung Lin, and Chin-Chen Chang
- Subjects
0209 industrial biotechnology ,Artificial neural network ,business.industry ,Computer science ,020208 electrical & electronic engineering ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,General Engineering ,Video camera ,02 engineering and technology ,law.invention ,Support vector machine ,020901 industrial engineering & automation ,Histogram of oriented gradients ,law ,0202 electrical engineering, electronic engineering, information engineering ,Traffic sign recognition ,Computer vision ,Artificial intelligence ,business - Abstract
We propose an enhanced approach for recognizing traffic signs; images are captured using a video camera installed in a car. Support vector machines detect the edges and gradients of each captured i...
- Published
- 2020
- Full Text
- View/download PDF
34. Color-based registration of point cloud data by video camera for electromagnetic simulation
- Author
-
Kentaro Saito, Yukiko Kishiki, Jun-ichi Takada, Wataru Okamura, and Zhihang Chen
- Subjects
law ,Computer science ,business.industry ,Point cloud ,Video camera ,Computer vision ,Artificial intelligence ,business ,Electromagnetic simulation ,law.invention - Published
- 2020
- Full Text
- View/download PDF
35. The algorithm development for operation of a computer vision system via the OpenCV library
- Author
-
Sergey Vasiliev, Stepan Sivkov, Marat Valitov, Galina Romanova, Leonid Novikov, Anastasia Romanova, and Denis Vaganov
- Subjects
Calibration (statistics) ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020206 networking & telecommunications ,Video camera ,Image processing ,02 engineering and technology ,law.invention ,Development (topology) ,Register (music) ,law ,0202 electrical engineering, electronic engineering, information engineering ,General Earth and Planetary Sciences ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,Algorithm ,General Environmental Science - Abstract
The article is devoted to an effective way of capturing images of the information board of a counting register for utility metering. The method is based on the open-source library OpenCV. The image processing algorithm is described. Calibration of the video camera allows us to reduce the error probability in text recognition. Some results of the experiment are also given.
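The calibration step can be sketched with standard OpenCV calls, estimating the camera intrinsics from chessboard views and undistorting frames before recognition; the board geometry and file names below are placeholders, not the authors' setup.

```python
# Chessboard calibration and undistortion with OpenCV, as a sketch of the
# calibration step; all file names and the board size are placeholders.
import glob
import cv2
import numpy as np

BOARD = (9, 6)                                  # inner corners of the chessboard

def calibrate(image_paths):
    objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)
    obj_pts, img_pts, size = [], [], None
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, BOARD)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist

if __name__ == "__main__":
    K, dist = calibrate(glob.glob("calib_*.png"))   # placeholder calibration images
    frame = cv2.imread("meter_frame.png")           # placeholder frame of the register
    undistorted = cv2.undistort(frame, K, dist)     # input to the text recognition step
```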
- Published
- 2020
- Full Text
- View/download PDF
36. Surgical Phase Recognition with Wearable Video Camera for Computer-aided Orthopaedic Surgery-AI Navigation System
- Author
-
Syoji Kobashi, Shoichi Nishio, Takafumi Hiranaka, Naomi Yagi, Belayat Hossain, and Manabu Nii
- Subjects
medicine.medical_specialty ,business.industry ,Computer science ,Deep learning ,Wearable computer ,Navigation system ,Video camera ,Phase (combat) ,law.invention ,law ,Orthopedic surgery ,medicine ,Computer-aided ,Computer vision ,Artificial intelligence ,business - Published
- 2020
- Full Text
- View/download PDF
37. Image-based system and artificial neural network to automate a quality control system for cherries pitting process
- Author
-
Stefano Guarino, Flaviana Tagliaferri, Vincenzo Tagliaferri, Daniele Almonti, Gabriele Baiocco, Nadia Ucciardello, Roberto Teti, Doriana M. D'Addona, Baiocco, G., Almonti, D., Guarino, S., Tagliaferri, F., Tagliaferri, V., and Ucciardello, N.
- Subjects
0209 industrial biotechnology ,Process automation ,Computer science ,Video camera ,02 engineering and technology ,010501 environmental sciences ,01 natural sciences ,law.invention ,Reduction (complexity) ,020901 industrial engineering & automation ,law ,Histogram ,Computer vision ,MATLAB ,0105 earth and related environmental sciences ,General Environmental Science ,computer.programming_language ,Artificial neural network ,business.industry ,Feed forward ,Process (computing) ,Quality control ,Settore ING-IND/16 ,Neural network ,General Earth and Planetary Sciences ,Image analysi ,Artificial intelligence ,State (computer science) ,business ,computer - Abstract
This work proposes a non-destructive quality control method for a cherry pitting process. A system composed of a video camera and a light source records pictures of backlit cherries. Image processing in the MATLAB environment provides dynamic histograms of the pictures, which are analysed to determine whether a pit is present. A feedforward artificial neural network was implemented and trained with the histograms obtained. The network allows fast detection of stone fragments not visible to human inspection and reduces the accidental rejection of properly manufactured products.
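The histogram-plus-feedforward-network idea can be sketched with scikit-learn in place of MATLAB; the histogram length, hidden-layer size, and the synthetic stand-in data below are assumptions for illustration only.

```python
# Histogram features from backlit images fed to a small feedforward classifier;
# random arrays stand in for real cherry images and pit labels.
import numpy as np
from sklearn.neural_network import MLPClassifier

def backlit_histogram(gray: np.ndarray, bins: int = 64) -> np.ndarray:
    """Normalized intensity histogram of one backlit image (HxW uint8)."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 120, 160), dtype=np.uint8)   # placeholder images
labels = rng.integers(0, 2, size=40)                                  # 1 = pit present

X = np.vstack([backlit_histogram(img) for img in images])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, labels)
print(clf.predict(backlit_histogram(images[0]).reshape(1, -1)))       # 0 or 1
```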
- Published
- 2020
- Full Text
- View/download PDF
38. Modal analysis of machine tools using visual vibrometry and output-only methods
- Author
-
Mohit Law, Suparno Mukhopadhyay, and Pulkit Gupta
- Subjects
0209 industrial biotechnology ,business.product_category ,Pixel ,Bar (music) ,Computer science ,business.industry ,Mechanical Engineering ,Modal analysis ,Video camera ,02 engineering and technology ,Industrial and Manufacturing Engineering ,Machine tool ,law.invention ,Visualization ,Vibration ,020303 mechanical engineering & transports ,020901 industrial engineering & automation ,Modal ,0203 mechanical engineering ,law ,Computer vision ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
This paper presents visual vibrometry as a new video camera-based vibration measuring technique for machine tools. Pixels within images from recordings of the vibrating machine are treated as motion sensors to detect and track vibrating edges. Modal parameters are extracted from the measured pixel-displacement time series data using output-only mass-change methods. Methods are experimentally demonstrated on a small milling machine, on a slender boring bar, and on a regular end mill. Modal parameters extracted using visual vibrometry agree with those extracted using experimental modal analysis procedures. Moreover, visual vibrometry aids shape visualization and maintains advantages of other non-contact measurement techniques.
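A bare-bones illustration of the underlying idea (the full method tracks subpixel edge displacements and applies output-only modal identification): take the intensity time series at a pixel on a vibrating edge and read the dominant frequency from its spectrum. The frame rate and the synthetic signal below are assumptions.

```python
# Dominant-frequency readout from a pixel intensity time series, as a proxy for
# edge vibration; this is a simplification of visual vibrometry for illustration.
import numpy as np

def dominant_frequency(pixel_series: np.ndarray, fps: float) -> float:
    x = pixel_series - pixel_series.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return float(freqs[np.argmax(spectrum[1:]) + 1])    # skip the DC bin

# Synthetic 85 Hz vibration sampled at 1000 fps (a high-speed camera assumption).
t = np.arange(0.0, 1.0, 1.0 / 1000.0)
series = 128.0 + 5.0 * np.sin(2.0 * np.pi * 85.0 * t)
print(dominant_frequency(series, fps=1000.0))            # ~85.0
```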
- Published
- 2020
- Full Text
- View/download PDF
39. Robust Imaging Photoplethysmography in Long-Distance Motion
- Author
-
Yuejin Zhao, Mei Hui, Liquan Dong, Wu Yuheng, Ming Liu, Lingqin Kong, and Xiaohua Liu
- Subjects
lcsh:Applied optics. Photonics ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Video camera ,Signal ,Motion (physics) ,law.invention ,imaging blur ,law ,Photoplethysmogram ,lcsh:QC350-467 ,Personal health ,Computer vision ,Electrical and Electronic Engineering ,Zoom lens ,business.industry ,Pulse (signal processing) ,lcsh:TA1501-1820 ,long-distance motion ,Atomic and Molecular Physics, and Optics ,Intensity (physics) ,Artificial intelligence ,non-contact heart rate monitoring ,business ,lcsh:Optics. Light ,Imaging photoplethysmography - Abstract
Imaging photoplethysmography (IPPG) enables contactless monitoring of physiological parameters using a regular video camera, which has led to increasing interest in video-based health monitoring. However, classical IPPG is unable to accurately measure heart rate during long-distance motion. In this paper, an IPPG heart rate detection framework is proposed based on an adaptive-zoom system called adaptive-zoom IPPG (AZIPPG). It uses an automatic zoom lens to maintain the size of the sensitive area during long-distance motion. The data gathered from AZIPPG were compared with the output of blood volume pulse (BVP) devices and a classical IPPG system. The experimental results showed that AZIPPG achieved high accuracy and correlation in different environments featuring subjects engaged in static and long-distance walking tasks. Although AZIPPG eliminates dramatic changes in the ROI, it also introduces image blurring. The theoretical and experimental results presented here indicate that defocused image blurring does not significantly affect the IPPG intensity signal. The pulse data extracted from AZIPPG were comparable to those obtained from the PPG signal of a conventional BVP device. These findings may potentially advance progress in personal health care and telemedicine.
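The basic IPPG step (without AZIPPG's adaptive-zoom control) can be sketched as follows: average the green channel over a skin region per frame, restrict to a plausible heart-rate band, and take the spectral peak; the band limits, frame rate, and synthetic signal are assumptions.

```python
# Generic IPPG heart-rate estimate from a per-frame ROI green-channel mean.
import numpy as np

def heart_rate_bpm(roi_green_means: np.ndarray, fps: float,
                   lo_hz: float = 0.7, hi_hz: float = 3.0) -> float:
    x = roi_green_means - roi_green_means.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)            # ~42-180 beats per minute
    return float(freqs[band][np.argmax(spectrum[band])] * 60.0)

# Synthetic 10 s recording at 30 fps with a 1.2 Hz (72 BPM) pulse component.
t = np.arange(0.0, 10.0, 1.0 / 30.0)
signal = 100.0 + 0.5 * np.sin(2.0 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
print(heart_rate_bpm(signal, fps=30.0))                   # ~72
```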
- Published
- 2020
40. The use of video camera to create metric 3D model of engineering objects
- Author
-
T. Lipecki and Nicholas Batakanwa
- Subjects
business.industry ,law ,Computer science ,Metric (mathematics) ,Computer vision ,Video camera ,3d model ,Artificial intelligence ,business ,law.invention - Abstract
The article presents the possibilities of using a non-metric mobile phone video camera to create a metric 3D model of engineering objects using Agisoft and CloudCompare software. Traditional photogrammetry does not always keep up with the production urgency demanded by the market; for large objects, its complexity raises the cost, time, and tediousness of the work. The applied video camera technique, termed here videogrammetry, is comparable to taking pictures but speeds up the data acquisition process, which in many cases is a key element of any project or field research. Videos of three objects recorded with the non-metric camera were processed in Agisoft Metashape, and an analysis of the quality of the resulting 3D models allowed the authors to refine the procedure for acquiring images for spatial analyses. To assess the accuracy of the videogrammetry data, data from a well-established laser scanning technique were used for comparison. The laser scanner data were pre-processed in Autodesk ReCap, with manual registration performed using 14 points from the three scans. The two 3D models were exported to CloudCompare for comparison and further analysis, where registration, cloud-to-cloud (C2C), and profile-to-profile analyses were performed to determine the uncertainty of the videogrammetry-derived 3D model, expressed as the distance of separation between the two models. Results show an average separation between the laser scanner and videogrammetry point clouds of 34 cm; the average profile separation was 25 cm in the XY plane and 1.9 cm in the Z direction. Using cloud-to-cloud PCV, an average difference of 84 cm was determined.
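The cloud-to-cloud (C2C) comparison can be sketched as a nearest-neighbour distance from every videogrammetry point to the laser-scan reference, averaged over the cloud; the synthetic clouds below stand in for the real exports.

```python
# Mean cloud-to-cloud separation via nearest-neighbour search.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_mean_distance(reference: np.ndarray, compared: np.ndarray) -> float:
    """reference, compared: Nx3 arrays of XYZ coordinates in the same units."""
    distances, _ = cKDTree(reference).query(compared)   # nearest reference point per point
    return float(distances.mean())

rng = np.random.default_rng(1)
laser_cloud = rng.uniform(0.0, 10.0, size=(5000, 3))                     # placeholder scan
video_cloud = laser_cloud[:2000] + rng.normal(0.0, 0.3, size=(2000, 3))  # noisy subset
print(cloud_to_cloud_mean_distance(laser_cloud, video_cloud))
```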
- Published
- 2020
- Full Text
- View/download PDF
41. Dynamic Visual Motion Estimation
- Author
-
Volker Willert and Julian Eggert
- Subjects
Visual perception ,Computer science ,business.industry ,Probabilistic logic ,Optical flow ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Field of view ,Video camera ,law.invention ,Computer Science::Graphics ,law ,Video tracking ,Obstacle avoidance ,Structure from motion ,Computer vision ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Visual motion is the projection of scene movements onto a visual sensor. It is a rich source of information for the analysis of a visual scene. Especially for dynamic vision systems, the estimation of visual motion is important because it allows the motion of objects, as well as the self-motion of the system relative to the environment, to be deduced. Visual motion therefore serves as basic information for navigation and exploration tasks such as obstacle avoidance, object tracking, or decomposition of a visual scene into static and moving parts. Despite many years of progress, visual motion processing continues to puzzle researchers involved in understanding the principles of visual perception. Basic aspects such as measuring the motion of spatially local entities have been widely studied. But what is most striking about motion processing is its temporal dynamics. This is obvious, because the environment perceived by a visual observer, such as a video camera or the human eye, is highly dynamic. Moving objects enter and leave the field of view and also change the way they move, e.g., their direction or speed. Hence, suitable assumptions about the dynamics of the visual scene and about the correlations between local moving entities are beneficial for estimating the scene motion as a whole. Probabilistic machine learning techniques have become very popular for early vision problems such as binocular depth and optical flow computation. The reason for this popularity is the possibility of explicitly considering inherent uncertainties in the measurement process and of incorporating prior knowledge about the state to be estimated. Along this line of argumentation, we present a general approach to visual motion estimation based on a probabilistic generative model that allows visual motion to be inferred from visual data. We start with a definition of visual motion and point out the basic problems that come along with visual motion estimation. Then, we summarize common ideas found in different state-of-the-art optical flow estimation techniques and stress the need to take uncertainty into account. Based on the ideas of existing models, we introduce a general Bayesian framework for dynamic optical flow estimation that combines several different aspects of the optical flow estimation problem into one common approach. So far, research on optical flow has mainly concentrated on motion estimation from the observation of two frames of an image sequence isolated in time. Our main concern is to stress that visual motion is a dynamic feature of an image input stream: the more visual data that has been observed, the more precisely and in more detail we can estimate and predict the motion contained in this visual input stream.
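The dynamic, recursive character of such a framework can be summarized by a generic Bayesian filtering recursion over the flow field; the notation below is illustrative and not the authors' exact model.

```latex
% Recursive Bayesian estimate of the flow field v_t from the image stream I_{1:t}:
% observation likelihood times the prediction propagated through the motion dynamics.
\[
p(\mathbf{v}_t \mid I_{1:t}) \;\propto\;
  p(I_t \mid \mathbf{v}_t, I_{t-1})
  \int p(\mathbf{v}_t \mid \mathbf{v}_{t-1})\,
       p(\mathbf{v}_{t-1} \mid I_{1:t-1})\, d\mathbf{v}_{t-1}
\]
```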
- Published
- 2022
- Full Text
- View/download PDF
42. A low-cost 2-D video system can accurately and reliably assess adaptive gait kinematics in healthy and low vision subjects
- Author
-
Tjerk Zult, Juan Tabernero, Shahina Pardhan, and Jonathan Allsop
- Subjects
Adult ,Male ,Computer science ,Video Recording ,Visual Acuity ,Vision, Low ,lcsh:Medicine ,Video camera ,Adaptation (eye) ,Kinematics ,Walking ,Motion capture ,Article ,law.invention ,03 medical and health sciences ,Macular Degeneration ,0302 clinical medicine ,law ,Humans ,Computer vision ,Object vision ,lcsh:Science ,Gait ,Reliability (statistics) ,Aged ,Multidisciplinary ,business.industry ,lcsh:R ,030229 sport sciences ,Gold standard (test) ,Adaptation, Physiological ,Navigation ,Bone quality and biomechanics ,Biomechanical Phenomena ,Low vision ,Ageing ,Gait analysis ,Obstacle ,Case-Control Studies ,Female ,lcsh:Q ,Artificial intelligence ,business ,Gait Analysis ,030217 neurology & neurosurgery - Abstract
3-D gait analysis is the gold standard, but many healthcare clinics and research institutes would benefit from a system that is inexpensive and simple yet just as accurate. The present study examines whether a low-cost 2-D motion capture system can accurately and reliably assess adaptive gait kinematics in subjects with central vision loss, older controls, and younger controls. Subjects were requested to walk up to and step over a 10-cm-high obstacle positioned in the middle of a 4.5 m walkway. Four trials were simultaneously recorded with the Vicon motion capture system (3-D system) and a video camera positioned perpendicular to the obstacle (2-D system). The kinematic parameters (crossing height, crossing velocity, foot placement, single support time) were calculated offline. Strong Pearson's correlations were found between the two systems for all parameters (average r = 0.944, all p
- Published
- 2019
- Full Text
- View/download PDF
43. Video-Based Analysis and Reporting of Riding Behavior in Cyclocross Segments
- Author
-
Jelle De Bock and Steven Verstockt
- Subjects
Technology and Engineering ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Video camera ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,TP1-1185 ,pose estimation ,Biochemistry ,Article ,Analytical Chemistry ,law.invention ,law ,Humans ,Computer vision ,Electrical and Electronic Engineering ,Cluster analysis ,Instrumentation ,Pose ,business.industry ,Communication ,Chemical technology ,object detection ,sports data analysis ,Video processing ,Automatic summarization ,Pipeline (software) ,Atomic and Molecular Physics, and Optics ,Object detection ,Anomaly detection ,Artificial intelligence ,sports ,business - Abstract
Video-based trajectory analysis is relatively well studied in sports such as soccer or basketball, but in cycling it is far less common. In this paper, a video processing pipeline to extract riding lines in cyclocross races is presented. The pipeline consists of a stepwise analysis process to extract riding behavior from a region (i.e., the fence) in a video camera feed. In the first step, the riders are identified by an AlphaPose skeleton detector and tracked with a spatiotemporally aware pose tracker. Next, each detected pose is enriched with additional meta-information, such as rider mode (e.g., sitting on the saddle or standing on the pedals) and detected team (based on the worn jerseys). Finally, a post-processor brings all the information together and proposes riding lines with meta-information for the riders in the fence. The presented methodology can provide interesting insights, such as intra-athlete riding line clustering, anomaly detection, and detailed breakdowns of riding and running durations within the segment. Such detailed rider information can be very valuable for performance analysis, storytelling, and automatic summarization.
- Published
- 2021
44. Color-coded smoke PIV for wind tunnel experiments improved by eliminating optical and digital color contamination
- Author
-
Yuji Tasaka, Hyun Jin Park, Takeaki Yumoto, and Yuichi Murai
- Subjects
Fluid Flow and Transfer Processes ,Colored smoke ,Angle of attack ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Computational Mechanics ,General Physics and Astronomy ,Video camera ,law.invention ,Physics::Fluid Dynamics ,Matrix (mathematics) ,Particle image velocimetry ,Mechanics of Materials ,law ,Calibration ,RGB color model ,Computer vision ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS ,Wind tunnel - Abstract
A single-camera color particle image velocimetry (PIV) system that can acquire the PIV data of three separated layers has been redesigned to make it more suitable for wind tunnel applications. We target smoke images that have particle-per-pixel values higher than unity. The system consists of a high-power color-coding illuminator and a digital color high-speed video camera. RGB values in the recorded image include severe color contaminations due to five optical and digital sequences. To quantify this, a snapshot calibration method is proposed to describe the contamination matrix equation. Taking the inverse matrix allows the in-plane PIV in each color layer to be accurately implemented. We also derive the mathematical limits in the operation of colored smoke PIV, which are explained by the matrix properties. The feasibility of the proposed method was demonstrated by application to a turbulent wake behind a delta wing at a quasi-stall angle of attack.
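The de-contamination step can be sketched as a per-pixel linear inversion: if each recorded RGB value is modelled as a 3×3 contamination matrix applied to the true intensities of the three color-coded layers, the calibrated matrix is inverted and applied to every pixel. The matrix entries below are placeholders for an assumed calibration.

```python
# Per-pixel inversion of an assumed 3x3 color-contamination matrix.
import numpy as np

M = np.array([[0.90, 0.12, 0.03],     # contribution of each layer to the R channel
              [0.08, 0.85, 0.10],     # ... to the G channel
              [0.02, 0.09, 0.88]])    # ... to the B channel
M_inv = np.linalg.inv(M)

def separate_layers(rgb_frame: np.ndarray) -> np.ndarray:
    """rgb_frame: HxWx3 recorded image; returns HxWx3 per-layer intensities."""
    return np.einsum("ij,hwj->hwi", M_inv, rgb_frame.astype(np.float64))

# Example: recover the layer intensities of one recorded pixel.
print(separate_layers(np.array([[[120.0, 40.0, 15.0]]])))
```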
- Published
- 2021
- Full Text
- View/download PDF
45. High Accuracy High Integrity Train Positioning based on GNSS and Image Processing Integration
- Author
-
Michele Brizzi, Luca Pallotta, Agostino Ruggeri, Federica Battisti, Alessandro Neri, Gianluigi Lauro, Sara Baldoni, ION GNSS+ 2021, Neri, A., Battisti, F., Baldoni, S., Brizzi, M., Pallotta, L., Ruggeri, A., and Lauro, G.
- Subjects
Time-of-flight camera ,business.industry ,Computer science ,Video camera ,Satellite system ,Image processing ,Signal ,law.invention ,Odometry ,law ,Inertial measurement unit ,GNSS applications ,Computer vision ,Artificial intelligence ,business - Abstract
One of the major challenges in the design of high accuracy, high integrity localization procedures for rail applications based on Global Navigation Satellite Systems is represented by local hazards that cannot be mitigated by resorting to augmentation networks. In fact, combining smoothed code pseudoranges with (differential) carrier phase and/or with Inertial Measurement Unit outputs is ineffective against low-frequency multipath components. These issues can be mitigated by processing images, depth maps, and/or point clouds provided by imaging sensors placed on board. The absolute position of the train can be determined by combining its relative position with respect to georeferenced rail infrastructure elements (e.g., panels, signals, signal gantries), provided by the visual localization processing unit, with the landmark absolute position. In addition, the visual input can be exploited to determine on which track the train is located and can be used as a complementary odometry source. Moreover, the information provided by the visual localization processing unit can be used to monitor integrity and compute the protection levels. In this contribution we present a localization system that integrates a Global Navigation Satellite System receiver, Inertial Measurement Units, and video sensors (such as monocular and stereo video cameras, Time-of-Flight cameras, and LIDAR), and has the potential to overcome some of the operational and economic limitations of the train localization systems currently employed in the European Railway Traffic Management System.
- Published
- 2021
- Full Text
- View/download PDF
46. Distance Measurement Method Based on Average Interpupillary Distance
- Author
-
Mihaela Dodon, Marinel Temneanu, Ionut Gheorghitanu, and Codrin Donciu
- Subjects
Measurement method ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Video camera ,Python (programming language) ,law.invention ,Eye position ,Distance measurement ,law ,Computer vision ,Interpupillary distance ,Artificial intelligence ,business ,computer ,computer.programming_language - Abstract
This paper presents a method of measuring distance with a video camera, designed to assess how far the user is from the camera, which can be exploited industrially for applications such as Human-Computer Interaction (HCI). The measurement method uses a statistically determined average interpupillary distance, valid for both men and women, and has been implemented in Python. Eye position detection was performed using the OpenCV library.
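The underlying pinhole-camera relation can be sketched in a few lines: with a population-average interpupillary distance and the camera's focal length in pixels, distance follows from the measured pixel separation of the pupils. The 63 mm average and the focal length below are assumed values, not the paper's calibration.

```python
# Distance from the pinhole relation: Z ~ f_px * IPD_mm / ipd_px.
AVERAGE_IPD_MM = 63.0   # assumed population-average interpupillary distance

def camera_distance_mm(focal_length_px: float, pupil_separation_px: float) -> float:
    """Approximate face-to-camera distance from the detected pupil separation."""
    return focal_length_px * AVERAGE_IPD_MM / pupil_separation_px

# Example: a webcam with ~900 px focal length seeing the pupils 95 px apart.
print(camera_distance_mm(900.0, 95.0))   # ~597 mm, i.e. roughly 0.6 m from the camera
```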
- Published
- 2021
- Full Text
- View/download PDF
47. Mathematical Modeling of the Process of Forming Video Frames by the Computer Vision System of an Underwater Vehicle
- Author
-
Sergey Yu. Sakovich and Yuri L. Siek
- Subjects
Pixel ,Machine vision ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Process (computing) ,Schematic ,Video camera ,Visualization ,law.invention ,Software ,law ,Computer vision ,Artificial intelligence ,Underwater ,business - Abstract
Underwater vehicles used for surveys and inspections are equipped with computer vision systems based on video cameras. The efficiency of such underwater vehicles depends on the onboard software, whose development must take into account the peculiarities of the operating conditions. Testing the underwater vehicle's individual systems can be performed against a simulation of the conditions of its operation. The model used for verification of the video system should take into account the environment, the lighting, the characteristics of the on-board video cameras, and the dynamics of the underwater vehicle. In the proposed model, the virtual seabed environment is created from a schematic image and rendered for visualization. The lighting of the simulated scene is calculated with a ray-tracing method. The video camera model takes into account geometric parameters such as focus and pixel size, and uses perspective projection rules to describe the operation of the video camera. The practical implementation of the mathematical model can simulate two on-board video cameras at the same time. The developed model also takes into account the spatial movement of the underwater vehicle, which allows linear and angular velocities to be controlled and video frames to be formed from the position of the underwater vehicle. This article presents the mathematical model of forming video frames of the seabed by an underwater vehicle.
- Published
- 2021
- Full Text
- View/download PDF
48. Integrated Video Analysis Framework for Vision-Based Comparison Study on Structural Displacement and Tilt Measurements
- Author
-
Harry W. Shenton, Dian Mo, Zheng Yi Wu, and Maadh Hmosze
- Subjects
Vision based ,business.industry ,Computer science ,Mechanical Engineering ,Digital video ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Video camera ,Building and Construction ,Structural engineering ,Displacement (vector) ,law.invention ,Mechanics of Materials ,law ,Comparison study ,General Materials Science ,Computer vision ,Structural health monitoring ,Artificial intelligence ,business ,Tilt (camera) ,Civil and Structural Engineering - Abstract
With the advancement and wide availability of digital video cameras, compounded with the development and enhancement of computer vision methods, vision-based sensing using a video camera as...
- Published
- 2021
- Full Text
- View/download PDF
49. Functional cortical localization of the tongue using corticokinematic coherence with a deep learning-assisted motion capture system
- Author
-
Hitoshi Maezawa, Yutaka Hata, Hideki Kashioka, Masayuki Hirata, Masao Matsuhashi, Toshio Yanagida, Hiroaki Hashimoto, and Momoka Fujimoto
- Subjects
Magnetic noise ,business.industry ,Computer science ,Deep learning ,Video camera ,Accelerometer ,Motion capture ,law.invention ,medicine.anatomical_structure ,law ,Tongue ,medicine ,Coherence (signal processing) ,Computer vision ,Artificial intelligence ,Videography ,business - Abstract
Measuring the corticokinematic coherence (CKC) between magnetoencephalographic and movement signals using an accelerometer can evaluate the functional localization of the primary sensorimotor cortex (SM1) of the upper limbs. However, it is difficult to determine the tongue CKC because an accelerometer yields excessive magnetic artifacts. We introduce and validate a novel approach for measuring the tongue CKC using a deep learning-assisted motion capture system with videography, and compare it with an accelerometer in a control task measuring finger movement. Twelve healthy volunteers performed rhythmical side-to-side tongue movements in the whole-head magnetoencephalographic system, which were simultaneously recorded using a video camera and examined offline using a deep learning-assisted motion capture system. In the control task, right finger CKC measurements were simultaneously evaluated via motion capture and an accelerometer. The right finger CKC with motion capture was significant at the movement frequency peaks or its harmonics over the contralateral hemisphere; the motion-captured CKC was 84.9% similar to that with the accelerometer. The tongue CKC was significant at the movement frequency peaks or its harmonics over both hemispheres, with no difference between the left and right hemispheres. The CKC sources of the tongue were considerably lateral and inferior to those of the finger. Thus, the CKC based on deep learning-assisted motion capture can evaluate the functional localization of the tongue SM1. In this approach, because no devices are placed on the tongue, magnetic noise, disturbances due to tongue movements, risk of aspiration of the device, and risk of infection to the experimenter are eliminated.
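The coherence computation at the heart of CKC can be sketched with SciPy: magnitude-squared coherence between a cortical channel and the movement signal, read off at the movement frequency. The sampling rate, segment length, and synthetic signals below are illustrative assumptions.

```python
# Magnitude-squared coherence between a (synthetic) MEG channel and a rhythmic
# movement signal, evaluated at the movement frequency.
import numpy as np
from scipy.signal import coherence

fs = 1000.0                                        # assumed common sampling rate (Hz)
t = np.arange(0.0, 60.0, 1.0 / fs)
movement = np.sin(2.0 * np.pi * 2.0 * t)           # 2 Hz rhythmic movement trace
meg = 0.3 * movement + np.random.randn(t.size)     # cortical signal plus noise

freqs, coh = coherence(meg, movement, fs=fs, nperseg=2048)
i = np.argmin(np.abs(freqs - 2.0))                 # bin closest to the movement frequency
print(f"coherence at {freqs[i]:.2f} Hz: {coh[i]:.2f}")
```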
- Published
- 2021
- Full Text
- View/download PDF
50. Determination of average vehicle speed utilizing reverse projection
- Author
-
Walter E. Bruehs and Dorothy Stout
- Subjects
Computer science ,business.industry ,Doppler radar ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Extrapolation ,Video camera ,Speedometer ,Pathology and Forensic Medicine ,law.invention ,Forensic video analysis ,Photogrammetry ,law ,Genetics ,Computer vision ,Artificial intelligence ,Radar ,Projection (set theory) ,business - Abstract
Reverse projection photogrammetry has long been used to estimate the height of an individual in forensic video examinations. A natural extrapolation is to apply the same technique to a video to estimate the speed of an object by determining the distance traveled between two points over a set amount of time. To test this theory, five digital video recorders (DVRs) were connected to a single fixed camera to record a vehicle traveling down a track. The vehicle's speed was measured with Doppler radar by a trained operator, and the vehicle's speedometer was also recorded with a video camera. The recorded video was examined and the frames that best depict the beginning and end of the vehicle's course were selected. Two reverse projection photogrammetric examinations were performed on the selected frames to establish the positions of the vehicle. The distance between the two points was measured, and the time elapsed between the two points was examined. The outcome provided an accurate speed result with a standard degree of uncertainty. This study demonstrates the feasibility of using video data and reverse projection photogrammetry to determine the speed of a vehicle with a limited set of variables. Further research is needed to determine how additional variables would impact the standard degree of uncertainty.
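Once reverse projection has fixed the two vehicle positions, the speed calculation itself reduces to distance over elapsed frame time; the numbers below are illustrative only.

```python
# Average speed from the measured distance and the frame count between positions.
def average_speed_mph(distance_ft: float, frame_count: int, fps: float) -> float:
    elapsed_s = frame_count / fps
    return (distance_ft / elapsed_s) * 3600.0 / 5280.0   # ft/s -> miles per hour

# Example: 88 ft covered over 60 frames of 30 fps video -> 2.0 s -> 44 ft/s -> 30 mph.
print(average_speed_mph(88.0, 60, 30.0))
```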
- Published
- 2021