2,767 results on '"Vision based"'
Search Results
2. A Comprehensive Study on Feature Extraction Techniques for Indian Sign Language Recognition System
- Author
-
Aziz, Shafaque, Amjad, Mohammad, Chandran K R, Sarath, editor, N, Sujaudeen, editor, A, Beulah, editor, and Hamead H, Shahul, editor
- Published
- 2023
- Full Text
- View/download PDF
3. Human Fall Detection Using Motion History Image and SVM
- Author
-
Lakshmi, K., Devendran, T., Luhach, Ashish Kumar, editor, Jat, Dharm Singh, editor, Bin Ghazali, Kamarul Hawari, editor, Gao, Xiao-Zhi, editor, and Lingras, Pawan, editor
- Published
- 2021
- Full Text
- View/download PDF
4. Real time sign language detection system using deep learning techniques.
- Author
-
Padmaja, N., Raja, B. Nikhil Sai, and Kumar, B. Pavan
- Subjects
Sign language, Deep learning, Hearing impaired, Deaf people
- Abstract
Humans require communication in order to survive; it is a fundamental and effective way to convey thoughts, feelings, and points of view. Data on physically challenged children over the last decade show that an increasing number of babies are born with hearing abnormalities, which puts them at a communication disadvantage with the rest of the world. People who are deaf or hard of hearing typically use sign languages to communicate. Because deaf or mute persons communicate with hand gestures, non-deaf people often have a hard time understanding their messages. Systems that can detect different signs and convey their meaning to the general public are therefore necessary. To address this issue, we have developed an automated sign language detection system using deep learning, which helps deaf or mute people communicate with hearing people. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
5. A Literature Review of Current Vision Based Fall Detection Methods
- Author
-
Biswas, Amrita, Dey, Barnali, Bera, Rabindranath, editor, Pradhan, Prashant Chandra, editor, Liu, Chuan-Ming, editor, Dhar, Sourav, editor, and Sur, Samarendra Nath, editor
- Published
- 2020
- Full Text
- View/download PDF
6. Automatic Alignment of Multi-scale Aerial and Underwater Photogrammetric Point Clouds: A Case Study in the Maldivian Coral Reef
- Author
-
Foresti, G. L., Fusiello, A., Hancock, E., Di Lauro, F., Fallati, L., Fontana, S., Savini, A., and Sorrenti, D. G.
- Abstract
The research question that the paper investigates is whether state-of-the-art point-cloud registration algorithms solve the problem of multi-scale, vision-based point-cloud registration in mixed aerial and underwater environments. The paper reports very preliminary results on the data we have been able to procure in the context of a coral reef restoration project near Magoodhoo Island (Maldives). The results obtained with state-of-the-art algorithms are promising, considering that these data contain hard samples, in particular because of their multi-scale nature (noise in the captured 3D points increases with depth). However, further investigation on larger datasets is needed to confirm the overall applicability of current algorithms to this problem.
- Published
- 2024
7. Two-step approach for fatigue crack detection in steel bridges using convolutional neural networks.
- Author
-
Quqa, Said, Martakis, Panagiotis, Movsessian, Artur, Pai, Sai, Reuland, Yves, and Chatzi, Eleni
- Abstract
The advent of parallel computing capabilities, further boosted by the exploitation of graphics processing units, has resulted in a surge of new, previously infeasible algorithmic schemes for structural health monitoring (SHM) tasks, such as the use of convolutional neural networks (CNNs) for vision-based SHM. This work proposes a novel approach to crack recognition in digital images based on coupling CNNs with suitable image processing techniques. The proposed method is applied to a dataset comprising images of the welding joints of a long-span steel bridge, collected via high-resolution consumer-grade digital cameras. The studied dataset includes photos taken in sub-optimal light and exposure conditions, with several noise contamination sources such as handwriting, varying material textures, and, in some cases, the presence of external objects. The reference pixels representing the cracks, together with the crack width and length, are available and are used for training and validating the proposed model. Although the proposed framework requires some knowledge of the "damaged areas", it alleviates the need for precise labeling of the cracks in the training dataset. Validation of the model on an unlabeled image set reveals promising results in terms of accuracy and robustness to noise sources. [ABSTRACT FROM AUTHOR] (An illustrative post-processing sketch follows this entry.)
- Published
- 2022
- Full Text
- View/download PDF
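The entry above couples a CNN with image processing to locate cracks and relate them to width and length labels. The following is only an illustrative post-processing sketch (not the authors' pipeline), assuming a binary crack mask produced by some upstream segmentation model; the function name, minimum area, and length/width heuristics are invented for the example.

```python
import cv2
import numpy as np

def summarize_cracks(mask, min_area=50):
    """Turn a binary crack mask (uint8, 0/255) into rough per-crack length/width stats.

    `mask` is assumed to come from an upstream segmentation step (e.g. a CNN);
    length and width here are coarse bounding-box / area based approximations.
    """
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    cracks = []
    for i in range(1, num):                      # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area < min_area:                      # drop small noise blobs
            continue
        w = stats[i, cv2.CC_STAT_WIDTH]
        h = stats[i, cv2.CC_STAT_HEIGHT]
        length_px = max(w, h)                    # crude: longest bounding-box side
        width_px = area / max(length_px, 1)      # crude: area spread over the length
        cracks.append({"length_px": int(length_px), "mean_width_px": float(width_px)})
    return cracks
```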
8. A Real-Time Performance Recovery Framework for Vision-Based Control Systems.
- Author
-
Xu, Yunsong, Ding, Steven X., Luo, Hao, and Yin, Shen
- Subjects
Dynamical systems, Online algorithms, Image processing, Machine learning, Algorithms, Vision
- Abstract
The demand for high performance in vision-based control systems is ever increasing. This article thus proposes, for the first time, a real-time performance recovery framework for the vision-based control of highly dynamic systems. The framework provides a functionalized nominal controller with performance recovery, balances image processing against performance recovery, and enhances real-time efficiency, based on the observer-based realization of the Youla parameterization of all stabilizing controllers. An online learning algorithm for real-time performance recovery against unexpected performance degradation is developed as well. The verification of the proposed framework and algorithm supports their application to classes of vision-based, highly dynamic control systems. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
9. An Online and Vision-Based Method for Fixtured Pose Measurement of Nondatum Complex Component.
- Author
-
Hou, Dongxiang, Mei, Xuesong, Huang, Wangwang, Li, Jiang, Wang, Chunjie, and Wang, Xu
- Subjects
Pose estimation (computer vision), Feature extraction, Aerospace industries, Measurement
- Abstract
A nondatum complex component is a complex component that lacks a reliable positioning/measurement datum during the modification process. With the rapid development of the aerospace industry, manual shape trimming of nondatum complex components needs to be replaced by automatic laser shape trimming. This study develops a vision-based, online, high-precision method for measuring the fixtured pose error of such a component. Because of burrs, deformation, and the complex structure, feature-based methods are not applicable to this component; the present method is based on iterative optimization and does not require feature extraction. To improve pose measurement efficiency and accuracy, optimization of the reference model effectively reduces the error caused by deformation. The experimental results show that the proposed method is fast and accurate and satisfies the requirement for measuring the fixtured component's pose in six degrees of freedom. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
10. Going Deeper than Tracking : A Survey of Computer-Vision Based Recognition of Animal Pain and Emotions
- Author
-
Broomé, Sara, Feighelstein, M., Zamansky, A., Carreira Lencioni, G., Haubro Andersen, P., Pessanha, F., Mahmoud, M., Kjellström, Hedvig, and Salah, A. A.
- Abstract
Advances in animal motion tracking and pose recognition have been a game changer in the study of animal behavior. Recently, an increasing number of works go 'deeper' than tracking and address automated recognition of animals' internal states, such as emotions and pain, with the aim of improving animal welfare, making this a timely moment for a systematization of the field. This paper provides a comprehensive survey of computer vision-based research on recognition of pain and emotional states in animals, addressing both facial and bodily behavior analysis. We summarize the efforts presented so far within this topic, classifying them across different dimensions, highlight challenges and research gaps, and provide best-practice recommendations and future directions for advancing the field.
- Published
- 2023
- Full Text
- View/download PDF
11. Vision-Based Autonomous Landing for Unmanned Aerial and Ground Vehicles Cooperative Systems
- Author
-
Man-On Pun, Guanchong Niu, Qingkai Yang, and Yunfan Gao
- Subjects
Control and Optimization, Vision based, Computer science, Mechanical Engineering, Real-time computing, Biomedical Engineering, Ground vehicles, Velocity controller, Computer Science Applications, Human-Computer Interaction, Artificial Intelligence, Control and Systems Engineering, Global Positioning System, Computer Vision and Pattern Recognition, Quadratic programming, Control-Lyapunov function
- Abstract
In this work, we consider a cooperative system in which UAVs perform long-distance missions with the assistance of unmanned ground vehicles (UGVs) for battery charging. We propose an autonomous landing scheme in which the UAV lands on a mobile UGV with high precision by leveraging multi-scale Quick Response (QR) codes for different altitudes. These QR codes support the acquisition of the relative distance and direction between the UGV and the UAV; as a result, the UGV is not required to report its accurate state to the UAV. In contrast to conventional landing algorithms based on the Global Positioning System (GPS), the proposed vision-based autonomous landing algorithm works in both outdoor and GPS-denied scenarios. To cope with the challenge of landing on a moving platform, a quadratic programming (QP) problem is formulated to design the velocity controller by combining a control barrier function (CBF) and a control Lyapunov function (CLF). Finally, both extensive computer simulations and a prototype of the proposed system confirm the feasibility of the system and the landing scheme. (A minimal CLF-CBF controller sketch follows this entry.)
- Published
- 2022
- Full Text
- View/download PDF
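The landing controller described above combines a control barrier function and a control Lyapunov function in a quadratic program over the commanded velocity. The sketch below shows one generic way such a CLF-CBF QP can be posed with cvxpy; the simple velocity-level kinematics, the specific barrier (a minimum-altitude constraint), and all gains are assumptions for illustration, not the authors' formulation.

```python
import cvxpy as cp
import numpy as np

def clf_cbf_velocity(p_uav, p_ugv, v_ugv, z_min=0.3, gamma=1.0, alpha=2.0, u_max=2.0):
    """One step of a CLF-CBF quadratic program for a velocity-commanded UAV.

    p_uav, p_ugv: current 3-D positions; v_ugv: UGV velocity (all numpy arrays of shape (3,)).
    Returns the commanded UAV velocity.
    """
    e = p_ugv - p_uav                       # relative position error
    u = cp.Variable(3)                      # commanded UAV velocity
    delta = cp.Variable(nonneg=True)        # CLF relaxation slack
    V = float(e @ e)                        # Lyapunov function V = ||e||^2
    # Vdot = -2 e^T (u - v_ugv)  must satisfy  Vdot <= -gamma*V + delta
    clf = -2 * e @ (u - v_ugv) <= -gamma * V + delta
    # Safety barrier h = z - z_min (stay above z_min until the final descent is allowed):
    # hdot = u_z >= -alpha * h
    cbf = u[2] >= -alpha * (p_uav[2] - z_min)
    cost = cp.sum_squares(u - v_ugv) + 100 * delta   # stay close to the UGV feed-forward velocity
    prob = cp.Problem(cp.Minimize(cost), [clf, cbf, cp.norm(u, "inf") <= u_max])
    prob.solve()
    return np.asarray(u.value)
```

In a real landing sequence the barrier would be relaxed or switched off once the UAV is centred over the platform; here it simply illustrates how safety and convergence constraints coexist in one QP.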
12. General Geometry Calibration Using Arbitrary Free-Form Surface in a Vision-Based Robot System
- Author
-
Wenlong Li, He Xie, and Hui Liu
- Subjects
Surface (mathematics), Robotic systems, Vision based, Control and Systems Engineering, Calibration (statistics), Computer science, Free form, Computer vision, Artificial intelligence, Electrical and Electronic Engineering
- Published
- 2022
- Full Text
- View/download PDF
13. Effective Meta-Attention Dehazing Networks for Vision-Based Outdoor Industrial Systems
- Author
-
Jia Tongyao, Guoqiang Li, Li Zhuo, and Jiafeng Li
- Subjects
Vision based, Computer science, Reliability (computer networking), Real-time computing, Network structure, Computer Science Applications, Feature (computer vision), Autopilot, Industrial systems, Electrical and Electronic Engineering, Information Systems
- Abstract
Haze seriously affects the reliability of industrial systems, especially vision-based outdoor industrial systems such as autopilot systems. A majority of existing dehazing methods are not specifically designed for industrial systems and do not consider the reliability and resource cost of industrial system implementation. In this study, a novel meta-attention dehazing network (MADN) is proposed for direct restoration of clear images from hazy images without using the physical scattering model. Combined with parallel operation and enhancement modules, the meta-network automatically selects the most suitable dehazing network structure for the current input hazy image via a meta-attention module. In addition, a novel feature loss calculated by the meta-network is proposed, which can accelerate the convergence of the dehazing network to meet the application requirements of practical industrial systems. A large number of experimental results on synthetic and real-world datasets show that the proposed MADN satisfies the needs of industrial systems.
- Published
- 2022
- Full Text
- View/download PDF
14. Reliable Vision-Based Grasping Target Recognition for Upper Limb Prostheses
- Author
-
Boxuan Zhong, Edgar Lobaton, and He Huang
- Subjects
Computer science, Artificial Limbs, Upper Extremity, Wearable robot, Humans, Computer vision, Electrical and Electronic Engineering, Hand Strength, Vision based, Deep learning, Bayes Theorem, Robotics, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Arm, Artificial intelligence, Noise (video), Software, Information Systems
- Abstract
Computer vision has shown promising potential in wearable robotics applications (e.g., human grasping target prediction and context understanding). In practice, however, the performance of computer vision algorithms is challenged by insufficient or biased training, observation noise, cluttered backgrounds, etc. By leveraging Bayesian deep learning (BDL), we have developed a novel, reliable vision-based framework to assist upper limb prosthesis grasping during arm reaching. This framework can measure different types of uncertainties from the model and data for grasping target recognition in realistic and challenging scenarios. A probability calibration network was developed to fuse the uncertainty measures into one calibrated probability for online decision making. We formulated the problem as the prediction of the grasping target while the arm is reaching. Specifically, we developed a 3-D simulation platform to simulate and analyze the performance of vision algorithms under several common challenging scenarios in practice. In addition, we integrated our approach into a shared control framework of a prosthetic arm and demonstrated its potential for assisting human participants with fluent target reaching and grasping tasks. (An illustrative uncertainty-estimation sketch follows this entry.)
- Published
- 2022
- Full Text
- View/download PDF
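The abstract above measures model and data uncertainty with Bayesian deep learning and fuses the measures through a calibration network. As a much simpler stand-in for illustration only, Monte Carlo dropout can expose a comparable predictive uncertainty from any PyTorch classifier that contains dropout layers; the `model` argument and the sample count below are assumptions, not the authors' network.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    """Approximate a predictive distribution by keeping dropout active at test time.

    `model` is any classifier with dropout layers that outputs logits; `x` is a batch of inputs.
    Returns mean class probabilities and predictive entropy as a simple uncertainty score.
    """
    model.train()   # keep dropout stochastic (simplification: also leaves batch-norm in train mode)
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)                                   # (batch, classes)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy
```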
15. A Review on Recent Advances in Vision-based Defect Recognition towards Industrial Intelligence
- Author
-
Xinyu Li, Liang Gao, Xi Vincent Wang, Lihui Wang, and Yiping Gao
- Subjects
Vision based, Computer science, Deep learning, Big data, Data science, Industrial and Manufacturing Engineering, Hardware and Architecture, Control and Systems Engineering, Feature (machine learning), Quality (business), Artificial intelligence, Software
- Abstract
In modern manufacturing, vision-based defect recognition is an essential technology for guaranteeing product quality, and it plays an important role in industrial intelligence. With the development of industrial big data, defect images can be captured by ubiquitous sensors, and how to realize accurate recognition has become a research hotspot. In the past several years, many vision-based defect recognition methods have been proposed, and some newly emerged techniques, such as deep learning, have become increasingly popular and have addressed many challenging problems effectively. Hence, a comprehensive review is urgently needed to promote development and bring some insights to this area. This paper surveys the recent advances in vision-based defect recognition and presents a systematic review from a feature perspective. The review divides recent methods into designed-feature-based methods and learned-feature-based methods, and summarizes their advantages, disadvantages, and application scenarios. Furthermore, this paper also summarizes the performance metrics for vision-based defect recognition methods, and some challenges and development trends are discussed.
- Published
- 2022
- Full Text
- View/download PDF
16. Ceiling-Vision-Based Mobile Object Self-Localization: A Composite Framework
- Author
-
Alfredo Cuzzocrea, Enzo Mumolo, and Luca Camilotti
- Subjects
Vision based, Computer science, Self localization, Computer vision, Artificial intelligence, Mobile object
- Published
- 2021
- Full Text
- View/download PDF
17. Laser and Vision-Based Obstacle Avoidance for A Semi-Autonomous ROV
- Author
-
Gil Silva, Paulo Lopes, Francisco Curado, and Nuno Lau
- Subjects
Polymers and Plastics, Vision based, Computer science, Obstacle avoidance, Computer vision, Artificial intelligence, Remotely operated underwater vehicle, Laser, General Environmental Science
- Published
- 2021
- Full Text
- View/download PDF
18. Particle Filter Approach to Vision-Based Navigation with Aerial Image Segmentation
- Author
-
Junwoo Park, Sungjoong Kim, Hyochoong Bang, and Kyungwoo Hong
- Subjects
Vision based, Computer science, Aerospace Engineering, Convolutional neural network, Computer Science Applications, Segmentation, Computer vision, Artificial intelligence, Electrical and Electronic Engineering, Particle filter, Aerial image
- Abstract
This study proposes a novel approach for a vision-based navigation problem using semantically segmented aerial images generated by a convolutional neural network. Vision-based navigation provides a...
- Published
- 2021
- Full Text
- View/download PDF
19. Migration of the Oirats in the first quarter of the 17th century on the eve of returning to Dzungaria
- Author
-
Baatr Uchaevich Kitinov
- Subjects
altan khan, History, Vision based, elets, derbets, altyn khan, Effective management, yamysh, Ancient history, History (General), irtysh, Civil strife, oirats, khoshuts, hoyts, dzungars, torguts
- Abstract
In 1541 the Oirats managed to form the Middle Confederation, which was led by the Khoshuts as the most powerful people. In the second half of the same 16th century the Oirats, suffering from attacks by their neighbors - the Turkic peoples from the west and south and the eastern Mongols from the east - began to move towards southern Siberia. Earlier they used to roam along the Black Irtysh river and north of Lake Zaysan, but now they began to move below Lake Yamysh. The opinions on the migration routes of the Oirats that exist in the literature need clarification. The author offers his vision based on archival materials and Mongolian sources: the Hoyt Oirats, driven out of Kharakhoto by the Tumat Altan Khan, were the first to go towards the Altai Mountains. Next were the Torgut Oirats, who crossed the Altai and then, together with the Derbets, moved down the Irtysh river. The Elets, the future Dzungars, left Western Mongolia for the sources of the Yenisei river. Already in the second decade of the 17th century the Oirats wandered along the Om, Kamyshlov, Tobol and Ishim rivers, that is, they were roaming along the middle reaches of the Irtysh river. In 1623, at Lake Yamysh, they defeated the troops of the Hotogoit Altyn Khan Sholoi Ubashi-Khuntaiji, but this victory did not prevent an internal struggle in the ruling house of the Khoshuts, which weakened this people. Further civil strife forced the Torguts to move westward, and in the early 1630s they reached the Volga river. Migrations over such long distances were possible only with an effective administrative apparatus and the preservation of traditions and identity.
- Published
- 2021
20. Study on Collision avoidance of UAV with Vision-based Sensor in Outdoor Environments
- Author
-
Jinyoung Suk, Seungkeun Kim, Seungbin Lee, and Hoijo Jeaong
- Subjects
Vision based, Control and Systems Engineering, Computer science, Applied Mathematics, Computer vision, Artificial intelligence, Software, Collision avoidance
- Published
- 2021
- Full Text
- View/download PDF
21. Vision-based pose estimation of craniocervical region: experimental setup and saw bone-based study
- Author
-
Sudipto Mukherjee, Sachin Kansal, and Mohammad Zubair
- Subjects
Vision based, Control and Systems Engineering, Craniocervical region, Computer science, General Mathematics, Computer vision, Artificial intelligence, Pose, Software, Computer Science Applications
- Abstract
Summary: This article discusses the intervertebral motion present in the craniovertebral junction (CVJ) region. The CVJ region comprises the first three vertebrae of the spinal column and provides most of the motion of the neck. Intervention in this region requires surgery in which an implant is placed to stabilize the whole system. The various available implants need to undergo performance evaluation, as their performance varies with region and anatomical diversity. For the Indian population, we aim to evaluate the performance of such an implant by testing it in a cadaver. The region of interest is loaded according to the loading conditions of an average human, and motion in this region is evaluated using a camera. A preliminary test was done on a saw-bone model of the CVJ to assess the performance of the segmentation methods. Multiple ArUco markers are used to increase pose accuracy, and the pose of the entire board of multiple tags provides a reliable pose estimate. The absolute error ranged from a minimum of 0.1 mm to a maximum of 16 mm, while the mean and median absolute errors were 3.8961 mm and 3.35 mm. Relative to the absolute lengths, the percentage error was between 0.0230% and 3.9168%. (An illustrative ArUco pose-estimation sketch follows this entry.)
- Published
- 2021
- Full Text
- View/download PDF
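The study above estimates pose from a board of ArUco markers. A minimal single-marker sketch using OpenCV's legacy cv2.aruco module (pre-4.7 API) is shown below; the dictionary, marker size, and calibration inputs are assumptions, and a board-based estimate as used in the paper would further average over several tags.

```python
import cv2
import numpy as np

def estimate_marker_poses(frame, camera_matrix, dist_coeffs, marker_length_m=0.02):
    """Detect ArUco markers and estimate their 6-DoF pose relative to the camera.

    Uses the legacy cv2.aruco API (OpenCV < 4.7). camera_matrix / dist_coeffs come from a
    prior calibration; marker_length_m is the printed side length of the marker in metres.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return {}
    ret = cv2.aruco.estimatePoseSingleMarkers(corners, marker_length_m,
                                              camera_matrix, dist_coeffs)
    rvecs, tvecs = ret[0], ret[1]
    # Map marker id -> (rotation vector, translation vector in metres)
    return {int(i): (rvecs[k].ravel(), tvecs[k].ravel()) for k, i in enumerate(ids.ravel())}
```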
22. A vision-based real-time traffic flow monitoring system for road intersections
- Author
-
Jahongir Azimjonov, Ahmet Özmen, and Metin Varan
- Subjects
Vehicle detection, Computer Networks and Communications, Road intersections, Image-based traffic flow monitoring, Vehicles, Deep neural networks, Data association, Flow monitoring, Traffic flow, Hardware and Architecture, Media Technology, Monitoring system, Real-time traffic, Vehicle tracking, Kalman filters, Software, Vision based
- Abstract
In this study, a vision-based real-time traffic flow monitoring system was developed to extract traffic statistics at road intersections. Novel object tracking and data association algorithms were developed that use bounding-box properties to estimate vehicle trajectories. Rich traffic flow information, such as directional and total counts and instantaneous and average vehicle speeds, is then calculated from the predicted trajectories. The study also examines various parameters that affect the accuracy of vision-based systems, such as camera locations and angles that may cause occlusion or illusion problems. Finally, sample video streams are processed using both a Kalman filter and the new centroid-based algorithm for comparison. The results show that the new algorithm performs 9.18% better than the Kalman filter approach in general. (A minimal centroid-tracker sketch follows this entry.)
- Published
- 2023
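The system above tracks vehicles by associating detections frame to frame using bounding-box properties. A bare-bones centroid tracker in that spirit (purely illustrative, not the authors' algorithm) could look like the class below; the distance threshold and track bookkeeping are assumptions.

```python
import numpy as np

class CentroidTracker:
    """Minimal centroid-based tracker: associate detections to tracks by nearest centroid."""

    def __init__(self, max_dist=50.0, max_missed=10):
        self.next_id = 0
        self.tracks = {}          # id -> {"centroid": np.array([x, y]), "missed": int, "trace": [...]}
        self.max_dist = max_dist
        self.max_missed = max_missed

    def update(self, boxes):
        """boxes: list of (x1, y1, x2, y2) detections from an upstream detector."""
        centroids = [np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0]) for x1, y1, x2, y2 in boxes]
        unmatched = set(range(len(centroids)))
        for tid, tr in list(self.tracks.items()):
            if unmatched:
                j = min(unmatched, key=lambda k: np.linalg.norm(centroids[k] - tr["centroid"]))
                if np.linalg.norm(centroids[j] - tr["centroid"]) < self.max_dist:
                    tr["centroid"] = centroids[j]
                    tr["trace"].append(centroids[j])
                    tr["missed"] = 0
                    unmatched.discard(j)
                    continue
            tr["missed"] += 1                     # no detection matched this track
            if tr["missed"] > self.max_missed:
                del self.tracks[tid]
        for j in unmatched:                       # start new tracks for unassigned detections
            self.tracks[self.next_id] = {"centroid": centroids[j], "missed": 0, "trace": [centroids[j]]}
            self.next_id += 1
        return self.tracks
```

Counting and speed estimation would then operate on the stored traces (trajectories), e.g. by checking which counting line a trace crosses and how fast the centroid moves between frames.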
23. Vision‐based vibration monitoring using existing cameras installed within a building.
- Author
-
Harvey, Jr., P. Scott and Elisha, Guy
- Subjects
Vibration (mechanics), Structural health monitoring, Cameras, Installation of equipment, Digital images
- Abstract
Summary: A new methodology for the monitoring of dynamic characteristics of buildings has been proposed. It is based on digital image analysis leveraging pre‐existing cameras installed within a building, such as surveillance cameras. The method is used for real‐time measurement of interstory drifts, from which dynamic characteristics (e.g., fundamental period) can easily be extracted to detect damage to the structural system. It has applications particularly in the case of post‐earthquake damage estimation and when the installation of traditional sensors and monitoring systems is cost prohibitive. The methodology is demonstrated through both lab‐scale and full‐scale shake table tests. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
24. A New Approach to Vision-based Fire and its Intensity Computation Using SPATIO-Temporal Features
- Author
-
Muhammad Masood Rafi, Mirza Adnan Baig, and Najeed Ahmed Khan
- Subjects
Background subtraction, Vision based, Fire detection, Computer science, Computation, Image segmentation, Computer Science Applications, Theoretical Computer Science, Feature (computer vision), Computer vision, Artificial intelligence, Electrical and Electronic Engineering, Intensity (heat transfer)
- Abstract
Currently, fire detection systems based on computer vision techniques are highly appreciated for their intelligent detections at earliest. These systems use surveillance cameras to capture high-lev...
- Published
- 2021
- Full Text
- View/download PDF
25. Vision-based kinematic analysis of the Delta robot for object catching
- Author
-
Sudipto Mukherjee and Sachin Kansal
- Subjects
Vision based, Control and Systems Engineering, Computer science, General Mathematics, Computer vision, Kinematics, Artificial intelligence, Software, Delta robot, Computer Science Applications
- Abstract
Summary: This paper proposes a vision-based kinematic analysis and kinematic parameter identification of the proposed architecture, which is designed to catch objects in real time. Performing the inverse kinematics requires precise estimates of the link lengths and other parameters. Kinematic identification of the Delta robot, based on the Model10 implicit model with ten parameters, is implemented using the iterative least-squares method, and the implicit loop-closure equations have been modelled. A predefined ArUco library is used to obtain a unique solution for the kinematics of the moving platform with respect to the fixed base. The re-projection error obtained while calibrating the vision sensor module is 0.10 pixels. The proposed architecture is interfaced with the hardware using a PID controller. Quadrature encoders with a resolution of 0.15 degrees are embedded in the experimental setup to close the loop (acting as the feedback unit).
- Published
- 2021
- Full Text
- View/download PDF
26. Computer Vision for Autonomous UAV Flight Safety: An Overview and a Vision-based Safe Landing Pipeline Example
- Author
-
Tzelepi, Maria, Kakaletsis, Efstratios, Pitas, Ioannis, Symeonidis, Charalampos, Mademlis, Ioannis, Tefas, Anastasios, and Nikolaidis, Nikos
- Subjects
General Computer Science, Semantic mapping, Aeronautics, Vision based, Computer science, Obstacle avoidance, Flight safety, Motion planning, Pipeline (software), Drone, Theoretical Computer Science
- Abstract
Recent years have seen an unprecedented spread of Unmanned Aerial Vehicles (UAVs, or "drones"), which are highly useful for both civilian and military applications. Flight safety is a crucial issue in UAV navigation, having to ensure accurate compliance with recently legislated rules and regulations. The emerging use of autonomous drones and UAV swarms raises additional issues, making it necessary to transfuse safety- and regulations-awareness to relevant algorithms and architectures. Computer vision plays a pivotal role in such autonomous functionalities. Although the main aspects of autonomous UAV technologies (e.g., path planning, navigation control, landing control, mapping and localization, target detection/tracking) are already mature and well-covered, ensuring safe flying in the vicinity of crowds, avoidance of passing over persons or guaranteed emergency landing capabilities in case of malfunctions, are generally treated as an afterthought when designing autonomous UAV platforms for unstructured environments. This fact is reflected in the fragmentary coverage of the above issues in current literature. This overview attempts to remedy this situation, from the point of view of computer vision. It examines the field from multiple aspects, including regulations across the world and relevant current technologies. Finally, since very few attempts have been made so far towards a complete UAV safety flight and landing pipeline, an example computer vision-based UAV flight safety pipeline is introduced, taking into account all issues present in current autonomous drones. The content is relevant to any kind of autonomous drone flight (e.g., for movie/TV production, news-gathering, search and rescue, surveillance, inspection, mapping, wildlife monitoring, crowd monitoring/management, etc.), making this a topic of broad interest.
- Published
- 2021
- Full Text
- View/download PDF
27. Partidos y líderes en las elecciones generales de 2016 y 2019: Una visión emocional
- Author
-
María Pereira López, Paulo Carlos López López, and Nieves Lagares Díez
- Subjects
Politics, Sociology and Political Science, Vision based, General election, Political science, Political Science and International Relations, Empirical evidence, Humanities
- Abstract
It has been almost thirty years since the so-called affective turn began in the social sciences, and with it appeared the first works exploring the importance of including the emotional component in the analysis of behaviour - in this case, political behaviour. Starting from a review of these contributions from the 1990s onwards, this paper aims to provide empirical evidence on the importance of considering the emotions expressed towards political parties and leaders, through a view based on three dimensions of emotion: presence, intensity, and persistence. The analysis draws on two of our own post-election surveys carried out after the 2016 and 2019 general elections in Spain.
- Published
- 2021
- Full Text
- View/download PDF
28. Adaptive Fertigation System Using Hybrid Vision-Based Lettuce Phenotyping and Fuzzy Logic Valve Controller Towards Sustainable Aquaponics
- Author
-
Sandy Lauguico, Edwin Sybingco, Elmer P. Dadios, Ryan Rhay P. Vicerra, Joel L. Cuello, Argel A. Bandala, Jonnel Alejandrino, and Ronnie Concepcion
- Subjects
Human-Computer Interaction, Fertigation, Vision based, Artificial Intelligence, Control theory, Computer science, Control engineering, Aquaponics, Computer Vision and Pattern Recognition, Precision agriculture, Fuzzy logic
- Abstract
Sustainability is a major challenge in any plant factory, particularly those involving precision agriculture. In this study, an adaptive fertigation system in a three-tier nutrient film technique aquaponic system was developed using a non-destructive vision-based lettuce phenotype (VIPHLET) model integrated with an 18-rule Mamdani fuzzy inference system for nutrient valve control. Four lettuce phenes, that is, fresh weight, chlorophylls a and b, and vitamin C concentrations as outputted by the genetic programming-based VIPHLET model were optimized for each growth stage by injecting NPK nutrients into the mixing tank, as determined based on leaf canopy signatures. This novel adaptive fertigation system resulted in higher nutrient use efficiency (99.678%) and lower chemical waste emission (14.108 mg L-1) than that by manual fertigation (92.468%, 178.88 mg L-1). Overall, it can improve agricultural malpractices in relation to sustainable agriculture.
- Published
- 2021
- Full Text
- View/download PDF
29. Autonomous landing of a quadrotor on a moving platform using vision-based FOFPID control
- Author
-
Ali Ghasemi, Serajeddin Ebrahimian, and Farhad Parivash
- Subjects
Vision based, Control and Systems Engineering, Computer science, General Mathematics, Computer vision, Artificial intelligence, Software, Computer Science Applications
- Abstract
This research deals with the autonomous landing maneuver of a quadrotor unmanned aerial vehicle (UAV) on an unmanned ground vehicle (UGV). It is assumed that the UGV moves independently, and there is no communication and collaboration between the two vehicles. This paper aims at the design of a closed-loop vision-based control system for quadrotor UAV to perform autonomous landing maneuvers in the possible minimum time despite the wind-induced disturbance force. In this way, a fractional-order fuzzy proportional-integral-derivative controller is introduced for the nonlinear under-actuated system of a quadrotor. Also, a feedback linearization term is included in the control law to compensate model nonlinearities. A supervisory control algorithm is proposed as an autonomous landing path generator to perform fast, smooth, and accurate landings. On the other hand, a compound AprilTag fiducial marker is employed as the target of a vision positioning system, enabling high precision relative positioning in the range between 10 and 350 cm height. A software-in-the-loop simulation testbed is realized on the windows platform. Numerical simulations with the proposed control system are carried out, while the quadrotor system is exposed to different disturbance conditions and actuator dynamics with saturated thrust output are considered.
- Published
- 2021
- Full Text
- View/download PDF
30. Vision-based target point tracking and aiming method
- Author
-
Qu Xinghua, Zhang Yuanjun, Zhang Fumin, Lianyin Xu, and Xiao Bo Liang
- Subjects
Point tracking, Vision based, Control and Systems Engineering, Computer science, Correlation filter, Computer vision, Artificial intelligence, Industrial and Manufacturing Engineering, Feature matching, Computer Science Applications
- Abstract
Purpose: Laser absolute distance measurement offers high precision, wide range, and non-contact operation. In a laser ranging system, tracking and aiming at the measurement point is a precondition for automatic measurement; to solve this problem, this paper proposes a novel method. Design/methodology/approach: For the central point of the hollow angle-coupled mirror, a method based on correlation filtering and ellipse fitting is proposed. For non-cooperative target points, an extraction method based on correlation filtering and feature matching is proposed. Finally, a visual tracking and aiming system was constructed in combination with a two-axis turntable, and experiments were carried out. Findings: The target tracking algorithm has an accuracy of 91.15% and a speed of 19.5 frames per second, and it can adapt to changes in target scale and short-term occlusion. The mean error and standard deviation of the centre-point extraction for the hollow angle-coupled mirror are 0.20 mm and 0.09 mm; for feature-point matching on non-cooperative targets they are 0.06 mm and 0.16 mm. The visual tracking and aiming system can track a target moving at 0.7 m/s, with a mean aiming error of 1.74 pixels and a standard deviation of 0.67 pixels. Originality/value: The results show that this method achieves fast, high-precision target tracking and aiming and has great application value in laser ranging. (An illustrative ellipse-fitting sketch follows this entry.)
- Published
- 2021
- Full Text
- View/download PDF
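For the hollow angle-coupled mirror, the method above extracts the centre point via correlation filtering and ellipse fitting. The sketch below shows only the ellipse-fitting half with OpenCV, assuming the mirror aperture appears as the largest contour in a grayscale crop; the Otsu thresholding and contour selection are assumptions, not the authors' exact pipeline.

```python
import cv2

def mirror_center_from_ellipse(gray):
    """Estimate the centre of a roughly circular mirror aperture by ellipse fitting.

    `gray` is an 8-bit grayscale crop around the target; returns (cx, cy) in pixels
    (sub-pixel via the ellipse fit) or None if nothing usable is found.
    """
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:                  # fitEllipse needs at least 5 contour points
        return None
    (cx, cy), (major, minor), angle = cv2.fitEllipse(largest)
    return cx, cy
```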
31. SyncUp
- Author
-
Zhongyi Zhou, Koji Yatani, and Anran Xu
- Subjects
Dance, Vision based, Computer Networks and Communications, Computer science, Human-Computer Interaction (cs.HC), Visualization, Upload, Hardware and Architecture, Human–computer interaction, Synchronization (computer science)
- Abstract
The beauty of synchronized dancing lies in the synchronization of body movements among multiple dancers. While dancers use camera recordings for their practice, standard video interfaces do not efficiently support identifying segments where they are not well synchronized, which fails to close the tight loop of an iterative practice process (capturing a practice, reviewing the video, and practicing again). We present SyncUp, a system that provides multiple interactive visualizations to support the practice of synchronized dancing and liberates users from manual inspection of recorded practice videos. By analyzing videos uploaded by users, SyncUp quantifies two aspects of synchronization in dancing: pose similarity among multiple dancers and temporal alignment of their movements. The system then highlights which body parts and which portions of the dance routine require further practice to achieve better synchronization. Our system evaluations show that the pose similarity estimates and temporal alignment predictions correlate well with human ratings. Participants in our qualitative user evaluation expressed the benefits and potential uses of SyncUp, confirming that it would enable quick iterative practice. (A minimal pose-similarity sketch follows this entry.)
- Published
- 2021
- Full Text
- View/download PDF
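SyncUp quantifies pose similarity among dancers and the temporal alignment of their movements. A toy version of both measures is sketched below (normalised-keypoint cosine similarity plus a cross-correlation lag search); the real system's estimators are learned and far more elaborate, so treat this purely as an illustration of the two quantities.

```python
import numpy as np

def pose_similarity(kps_a, kps_b):
    """Cosine similarity between two dancers' poses for one frame.

    kps_a, kps_b: (J, 2) arrays of 2-D joint coordinates from any pose estimator.
    Poses are translation- and scale-normalised before comparison; 1.0 = identical shape.
    """
    def normalize(k):
        k = k - k.mean(axis=0)                      # remove translation
        return k / (np.linalg.norm(k) + 1e-8)       # remove scale
    a, b = normalize(kps_a).ravel(), normalize(kps_b).ravel()
    return float(a @ b)

def best_lag(series_a, series_b, max_lag=15):
    """Estimate the temporal offset (in frames) between two motion signals by cross-correlation."""
    def score(lag):
        if lag >= 0:
            x, y = series_a[lag:], series_b[:len(series_b) - lag]
        else:
            x, y = series_a[:lag], series_b[-lag:]
        n = min(len(x), len(y))
        return np.corrcoef(x[:n], y[:n])[0, 1] if n > 1 else -1.0
    return max(range(-max_lag, max_lag + 1), key=score)
```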
32. Improved Short-Term Speed Prediction Using Spatiotemporal-Vision-Based Deep Neural Network for Intelligent Fuel Cell Vehicles
- Author
-
Zhiyu Huang, Chenghao Deng, Caizhi Zhang, Chen Lv, Jinrui Chen, Dong Hao, Yuanzhi Zhang, Hongxu Ran, and School of Mechanical and Aerospace Engineering
- Subjects
Neural Networks, Vision based, Artificial neural network, Computer science, Energy management, Attenuation, Real-time computing, Energy consumption, Motion (physics), Computer Science Applications, Lithium-Ion Batteries, Control and Systems Engineering, Mechanical engineering [Engineering], Fuel cells, Electrical and Electronic Engineering, Information Systems
- Abstract
In this article, an improved short-term speed prediction method is proposed to predict short-term future speed and analyze the future energy consumption of intelligent fuel cell vehicles. The short-term future speed is predicted by the proposed Inflated 3-D Inception long short-term memory (LSTM) network, which takes spatiotemporal vision information and vehicle motion states as inputs. Specifically, the spatiotemporal-vision-based deep neural network uses image sequences captured by a front-facing camera as environmental information and historical speed series as motion information to improve prediction accuracy. A case study of the proposed speed prediction method, with a rule-based energy management strategy to calculate future energy consumption, is then presented. The simulation results show that short-term speed prediction based on the Inflated 3-D Inception LSTM network achieves high prediction accuracy in various traffic densities, as well as low prediction errors for future energy consumption, including hydrogen consumption and state-of-charge attenuation. (A minimal LSTM sketch follows this entry.)
- Published
- 2021
- Full Text
- View/download PDF
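The speed predictor above feeds camera frames through an Inflated 3-D Inception branch and past speeds through an LSTM. The sketch below keeps only a toy history-of-speed LSTM branch in PyTorch to show the sequence-to-prediction shape handling; layer sizes, depth, and the prediction horizon are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SpeedLSTM(nn.Module):
    """Toy LSTM that maps a window of past speed samples to the next few speed values."""

    def __init__(self, hidden=64, horizon=5):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, speed_history):          # speed_history: (batch, T, 1)
        out, _ = self.lstm(speed_history)
        return self.head(out[:, -1, :])        # predict `horizon` future speed samples

# Example: predict 5 future speed samples from 30 past samples
model = SpeedLSTM()
x = torch.randn(8, 30, 1)                      # batch of 8 dummy speed histories
y_hat = model(x)                               # shape (8, 5)
```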
33. Vision-based damage detection of aircraft engine’s compressor blades
- Author
-
Krzysztof Holak and Wojciech Obrocki
- Subjects
Damage detection, Vision based, Computer science, Mechanical Engineering, Compressor blade, Image processing, Electrical and Electronic Engineering, Software, Automotive engineering
- Published
- 2021
- Full Text
- View/download PDF
34. Deep Convolutional Neural Network-based Image Processing for Vision-based Safe Landing Region Recognition Framework
- Author
-
Yeondeuk Jung and Sungwook Cho
- Subjects
Vision based, Control and Systems Engineering, Computer science, Applied Mathematics, Image processing, Computer vision, Artificial intelligence, Convolutional neural network, Software
- Published
- 2021
- Full Text
- View/download PDF
35. A novel image dehazing framework for robust vision‐based intelligent systems
- Author
-
Yi Du, Rizwan Ali Naqvi, Muhammad Zawish, Fayaz Ali Dharejo, Fida Hussain Memon, Kapal Dev, Yuanchun Zhou, and Farah Deeba
- Subjects
Human-Computer Interaction, Vision based, Artificial Intelligence, Computer science, Intelligent decision support system, Computer vision, Artificial intelligence, Software, Theoretical Computer Science
- Published
- 2021
- Full Text
- View/download PDF
36. Vision-based detection system of slag flow from ladle to tundish with the help of the detection of undulation of slag layer of the tundish using an image analysis technique
- Author
-
Sarbani Chakraborty, Biswajit Chakraborty, Arunjeet Chakraborty, and Joyjeet Ghose
- Subjects
Ladle, Materials science, Vision based, Mechanical Engineering, Metallurgy, Metals and Alloys, Steelmaking, Tundish, Continuous casting, Mechanics of Materials, Materials Chemistry, Slag (welding)
- Abstract
It is very important to have a slag detection system (SDS) in steelmaking to detect slag and prevent its flow for improving the quality of steel produced. Slag raking is used for removing slag from...
- Published
- 2021
- Full Text
- View/download PDF
37. Real-Time Performance Comparison of Vision-Based Autonomous Landing of Quadcopter on a Ground Moving Target
- Author
-
Amit Kumar
- Subjects
Quadcopter, Vision based, Computer science, Kalman filter, Computer Science Applications, Theoretical Computer Science, Ground moving target, Performance comparison, Computer vision, Artificial intelligence, Electrical and Electronic Engineering
- Abstract
The tremendous applications of unmanned aerial vehicles (UAVs), such as inspection in complex environments, search, and rescue missions, have established this area of research. The domain of UAVs a...
- Published
- 2021
- Full Text
- View/download PDF
38. Vision‐Based Gesture Recognition: A Critical Review
- Author
-
Neela Harish, Athaf, Aparna, Praveen, and Prasanth
- Subjects
Vision based ,Human–computer interaction ,Computer science ,Gesture recognition - Published
- 2021
- Full Text
- View/download PDF
39. Vision-based outdoor navigation of self-driving car using lane detection
- Author
-
Pratik B. Pandey, Amit Kumar, Anand Agrawal, Basant Agarwal, Tejeshwar Saini, and Apoorv Agarwal
- Subjects
Vision based, Computer Networks and Communications, Computer science, Applied Mathematics, Computation, Autonomous robot, Computer Science Applications, Computational Theory and Mathematics, Artificial Intelligence, Robustness (computer science), Trajectory, Computer vision, Dashboard, Artificial intelligence, Electrical and Electronic Engineering, Information Systems
- Abstract
The evolution of artificial intelligence has served as a catalyst in the field of technology and is making our imaginations real; one such creation is the self-driving car (an autonomous robot). In this paper, a physical self-driving car prototype based on a traditional visual lane-keeping method, implemented on a Raspberry Pi, is proposed that is capable of maneuvering autonomously on various kinds of tracks. The proposed method comprises taking an image from the car's front-facing dashboard camera, detecting the lane in the image, and analysing the deviation of the car from the road, which is then used to keep the car on the track. The method is implemented on a 1/10-scale car that contains a Raspberry Pi 3 Model B computer and a Pi Cam Rev 1.3 for computation and processing. Testing was done without any human intervention on a self-made lined track containing all kinds of turns. The experimental results demonstrate the effectiveness and robustness of the self-driving car in terms of navigation error while following the reference trajectory. (An illustrative lane-deviation sketch follows this entry.)
- Published
- 2021
- Full Text
- View/download PDF
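The prototype above detects the lane in a dashboard image and steers from the deviation between the lane centre and the image centre. A classical OpenCV sketch of that step (Canny edges plus probabilistic Hough lines) is shown below; the thresholds and the lower-half region of interest are assumptions rather than the authors' settings.

```python
import cv2
import numpy as np

def lane_deviation(frame):
    """Estimate the car's lateral deviation from the lane centre in a dashboard image.

    Pipeline: grayscale -> Canny edges -> probabilistic Hough lines in the lower half of
    the image -> average the line midpoints and compare with the image centre.
    Positive return value means the lane centre lies to the right of the car.
    """
    h, w = frame.shape[:2]
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    edges[: h // 2, :] = 0                                   # keep only the road region
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return 0.0
    xs = [(x1 + x2) / 2.0 for x1, y1, x2, y2 in lines[:, 0]]
    lane_center = float(np.mean(xs))
    return lane_center - w / 2.0                             # deviation in pixels
```

The sign and magnitude of this deviation can then feed a simple proportional steering command on the Raspberry Pi.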
40. Vision-Based Approaches of the Small Satellites Relative Navigation
- Author
-
Chingiz Hajiyev and Tuncay Yunus Erkec
- Subjects
General Computer Science, Vision based, Computer science, General Engineering, Computer vision, Artificial intelligence
- Abstract
This paper aims to review vision-based relative navigation strategies used for small/micro satellites. Technologies based on this method are used separately or combined with one another to handle relative positioning problems. The advantages and disadvantages of vision-based relative navigation models vary according to the area of application and the platform type. Various strategies and approaches exist, and they require different evaluation and advanced algorithms for adaptation, control, and sensor fusion. Each of the models considered here assumes ideal attitude conditions. This paper focuses only on the relative navigation and ranging perspective; its further aim is to understand the relationship between relative navigation control systems and the effectiveness of the algorithms used to estimate the states during rendezvous or formation flight.
- Published
- 2021
- Full Text
- View/download PDF
41. Information Communication Technology (ICT) Tools for Preservation of Underwater Environment: A Vision-Based Posidonia Oceanica Monitoring
- Author
-
Riccardo Costanzi, Giovanni Peralta, Lorenzo Pollini, and Francesco Ruscio
- Subjects
Underwater data geo-referencing, Vision based, Multimedia, Computer science, Underwater vision, Posidonia oceanica monitoring, Visual odometry, Ocean Engineering, Oceanography, Information and Communications Technology, Posidonia oceanica, ICT tools, Underwater
- Abstract
Underwater monitoring activities are crucial for the preservation of marine ecosystems. Currently, scuba divers are involved in data collection campaigns that are repetitive, dangerous, and expensive. This article describes the application of Information Communication Technology (ICT) tools to underwater visual data for monitoring purposes. The data refer to a Posidonia Oceanica survey mission carried out by a scuba diver using a Smart Dive Scooter equipped with visual acquisition and acoustic localization systems. An acoustic-based strategy for geo-referencing the optical dataset is reported; it exploits the synchronization between the audio track extracted from a camera and the transponder pings adopted for acoustic positioning. The positioning measurements are employed within an extended Kalman filter to estimate the diver's path during the mission. A visual odometry algorithm is implemented within the filter to refine the navigation state estimate of the diver beyond what acoustic positioning alone provides, and a smoothing step based on the Rauch-Tung-Striebel smoother further improves the estimated diver positions. Finally, the article reports the results of two data-processing steps for monitoring applications: an image mosaic obtained by concatenating subsequent frames, and a qualitative distribution of the Posidonia Oceanica over the mission area obtained through image segmentation. Both outcomes are plotted over a satellite image of the surveyed area, showing that the proposed process is an effective tool capable of assisting divers in their monitoring and inspection activities. (A simplified filtering sketch follows this entry.)
- Published
- 2021
- Full Text
- View/download PDF
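The article above fuses acoustic position fixes with visual odometry in an extended Kalman filter and then smooths the track. The sketch below strips this down to a single predict/update cycle of a linear constant-velocity Kalman filter on 2-D position fixes, just to show the fusion skeleton; the state layout and noise parameters are assumptions and the visual-odometry and smoothing stages are omitted.

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=0.05, r=0.5):
    """One predict/update cycle of a constant-velocity Kalman filter on 2-D position fixes.

    x = [px, py, vx, vy], P its 4x4 covariance, z = position fix [px, py]
    (e.g. from acoustic positioning). Returns the updated state and covariance.
    """
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    Q = q * np.eye(4)              # process noise (assumed)
    R = r * np.eye(2)              # measurement noise (assumed)
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the position fix
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P
```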
42. Vision-Based Simultaneous Recognition System for Multiple Banknotes for The Visually Impaired
- Author
-
Kyu-Ree Kim, Jin-Woo Jung, Su-Jeong Choi, and Tae-Won Kang
- Subjects
Vision based, Computer science, Visually impaired, Recognition system, Computer vision, Artificial intelligence
- Published
- 2021
- Full Text
- View/download PDF
43. A Vision-based Driver Night Time Assistance and Surveillance System
- Author
-
Sahil Sayyad
- Subjects
Aeronautics, Vision based, Computer science
- Abstract
In India, around 1.5 lakh people die every year in road accidents, and many of these accidents are caused by poor visibility and driver fatigue. Fatigue (extreme tiredness) is a major cause of road accidents and has significant implications for road safety; several fatal accidents could be avoided if drowsy drivers were warned in time. It is also often observed that a car hits an object or obstacle on the road because of poor visibility; in such cases an object detection and warning system can help avoid accidents. Drowsiness is a state of sleepiness that typically occurs when the driver is very tired or intoxicated. A variety of drowsiness detection methods exist that monitor the driver's state while driving and alert the driver when they are no longer concentrating. Relevant features can be extracted from facial expressions, including yawning, eye closure, and head movements, to infer the level of fatigue, and the driver's physiological condition is analyzed for fatigue detection. This application therefore addresses both driver fatigue detection and object/obstacle detection and warning while driving, using eye extraction, face extraction, and object and distance detection algorithms. (An illustrative eye-closure sketch follows this entry.)
- Published
- 2021
- Full Text
- View/download PDF
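The application above infers fatigue from eye closure, yawning, and head motion. One standard eye-closure cue is the eye aspect ratio (EAR) of Soukupová and Čech (2016), sketched below under the assumption that six eye landmarks per frame are available from some facial-landmark detector; the threshold and frame count are illustrative, not the author's values.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye Aspect Ratio from six eye landmarks.

    `eye` is a (6, 2) array ordered as in the common 68-point face-landmark layout.
    EAR drops towards ~0 when the eye closes, so a sustained low EAR suggests drowsiness.
    """
    v1 = np.linalg.norm(eye[1] - eye[5])      # vertical eyelid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])       # horizontal eye width
    return (v1 + v2) / (2.0 * h + 1e-8)

def is_drowsy(ear_history, threshold=0.21, min_frames=20):
    """Flag drowsiness when EAR stays below `threshold` for `min_frames` consecutive frames."""
    recent = ear_history[-min_frames:]
    return len(recent) == min_frames and all(e < threshold for e in recent)
```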
44. L2 Motivational Self System in Practice: Alleviating Students’ Learned Helplessness Through a Vision-Based Program
- Author
-
Farshad Ghasemi
- Subjects
Intervention program, Vision based, Learned helplessness, Self system, English language, Checklist, Education, Intervention (counseling), Developmental and Educational Psychology, Psychology, Clinical psychology
- Abstract
This study investigated learned helplessness (LH) experienced by male secondary students of the English language and examined the effects of a motivational program based on the L2 Motivational Self System (L2MSS). Primarily, we administered the Learned Helplessness Scale to 189 students in a public school in Tehran, along with the Student Behavior Checklist completed by their teachers to identify and screen students with helplessness. The final sample (n = 74) was randomly assigned to experimental and control groups after evaluating the initial results. By designing and implementing an intervention program based on Dornyei’s L2MSS and supportive teaching for a semester, positive results were found with alleviated helplessness symptoms and improved academic achievements in the experimental group. Based on the results, teachers and their motivational practices could help alleviate students’ LH. Furthermore, the durability of the improvements remained elevated at a 6-month follow-up after the intervention. Further implications are discussed.
- Published
- 2021
- Full Text
- View/download PDF
45. Vision-Based Sensing Systems for Autonomous Driving: Centralized or Decentralized?
- Author
-
Akihito Ohsato, Masato Edahiro, Kosuke Murakami, Yukihiro Saito, Manato Hirabayashi, and Shinpei Kato
- Subjects
General Computer Science, Vision based, Computer science, Real-time computing, Image processing, Electrical and Electronic Engineering, Sensing system
- Abstract
The perception of the surrounding circumstances is an essential task for fully autonomous driving systems, but its high computational and network loads typically impede a single host machine from taking charge of the systems. Decentralized processing is a candidate to decrease such loads; however, it has not been clear that this approach fulfills the requirements of onboard systems, including low latency and low power consumption. Embedded oriented graphics processing units (GPUs) are attracting great interest because they provide massively parallel computation capacity with lower power consumption compared to traditional GPUs. This study explored the effects of decentralized processing on autonomous driving using embedded oriented GPUs as decentralized units. We implemented a prototype system that off-loaded image-based object detection tasks onto embedded oriented GPUs to clarify the effects of decentralized processing. The results of experimental evaluation demonstrated that decentralized processing and network quantization achieved approximately 27 ms delay between the feeding of an image and the arrival of detection results to the host as well as approximately 7 W power consumption on each GPU and network load degradation in orders of magnitude. Judging from these results, we concluded that decentralized processing could be a promising approach to decrease processing latency, network load, and power consumption toward the deployment of autonomous driving systems.
- Published
- 2021
- Full Text
- View/download PDF
46. Vision-based human activity recognition for reducing building energy demand
- Author
-
Jo Darkwa, Shuangyu Wei, Paige Wenbin Tien, Christopher Wood, and John Kaiser Calautit
- Subjects
Architectural engineering ,Vision based ,Occupancy ,business.industry ,Computer science ,Deep learning ,Energy performance ,Building energy ,Building and Construction ,law.invention ,Activity recognition ,law ,Ventilation (architecture) ,Artificial intelligence ,business - Abstract
Occupancy behaviour in buildings can impact the energy performance and the operation of heating, ventilation and air-conditioning systems. To ensure building operations become optimised, it is vital to develop solutions that can monitor the utilisation of indoor spaces and provide occupants’ actual thermal comfort requirements. This study presents an analysis of the application of a vision-based deep learning approach for human activity detection and recognition in buildings. A convolutional neural network was employed to enable the detection and classification of occupancy activities. The model was deployed to a camera that enabled real-time detections, giving an average detection accuracy of 98.65%. Data on the number of occupants performing each of the selected activities were collected, and a deep learning–influenced occupancy profile was generated. Building energy simulation and various scenario-based cases were used to assess the impact of such an approach on the building energy demand and to provide insights into how the proposed detection method can enable heating, ventilation and air-conditioning systems to respond to dynamic changes in occupancy. Results indicated that the deep learning approach could reduce the over- or under-estimation of occupancy heat gains. It is envisioned that the approach can be coupled with heating, ventilation and air-conditioning controls to adjust the setpoint based on the building space’s actual requirements, which could provide more comfortable environments and minimise unnecessary building energy loads. Practical application: Occupancy behaviour has been identified as an important issue impacting the energy demand of buildings and heating, ventilation and air-conditioning systems. This study proposes a vision-based deep learning approach to capture, detect and recognise in real time the occupancy patterns and activities within an office space environment. An initial building energy simulation analysis of the application of such an approach within buildings was performed. The proposed approach is envisioned to enable heating, ventilation and air-conditioning systems to adapt and make a timely response based on dynamic changes in occupancy. The results presented here show the practicality of such an approach, which could be integrated with heating, ventilation and air-conditioning systems for various building spaces and environments.
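The abstract links per-activity occupant counts to the heat gains used in building energy simulation. The short sketch below shows one way such a deep learning–influenced profile could be turned into an hourly internal-gain figure; the activity labels and per-person gains in watts are illustrative assumptions rather than values from the study.

from collections import Counter

GAIN_W = {"sitting": 75, "standing": 90, "walking": 130}  # assumed per-person sensible gains (W)

def hourly_heat_gain(frame_detections):
    # frame_detections: one list of detected activity labels per analysed frame.
    totals = Counter()
    for labels in frame_detections:
        totals.update(labels)
    n_frames = max(len(frame_detections), 1)
    # Average occupant count per activity over the hour, weighted by per-person gain.
    return sum((count / n_frames) * GAIN_W[activity] for activity, count in totals.items())

frames = [["sitting", "sitting"], ["sitting", "walking"], ["standing"]]
print(round(hourly_heat_gain(frames), 1), "W estimated sensible gain")

Feeding such hourly figures into a building energy simulation, instead of a fixed design occupancy, is what allows the over- or under-estimation of heat gains mentioned in the abstract to be reduced.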
- Published
- 2021
- Full Text
- View/download PDF
47. Vision-Based Measurement: Actualities and Developing Trends in Automated Container Terminals
- Author
-
Octavian Postolache, Changhong Fu, Chao Mi, Zhiwei Zhang, and Yage Huang
- Subjects
Vision based ,Computer science ,business.industry ,020208 electrical & electronic engineering ,02 engineering and technology ,Market research ,Terminal (electronics) ,Container (abstract data type) ,0202 electrical engineering, electronic engineering, information engineering ,Systems engineering ,Electrical and Electronic Engineering ,business ,Instrumentation ,Throughput (business) - Abstract
An automated container terminal (ACT) is a cutting-edge type of container terminal that uses automated equipment and sensors to achieve autonomous applications such as container loading/unloading, horizontal transportation, and yard operations. It integrates state-of-the-art sensing technologies, ensuring low operating costs, high throughput capacity, and enhanced management security. Vision-based measurement (VBM) is an especially advanced technology that obtains richer information about the surroundings from captured image data than other technologies do. Owing to its great potential, it has played a critically important role in ACTs in realizing productive vision-based tasks, e.g., target recognition, positioning, and geometric determination. This paper presents an overview of VBM systems, their typical applications in ACTs, and the challenges and future development trends of VBM in ACTs.
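Positioning is one of the VBM tasks named in the abstract. As a minimal, self-contained illustration of the idea (not a method from the paper), the sketch below maps a point detected in camera pixels to yard coordinates in metres through a planar homography; the calibration points are assumed for the example.

import numpy as np
import cv2

# Four reference points: where they appear in the image (pixels) and where they lie in the yard (metres).
pixel_pts = np.float32([[102, 540], [1180, 548], [1120, 120], [160, 112]])
yard_pts = np.float32([[0.0, 0.0], [12.2, 0.0], [12.2, 30.0], [0.0, 30.0]])

H = cv2.getPerspectiveTransform(pixel_pts, yard_pts)

def pixel_to_yard(u, v):
    # Project a detected image point (u, v) onto yard coordinates in metres.
    pt = np.float32([[[u, v]]])  # shape (1, 1, 2), as OpenCV expects
    x, y = cv2.perspectiveTransform(pt, H)[0, 0]
    return float(x), float(y)

print(pixel_to_yard(640, 360))  # e.g. a container corner detected by the vision system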
- Published
- 2021
- Full Text
- View/download PDF
48. The Claim of Superfluous Particles in the Glorious Qur'an and Its Refutation through the Most Common Particles of Addition: (min, mā, al-bāʾ, lā, al-wāw)
- Subjects
Vision based ,Affix ,Sociology ,Linguistics - Abstract
The research addresses the claim that certain particles in the Glorious Qur'an are superfluous ("additional"), approaching it with a new vision based on verifying what scholars actually mean by "addition". Five of the so-called particles of addition (min, mā, al-bāʾ, lā, al-wāw) are taken as a model for the study. The research seeks to link the various Qur'anic contexts in which each particle occurs in order to refute the claim that it is meaningless, collecting the interpretations that support this and drawing on rhetorical and legal considerations rather than relying solely on the grammarians' statements about the particle. The study consists of an introduction, five chapters, and a conclusion. The introduction presents the concept of addition and shows how scholars, past and present, have differed over the term itself. Each of the five chapters treats one of the particles in the order listed above: the relevant contexts are examined, the scholars' statements are collected, and an attempt is made to reconcile them and remove the particle from the category of mere addition. The conclusion states the most important findings, including: the grammarians' terminology on the concept of addition, or "connection" as the Kufans call it, is unsettled, with some holding that it simply means the particle contributes no new meaning while others hold different views; most of those who declare particles superfluous are grammarians, who tend to observe grammatical rules built on incomplete induction more than they observe meaning; and the best-known verses in which the studied particles have been judged superfluous admit strong alternative interpretations. The study recommends independent specialised studies to survey the whole Qur'an, trace the places where particles are said to be superfluous, gather the interpretations offered for these verses in the books of exegesis and grammar, and carry them on a sound reading that removes the particle from the claim of addition, because this claim opens a door to attacks on the Book of God, especially in our time. The paper concludes with a list of references and sources.
- Published
- 2021
- Full Text
- View/download PDF
49. Vision Based Automated Parking System
- Author
-
Shubham Sahare, Sanket Moon, Saurabh Shambharkar, and Shashant Jaykar
- Subjects
0203 mechanical engineering ,Vision based ,business.industry ,Computer science ,0202 electrical engineering, electronic engineering, information engineering ,020206 networking & telecommunications ,020302 automobile design & engineering ,Computer vision ,02 engineering and technology ,Artificial intelligence ,business - Abstract
In recent times, the concept of Smart Cities has gained great popularity. The proposed vision-based parking system consists of an on-site application developed on the .NET Framework. This paper explains an approach that makes it easy to monitor and manage a parking area using a vision-based automatic parking system. To find available parking spaces in the most efficient manner and to avoid traffic congestion in the parking area, it is necessary to manage car parking effectively. Currently, car parking areas are managed by human personnel, and sensors are used to monitor the availability of parking spaces. Although both approaches indicate the number of free parking spaces, neither indicates the actual locations of the available spaces. The installation and maintenance cost of a sensor-based system also varies with the number of sensors required. This paper shows that the system performs with high accuracy on a model of a four-space car park. The model indicated that a vision-based car parking management system would be able to detect and indicate the duration of car parking in a given space.
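The abstract describes detecting which of four marked spaces are free from a camera view. A minimal sketch of that idea follows (in Python rather than the paper's .NET implementation): each space's region in the current frame is compared against a reference image of the empty lot, and a large mean grey-level difference marks the space as occupied. File names, space coordinates and the threshold are illustrative assumptions.

import cv2

SPACES = {  # space id -> (x, y, w, h) region in the camera image; assumed layout
    "P1": (40, 200, 120, 220), "P2": (180, 200, 120, 220),
    "P3": (320, 200, 120, 220), "P4": (460, 200, 120, 220),
}
DIFF_THRESHOLD = 25.0  # mean absolute grey-level difference treated as "occupied"

def occupancy(reference_path="empty_lot.jpg", frame_path="current.jpg"):
    ref = cv2.cvtColor(cv2.imread(reference_path), cv2.COLOR_BGR2GRAY)
    cur = cv2.cvtColor(cv2.imread(frame_path), cv2.COLOR_BGR2GRAY)
    status = {}
    for space, (x, y, w, h) in SPACES.items():
        diff = cv2.absdiff(ref[y:y+h, x:x+w], cur[y:y+h, x:x+w])
        status[space] = "occupied" if diff.mean() > DIFF_THRESHOLD else "free"
    return status

print(occupancy())

Reporting per-space status rather than only a count is what gives drivers the actual location of a free space, and logging the timestamps of each status change would give the parking duration mentioned in the abstract.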
- Published
- 2021
- Full Text
- View/download PDF
50. A vision-based excavator productivity analysis in Vietnam
- Author
-
Huy Vu Quang and Tung Nguyen Hoang
- Subjects
Excavator ,Vision based ,Computer science ,Agricultural engineering ,Productivity - Abstract
The working parameters of reverse-bucket excavators are mainly determined by consulting the norms of the Ministry of Construction. However, in the era of industrialisation and modernisation, machines and equipment are increasingly modern and innovative, so determining excavator productivity or parameters from the regulations in the old norms is no longer suitable. Furthermore, updating the norms with data collected in the field takes a tremendous amount of time and procedure, as it is labour intensive. Therefore, this paper proposes a vision-based analysis for calculating excavator productivity that uses image processing applications and a coding language to determine productivity automatically, producing results from the analysis of large amounts of data collected from validated construction sites. Specifically, this paper introduces a new method of calculating excavator productivity by extracting the crucial coefficients from hundreds of images of excavators using open-source software, and then compares it with the traditional method to analyse the importance of the new method and the practical use it might bring to the construction industry.
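A worked example helps make the productivity calculation concrete. The sketch below uses the standard cycle-based formula, with coefficient values assumed for illustration; in the paper the relevant coefficients are extracted from hundreds of excavator images rather than fixed by hand.

def excavator_productivity(cycles, observed_seconds,
                           bucket_capacity_m3=0.8, fill_factor=0.9, efficiency=0.75):
    # Return productivity in cubic metres per hour from cycles counted in site imagery.
    cycle_time = observed_seconds / cycles  # average digging-cycle time in seconds
    return (3600.0 / cycle_time) * bucket_capacity_m3 * fill_factor * efficiency

# Example: 42 digging cycles counted over a 30-minute image sequence.
print(round(excavator_productivity(42, 30 * 60), 1), "m3/h")

With the assumed coefficients this gives roughly 45 m3/h; the point of the vision-based approach is that the cycle count and cycle time come from the image sequence itself instead of from the old norms.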
- Published
- 2021
- Full Text
- View/download PDF