12,873 results for "Activity recognition"
Search Results
2. Activity recognition from trunk muscle activations for wearable and non-wearable robot conditions
- Author
- Gonsalves, Nihar, Ogunseiju, Omobolanle Ruth, and Akanmu, Abiola Abosede
- Published
- 2024
- Full Text
- View/download PDF
3. LIMUNet: A Lightweight Neural Network for Human Activity Recognition Using Smartwatches.
- Author
- Lin, Liangliang, Wu, Junjie, An, Ran, Ma, Song, Zhao, Kun, and Ding, Han
- Abstract
The rise of mobile communication, low-power chips, and the Internet of Things has made smartwatches increasingly popular. Equipped with inertial measurement units (IMUs), these devices can recognize user activities through artificial intelligence (AI) analysis of sensor data. However, most existing AI-based activity recognition algorithms require significant computational power and storage, making them unsuitable for low-power devices like smartwatches. Additionally, discrepancies between training data and real-world data often hinder model generalization and performance. To address these challenges, we propose LIMUNet and its smaller variant, LIMUNet-Tiny: lightweight neural networks designed for human activity recognition on smartwatches. LIMUNet utilizes depthwise separable convolutions and residual blocks to reduce computational complexity and parameter count. It also incorporates a dual attention mechanism specifically tailored to smartwatch sensor data, improving feature extraction without sacrificing efficiency. Experiments on the PAMAP2 and LIMU datasets show that LIMUNet improves recognition accuracy by 2.9% over leading lightweight models while reducing parameters by 88.3% and computational load by 58.4%. Compared to other state-of-the-art models, LIMUNet achieves a 9.6% increase in accuracy, with a 60% reduction in parameters and a 57.8% reduction in computational cost. LIMUNet-Tiny further reduces parameters by 75% and computational load by 80%, making it even more suitable for resource-constrained devices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
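The parameter savings cited in the LIMUNet abstract come largely from depthwise separable convolutions, which split a standard convolution into a per-channel spatial filter and a 1x1 channel mixer. A minimal sketch of the parameter-count arithmetic follows; the layer shape (64 input channels, 128 output channels, 3x3 kernel) is an illustrative assumption, not the paper's actual configuration, and the function names are hypothetical.

```python
# Illustrative comparison: standard vs. depthwise separable convolution
# parameter counts. Shapes are assumed, not taken from the LIMUNet paper.

def standard_conv_params(c_in, c_out, k):
    # A standard conv couples spatial and channel mixing in one k x k kernel
    # per (input channel, output channel) pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise stage: one k x k filter per input channel (spatial mixing only).
    depthwise = c_in * k * k
    # Pointwise stage: a 1 x 1 conv that mixes channels.
    pointwise = c_in * c_out
    return depthwise + pointwise

c_in, c_out, k = 64, 128, 3
std = standard_conv_params(c_in, c_out, k)
sep = depthwise_separable_params(c_in, c_out, k)
# For this assumed shape the separable form uses roughly 88% fewer parameters,
# on the same order as the reductions the abstract reports.
print(std, sep, round(1 - sep / std, 3))
```

The savings grow with kernel size and channel count, which is why the technique is a staple of mobile-oriented architectures.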
4. Robust Human Interaction Recognition Using Extended Kalman Filter.
- Author
- Bukht, Tanvir Fatima Naik, Alazeb, Abdulwahab, Mudawi, Naif Al, Alabdullah, Bayan, Alnowaiser, Khaled, Jalal, Ahmad, and Liu, Hui
- Subjects
- PATTERN recognition systems, WOLVES, KALMAN filtering, FEATURE extraction, VISUAL fields, HUMAN activity recognition
- Abstract
In the field of computer vision and pattern recognition, knowledge based on images of human activity has gained popularity as a research topic. Activity recognition is the process of determining human behavior from an image. Here, we implement an Extended Kalman filter to create an activity recognition system. The proposed method applies an HSI color transformation in its initial stages to improve the clarity of the image frame. To minimize noise, we use Gaussian filters, and silhouettes are extracted using a statistical method. We use Binary Robust Invariant Scalable Keypoints (BRISK) and SIFT for feature extraction, then perform feature discrimination using the Gray Wolf optimizer. After that, the features are input into the Extended Kalman filter and classified into relevant human activities according to their definitive characteristics. The experimental procedure uses the SUB-Interaction and HMDB51 datasets, achieving recognition rates of 0.88 and 0.86, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Emergency Detection in Smart Homes Using Inactivity Score for Handling Uncertain Sensor Data.
- Author
- Wilhelm, Sebastian and Wahl, Florian
- Subjects
- LIVING alone, SMART homes, OLDER people, POPULATION aging, FALSE alarms
- Abstract
In an aging society, the need for efficient emergency detection systems in smart homes is becoming increasingly important. For elderly people living alone, technical solutions for detecting emergencies are essential to receiving help quickly when needed. Numerous solutions already exist based on wearable or ambient sensors. However, existing methods for emergency detection typically assume that sensor data are error-free and contain no false positives, which cannot always be guaranteed in practice. Therefore, we present a novel method for detecting emergencies in private households that detects unusually long inactivity periods and can process erroneous or uncertain activity information. We introduce the Inactivity Score, which provides a probabilistic weighting of inactivity periods based on the reliability of sensor measurements. By analyzing historical Inactivity Scores, anomalies that potentially represent an emergency can be identified. The proposed method is compared with four related approaches on seven different datasets. Our method surpasses existing approaches when considering the number of false positives and the mean time to detect emergencies. It achieves an average detection time of approximately 5 h 23 min with only 0.09 false alarms per day under noise-free conditions. Moreover, unlike related approaches, the proposed method remains effective with noisy data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
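The abstract above describes an Inactivity Score that weights inactivity periods by sensor reliability and flags anomalies against historical scores. A minimal sketch of that idea follows; the scoring rule, the z-score-style anomaly test, and all function names are illustrative assumptions, not the paper's actual method.

```python
# Hedged sketch of a reliability-weighted inactivity score and a simple
# historical anomaly check. All formulas here are assumed, not the paper's.

def inactivity_score(gaps, reliabilities):
    """Sum of inactivity gap durations (seconds), each weighted by the
    probability that the sensor event ending the gap was genuine."""
    return sum(g * r for g, r in zip(gaps, reliabilities))

def is_anomalous(score, history, k=3.0):
    # Flag a potential emergency if today's score deviates strongly from
    # the historical mean (z-score-style rule with threshold k; assumed).
    mean = sum(history) / len(history)
    var = sum((h - mean) ** 2 for h in history) / len(history)
    std = var ** 0.5
    return std > 0 and abs(score - mean) > k * std

# A 10-minute gap from a fully reliable sensor plus a 20-minute gap from a
# half-reliable one yields a weighted score of 1200 seconds.
print(inactivity_score([600, 1200], [1.0, 0.5]))
```

Weighting by reliability is what lets uncertain or erroneous sensor events contribute partially instead of either triggering or suppressing an alarm outright.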
6. Optimized neural network models for low power elderly monitoring system in Internet of things.
- Author
- Hasan, Raqibul and Souri, Alireza
- Subjects
- CONVOLUTIONAL neural networks, ARTIFICIAL neural networks, SIGNAL classification, ARM microprocessors, OLDER people
- Abstract
This paper proposes a low-power system for monitoring elderly people's activities and their health conditions. The proposed system has two activity recognition modules: a smartphone sensor-based wearable module and an infrared grid sensor-based remote module. The two modules work in a coordinated way: during the fraction of time the person is detected by the infrared sensor, the smartphone remains idle. As a result, energy consumption in the smartphone is reduced significantly, and hence battery lifetime is increased. In the smartphone, a Feed-forward Neural Network (FNN) based activity recognition algorithm is implemented using fixed-point computation to further reduce energy consumption. A Convolutional Neural Network is used in the infrared sensor-based activity recognition module. The proposed system also has real-time health monitoring capability, based on ECG signal classification. An FNN leveraging fixed-point operations is used for ECG signal classification on an embedded ARM processor. The proposed fixed-point implementations of the FNNs are faster than floating-point implementations and require 50% less memory to store the neural network model parameters, without loss of classification accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
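The abstract above relies on fixed-point computation to run neural-network inference on an embedded processor. A minimal sketch of the core operation follows: quantizing weights and inputs to 16-bit-style integers and doing an integer multiply-accumulate. The Q7.8 format (8 fractional bits) and the function names are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of fixed-point dot-product inference, the building block of
# a fixed-point FNN layer. Q7.8 scaling is an assumption for illustration.

SCALE = 256  # Q7.8: 8 fractional bits, so 1.0 is stored as 256

def to_fixed(x):
    # Quantize a float to the nearest fixed-point integer.
    return int(round(x * SCALE))

def fixed_dot(w_fixed, x_fixed):
    # Integer multiply-accumulate; the product of two Q7.8 numbers carries
    # 16 fractional bits, so divide by SCALE once to return to Q7.8.
    acc = sum(w * x for w, x in zip(w_fixed, x_fixed))
    return acc // SCALE

weights = [0.5, -0.25, 1.0]
inputs = [1.0, 2.0, 0.5]
wf = [to_fixed(w) for w in weights]
xf = [to_fixed(x) for x in inputs]
# Converting the fixed-point result back to float recovers the dot product
# (0.5 here) up to quantization error.
print(fixed_dot(wf, xf) / SCALE)
```

Because every value fits in a small integer, such a layer needs half the storage of 32-bit floats and avoids a floating-point unit entirely, which is the memory and speed saving the abstract reports.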
7. Statistical Predictive Hybrid Choice Modeling: Exploring Embedded Neural Architecture.
- Author
- Nafisah, Ibrahim A., Sajjad, Irsa, Alshahrani, Mohammed A., Alamri, Osama Abdulaziz, Almazah, Mohammed M. A., and Dar, Javid Gani
- Subjects
- SPATIAL orientation, MNEMONICS, VERNACULAR architecture, MACHINE learning, STANDARD deviations
- Abstract
This study introduces an enhanced version of the discrete choice model combining embedded neural architecture to enhance predictive accuracy while preserving interpretability in choice modeling across temporal dimensions. Unlike the traditional architectures, which directly utilize raw data without intermediary transformations, this study introduces a modified approach incorporating temporal embeddings for improved predictive performance. Leveraging the Phones Accelerometer dataset, the model excels in predictive accuracy, discrimination capability and robustness, outperforming traditional benchmarks. With intricate parameter estimates capturing spatial orientations and user-specific patterns, the model offers enhanced interpretability. Additionally, the model exhibits remarkable computational efficiency, minimizing training time and memory usage while ensuring competitive inference speed. Domain-specific considerations affirm its predictive accuracy across different datasets. Overall, the subject model emerges as a transparent, comprehensible, and powerful tool for deciphering accelerometer data and predicting user activities in real-world applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Spatiotemporal information complementary modeling and group relationship reasoning for group activity recognition.
- Author
- Deng, Haigang, Zhang, Zhe, Li, Chengwei, Xu, Wenting, Wang, Chenyang, and Wang, Chuanxu
- Subjects
- SOCIAL interaction, SPACETIME, VOLLEYBALL, RATS, ATTENTION
- Abstract
Exploring spatial-temporal interactions among group members is crucial for group activity recognition. However, most existing approaches cannot jointly consider these interactions across multi-level cross-relations, which results in an incomplete representation. To address this issue, we propose a relational complementary module that comprehensively learns the interactions among members from both time-space and space-time perspectives. To suppress the information redundancy caused by this all-view interaction description, we introduce NH-Softmax to impose sparsity on the few relevant attention weights and generate robust, differentiated feature representations. In addition, to fully exploit individual contextual interaction information, relaxed attention (RAT) is designed to enhance the feature information of each individual in a relaxed manner. It fleshes out individual representations by highlighting the most salient features and eases the computational burden. Our experiments on the Volleyball and Collective Activity datasets show significant improvements over previous state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. An efficient activity recognition model for the visually impaired, integrating object recognition and image captioning with deep learning techniques.
- Author
- Kilimci, Zeynep Hilal and Küçükmanisa, Ayhan
- Abstract
Automatically identifying the content of an image is a core task in artificial intelligence that connects computer vision and natural language processing. This study presents a generative model based on a deep and recurrent architecture, combining the latest developments in computer vision and machine translation, to create natural sentences describing an image. With this model, the texts obtained from the images can be converted into audio files, and the activity of the objects around the person can be described for visually impaired people. For this purpose, object recognition is first performed on images with the YOLO model, which identifies the presence, location, and type of one or more objects in a given image. Next, long short-term memory (LSTM) networks are trained to maximize the probability of the target description sentence given the training image. Thus, the activities in the related image are converted to text-format annotations. The text is then passed to the Google text-to-speech platform to obtain an audio file describing the activity. The Flickr8K, Flickr30K, and MSCOCO datasets are employed to evaluate four different feature injection architectures and demonstrate the effectiveness of the proposed model. The experimental results show that our proposed model successfully expresses the activity description audibly for visually impaired individuals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Effective Human Activity Recognition through Accelerometer Data.
- Author
- Vu Thi Thuong, Duc-Nghia Tran, Duc-Tan Tran, Bui Thi Thu, Vu Duong Tung, Nguyen Thi Anh Phuong, Phung Cong Phi Khanh, Pham Khanh Tung, and Manh-Tuyen Vi
- Subjects
- MACHINE learning, SUPERVISED learning, JOGGING, MICROCONTROLLERS, ACCELEROMETERS, HUMAN activity recognition
- Abstract
In recent years, the field of Human Activity Recognition (HAR) has emerged as a prominent area of research. A plethora of methodologies have been documented in the literature, all with the objective of identifying and analyzing human activities. Among these, the use of a body-worn accelerometer to collect motion data and the subsequent application of a supervised machine learning approach represents a highly promising solution, offering numerous benefits. These include affordability, comfort, ease of use, and high accuracy in recognizing activities. However, a significant challenge associated with this approach is the necessity of performing activity recognition directly on a low-cost, low-performance microcontroller. This research presents the development of a real-time human activity recognition system. The system employs optimized time windows for each activity, a comprehensive set of differentiating features, and a straightforward machine learning model. The efficacy of the proposed system was evaluated using both publicly available datasets and data collected in experiments, achieving activity recognition rates above 95.06%. The system is capable of recognizing six fundamental daily human activities: standing, sitting, jogging, walking, going downstairs, and going upstairs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Efficient Real-Time Devices Based on Accelerometer Using Machine Learning for HAR on Low-Performance Microcontrollers.
- Author
- Manh-Tuyen Vi, Duc-Nghia Tran, Vu Thi Thuong, Nguyen Ngoc Linh, and Duc-Tan Tran
- Subjects
- SUPPORT vector machines, K-nearest neighbor classification, ROOT-mean-squares, RANDOM forest algorithms, SYSTEMS design, MICROCONTROLLERS, HUMAN activity recognition
- Abstract
Analyzing physical activities through wearable devices is a promising research area for improving health assessment. This research focuses on the development of an affordable, real-time Human Activity Recognition (HAR) system designed to operate on low-performance microcontrollers. The system utilizes data from a body-worn accelerometer to recognize and classify human activities, providing a cost-effective, easy-to-use, and highly accurate solution. A key challenge addressed in this study is the execution of efficient motion recognition within a resource-constrained environment. The system employs a Random Forest (RF) classifier, which outperforms Gradient Boosting Decision Trees (GBDT), Support Vector Machines (SVM), and K-Nearest Neighbors (KNN) in terms of accuracy and computational efficiency. The proposed features include Average Absolute Deviation (AAD), Standard Deviation (STD), Interquartile Range (IQR), Range, and Root Mean Square (RMS). Numerous experiments and comparisons were conducted to establish optimal parameters for ensuring system effectiveness, including setting a sampling frequency of 50 Hz and selecting an 8-s window size with a 40% overlap between windows. Validation was conducted on both the WISDM public dataset and a self-collected dataset, focusing on five fundamental daily activities: Standing, Sitting, Jogging, Walking, and Walking the stairs. The results demonstrated high recognition accuracy, with the system achieving 96.7% on the WISDM dataset and 97.13% on the collected dataset. This research confirms the feasibility of deploying HAR systems on low-performance microcontrollers and highlights the system’s potential applications in patient support, rehabilitation, and elderly care. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
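The abstract above names its windowing parameters (50 Hz sampling, 8-second windows, 40% overlap) and its hand-crafted features (AAD, STD, IQR, Range, RMS). A minimal sketch of that pipeline follows; the feature formulas are the standard textbook definitions and the function names are hypothetical, so this is an illustration of the stated setup rather than the paper's code.

```python
# Hedged sketch: sliding windows and the named statistical features over a
# single accelerometer axis. Parameters match the abstract; code is assumed.
import numpy as np

FS = 50                      # sampling frequency (Hz), per the abstract
WIN = 8 * FS                 # 8-second window = 400 samples
STEP = int(WIN * (1 - 0.4))  # 40% overlap -> hop of 240 samples

def window_features(x):
    x = np.asarray(x, dtype=float)
    return {
        "aad": np.mean(np.abs(x - x.mean())),            # average absolute deviation
        "std": x.std(),                                  # standard deviation
        "iqr": np.percentile(x, 75) - np.percentile(x, 25),  # interquartile range
        "range": x.max() - x.min(),
        "rms": np.sqrt(np.mean(x ** 2)),                 # root mean square
    }

def sliding_windows(signal):
    # Yield one feature dict per overlapping window.
    for start in range(0, len(signal) - WIN + 1, STEP):
        yield window_features(signal[start:start + WIN])
```

Each window's feature vector would then be fed to the Random Forest classifier; on a microcontroller the same formulas are typically computed incrementally to avoid buffering full windows in float arrays.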
12. Activity scenarios simulation by discovering knowledge through activities of daily living datasets
- Author
- Swe Nwe Nwe Htun, Shusaku Egami, and Ken Fukuda
- Subjects
- activities of daily living, scenarios, abnormal activities, activity recognition, simulated activities, synthetic data, Control engineering systems. Automatic machinery (General), TJ212-225
- Abstract
Efficiently recognizing Activities of Daily Living (ADLs) requires overcoming challenges in dataset collection through innovative approaches, while also meeting the demand for interpreting human activities amid temporal sequences of actions and interactions with objects, considering real-life scenarios and resource constraints. This study investigates the potential of generating synthetic training data for ADL recognition using the VirtualHome2KG framework. We further investigate the transformative potential of simulating activities in virtual spaces, as evidenced by our survey of real-world activity datasets and exploration of synthetic datasets in virtual environments. Our work explicitly simulates activities in the 3D Unity platform, affording seamless transitions between environments and camera perspectives. In addition, we meticulously construct scenarios not only for regular daily activities but also for abnormal activities, to detect risky situations for independent living and ensure the incorporation of critical criteria. We incorporate one contemporary method for abnormal activity detection to demonstrate the efficacy of our simulated activity data. Our findings suggest that our activity scenario preparation accomplishes the intended research objective while paving the way for an interesting research avenue.
- Published
- 2024
- Full Text
- View/download PDF
13. Real-time monitoring of lower limb movement resistance based on deep learning
- Author
- Burenbatu, Yuanmeng Liu, and Tianyi Lyu
- Subjects
- Real-time monitoring, Lower limb movement resistance, MobileNetV3, Multi-task learning (MTL), Resistance prediction, Activity recognition, Engineering (General). Civil engineering (General), TA1-2040
- Abstract
Real-time lower limb movement resistance monitoring is critical for various applications in clinical and sports settings, such as rehabilitation and athletic training. Current methods often face limitations in accuracy, computational efficiency, and generalizability, which hinder their practical implementation. To address these challenges, we propose a novel Mobile Multi-Task Learning Network (MMTL-Net) that integrates MobileNetV3 for efficient feature extraction and employs multi-task learning to simultaneously predict resistance levels and recognize activities. The advantages of MMTL-Net include enhanced accuracy, reduced latency, and improved computational efficiency, making it highly suitable for real-time applications. Experimental results demonstrate that MMTL-Net significantly outperforms existing models on the UCI Human Activity Recognition and Wireless Sensor Data Mining Activity Prediction datasets, achieving a lower Force Error Rate (FER) of 6.8% and a higher Resistance Prediction Accuracy (RPA) of 91.2%. Additionally, the model shows a Real-time Responsiveness (RTR) of 12 ms and a Throughput (TP) of 33 frames per second. These findings underscore the model’s robustness and effectiveness in diverse real-world scenarios. The proposed framework not only advances the state-of-the-art in resistance monitoring but also paves the way for more efficient and accurate systems in clinical and sports applications. In real-world settings, the practical implications of MMTL-Net include its potential to enhance patient outcomes in rehabilitation and improve athletic performance through precise, real-time monitoring and feedback.
- Published
- 2025
- Full Text
- View/download PDF
14. The Real-Time Classification of Competency Swimming Activity Through Machine Learning.
- Author
- Powell, Larry, Polsley, Seth, Casey, Drew, and Hammond, Tracy
- Subjects
- SWIMMING techniques, MACHINE learning, SWIMMING, SENSOR placement, MOTION detectors
- Abstract
Every year, an average of 3,536 people die from drowning in America. The significant factors that cause unintentional drowning are people's lack of water safety awareness and swimming proficiency. Current industry and research trends regarding swimming activity recognition and commercial motion sensors focus more on lap swimming utilized by expert swimmers and do not account for freeform activities. Enhancing swimming education through wearable technology can aid people in learning efficient and effective swimming techniques and water safety. We developed a novel wearable system capable of storing and processing sensor data to categorize competitive and survival swimming activities on a mobile device in real time. This paper discusses the sensor placement, the hardware and app design, and the research process utilized to achieve activity recognition. For our studies, the data we have gathered comes from various swimming skill levels, from beginner to elite swimmers. Our wearable system uses novel angle-based features as inputs to optimal machine learning algorithms to classify flip turns, traditional competitive strokes, and survival swimming strokes. The machine learning algorithm classified all activities with an F-measure of 0.935. Finally, we examined deep learning and created a CNN model to classify competitive and survival swimming strokes at 95% accuracy in real time on a mobile device. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Towards Prosthesis Control: Identification of Locomotion Activities through EEG-Based Measurements.
- Author
- Zafar, Saqib, Maqbool, Hafiz Farhan, Ashraf, Muhammad Imran, Malik, Danial Javaid, Abdeen, Zain ul, Ali, Wahab, Taborri, Juri, and Rossi, Stefano
- Subjects
- MACHINE learning, INDEPENDENT component analysis, FEATURE extraction, HUMAN locomotion, K-nearest neighbor classification
- Abstract
The integration of advanced control systems in prostheses necessitates the accurate identification of human locomotion activities, a task that can significantly benefit from EEG-based measurements combined with machine learning techniques. The main contribution of this study is the development of a novel framework for the recognition and classification of locomotion activities using electroencephalography (EEG) data by comparing the performance of different machine learning algorithms. Data of the lower limb movements during level ground walking, as well as going up stairs, down stairs, up ramps, and down ramps, were collected from 10 healthy volunteers. Time- and frequency-domain features were extracted by applying independent component analysis (ICA). Subsequently, they were used to train and test random forest and k-nearest neighbors (kNN) algorithms. For classification, random forest proved to be the best-performing algorithm, achieving an overall accuracy of up to 92%. The findings of this study contribute to the field of assistive robotics by confirming that EEG-based measurements, when combined with appropriate machine learning models, can serve as robust inputs for prosthesis control systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. PerMl-Fed: enabling personalized multi-level federated learning within heterogenous IoT environments for activity recognition.
- Author
- Zhang, Chang, Zhu, Tao, Wu, Hangxing, and Ning, Huansheng
- Subjects
- FEDERATED learning, DATA privacy, DATA mining, INDIVIDUALIZED instruction, MACHINE learning
- Abstract
Federated Learning (FL) has emerged as a promising approach to addressing issues related to centralized machine learning such as data privacy, security, and access. Nevertheless, it also brings new challenges incurred by heterogeneity among data statistical levels, devices, and models in the context of a multi-level federated learning (MlFed) architecture. In this paper, we conceive a new Personalized Multi-level Federated Learning (PerMl-Fed) framework, which extends the existing MlFed architecture with three specialized personalized FL methods to address these three challenges. Specifically, we design a Transfer Multi-level Federated Learning (TrMlFed) model to mitigate statistical heterogeneity across multiple layers of FL. We propose an Asynchronous Multi-level Federated Learning (AsMlFed) approach that allows asynchronous updates in MlFed, thus alleviating device heterogeneity. We develop a Deep Mutual Multi-level Federated Learning (DmMlFed) method based on the concept of deep mutual learning to tackle model heterogeneity. We evaluate the PerMl-Fed framework and associated technologies on the public Wireless Sensor Data Mining (WISDM) dataset. Initial results demonstrate an average accuracy improvement of 7% and accuracies ranging from 84% to 92% across eight different hierarchical group structures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
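The PerMl-Fed abstract above builds on the basic federated-learning loop in which clients train locally and a server aggregates their parameters. A minimal sketch of that underlying aggregation step, in the style of the well-known FedAvg algorithm, follows; the two-client setup and the function name are illustrative, and the paper's multi-level, personalized scheme is considerably more elaborate.

```python
# Hedged sketch of FedAvg-style server aggregation, the baseline on which
# multi-level federated learning frameworks build. Setup is illustrative.

def federated_average(client_weights, client_sizes):
    """Size-weighted average of per-client parameter vectors: clients with
    more local data contribute proportionally more to the global model."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients with datasets of 100 and 300 samples; the larger client
# pulls the global parameters three times as hard.
global_w = federated_average([[1.0, 2.0], [3.0, 4.0]], [100, 300])
print(global_w)  # [2.5, 3.5]
```

In a multi-level architecture this averaging happens at each tier (e.g. edge aggregators, then a cloud server), which is where the statistical, device, and model heterogeneity the abstract targets becomes acute.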
17. Deep learning approaches for human-centered IoT applications in smart indoor environments: a contemporary survey.
- Author
- Abdel-Basset, Mohamed, Chang, Victor, Hawash, Hossam, Chakrabortty, Ripon K., and Ryan, Michael
- Subjects
- DEEP learning, INTERNET usage monitoring, INTERNET of things, SMART homes, DATA analysis
- Abstract
The widespread use of Internet of Things (IoT) technologies in everyday indoor environments results in an enormous amount of daily generated data, which requires reliable data analysis techniques to enable efficient exploitation. Recent developments in deep learning (DL) have facilitated processing and learning from massive IoT data, extracting essential features swiftly for a variety of IoT applications in smart indoor environments. This study surveys the recent literature on exploiting DL for different indoor IoT applications. We aim to give insights into how DL approaches can be employed from various viewpoints to develop improved indoor IoT applications in two distinct domains: indoor positioning/tracking and activity recognition. A primary target is to seamlessly amalgamate the two disciplines of IoT and DL, resulting in a broad range of innovative strategies in indoor IoT applications, such as health monitoring, smart home control, robotics, etc. Further, we derive a thematic taxonomy from the comparative analysis of technical studies of the aforementioned domains. Finally, we propose and discuss a set of issues, challenges, and new directions in incorporating DL to improve the efficiency of indoor IoT applications, encouraging and stimulating additional advances in this auspicious research area. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. System Design for Sensing in Manufacturing to Apply AI through Hierarchical Abstraction Levels.
- Author
- Sopidis, Georgios, Haslgrübler, Michael, Azadi, Behrooz, Guiza, Ouijdane, Schobesberger, Martin, Anzengruber-Tanase, Bernhard, and Ferscha, Alois
- Subjects
- SYSTEMS design, HUMAN activity recognition, ARTIFICIAL intelligence, ASSEMBLY line methods, SMART homes, RESEARCH personnel
- Abstract
Activity recognition combined with artificial intelligence is a vital area of research, ranging across diverse domains, from sports and healthcare to smart homes. In the industrial domain, and the manual assembly lines, the emphasis shifts to human–machine interaction and thus to human activity recognition (HAR) within complex operational environments. Developing models and methods that can reliably and efficiently identify human activities, traditionally just categorized as either simple or complex activities, remains a key challenge in the field. Limitations of the existing methods and approaches include their inability to consider the contextual complexities associated with the performed activities. Our approach to address this challenge is to create different levels of activity abstractions, which allow for a more nuanced comprehension of activities and define their underlying patterns. Specifically, we propose a new hierarchical taxonomy for human activity abstraction levels based on the context of the performed activities that can be used in HAR. The proposed hierarchy consists of five levels, namely atomic, micro, meso, macro, and mega. We compare this taxonomy with other approaches that divide activities into simple and complex categories as well as other similar classification schemes and provide real-world examples in different applications to demonstrate its efficacy. Regarding advanced technologies like artificial intelligence, our study aims to guide and optimize industrial assembly procedures, particularly in uncontrolled non-laboratory environments, by shaping workflows to enable structured data analysis and highlighting correlations across various levels throughout the assembly progression. 
In addition, it establishes effective communication and shared understanding between researchers and industry professionals while also providing them with the essential resources to facilitate the development of systems, sensors, and algorithms for custom industrial use cases that adapt to the level of abstraction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Movement Representation Learning for Pain Level Classification.
- Author
- Olugbade, Temitayo, Williams, Amanda C de C, Gold, Nicolas, and Bianchi-Berthouze, Nadia
- Abstract
Self-supervised learning has shown value for uncovering informative movement features for human activity recognition. However, there has been minimal exploration of this approach for affect recognition where availability of large labelled datasets is particularly limited. In this paper, we propose a P-STEMR (Parallel Space-Time Encoding Movement Representation) architecture with the aim of addressing this gap and specifically leveraging the higher availability of human activity recognition datasets for pain-level classification. We evaluated and analyzed the architecture using three different datasets across four sets of experiments. We found statistically significant increase in average F1 score to 0.84 for pain level classification with two classes based on the architecture compared with the use of hand-crafted features. This suggests that it is capable of learning movement representations and transferring these from activity recognition based on data captured in lab settings to classification of pain levels with messier real-world data. We further found that the efficacy of transfer between datasets can be undermined by dissimilarities in population groups due to impairments that affect movement behaviour and in motion primitives (e.g. rotation versus flexion). Future work should investigate how the effect of these differences could be minimized so that data from healthy people can be more valuable for transfer learning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Collider-based movement detection and control of wearable soft robots for visually augmenting dance performance
- Author
- Patrick Twomey, Vaibhavsingh Varma, Leslie L. Bush, and Mitja Trkov
- Subjects
- activity recognition, movement detection, colliders, wearable sensors, inertial measurement units (IMUs), soft robots, Mechanical engineering and machinery, TJ1-1570, Electronic computers. Computer science, QA75.5-76.95
- Abstract
The fusion of wearable soft robotic actuators and motion-tracking sensors can enhance dance performance, amplifying its visual language and communicative potential. However, the intricate and unpredictable nature of improvisational dance poses unique challenges for existing motion-tracking methods, underscoring the need for more adaptable solutions. Conventional methods such as optical tracking face limitations due to limb occlusion. The use of inertial measurement units (IMUs) can alleviate some of these challenges; however, their movement detection algorithms are complex and often based on fixed thresholds. Additionally, machine learning algorithms are unsuitable for detecting the arbitrary motion of improvisational dancers due to the non-repetitive and unique nature of their movements, resulting in limited available training data. To address these challenges, we introduce a collider-based movement detection algorithm. Colliders are modeled as virtual mass-spring-damper systems whose responses relate to the dynamics of limb segments. Individual colliders are defined in planes corresponding to the limbs' degrees of freedom. The system responses of these colliders relate to limb dynamics and can be used to quantify dynamic movements such as a jab, as demonstrated herein. One key advantage of collider dynamics is their ability to capture complex limb movements in their relative frame, as opposed to the global frame, thus avoiding drift issues common with IMUs. Additionally, we propose a simplified movement detection scheme based on an individual dynamic-system response variable, as opposed to fixed thresholds that consider multiple variables simultaneously (i.e., displacement, velocity, and acceleration). Our approach combines the collider-based algorithm with a hashing method to design a robust and high-speed detection algorithm for improvised dance motions.
Experimental results demonstrate that our algorithm effectively detects improvisational dance movements, allowing control of wearable, origami-based soft actuators that can change size and lighting based on detected movements. This innovative method allows dancers to trigger events on stage, creating a unique organic aesthetics that seamlessly integrates technology with spontaneous movements. Our research highlights how this approach not only enriches dance performances by blending tradition and innovation but also enhances the expressive capabilities of dance, demonstrating the potential for technology to elevate and augment this art form.
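The collider idea described above can be sketched as a virtual mass-spring-damper driven by limb acceleration, with detection based on a single response variable. The parameter values and threshold below are illustrative assumptions, not the authors' tuned constants:

```python
# Minimal sketch of a collider as a virtual mass-spring-damper system.
# All constants (m, k, c, dt, threshold) are illustrative assumptions.

def collider_response(accel, m=1.0, k=50.0, c=2.0, dt=0.01):
    """Integrate m*x'' + c*x' + k*x = f(t) with forward Euler,
    where f(t) is the limb acceleration driving the collider."""
    x, v = 0.0, 0.0
    xs = []
    for f in accel:
        a = (f - c * v - k * x) / m
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs

def detect_jab(accel, threshold=0.05):
    """Flag a dynamic movement when the collider's peak displacement
    exceeds a threshold on a single response variable."""
    return max(abs(x) for x in collider_response(accel)) > threshold
```

Because the collider integrates the drive signal in the limb's own frame, a brief, sharp acceleration burst (a jab-like motion) produces a large displacement response, while a quiet signal does not.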
- Published
- 2024
- Full Text
- View/download PDF
21. Varishta Rakshak: An AI-Based Comprehensive Web Framework for Ensuring Senior Citizen Care in Real Time
- Author
-
Gupta, Shivam Ramesh, Bohra, Meet Nirmal, Mahimi, Yashab Hanif, Mishra, Rupesh Sheshnath, Badgujar, Vishal Sahebrao, Deshpande, Kiran, Birje, Shradha Sanjay, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Hassanien, Aboul Ella, editor, Anand, Sameer, editor, Jaiswal, Ajay, editor, and Kumar, Prabhat, editor
- Published
- 2024
- Full Text
- View/download PDF
22. Enhancing Player Experience Through AI-Powered Wireless Sensor Networks: A KNN Algorithm Approach for Tracking Daily and Sports Activities
- Author
-
Narendran, M., Swarna Teja, R., Sumithra Devi, K., Gayathri, S., Mansurali, A, editor, Jeyanthi, P. Mary, editor, Hack-Polay, Dieu, editor, and Mahmoud, Ali B., editor
- Published
- 2024
- Full Text
- View/download PDF
23. Enriching Scene-Graph Generation with Prior Knowledge from Work Instruction
- Author
-
Jeskó, Zoltán, Tran, Tuan-Anh, Halász, Gergely, Abonyi, János, Ruppert, Tamás, Rannenberg, Kai, Editor-in-Chief, Soares Barbosa, Luís, Editorial Board Member, Carette, Jacques, Editorial Board Member, Tatnall, Arthur, Editorial Board Member, Neuhold, Erich J., Editorial Board Member, Stiller, Burkhard, Editorial Board Member, Stettner, Lukasz, Editorial Board Member, Pries-Heje, Jan, Editorial Board Member, M. Davison, Robert, Editorial Board Member, Rettberg, Achim, Editorial Board Member, Furnell, Steven, Editorial Board Member, Mercier-Laurent, Eunika, Editorial Board Member, Winckler, Marco, Editorial Board Member, Malaka, Rainer, Editorial Board Member, Thürer, Matthias, editor, Riedel, Ralph, editor, von Cieminski, Gregor, editor, and Romero, David, editor
- Published
- 2024
- Full Text
- View/download PDF
24. Unveiling the Potential of Machine Learning in Activity Recognition for Industry 4.0
- Author
-
Chiaro, Diletta, Qi, Pian, Rosa, Mariapia De, Cuomo, Salvatore, Piccialli, Francesco, Fortino, Giancarlo, Series Editor, Liotta, Antonio, Series Editor, Ianni, Michele, editor, Guzzo, Antonella, editor, Gravina, Raffaele, editor, Ghasemzadeh, Hassan, editor, and Wang, Zhelong, editor
- Published
- 2024
- Full Text
- View/download PDF
25. A Survey on Driver Monitoring System Using Computer Vision Techniques
- Author
-
Kumar, K. L. Santhosh, Kannan, M. K. Jayanthi, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Hassanien, Aboul Ella, editor, Anand, Sameer, editor, Jaiswal, Ajay, editor, and Kumar, Prabhat, editor
- Published
- 2024
- Full Text
- View/download PDF
26. College Student Activity Recognition from Smartwatch Dataset
- Author
-
de Clairval, Arthur, Schuler, Laurent Alain Erwin, Rellier, Mathis Franck, Irawan, Mohammad Isa, Mukhlash, Imam, Iqbal, Mohammad, Adzkiya, Dieky, editor, and Fahim, Kistosil, editor
- Published
- 2024
- Full Text
- View/download PDF
27. A Novel Method for Wearable Activity Recognition with Feature Evolvable Streams
- Author
-
Wang, Yixiao, Hu, Chunyu, Liu, Hong, Lyu, Lei, Yuan, Lin, Akan, Ozgur, Editorial Board Member, Bellavista, Paolo, Editorial Board Member, Cao, Jiannong, Editorial Board Member, Coulson, Geoffrey, Editorial Board Member, Dressler, Falko, Editorial Board Member, Ferrari, Domenico, Editorial Board Member, Gerla, Mario, Editorial Board Member, Kobayashi, Hisashi, Editorial Board Member, Palazzo, Sergio, Editorial Board Member, Sahni, Sartaj, Editorial Board Member, Shen, Xuemin, Editorial Board Member, Stan, Mircea, Editorial Board Member, Jia, Xiaohua, Editorial Board Member, Zomaya, Albert Y., Editorial Board Member, Zaslavsky, Arkady, editor, Ning, Zhaolong, editor, Kalogeraki, Vana, editor, Georgakopoulos, Dimitrios, editor, and Chrysanthis, Panos K., editor
- Published
- 2024
- Full Text
- View/download PDF
28. Activity Recognition of Nursing Tasks in a Hospital: Requirements and Challenges
- Author
-
Bruns, Fenja T., Pauls, Alexander, Koppelin, Frauke, Wallhoff, Frank, Akan, Ozgur, Editorial Board Member, Bellavista, Paolo, Editorial Board Member, Cao, Jiannong, Editorial Board Member, Coulson, Geoffrey, Editorial Board Member, Dressler, Falko, Editorial Board Member, Ferrari, Domenico, Editorial Board Member, Gerla, Mario, Editorial Board Member, Kobayashi, Hisashi, Editorial Board Member, Palazzo, Sergio, Editorial Board Member, Sahni, Sartaj, Editorial Board Member, Shen, Xuemin, Editorial Board Member, Stan, Mircea, Editorial Board Member, Jia, Xiaohua, Editorial Board Member, Zomaya, Albert Y., Editorial Board Member, Salvi, Dario, editor, Van Gorp, Pieter, editor, and Shah, Syed Ahmar, editor
- Published
- 2024
- Full Text
- View/download PDF
29. Overview of Human Activity Recognition Using Sensor Data
- Author
-
Hamad, Rebeen Ali, Woo, Wai Lok, Wei, Bo, Yang, Longzhi, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Panoutsos, George, editor, Mahfouf, Mahdi, editor, and Mihaylova, Lyudmila S, editor
- Published
- 2024
- Full Text
- View/download PDF
30. Spatiotemporal Object Detection and Activity Recognition
- Author
-
Kumar, Vimal, Jain, Shobhit, Lillis, David, Han, Jiawei, Advisory Editor, Meng, Xiaofeng, Editor-in-Chief, Zeng, Daniel Dajun, Editorial Board Member, Kitsuregawa, Masaru, Advisory Editor, Jin, Hai, Editorial Board Member, Yu, Philip S., Advisory Editor, Wang, Haixun, Editorial Board Member, Liu, Huan, Editorial Board Member, Tan, Tieniu, Advisory Editor, Gao, Wen, Advisory Editor, Wang, X. Sean, Editorial Board Member, Meng, Weiyi, Editorial Board Member, A, John, editor, Abimannan, Satheesh, editor, El-Alfy, El-Sayed M., editor, and Chang, Yue-Shan, editor
- Published
- 2024
- Full Text
- View/download PDF
31. An Open-Source Voice Command-Based Human-Computer Interaction System Using Speech Recognition Platforms
- Author
-
Fuad, Adnan Mahmud, Ahmed, Sheikh Jahan, Anannya, Nusrat Jahan, Mridha, M. F., Nur, Kamruddin, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Arefin, Mohammad Shamsul, editor, Kaiser, M. Shamim, editor, Bhuiyan, Touhid, editor, Dey, Nilanjan, editor, and Mahmud, Mufti, editor
- Published
- 2024
- Full Text
- View/download PDF
32. Benchmarking of Semantic Segmentation Enabled Human Activity Recognition Methods
- Author
-
Rana, Akshit, Chauhan, Kshitij Kumar Singh, Sinha, Suyash Kumar, Tiwari, Vivek, Lovanshi, Mayank, Gupta, Shailendra, Dey, Nilanjan, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Piuri, Vincenzo, Series Editor, Mishra, Durgesh, editor, Yang, Xin She, editor, Unal, Aynur, editor, and Jat, Dharm Singh, editor
- Published
- 2024
- Full Text
- View/download PDF
33. Containerized Wearable Edge AI Inference Framework in Mobile Health Systems
- Author
-
Nkenyereye, Lionel, Lee, Boon Giin, Chung, Wan-Young, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Choi, Bong Jun, editor, Singh, Dhananjay, editor, Tiwary, Uma Shanker, editor, and Chung, Wan-Young, editor
- Published
- 2024
- Full Text
- View/download PDF
34. Violence Detection Using DenseNet and LSTM
- Author
-
Ranjan, Prashansa, Gupta, Ayushi, Jain, Nandini, Goyal, Tarushi, Singh, Krishna Kant, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Tan, Kay Chen, Series Editor, Jain, Shruti, editor, Marriwala, Nikhil, editor, Singh, Pushpendra, editor, Tripathi, C.C., editor, and Kumar, Dinesh, editor
- Published
- 2024
- Full Text
- View/download PDF
35. Digital Twin Architecture for Ambient Assisted Living
- Author
-
Ramadan, Abbas, De Lamotte, Florent Frizon, Julien, Nathalie, Kacprzyk, Janusz, Series Editor, Borangiu, Theodor, editor, Trentesaux, Damien, editor, Leitão, Paulo, editor, Berrah, Lamia, editor, and Jimenez, Jose-Fernando, editor
- Published
- 2024
- Full Text
- View/download PDF
36. Movement Pattern Recognition in Boxing Using Raw Inertial Measurements
- Author
-
Puchalski, Radosław, Giernacki, Wojciech, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Pereira, Ana I., editor, Mendes, Armando, editor, Fernandes, Florbela P., editor, Pacheco, Maria F., editor, Coelho, João P., editor, and Lima, José, editor
- Published
- 2024
- Full Text
- View/download PDF
37. Robust Machine Learning for Low-Power Wearable Devices: Challenges and Opportunities
- Author
-
Bhat, Ganapati, Hussein, Dina, Yamin, Nuzhat, Pasricha, Sudeep, editor, and Shafique, Muhammad, editor
- Published
- 2024
- Full Text
- View/download PDF
38. Activity Recognition for Attachments of Construction Machinery Using Decision Trees
- Author
-
Theobald, Marc, Top, Felix, di Prisco, Marco, Series Editor, Chen, Sheng-Hong, Series Editor, Vayas, Ioannis, Series Editor, Kumar Shukla, Sanjay, Series Editor, Sharma, Anuj, Series Editor, Kumar, Nagesh, Series Editor, Wang, Chien Ming, Series Editor, Cui, Zhen-Dong, Series Editor, Fottner, Johannes, editor, Nübel, Konrad, editor, and Matt, Dominik, editor
- Published
- 2024
- Full Text
- View/download PDF
39. Day2Dark: Pseudo-Supervised Activity Recognition Beyond Silent Daylight
- Author
-
Zhang, Yunhua, Doughty, Hazel, and Snoek, Cees G. M.
- Published
- 2024
- Full Text
- View/download PDF
40. Fall Rate Detection, Identification and Analysis Object Oriented for Elderly Safety
- Author
-
Sudirman Sudirman, Ansar Suyuti, Zahir Zainuddin, and Arief Fauzan
- Subjects
activity recognition ,artificial intelligence ,foreground detection ,fall detection ,machine learning ,mask r-cnn ,motion history image ,object oriented programming ,svm classification ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
The elderly population in Indonesia reached 30.16 million in 2021; people aged 60 years and over made up 11.01% of the country's total population of 273.88 million. Many elderly people live on their own because their families are busy with work, so when an elderly person falls, a motion detection system is needed to monitor their condition at home. This study designs a visual artificial intelligence activity recognition system that takes camera input and detects elderly activities from video. The pipeline captures video data through image acquisition, applies foreground detection to convert images to binary, uses Mask R-CNN to identify detected objects and locate the incident, represents the position of the detected object's body with a motion history image and C_motion, and classifies the data with an SVM as either a fall or an activity of daily living. The experimental results show that the system detects falls with an accuracy of 97.50%.
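The motion history image step in the pipeline above can be sketched as follows; the grid representation and decay constant are illustrative assumptions, not the study's implementation:

```python
# Illustrative sketch of a Motion History Image (MHI) update: pixels that
# are moving now are stamped at full intensity, and older motion decays
# linearly toward zero. tau and delta are assumed values.

def update_mhi(mhi, motion_mask, tau=255, delta=32):
    """mhi: 2D grid of intensities; motion_mask: binary foreground
    from frame differencing (1 = moving). Returns the updated grid."""
    out = []
    for row_m, row_h in zip(motion_mask, mhi):
        out.append([tau if m else max(0, h - delta)
                    for m, h in zip(row_m, row_h)])
    return out
```

Features of the resulting intensity map (e.g., its centroid over time) are the kind of body-position representation that a classifier such as an SVM can then separate into falls versus activities of daily living.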
- Published
- 2024
- Full Text
- View/download PDF
41. X-CHAR: A Concept-based Explainable Complex Human Activity Recognition Model.
- Author
-
Jeyakumar, Jeya, Sarker, Ankur, Garcia, Luis, and Srivastava, Mani
- Subjects
Activity recognition ,Explainable AI ,Interpretability ,Neural networks - Abstract
End-to-end deep learning models are increasingly applied to safety-critical human activity recognition (HAR) applications, e.g., healthcare monitoring and smart home control, to reduce developer burden and increase the performance and robustness of prediction models. However, integrating HAR models in safety-critical applications requires trust, and recent approaches have aimed to balance the performance of deep learning models with explainable decision-making for complex activity recognition. Prior works have exploited the compositionality of complex HAR (i.e., higher-level activities composed of lower-level activities) to form models with symbolic interfaces, such as concept-bottleneck architectures, that facilitate inherently interpretable models. However, feature engineering for symbolic concepts, as well as the relationship between the concepts, requires precise annotation of lower-level activities by domain experts, usually with fixed time windows, all of which induces a heavy and error-prone workload on the domain expert. In this paper, we introduce X-CHAR, an eXplainable Complex Human Activity Recognition model that doesn't require precise annotation of low-level activities, offers explanations in the form of human-understandable, high-level concepts, and maintains the robust performance of end-to-end deep learning models for time series data. X-CHAR learns to model complex activity recognition in the form of a sequence of concepts. For each classification, X-CHAR outputs a sequence of concepts and a counterfactual example as the explanation. We show that the sequence information of the concepts can be modeled using Connectionist Temporal Classification (CTC) loss without having accurate start and end times of low-level annotations in the training dataset, significantly reducing developer burden. 
We evaluate our model on several complex activity datasets and demonstrate that our model offers explanations without compromising the prediction accuracy in comparison to baseline models. Finally, we conducted a Mechanical Turk study to show that the explanations provided by our model are more understandable than the explanations from existing methods for complex activity recognition.
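X-CHAR models a complex activity as a sequence of concepts trained with CTC loss. The standard CTC collapse rule (merge consecutive repeats, then drop blanks) can be sketched as follows; this shows only the decoding convention, not the model or the loss itself, and the concept labels are made-up examples:

```python
# Standard CTC collapse: merge consecutive repeated labels, then drop the
# blank symbol. This is how a per-frame concept sequence becomes a concept
# sequence without needing start/end times for each low-level activity.

BLANK = "-"

def ctc_collapse(frame_labels):
    out = []
    prev = None
    for lab in frame_labels:
        # Append only on a label change, and never append the blank.
        if lab != prev and lab != BLANK:
            out.append(lab)
        prev = lab
    return out
```

Note that a blank between two identical labels keeps them distinct, which is what lets CTC represent genuinely repeated concepts.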
- Published
- 2023
42. Optimization of activity-driven event detection for long-term ambulatory urodynamics.
- Author
-
Zareen, Farhath, Elazab, Mohammed, Hanzlicek, Brett, Doelman, Adam, Bourbeau, Dennis, Majerus, Steve JA, Damaser, Margot S, and Karam, Robert
- Abstract
Lower urinary tract dysfunction (LUTD) is a debilitating condition that affects millions of individuals worldwide, greatly diminishing their quality of life. The use of wireless, catheter-free implantable devices for long-term ambulatory bladder monitoring, combined with a single-sensor system capable of detecting various bladder events, has the potential to significantly enhance the diagnosis and treatment of LUTD. However, these systems produce large amounts of bladder data that may contain physiological noise in the pressure signals caused by motion artifacts and sudden movements, such as coughing or laughing, potentially leading to false positives during bladder event classification and inaccurate diagnosis/treatment. Integration of activity recognition (AR) can improve classification accuracy, provide context regarding patient activity, and detect motion artifacts by identifying contractions that may result from patient movement. This work investigates the utility of including data from inertial measurement units (IMUs) in the classification pipeline, and considers various digital signal processing (DSP) and machine learning (ML) techniques for optimization and activity classification. In a case study, we analyze simultaneous bladder pressure and IMU data collected from an ambulating female Yucatan minipig. We identified 10 important, yet relatively inexpensive to compute signal features, with which we achieve an average 91.5% activity classification accuracy. Moreover, when classified activities are included in the bladder event analysis pipeline, we observe an improvement in classification accuracy, from 81% to 89.0%. These results suggest that certain IMU features can improve bladder event classification accuracy with low computational overhead. 
Clinical Relevance: This work establishes that activity recognition may be used in conjunction with single-channel bladder event detection systems to distinguish between contractions and motion artifacts for reducing the incorrect classification of bladder events. This is relevant for emerging sensors that measure intravesical pressure alone or for data analysis of bladder pressure in ambulatory subjects that contain significant abdominal pressure artifacts. [ABSTRACT FROM AUTHOR]
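The inexpensive per-window IMU signal features mentioned above can be sketched as follows; the study's exact 10-feature set is in the paper, and the features below (mean, standard deviation, RMS, signal magnitude area) are generic stand-ins:

```python
# Hedged sketch of cheap per-window IMU features for activity
# classification. The feature names and set are generic examples,
# not the study's selected features.
import math

def window_features(ax, ay, az):
    """ax, ay, az: equal-length accelerometer samples for one window."""
    feats = {}
    for name, sig in (("ax", ax), ("ay", ay), ("az", az)):
        n = len(sig)
        mean = sum(sig) / n
        var = sum((s - mean) ** 2 for s in sig) / n
        feats[f"{name}_mean"] = mean
        feats[f"{name}_std"] = math.sqrt(var)
        feats[f"{name}_rms"] = math.sqrt(sum(s * s for s in sig) / n)
    # Signal magnitude area: mean of summed absolute axis values.
    feats["sma"] = sum(abs(a) + abs(b) + abs(c)
                       for a, b, c in zip(ax, ay, az)) / len(ax)
    return feats
```

Feature vectors of this kind are what a lightweight classifier can consume per window to label the animal's (or patient's) activity with low computational overhead.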
- Published
- 2024
- Full Text
- View/download PDF
43. DSANet: A lightweight hybrid network for human action recognition in virtual sports.
- Author
-
Xiao, Zhiyong, Yu, Feng, Liu, Li, Peng, Tao, Hu, Xinrong, and Jiang, Minghua
- Abstract
Human activity recognition (HAR) has significant potential in virtual sports applications. However, current HAR networks often prioritize high accuracy at the expense of practical application requirements, resulting in networks with large parameter counts and computational complexity. This can pose challenges for real‐time and efficient recognition. This paper proposes a hybrid lightweight DSANet network designed to address the challenges of real‐time performance and algorithmic complexity. The network utilizes a multi‐scale depthwise separable convolutional (Multi‐scale DWCNN) module to extract spatial information and a multi‐layer Gated Recurrent Unit (Multi‐layer GRU) module for temporal feature extraction. It also incorporates an improved channel‐space attention module called RCSFA to enhance feature extraction capability. By leveraging channel, spatial, and temporal information, the network achieves a low number of parameters with high accuracy. Experimental evaluations on UCIHAR, WISDM, and PAMAP2 datasets demonstrate that the network not only reduces parameter counts but also achieves accuracy rates of 97.55%, 98.99%, and 98.67%, respectively, compared to state‐of‐the‐art networks. This research provides valuable insights for the virtual sports field and presents a novel network for real‐time activity recognition deployment in embedded devices. [ABSTRACT FROM AUTHOR]
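The parameter savings from depthwise separable convolution, which lightweight HAR networks like DSANet (and LIMUNet in entry 3) rely on, can be illustrated by comparing layer parameter counts; the channel and kernel sizes used below are arbitrary examples, not the paper's configuration:

```python
# Parameter counts (ignoring biases) for a standard vs. a depthwise
# separable 1D convolution layer. Sizes in the assertions are arbitrary.

def standard_conv1d_params(c_in, c_out, k):
    # One k-wide filter per (input channel, output channel) pair.
    return c_in * c_out * k

def separable_conv1d_params(c_in, c_out, k):
    depthwise = c_in * k        # one k-wide filter per input channel
    pointwise = c_in * c_out    # 1x1 convolution mixing channels
    return depthwise + pointwise
```

For example, with 64 input channels, 128 output channels, and kernel width 3, the separable layer needs roughly a third of the standard layer's parameters, which is the kind of reduction that makes real-time inference on embedded devices feasible.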
- Published
- 2024
- Full Text
- View/download PDF
44. Choice of Piezoelectric Element over Accelerometer for an Energy-Autonomous Shoe-Based System.
- Author
-
Gogoi, Niharika, Zhu, Yuanjia, Kirchner, Jens, and Fischer, Georg
- Subjects
- *
FOOT movements , *GAIT in humans , *ENERGY harvesting , *ACCELEROMETERS , *WEARABLE technology , *POTENTIAL energy , *WIRELESS sensor nodes , *PHYSICAL training & conditioning , *DIAGNOSIS - Abstract
Shoe-based wearable sensor systems are a growing research area in health monitoring, disease diagnosis, rehabilitation, and sports training. These systems—equipped with one or more sensors, either of the same or different types—capture information related to foot movement or pressure maps beneath the foot. This captured information offers an overview of the subject's overall movement, known as the human gait. Beyond sensing, these systems also provide a platform for hosting ambient energy harvesters. They hold the potential to harvest energy from foot movements and operate related low-power devices sustainably. This article proposes two types of strategies (Strategy 1 and Strategy 2) for an energy-autonomous shoe-based system. Strategy 1 uses an accelerometer as a sensor for gait acquisition, which reflects the classical choice. Strategy 2 uses a piezoelectric element for the same, which opens up a new perspective in its implementation. In both strategies, the piezoelectric elements are used to harvest energy from foot activities and operate the system. The article presents a fair comparison between both strategies in terms of power consumption, accuracy, and the extent to which piezoelectric energy harvesters can contribute to overall power management. Moreover, Strategy 2, which uses piezoelectric elements for simultaneous sensing and energy harvesting, is a power-optimized method for an energy-autonomous shoe system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Learning the micro-environment from rich trajectories in the context of mobile crowd sensing: Application to air quality monitoring.
- Author
-
El Hafyani, Hafsa, Abboud, Mohammad, Zuo, Jingwei, Zeitouni, Karine, Taher, Yehia, Chaix, Basile, and Wang, Limin
- Subjects
- *
CROWDSENSING , *AIR quality monitoring , *MACHINE learning , *DEEP learning , *MOBILE computing , *TIME series analysis - Abstract
With the rapid advancements of sensor technologies and mobile computing, Mobile Crowd Sensing (MCS) has emerged as a new paradigm to collect massive-scale rich trajectory data. Nomadic sensors empower people and objects with the capability of reporting and sharing observations on their state, their behavior and/or their surrounding environments. Processing and mining multi-source sensor data in MCS raise several challenges due to their multi-dimensional nature where the measured parameters (i.e., dimensions) may differ in terms of quality, variability, and time scale. We consider the context of air quality MCS and focus on the task of mining the micro-environment from the MCS data. Relating the measures to their micro-environment is crucial to interpret them and analyse the participant's exposure properly. In this paper, we focus on the problem of investigating the feasibility of recognizing the human's micro-environment in an environmental MCS scenario. We propose a novel approach for learning and predicting the micro-environment of users from their trajectories enriched with environmental data represented as multidimensional time series plus GPS tracks. We put forward a multi-view learning approach that we adapt to our context, and implement it along with other time series classification approaches. We extend the proposed approach to a hybrid method that employs trajectory segmentation to bring the best of both methods. We optimise the proposed approaches either by analysing the exact geolocation (which is privacy invasive), or simply applying some a priori rules (which is privacy friendly). The experimental results, applied to real MCS data, not only confirm the power of MCS and air quality (AQ) data in characterizing the micro-environment, but also show a moderate impact of the integration of mobility data in this recognition. 
Furthermore, and during the training phase, multi-view learning shows similar performance as the reference deep learning algorithm, without requiring specific hardware. However, during the application of models on new data, the deep learning algorithm fails to outperform our proposed models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. A review on devices and learning techniques in domestic intelligent environment.
- Author
-
Ye, Jiancong, Wang, Mengxuan, Zhong, Junpei, and Jiang, Hongjie
- Abstract
With the rapid development and wide proliferation of sensor devices and the Internet of Things (IoT), machine learning algorithms processing and analysing one or more modalities of sensory signals have become an active research field given its numerous applications, particularly in the domestic intelligent environment (DIE). In the past decades, the research on sensing and interactive devices of DIE and deep learning (DL) based methods have become strikingly popular. Several missions, such as the processing and analysis of sensing signals related to domestic instruments and the control of certain devices to act upon the results, comprise the main working targets in DIE. The goal of this review is to provide a brief overview of the aforementioned sensors, their related DL algorithms and their applications. To comprehend the ideas behind the use of various devices found in domestic intelligent instruments, we first summarize the available information. Then, to quantify and adapt the residents' knowledge of the household environment, we review data-driven learning techniques based on the aforementioned sensor-based devices and introduce robotic applications that provide helpers and action outputs in the environment. Finally, we investigate the commonly utilized datasets relevant to DIE and human activity recognition (HAR) and explore the challenges and prospects of their applications in the DIE field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Evaluating the effects of safety incentives on worker safety behavior control through image-based activity classification
- Author
-
Bogyeong Lee and Hyunsoo Kim
- Subjects
construction safety incentive program ,safety regulations ,activity recognition ,computer-vision ,spatiotemporal graph convolutional network ,Public aspects of medicine ,RA1-1270 - Abstract
Introduction: Construction worker safety remains a major concern even as task automation increases. Although safety incentives have been introduced to encourage safety compliance, it is still difficult to accurately measure the effectiveness of these measures. A simple count of accidents is insufficient: lower numbers do not necessarily mean that workers are properly complying with safety regulations. To address this problem, this study proposes an image-based approach to monitor moment-by-moment worker safety behavior and evaluate the effects of different safety incentive scenarios. Methods: By capturing workers' safety behaviors using a model integrating OpenPose and a spatiotemporal graph convolutional network, this study evaluated the effects of safety-incentive scenarios on workers' compliance with rules while on the job. The safety incentive scenarios in this study were designed as 1) varying the type (i.e., providing rewards and penalties) of incentives and 2) varying the frequency of feedback about one's own compliance status during tasks. The effects of the scenarios were compared using the average compliance rates of three safety regulations (i.e., personal protective equipment, self-monitoring hazard avoidance, and arranging the safety hook) for each scenario. Results: The results show that 1) rewarding good compliance is more effective when there is no feedback on compliance status, and 2) penalizing non-compliance is more effective when feedback is given three times during the tasks. Discussion: This study provides a more accurate assessment of safety incentives and their effectiveness by focusing on safe behaviors to promote safety compliance among construction workers.
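The evaluation described above, where each scenario is scored by the average compliance rate over the three monitored regulations, can be sketched as follows; the scenario names and rates used in the example are made-up placeholders:

```python
# Sketch of the scenario comparison: each scenario's score is the mean
# compliance rate across the monitored safety regulations. Names and
# rates below are placeholders, not the study's measured values.

def scenario_compliance(per_rule_rates):
    """per_rule_rates: {rule_name: fraction of observations compliant}."""
    return sum(per_rule_rates.values()) / len(per_rule_rates)

def best_scenario(scenarios):
    """scenarios: {scenario_name: per_rule_rates}; returns the name of
    the scenario with the highest average compliance."""
    return max(scenarios, key=lambda s: scenario_compliance(scenarios[s]))
```

With per-frame compliance judgments coming from the vision model, this reduces the comparison between incentive scenarios to a simple average over regulations, which is the measure the study reports.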
- Published
- 2024
- Full Text
- View/download PDF
48. Editorial: Wearable computing, volume II
- Author
-
Bo Zhou, Cheng Zhang, and Bashima Islam
- Subjects
activity recognition ,machine learning ,wearable technologies ,motion capture ,sensing ,empirical studies ,Electronic computers. Computer science ,QA75.5-76.95
- Published
- 2024
- Full Text
- View/download PDF
49. Shots segmentation-based optimized dual-stream framework for robust human activity recognition in surveillance video
- Author
-
Altaf Hussain, Samee Ullah Khan, Noman Khan, Waseem Ullah, Ahmed Alkhayyat, Meshal Alharbi, and Sung Wook Baik
- Subjects
Activity Recognition ,Video Classification ,Surveillance System ,Lowlight Image Enhancement ,Dual Stream Network ,Transformer Network ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
Nowadays, for controlling crime, surveillance cameras are typically installed in all public places to ensure urban safety and security. However, automating Human Activity Recognition (HAR) using computer vision techniques faces several challenges such as low lighting, complex spatiotemporal features, cluttered backgrounds, and inefficient utilization of surveillance system resources. Existing attempts at HAR designed straightforward networks by analyzing either spatial or motion patterns, resulting in limited performance, while the dual stream methods are entirely based on Convolutional Neural Networks (CNNs), which are inadequate for learning the long-range temporal information needed for HAR. To overcome the above-mentioned challenges, this paper proposes an optimized dual stream framework for HAR which mainly consists of three steps. First, a shots segmentation module is introduced in the proposed framework to efficiently utilize the surveillance system resources by enhancing the lowlight video stream; it then detects salient video frames that contain humans. This module is trained on our own challenging Lowlight Human Surveillance Dataset (LHSD), which consists of both normal and different levels of lowlight data, to recognize humans in complex, uncertain environments. Next, to learn HAR from both contextual and motion information, a dual stream approach is used in feature extraction. In the first stream, the framework freezes the learned weights of the backbone Vision Transformer (ViT) B-16 model to select discriminative contextual information. In the second stream, ViT features are fused with the intermediate encoder layers of the FlowNet2 optical flow model to extract a robust motion feature vector. 
Finally, a two-stream Parallel Bidirectional Long Short-Term Memory (PBiLSTM) is proposed for sequence learning to capture the global semantics of activities, followed by Dual Stream Multi-Head Attention (DSMHA) with a late fusion strategy to optimize the large feature vector for accurate HAR. To assess the strength of the proposed framework, extensive experiments are conducted on real-world surveillance scenarios and various benchmark HAR datasets, achieving 78.6285%, 96.0151%, and 98.875% accuracies on HMDB51, UCF101, and YouTube Action, respectively. Our results show that the proposed strategy outperforms State-of-the-Art (SOTA) methods. The proposed framework gives superior performance in HAR, providing accurate and reliable recognition of human activities in surveillance systems.
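The late fusion step at the end of the pipeline can be sketched as a weighted combination of per-class scores from the two streams; plain weighted averaging stands in here for the paper's dual-stream multi-head attention, which is not reproduced, and the class labels are made-up examples:

```python
# Late fusion sketch: combine class scores from the contextual (ViT)
# stream and the motion (optical-flow) stream before the final
# prediction. Weighted averaging is a simplification of the paper's
# attention-based fusion.

def late_fusion(ctx_scores, motion_scores, w_ctx=0.5):
    return [w_ctx * c + (1.0 - w_ctx) * m
            for c, m in zip(ctx_scores, motion_scores)]

def predict(ctx_scores, motion_scores, labels, w_ctx=0.5):
    fused = late_fusion(ctx_scores, motion_scores, w_ctx)
    # Return the label with the highest fused score.
    return labels[max(range(len(fused)), key=fused.__getitem__)]
```

The point of fusing late rather than early is that each stream keeps its own specialized representation (appearance versus motion) until the final decision, where complementary evidence is combined.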
- Published
- 2024
- Full Text
- View/download PDF
50. A Novel Framework for Daily Life Activity and Context Recognition In-The-Wild Using Smartphone Inertial Sensors
- Author
-
Sadam Hussain Noorani, Aamir Arsalan, Jaroslav Frnda, Sheharyar Khan, Aasim Raheel, and Muhammad Ehatisham-Ul-Haq
- Subjects
Smart sensing ,ubiquitous computing ,activity recognition ,context recognition ,inertial sensors ,machine learning ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Human Activity Recognition (HAR) systems are pivotal for numerous applications in pervasive computing. The rapid proliferation of smart devices, including smartphones, has transformed the way people live and interact with technology. Smartphones in particular have become indispensable, with a large share of the global population owning and relying on them, and their built-in sensors have opened up new possibilities for HAR. This study introduces a novel framework for context-aware human activity recognition in real-world, unconstrained environments, leveraging smartphone inertial sensors (accelerometer and gyroscope). The proposed Human Activity and Associated Context Recognition (HAACR) framework comprises data pre-processing, feature extraction and selection, and classification phases. Using the publicly available ExtraSensory dataset, the study examines six primary activities and 23 associated contexts. Data from 55 participants, filtered for completeness of both accelerometer and gyroscope readings, were used. Features were extracted over multiple sliding-window sizes, subjected to feature selection, and then classified with three classifiers: random forest, decision tree, and k-nearest neighbors. The proposed framework achieved a highest classification accuracy of 98.97%, significantly surpassing previous state-of-the-art methods and demonstrating the effectiveness of integrating data from multiple sensors for enhanced activity and context recognition in natural settings. The findings highlight the importance of window size and feature selection in improving recognition performance, offering valuable insights for future HAR research and applications.
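The sliding-window feature-extraction stage of such inertial-sensor pipelines can be sketched as follows. The window and step sizes, the synthetic signal, and the particular statistics (mean, standard deviation, min, max) are illustrative assumptions; the paper evaluates multiple window sizes and a richer feature set.

```python
import numpy as np

def sliding_windows(signal, win, step):
    """Segment a 1-D sensor stream into (possibly overlapping) windows."""
    return np.array([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, step)])

def extract_features(windows):
    """Per-window time-domain statistics: one feature row per window."""
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1),
                            windows.min(axis=1), windows.max(axis=1)])

# Stand-in for one accelerometer axis: 200 samples of a periodic signal
acc_x = np.sin(np.linspace(0.0, 20.0, 200))
wins = sliding_windows(acc_x, win=50, step=25)   # 50-sample windows, 50% overlap
feats = extract_features(wins)                   # shape: (n_windows, 4)
```

The resulting feature matrix would then feed a feature-selection step and a classifier such as a random forest, as described in the abstract.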
- Published
- 2024
- Full Text
- View/download PDF