270 results
Search Results
2. Channel Prediction for Underwater Acoustic Communication: A Review and Performance Evaluation of Algorithms.
- Author
- Liu, Haotian, Ma, Lu, Wang, Zhaohui, and Qiao, Gang
- Subjects
- DEEP learning, UNDERWATER acoustic communication, MACHINE learning, ALGORITHMS, TELECOMMUNICATION systems, FORECASTING
- Abstract
Underwater acoustic (UWA) channel prediction technology, as an important topic in UWA communication, plays an important role in UWA adaptive communication networks and underwater target perception. Although many significant advancements have been achieved in underwater acoustic channel prediction over the years, a comprehensive summary and introduction are still lacking. As the first comprehensive overview of UWA channel prediction, this paper introduces past works and algorithm implementation methods of channel prediction from the perspective of linear, kernel-based, and deep learning approaches. Importantly, based on available at-sea experiment datasets, this paper compares the performance of current primary UWA channel prediction algorithms under a unified system framework, providing researchers with a comprehensive and objective understanding of UWA channel prediction. Finally, it discusses the directions and challenges for future research. The survey finds that linear prediction algorithms are the most widely applied, and deep learning, as the most advanced type of algorithm, has moved this field into a new stage. The experimental results show that the linear algorithms have the lowest computational complexity and that, when the training samples are sufficient, deep learning algorithms have the best prediction performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
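The survey above reports that linear predictors are the most widely applied and have the lowest computational complexity. As a hedged illustration only (not code from the paper), one-step linear channel prediction can be sketched with the classic LMS adaptive filter over a real-valued tap sequence:

```python
# Minimal sketch of one-step linear channel prediction via the LMS
# (least-mean-squares) adaptive filter. Illustrative only; the survey
# compares several linear, kernel-based, and deep learning predictors,
# and real UWA taps are complex-valued.
def lms_predict(samples, order=4, mu=0.05):
    """Return one-step-ahead predictions for a real-valued tap sequence."""
    w = [0.0] * order                     # adaptive filter weights
    preds = []
    for n in range(order, len(samples)):
        x = samples[n - order:n]          # most recent `order` samples
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        preds.append(y_hat)
        e = samples[n] - y_hat            # prediction error drives adaptation
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # LMS weight update
    return preds
```

On a slowly varying channel the filter tracks the taps; for a constant sequence the predictions converge geometrically to the true value.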
3. Anomaly Detection in Blockchain Networks Using Unsupervised Learning: A Survey.
- Author
- Cholevas, Christos, Angeli, Eftychia, Sereti, Zacharoula, Mavrikos, Emmanouil, and Tsekouras, George E.
- Subjects
- DATA structures, MACHINE learning, PRIVATE networks, BLOCKCHAINS, ALGORITHMS
- Abstract
In decentralized systems, the quest for heightened security and integrity within blockchain networks becomes an issue. This survey investigates anomaly detection techniques in blockchain ecosystems through the lens of unsupervised learning, delving into the intricacies and going through the complex tapestry of abnormal behaviors by examining avant-garde algorithms to discern deviations from normal patterns. By seamlessly blending technological acumen with a discerning gaze, this survey offers a perspective on the symbiotic relationship between unsupervised learning and anomaly detection by reviewing this problem with a categorization of algorithms that are applied to a variety of problems in this field. We propose that the use of unsupervised algorithms in blockchain anomaly detection should be viewed not only as an implementation procedure but also as an integration procedure, where the merits of these algorithms can effectively be combined in ways determined by the problem at hand. In that sense, the main contribution of this paper is a thorough study of the interplay between various unsupervised learning algorithms and how this can be used in facing malicious activities and behaviors within public and private blockchain networks. The result is the definition of three categories, the characteristics of which are recognized in terms of the way the respective integration takes place. When implementing unsupervised learning, the structure of the data plays a pivotal role. Therefore, this paper also provides an in-depth presentation of the data structures commonly used in unsupervised learning-based blockchain anomaly detection. The above analysis is encircled by a presentation of the typical anomalies that have occurred so far along with a description of the general machine learning frameworks developed to deal with them. Finally, the paper spotlights challenges and directions that can serve as a comprehensive compendium for future research efforts. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
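Many of the unsupervised detectors such surveys categorize reduce to scoring points by how far they sit from their neighbours. A minimal distance-based sketch (illustrative only, not an algorithm from the survey) over scalar transaction features:

```python
# Toy nearest-neighbour anomaly scoring: each point is scored by its mean
# distance to its k nearest neighbours, so isolated points score high.
# Real blockchain detectors operate on multivariate graph/transaction
# features; this scalar version only shows the principle.
def anomaly_scores(values, k=3):
    """Score each point by its mean distance to its k nearest neighbours."""
    scores = []
    for i, v in enumerate(values):
        dists = sorted(abs(v - u) for j, u in enumerate(values) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores
```

A cluster of ordinary transaction values scores low, while a single outlying value receives by far the largest score.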
4. VIS-SLAM: A Real-Time Dynamic SLAM Algorithm Based on the Fusion of Visual, Inertial, and Semantic Information.
- Author
- Wang, Yinglong, Liu, Xiaoxiong, Zhao, Minkun, and Xu, Xinlong
- Subjects
- MOBILE robots, MACHINE learning, MOBILE learning, DEEP learning, ALGORITHMS, INFORMATION measurement, PROBABILITY theory, GEOMETRY
- Abstract
Addressing the limited real-time performance of deep learning algorithms and the poor robustness of pure visual geometry algorithms, this paper presents a deep learning-based Visual-Inertial SLAM technique to ensure accurate autonomous localization of mobile robots in environments with dynamic objects. Firstly, a non-blocking model is designed to extract semantic information from images. Then, a motion probability hierarchy model is proposed to obtain prior motion probabilities of feature points. For image frames without semantic information, a motion probability propagation model is designed to determine the prior motion probabilities of feature points. Furthermore, considering that the output of inertial measurements is unaffected by dynamic objects, this paper integrates inertial measurement information to improve the estimation accuracy of feature point motion probabilities. An adaptive threshold-based motion probability estimation method is proposed, and finally, positioning accuracy is enhanced by eliminating feature points with excessively high motion probabilities. Experimental results demonstrate that the proposed algorithm achieves accurate localization in dynamic environments while maintaining real-time performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. A Semi-Automatic Magnetic Resonance Imaging Annotation Algorithm Based on Semi-Weakly Supervised Learning.
- Author
- Chen, Shaolong and Zhang, Zhiyong
- Subjects
- MAGNETIC resonance imaging, SUPERVISED learning, MACHINE learning, ITERATIVE learning control, ALGORITHMS, ANNOTATIONS, DEEP learning
- Abstract
The annotation of magnetic resonance imaging (MRI) images plays an important role in deep learning-based MRI segmentation tasks. Semi-automatic annotation algorithms are helpful for improving the efficiency and reducing the difficulty of MRI image annotation. However, the existing semi-automatic annotation algorithms based on deep learning have poor pre-annotation performance in the case of insufficient segmentation labels. In this paper, we propose a semi-automatic MRI annotation algorithm based on semi-weakly supervised learning. In order to achieve a better pre-annotation performance in the case of insufficient segmentation labels, semi-supervised and weakly supervised learning were introduced, and a semi-weakly supervised learning segmentation algorithm based on sparse labels was proposed. In addition, in order to improve the contribution rate of a single segmentation label to the performance of the pre-annotation model, an iterative annotation strategy based on active learning was designed. The experimental results on public MRI datasets show that the proposed algorithm achieved an equivalent pre-annotation performance when the number of segmentation labels was much less than that of the fully supervised learning algorithm, which proves the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
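The iterative annotation strategy based on active learning described above typically ranks unlabeled samples by model uncertainty and routes the most uncertain ones to the annotator. A minimal sketch of that selection step (a hypothetical helper, not the paper's implementation), using distance from 0.5 as the uncertainty of a binary prediction:

```python
# Uncertainty sampling for an active-learning annotation loop.
# `probs` holds per-sample foreground probabilities from the current
# pre-annotation model; values near 0.5 are the least certain, so they
# contribute the most new information when a human labels them.
def select_for_annotation(probs, k=1):
    """Return indices of the k most uncertain predictions."""
    ranked = sorted(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))
    return ranked[:k]
```

In each iteration the selected samples are annotated, added to the training set, and the pre-annotation model is retrained before the next selection round.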
6. A Cross-View Geo-Localization Algorithm Using UAV Image and Satellite Image.
- Author
- Fan, Jiqi, Zheng, Enhui, He, Yufei, and Yang, Jianxing
- Subjects
- REMOTE-sensing images, TRANSFORMER models, ALGORITHMS, TECHNOLOGY transfer, MACHINE learning
- Abstract
Within research on the cross-view geolocation of UAVs, differences in image sources and interference from similar scenes pose huge challenges. Inspired by multimodal machine learning, in this paper, we design a single-stream pyramid transformer network (SSPT). The backbone of the model uses the self-attention mechanism to enrich its own internal features in the early stage and uses the cross-attention mechanism in the later stage to refine and interact with different features to eliminate irrelevant interference. In addition, in the post-processing part of the model, a header module is designed for upsampling to generate heat maps, and a Gaussian weight window is designed to assign label weights to make the model converge better. Together, these methods improve the positioning accuracy of UAV images in satellite images. Finally, we also use style transfer technology to simulate various environmental changes in order to expand the experimental data, further proving the environmental adaptability and robustness of the method. The final experimental results show that our method yields significant performance improvement: The relative distance score (RDS) of the SSPT-384 model on the benchmark UL14 dataset is significantly improved from 76.25% to 84.40%, while the meter-level accuracy (MA) of 3 m, 5 m, and 20 m is increased by 12%, 12%, and 10%, respectively. For the SSPT-256 model, the RDS has been increased to 82.21%, and the meter-level accuracy (MA) of 3 m, 5 m, and 20 m has increased by 5%, 5%, and 7%, respectively. It still shows strong robustness on the extended thermal infrared (TIR), nighttime, and rainy day datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
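The Gaussian weight window mentioned above, which softens the one-hot location label so the heat-map head converges better, can be sketched as follows (a generic soft-label map under assumed parameters, not the SSPT code):

```python
import math

# Build a per-pixel label-weight map that peaks at the true UAV location
# (cy, cx) and decays with a Gaussian falloff, instead of a single hot
# pixel. `sigma` (assumed here) controls how wide the soft window is.
def gaussian_label_map(h, w, cy, cx, sigma=2.0):
    """Per-pixel label weights peaking at the true location (cy, cx)."""
    return [[math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
             for x in range(w)] for y in range(h)]
```

Pixels near the true location keep weights close to 1, so small localization errors are penalized less harshly than distant ones.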
7. ULG-SLAM: A Novel Unsupervised Learning and Geometric Feature-Based Visual SLAM Algorithm for Robot Localizability Estimation.
- Author
- Huang, Yihan, Xie, Fei, Zhao, Jing, Gao, Zhilin, Chen, Jun, Zhao, Fei, and Liu, Xixiang
- Subjects
- MACHINE learning, VISUAL learning, ALGORITHMS, ROBOTS, FEATURE extraction, WALKING speed
- Abstract
Indoor localization has long been a challenging task due to the complexity and dynamism of indoor environments. This paper proposes ULG-SLAM, a novel unsupervised learning and geometric-based visual SLAM algorithm for robot localizability estimation to improve the accuracy and robustness of visual SLAM. Firstly, a dynamic feature filtering based on unsupervised learning and moving consistency checks is developed to eliminate the features of dynamic objects. Secondly, an improved line feature extraction algorithm based on LSD is proposed to optimize the effect of geometric feature extraction. Thirdly, geometric features are used to optimize localizability estimation, and an adaptive weight model and attention mechanism are built using the method of region delimitation and region growth. Finally, to verify the effectiveness and robustness of localizability estimation, multiple indoor experiments using the EuRoC dataset and TUM RGB-D dataset are conducted. Compared with ORBSLAM2, the experimental results demonstrate that absolute trajectory accuracy can be improved by 95% for equivalent processing speed in walking sequences. In fr3/walking_xyz and fr3/walking_half, ULG-SLAM tracks more trajectories than DS-SLAM, and the ATE RMSE is improved by 36% and 6%, respectively. Furthermore, the improvement in robot localizability over DynaSLAM is noteworthy, coming in at about 11% and 3%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Early Breast Cancer Risk Assessment: Integrating Histopathology with Artificial Intelligence.
- Author
- Ivanova, Mariia, Pescia, Carlo, Trapani, Dario, Venetis, Konstantinos, Frascarelli, Chiara, Mane, Eltjona, Cursano, Giulia, Sajjadi, Elham, Scatena, Cristian, Cerbelli, Bruna, d'Amati, Giulia, Porta, Francesca Maria, Guerini-Rocco, Elena, Criscitiello, Carmen, Curigliano, Giuseppe, and Fusco, Nicola
- Subjects
- BREAST tumor risk factors, RISK assessment, MEDICAL protocols, CANCER relapse, ARTIFICIAL intelligence, EARLY detection of cancer, CYTOCHEMISTRY, TUMOR markers, DECISION making in clinical medicine, IMMUNOHISTOCHEMISTRY, PATIENT-centered care, DEEP learning, ARTIFICIAL neural networks, MACHINE learning, ONCOLOGISTS, INDIVIDUALIZED medicine, MOLECULAR pathology, HEALTH care teams, ALGORITHMS, DISEASE risk factors
- Abstract
Simple Summary: Risk assessment in early breast cancer is critical for clinical decisions, but defining risk categories poses a significant challenge. The integration of conventional histopathology and biomarkers with artificial intelligence (AI) techniques, including machine learning and deep learning, has the potential to offer more precise information. AI applications extend beyond detection to histological subtyping, grading, and molecular feature identification. The successful integration of AI into clinical practice requires collaboration between histopathologists, molecular pathologists, computational pathologists, and oncologists to optimize patient outcomes. Effective risk assessment in early breast cancer is essential for informed clinical decision-making, yet consensus on defining risk categories remains challenging. This paper explores evolving approaches in risk stratification, encompassing histopathological, immunohistochemical, and molecular biomarkers alongside cutting-edge artificial intelligence (AI) techniques. Leveraging machine learning, deep learning, and convolutional neural networks, AI is reshaping predictive algorithms for recurrence risk, thereby revolutionizing diagnostic accuracy and treatment planning. Beyond detection, AI applications extend to histological subtyping, grading, lymph node assessment, and molecular feature identification, fostering personalized therapy decisions. With rising cancer rates, it is crucial to implement AI to accelerate breakthroughs in clinical practice, benefiting both patients and healthcare providers. However, it is important to recognize that while AI offers powerful automation and analysis tools, it lacks the nuanced understanding, clinical context, and ethical considerations inherent to human pathologists in patient care. Hence, the successful integration of AI into clinical practice demands collaborative efforts between medical experts and computational pathologists to optimize patient outcomes. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Novel Imaging Approaches for Glioma Classification in the Era of the World Health Organization 2021 Update: A Scoping Review.
- Author
- Richter, Vivien, Ernemann, Ulrike, and Bender, Benjamin
- Subjects
- GLIOMAS, RADIOMICS, MAGNETIC resonance imaging, DESCRIPTIVE statistics, SYSTEMATIC reviews, LITERATURE reviews, DEEP learning, GENETIC mutation, NEURORADIOLOGY, MACHINE learning, DATA analysis software, ALGORITHMS
- Abstract
Simple Summary: The 2021 WHO classification of central nervous system (CNS) tumors is challenging for neuroradiologists due to the central role of the molecular profile of tumors. We performed a scoping review of recent literature to assess the existing data on the power of novel data analysis tools to predict new tumor classes by imaging. We found room for performance improvement for subgroups with lower incidence (e.g., 1p/19q codeleted or IDH1/2 mutated gliomas) and patients with rare diagnoses (e.g., pediatric gliomas, midline gliomas). More data regarding functional MRI techniques need to be collected. Studies explicitly designed to assess the generalizability of AI-aided tools for predicting molecular tumor subgroups are lacking. The 2021 WHO classification of CNS tumors is a challenge for neuroradiologists due to the central role of the molecular profile of tumors. The potential of novel data analysis tools in neuroimaging must be harnessed to maintain its role in predicting tumor subgroups. We performed a scoping review to determine current evidence and research gaps. A comprehensive literature search was conducted regarding glioma subgroups according to the 2021 WHO classification and the use of MRI, radiomics, machine learning, and deep learning algorithms. Sixty-two original articles were included and analyzed by extracting data on the study design and results. Only 8% of the studies included pediatric patients. Low-grade gliomas and diffuse midline gliomas were represented in one-third of the research papers. Public datasets were utilized in 22% of the studies. Conventional imaging sequences prevailed; data on functional MRI (DWI, PWI, CEST, etc.) are underrepresented. Multiparametric MRI yielded the best prediction results. IDH mutation and 1p/19q codeletion status prediction remain in focus with limited data on other molecular subgroups. Reported AUC values range from 0.6 to 0.98. Studies designed to assess generalizability are scarce. 
Performance is worse for smaller subgroups (e.g., 1p/19q codeleted or IDH1/2 mutated gliomas). More high-quality study designs with diversity in the analyzed population and techniques are needed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. A Multi-Agent RL Algorithm for Dynamic Task Offloading in D2D-MEC Network with Energy Harvesting †.
- Author
- Mi, Xin, He, Huaiwen, and Shen, Hong
- Subjects
- ENERGY harvesting, MACHINE learning, ALGORITHMS, INTEGER programming, DYNAMIC loads, MOBILE computing, NONLINEAR programming
- Abstract
Delay-sensitive task offloading in a device-to-device assisted mobile edge computing (D2D-MEC) system with energy harvesting devices is a critical challenge due to the dynamic load level at edge nodes and the variability in harvested energy. In this paper, we propose a joint dynamic task offloading and CPU frequency control scheme for delay-sensitive tasks in a D2D-MEC system, taking into account the intricacies of multi-slot tasks, characterized by diverse processing speeds and data transmission rates. Our methodology involves meticulous modeling of task arrival and service processes using queuing systems, coupled with the strategic utilization of D2D communication to alleviate edge server load and prevent network congestion effectively. Central to our solution is the formulation of average task delay optimization as a challenging nonlinear integer programming problem, requiring intelligent decision making regarding task offloading for each generated task at active mobile devices and CPU frequency adjustments at discrete time slots. To navigate the intricate landscape of the extensive discrete action space, we design an efficient multi-agent DRL learning algorithm named MAOC, which is based on MAPPO, to minimize the average task delay by dynamically determining task-offloading decisions and CPU frequencies. MAOC operates within a centralized training with decentralized execution (CTDE) framework, empowering individual mobile devices to make decisions autonomously based on their unique system states. Experimental results demonstrate its swift convergence and operational efficiency, and it outperforms other baseline algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. The Use of Artificial Intelligence Algorithms in the Prognosis and Detection of Lymph Node Involvement in Head and Neck Cancer and Possible Impact in the Development of Personalized Therapeutic Strategy: A Systematic Review.
- Author
- Michelutti, Luca, Tel, Alessandro, Zeppieri, Marco, Ius, Tamara, Sembronio, Salvatore, and Robiony, Massimo
- Subjects
- ARTIFICIAL intelligence, LYMPH nodes, HEAD & neck cancer, ALGORITHMS, PROGNOSIS
- Abstract
Given the increasingly important role that artificial intelligence algorithms are taking on in the medical field today (especially in oncology), the purpose of this systematic review is to examine the currently available literature on artificial intelligence applied to head and neck oncology, particularly the prognostic evaluation of patients with these tumors. The paper presents an overview of the applications of artificial intelligence in deriving prognostic information related to the prediction of survival and recurrence, and of how these data may have a potential impact on the choice of therapeutic strategy, making it increasingly personalized. This systematic review was written following the PRISMA 2020 guidelines. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. Deep Learning for Structural Health Monitoring: Data, Algorithms, Applications, Challenges, and Trends.
- Author
- Jia, Jing and Li, Ying
- Subjects
- STRUCTURAL health monitoring, DEEP learning, DIGITAL twins, STRUCTURAL frames, MACHINE learning, ALGORITHMS
- Abstract
Environmental effects may lead to cracking, stiffness loss, brace damage, and other damages in bridges, frame structures, buildings, etc. Structural Health Monitoring (SHM) technology could prevent catastrophic events by detecting damage early. In recent years, Deep Learning (DL) has developed rapidly and has been applied to SHM to detect, localize, and evaluate diverse damages through efficient feature extraction. This paper analyzes 337 articles through a systematic literature review to investigate the application of DL for SHM in the operation and maintenance phase of facilities from three perspectives: data, DL algorithms, and applications. Firstly, the data types in SHM and the corresponding collection methods are summarized and analyzed. The most common data types are vibration signals and images, accounting for 80% of the literature studied. Secondly, the popular DL algorithm types and application areas are reviewed, of which CNN accounts for 60%. Then, this article carefully analyzes the specific functions of DL application for SHM based on the facility's characteristics. The most scrutinized study focused on cracks, accounting for 30 percent of research papers. Finally, challenges and trends in applying DL for SHM are discussed. Among the trends, the Structural Health Monitoring Digital Twin (SHMDT) model framework is suggested in response to the trend of strong coupling between SHM technology and Digital Twin (DT), which can advance the digitalization, visualization, and intelligent management of SHM. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. A Review on Federated Learning and Machine Learning Approaches: Categorization, Application Areas, and Blockchain Technology.
- Author
- Ogundokun, Roseline Oluwaseun, Misra, Sanjay, Maskeliunas, Rytis, and Damasevicius, Robertas
- Subjects
- BLOCKCHAINS, ARTIFICIAL intelligence, MACHINE learning, CONFERENCE papers, ALGORITHMS, SCIENCE publishing
- Abstract
Federated learning (FL) is a scheme in which several clients work collectively to solve machine learning (ML) problems, with a central aggregator synchronizing the procedure. This arrangement also allows the training data to remain distributed, guaranteeing that each individual device's data are kept private. The paper systematically reviewed the available literature using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The study presents a systematic review of applicable ML approaches for FL, reviews the categorization of FL, discusses FL application areas, presents the relationship between FL and Blockchain Technology (BT), and discusses existing literature that has used FL and ML approaches. The inclusion criteria were (i) published between 2017 and 2021, (ii) written in English, (iii) published in a peer-reviewed scientific journal, and (iv) preprint papers. Excluded from the review were (i) unpublished studies, theses, and dissertations; (ii) conference papers; (iii) papers not in English; and (iv) papers that did not use artificial intelligence models and blockchain technology. In total, 84 eligible papers were finally examined in this study. In recent years, the amount of research on ML using FL has increased. Accuracy equivalent to standard feature-based techniques has been attained, and ensembles of many algorithms may yield even better results. We discovered that the best results were obtained from the hybrid design of an ML ensemble employing expert features. However, some additional difficulties and issues need to be overcome, such as efficiency, complexity, and smaller datasets. In addition, novel FL applications should be investigated from the standpoint of the datasets and methodologies. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
14. Personalized Treatment Policies with the Novel Buckley-James Q-Learning Algorithm.
- Author
- Lee, Jeongjin and Kim, Jong-Min
- Subjects
- MACHINE learning, ALGORITHMS, SURVIVAL analysis (Biometry), TIME management, PATIENT care, REINFORCEMENT learning
- Abstract
This research paper presents the Buckley-James Q-learning (BJ-Q) algorithm, a cutting-edge method designed to optimize personalized treatment strategies, especially in the presence of right censoring. We critically assess the algorithm's effectiveness in improving patient outcomes and its resilience across various scenarios. Central to our approach is the innovative use of the survival time to impute the reward in Q-learning, employing the Buckley-James method for enhanced accuracy and reliability. Our findings highlight the significant potential of personalized treatment regimens and introduce the BJ-Q learning algorithm as a viable and promising approach. This work marks a substantial advancement in our comprehension of treatment dynamics and offers valuable insights for augmenting patient care in the ever-evolving clinical landscape. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
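The core of BJ-Q remains the standard Q-learning update; what the paper changes is the reward, which is imputed from right-censored survival times via the Buckley-James method. A minimal sketch of the update step alone (the imputation itself is not shown, and the dictionary layout is an assumption, not the paper's code):

```python
# One tabular Q-learning step. In the BJ-Q setting, `r` would be the
# Buckley-James-imputed survival-time reward for taking treatment `a`
# in patient state `s`; here it is just a number.
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Update Q[s][a] toward r + gamma * max_a' Q[s_next][a']."""
    best_next = max(Q[s_next].values()) if Q.get(s_next) else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q
```

The learned Q-table then defines the personalized policy: in each state, recommend the treatment with the largest Q-value.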
15. Reweighted Extreme Learning Machine-Based Clutter Suppression and Range Compensation Algorithm for Non-Side-Looking Airborne Radar.
- Author
- Liu, Jing, Liao, Guisheng, Zeng, Cao, Tao, Haihong, Xu, Jingwei, Zhu, Shengqi, and Juwono, Filbert H.
- Subjects
- RADAR in aeronautics, MACHINE learning, ALGORITHMS, MATHEMATICAL complexes
- Abstract
Non-side-looking airborne radar provides important applications on account of its all-round multi-angle airspace coverage. However, it suffers clutter range dependence that makes the samples fail to satisfy the condition of being independent and identically distributed (IID), and it severely degrades traditional approaches to clutter suppression and target detection. In this paper, a novel reweighted extreme learning machine (ELM)-based clutter suppression and range compensation algorithm is proposed for non-side-looking airborne radar. The proposed method involves first designing the pre-processing stage, the special reweighted complex-valued activation function containing an unknown range compensation matrix, and two new objective outputs for constructing an initial reweighted ELM-based network with its training. Then, two other objective outputs, a new loss function, and a reverse feedback framework driven by the specifically designed objectives are proposed for the unknown range compensation matrix. Finally, aiming to estimate and reconstruct the unknown compensation matrix, special processes of the complex-valued structures and the theoretical derivations are designed and analyzed in detail. Consequently, with the updated and compensated samples, further processing including space–time adaptive processing (STAP) can be performed for clutter suppression and target detection. Compared with the classic relevant methods, the proposed algorithm achieves significantly superior performance with reasonable computation time. It provides an obviously higher detection probability and better improvement factor (IF). The simulation results verify that the proposed algorithm is effective and has many advantages. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. YOLOv7oSAR: A Lightweight High-Precision Ship Detection Model for SAR Images Based on the YOLOv7 Algorithm.
- Author
- Liu, Yilin, Ma, Yong, Chen, Fu, Shang, Erping, Yao, Wutao, Zhang, Shuyan, and Yang, Jin
- Subjects
- SHIP models, SYNTHETIC aperture radar, MACHINE learning, SOLID state drives, ALGORITHMS, DEEP learning
- Abstract
Researchers have explored various methods to fully exploit the all-weather characteristics of Synthetic aperture radar (SAR) images to achieve high-precision, real-time, computationally efficient, and easily deployable ship target detection models. These methods include Constant False Alarm Rate (CFAR) algorithms and deep learning approaches such as RCNN, YOLO, and SSD, among others. While these methods outperform traditional algorithms in SAR ship detection, challenges still exist in handling the arbitrary ship distributions and small target features in SAR remote sensing images. Existing models are complex, with a large number of parameters, hindering effective deployment. This paper introduces a YOLOv7 oriented bounding box SAR ship detection model (YOLOv7oSAR). The model employs a rotation box detection mechanism, uses the KLD loss function to enhance accuracy, and introduces a Bi-former attention mechanism to improve small target detection. By redesigning the network's width and depth and incorporating a lightweight P-ELAN structure, the model effectively reduces its size and computational requirements. The proposed model achieves high-precision detection results on the public RSDD dataset (94.8% offshore, 66.6% nearshore), and its generalization ability is validated on a custom dataset (94.2% overall detection accuracy). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Sea Ice Extraction via Remote Sensing Imagery: Algorithms, Datasets, Applications and Challenges.
- Author
- Huang, Wenjun, Yu, Anzhu, Xu, Qing, Sun, Qun, Guo, Wenyue, Ji, Song, Wen, Bowei, and Qiu, Chunping
- Subjects
- SEA ice, DEEP learning, REMOTE sensing, IMAGE recognition (Computer vision), GEOGRAPHIC information systems, ALGORITHMS
- Abstract
Deep learning, which is a dominating technique in artificial intelligence, has completely changed image understanding over the past decade. As a consequence, the sea ice extraction (SIE) problem has reached a new era. We present a comprehensive review of four important aspects of SIE, including algorithms, datasets, applications and future trends. Our review focuses on research published from 2016 to the present, with a specific focus on deep-learning-based approaches in the last five years. We divided all related algorithms into three categories, including the conventional image classification approach, the machine learning-based approach and deep-learning-based methods. We reviewed the accessible ice datasets including SAR-based datasets, the optical-based datasets and others. The applications are presented in four aspects including climate research, navigation, geographic information systems (GIS) production and others. This paper also provides insightful observations and inspiring future research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Dendritic Growth Optimization: A Novel Nature-Inspired Algorithm for Real-World Optimization Problems.
- Author
- Priyadarshini, Ishaani
- Subjects
- OPTIMIZATION algorithms, BIOLOGICALLY inspired computing, DEEP learning, MACHINE learning, METAHEURISTIC algorithms, PROBLEM solving, ALGORITHMS
- Abstract
In numerous scientific disciplines and practical applications, addressing optimization challenges is a common imperative. Nature-inspired optimization algorithms represent a highly valuable and pragmatic approach to tackling these complexities. This paper introduces Dendritic Growth Optimization (DGO), a novel algorithm inspired by natural branching patterns. DGO offers a novel solution for intricate optimization problems and demonstrates its efficiency in exploring diverse solution spaces. The algorithm has been extensively tested with a suite of machine learning algorithms, deep learning algorithms, and metaheuristic algorithms, and the results, both before and after optimization, unequivocally support the proposed algorithm's feasibility, effectiveness, and generalizability. Through empirical validation using established datasets like diabetes and breast cancer, the algorithm consistently enhances model performance across various domains. Beyond its working and experimental analysis, DGO's wide-ranging applications in machine learning, logistics, and engineering for solving real-world problems have been highlighted. The study also considers the challenges and practical implications of implementing DGO in multiple scenarios. As optimization remains crucial in research and industry, DGO emerges as a promising avenue for innovation and problem solving. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Artificial Intelligence Algorithms for Healthcare.
- Author
-
Chumachenko, Dmytro and Yakovlev, Sergiy
- Subjects
- *
ARTIFICIAL intelligence , *DEEP learning , *ALGORITHMS , *MACHINE learning , *INFORMATION technology , *MEDICAL care , *MOTION capture (Human mechanics) , *MEDICAL technology - Abstract
Artificial intelligence (AI) algorithms are playing a crucial role in transforming healthcare by enhancing the quality, accessibility, and efficiency of medical care, research, and operations. These algorithms enable healthcare providers to offer more accurate diagnoses, predict outcomes, and customize treatments to individual patient needs. AI also improves operational efficiency by automating routine tasks and optimizing resource management. However, there are challenges to adopting AI in healthcare, such as data privacy concerns and potential biases in algorithms. Collaboration among stakeholders is necessary to ensure ethical use of AI and its positive impact on the field. AI also has applications in medical research, preventive medicine, and public health. It is important to recognize that AI should augment, not replace, the expertise and compassionate care provided by healthcare professionals. The ethical implications and societal impact of AI in healthcare must be carefully considered, guided by fairness, transparency, and accountability principles. Several research papers in this special issue explore the application of AI algorithms in various aspects of healthcare, such as gait analysis for Parkinson's disease diagnosis, human activity recognition, heart disease prediction, compliance assessment with clinical protocols, epidemic management, neurological complications identification, fall prevention, leukemia diagnosis, and genetic clinical pathways. These studies demonstrate the potential of AI in improving medical diagnostics, patient monitoring, and personalized care. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
20. Custom Loss Functions in XGBoost Algorithm for Enhanced Critical Error Mitigation in Drill-Wear Analysis of Melamine-Faced Chipboard.
- Author
-
Bukowski, Michał, Kurek, Jarosław, Świderski, Bartosz, and Jegorowa, Albina
- Subjects
- *
MELAMINE , *MACHINE learning , *ALGORITHMS , *INDUSTRIAL efficiency - Abstract
The advancement of machine learning in industrial applications has necessitated the development of tailored solutions to address specific challenges, particularly in multi-class classification tasks. This study delves into the customization of loss functions within the eXtreme Gradient Boosting (XGBoost) algorithm, which is a critical step in enhancing the algorithm's performance for specific applications. Our research is motivated by the need for precision and efficiency in the industrial domain, where the implications of misclassification can be substantial. We focus on the drill-wear analysis of melamine-faced chipboard, a common material in furniture production, to demonstrate the impact of custom loss functions. The paper explores several variants of Weighted Softmax Loss Functions, including Edge Penalty and Adaptive Weighted Softmax Loss, to address the challenges of class imbalance and the heightened importance of accurately classifying edge classes. Our findings reveal that these custom loss functions significantly reduce critical errors in classification without compromising the overall accuracy of the model. This research not only contributes to the field of industrial machine learning by providing a nuanced approach to loss function customization but also underscores the importance of context-specific adaptations in machine learning algorithms. The results showcase the potential of tailored loss functions in balancing precision and efficiency, ensuring reliable and effective machine learning solutions in industrial settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
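The Weighted Softmax Loss variants described in the XGBoost abstract above reduce, in the simplest case, to weighting each sample's softmax gradient by its class weight. A minimal NumPy sketch of the gradient/Hessian pair such a custom multi-class objective would return (the class weights and the diagonal Hessian approximation are illustrative assumptions, not the paper's Edge Penalty or Adaptive variants):

```python
import numpy as np

def weighted_softmax_objective(logits, labels, class_weights):
    """Gradient and Hessian of a per-class-weighted softmax loss.

    L_i = -w[y_i] * log softmax(logits_i)[y_i]
    Returned arrays have shape (n_samples, n_classes), the layout a
    custom multi-class XGBoost objective is expected to produce.
    """
    n, k = logits.shape
    # numerically stable softmax
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)

    onehot = np.eye(k)[labels]          # (n, k) one-hot targets
    w = class_weights[labels][:, None]  # per-sample weight w[y_i]

    grad = w * (p - onehot)             # dL/dlogits
    hess = w * p * (1.0 - p)            # diagonal Hessian approximation
    return grad, hess
```

Raising the weight of an "edge" class scales its gradient, so misclassifying that class costs the booster more per round, which is the mechanism the paper builds on.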
21. Fractional-Order Control Method Based on Twin-Delayed Deep Deterministic Policy Gradient Algorithm.
- Author
-
Jiao, Guangxin, An, Zhengcai, Shao, Shuyi, and Sun, Dong
- Subjects
- *
RADIAL basis functions , *SLIDING mode control , *REINFORCEMENT learning , *ALGORITHMS , *MACHINE learning , *CLOSED loop systems - Abstract
In this paper, a fractional-order control method based on the twin-delayed deep deterministic policy gradient (TD3) algorithm in reinforcement learning is proposed. A fractional-order disturbance observer is designed to estimate the disturbances, and a radial basis function network is selected to approximate the uncertainties in the system. Then, a fractional-order sliding-mode controller is constructed to control the system, and the parameters of the controller are tuned using the TD3 algorithm, which optimizes the control effect. The results show that the fractional-order control method based on the TD3 algorithm can not only improve closed-loop system performance under different operating conditions but also enhance the signal tracking capability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
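The fractional-order observer and sliding-mode controller in the abstract above rest on fractional derivatives. As an illustrative sketch only (not the paper's controller), the Grünwald–Letnikov definition gives a simple discrete approximation of an order-α derivative of a sampled signal:

```python
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Grünwald–Letnikov approximation of the order-alpha derivative of a
    sampled signal f with step h. Returns an array the same length as f;
    early samples use the truncated history available so far."""
    n = len(f)
    # recursive binomial weights: c_0 = 1, c_j = c_{j-1} * (1 - (alpha + 1)/j)
    c = np.ones(n)
    for j in range(1, n):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    out = np.zeros(n)
    for k in range(n):
        out[k] = np.dot(c[: k + 1], f[k::-1]) / h ** alpha
    return out
```

Setting alpha = 1 recovers the ordinary backward difference, and alpha = 0 returns the signal itself, which makes the approximation easy to sanity-check.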
22. Autonomous Parameter Balance in Population-Based Approaches: A Self-Adaptive Learning-Based Strategy.
- Author
-
Vega, Emanuel, Lemus-Romani, José, Soto, Ricardo, Crawford, Broderick, Löffler, Christoffer, Peña, Javier, and Talbi, El-Gazhali
- Subjects
- *
SELF-adaptive software , *METAHEURISTIC algorithms , *MANUFACTURING cells , *KNAPSACK problems , *ALGORITHMS - Abstract
Population-based metaheuristics can be seen as a set of agents that smartly explore the space of solutions of a given optimization problem. These agents are commonly governed by movement operators that decide how the exploration is driven. Although metaheuristics have been used successfully for more than 20 years, performing rapid and high-quality parameter control is still a main concern. For instance, deciding on a population size that yields a good balance between quality of results and computing time is constantly a hard task, even more so for an unexplored optimization problem. In this paper, we propose a self-adaptive strategy based on online population balance, which aims to improve the performance and search process of population-based algorithms. The design of the proposed approach relies on three components. Firstly, an optimization-based component defines all metaheuristic tasks related to carrying out the resolution of the optimization problem. Secondly, a learning-based component transforms dynamic data into knowledge in order to influence the search in the solution space. Thirdly, a probabilistic selector component is designed to dynamically adjust the population. We present an extensive experimental process on large instance sets from three well-known discrete optimization problems: the Manufacturing Cell Design Problem, the Set Covering Problem, and the Multidimensional Knapsack Problem. The proposed approach is able to compete against classic, autonomous, and IRace-tuned metaheuristics, yielding interesting results and suggesting future work on dynamically adjusting the number of solutions interacting at different times within the search process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Ship-Fire Net: An Improved YOLOv8 Algorithm for Ship Fire Detection.
- Author
-
Zhang, Ziyang, Tan, Lingye, and Tiong, Robert Lee Kong
- Subjects
- *
FIRE detectors , *MACHINE learning , *OBJECT recognition (Computer vision) , *DEEP learning , *ALGORITHMS , *COMPUTATIONAL complexity , *SHIPS - Abstract
Ship fire may result in significant damage to a ship's structure and large economic loss. Hence, prompt identification of fires is essential to enable rapid reactions and effective mitigation strategies. However, conventional detection systems exhibit limited efficacy and accuracy in detecting targets, mostly owing to limitations imposed by distance constraints and the motion of ships. Although the development of deep learning algorithms provides a potential solution, the computational complexity of ship fire detection algorithms poses significant challenges. To solve this, this paper proposes a lightweight ship fire detection algorithm based on YOLOv8n. Initially, a dataset including more than 4000 unduplicated images and their labels is established before training. To ensure the performance of the algorithm, both fire inside ship rooms and fire on board are considered. After comparative tests, YOLOv8n is selected as the model with the best performance and fastest speed from among several advanced object detection algorithms. GhostnetV2-C2F is then inserted into the backbone of the algorithm to provide long-range attention with inexpensive operations. In addition, spatial and channel reconstruction convolution (SCConv) is used to reduce redundant features at significantly lower complexity and computational cost for real-time ship fire detection. For the neck part, omni-dimensional dynamic convolution is used as a multi-dimensional attention mechanism, which also lowers the parameter count. After these improvements, a lighter and more accurate YOLOv8n algorithm, called Ship-Fire Net, is proposed. The proposed method exceeds 0.93 in both precision and recall for fire and smoke detection in ships, and the mAP@0.5 reaches about 0.9. Despite the improvement in accuracy, Ship-Fire Net also has fewer parameters and lower FLOPs than the original, which accelerates its detection speed. The FPS of Ship-Fire Net reaches 286, which is helpful for real-time ship fire monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
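The precision, recall, and mAP@0.5 figures quoted in the Ship-Fire Net abstract rest on IoU-based matching of predicted and ground-truth boxes. A generic, simplified sketch of that evaluation step (greedy matching at a fixed 0.5 threshold; not the paper's evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, truths, thresh=0.5):
    """Greedy matching of predicted to ground-truth boxes at an IoU
    threshold (0.5 corresponds to the mAP@0.5 figure in the abstract)."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, thresh
        for i, t in enumerate(truths):
            if i not in matched and iou(p, t) >= best_iou:
                best, best_iou = i, iou(p, t)
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(truths) if truths else 0.0
    return precision, recall
```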
24. Implementation of Chaotic Reverse Slime Mould Algorithm Based on the Dandelion Optimizer.
- Author
-
Zhang, Yi, Liu, Yang, Zhao, Yue, and Wang, Xu
- Subjects
- *
MYXOMYCETES , *LEVY processes , *MACHINE learning , *ALGORITHMS , *HYBRID systems - Abstract
This paper presents a hybrid algorithm that combines the slime mould algorithm (SMA) with the dandelion optimizer. The hybrid algorithm improves the convergence speed and prevents the algorithm from falling into local optima. (1) Bernoulli chaotic mapping is added in the initialization phase to enrich population diversity. (2) Brownian motion and a Lévy flight strategy are added to further enhance the global search ability and local exploitation performance of the slime mould. (3) Specular reflection learning is added in late iterations to improve the population's search ability and avoid stagnation in local optima. The experimental results show that the convergence speed and precision of the improved algorithm are improved on standard test functions. Finally, this paper optimizes the parameters of an Extreme Learning Machine (ELM) model with the improved method and applies it to a power load forecasting problem, further verifying the effectiveness of the improved method in solving practical engineering problems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. Forgetful Forests: Data Structures for Machine Learning on Streaming Data under Concept Drift.
- Author
-
Yuan, Zhehu, Sun, Yinqi, and Shasha, Dennis
- Subjects
- *
MACHINE learning , *DATA structures , *DATABASES , *MACHINE performance , *PROBABILISTIC databases , *ALGORITHMS - Abstract
Database and data structure research can improve machine learning performance in many ways. One way is to design better algorithms on data structures. This paper combines the use of incremental computation as well as sequential and probabilistic filtering to enable "forgetful" tree-based learning algorithms to cope with streaming data that suffers from concept drift. (Concept drift occurs when the functional mapping from input to classification changes over time.) The forgetful algorithms described in this paper achieve high performance while maintaining high-quality predictions on streaming data. Specifically, the algorithms are up to 24 times faster than state-of-the-art incremental algorithms with, at most, a 2% loss of accuracy, or at least twice as fast without any loss of accuracy. This makes such structures suitable for high-volume streaming applications. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
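The core "forgetful" idea in the abstract above is that bounded memory lets a model track concept drift. A toy illustration with a trivial majority-class model; the paper's incremental forgetful trees and forests are far more sophisticated, and this sketch only shows why discarding old samples helps:

```python
from collections import deque, Counter

class ForgetfulMajorityClassifier:
    """Keep only the most recent `window` labelled examples and predict
    the majority class among them. When the input-to-label mapping
    drifts, stale examples fall off the left of the deque and stop
    influencing predictions."""

    def __init__(self, window=100):
        self.buffer = deque(maxlen=window)   # old samples are forgotten

    def update(self, x, y):
        self.buffer.append((x, y))

    def predict(self):
        if not self.buffer:
            return None
        counts = Counter(y for _, y in self.buffer)
        return counts.most_common(1)[0][0]
```

After a drift, the model recovers within one window of samples; a learner that never forgets would stay biased toward the old concept indefinitely.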
26. Advances in Slime Mould Algorithm: A Comprehensive Survey.
- Author
-
Wei, Yuanfei, Othman, Zalinda, Daud, Kauthar Mohd, Luo, Qifang, and Zhou, Yongquan
- Subjects
- *
ALGORITHMS , *MACHINE learning , *IMAGE segmentation , *RESEARCH personnel - Abstract
The slime mould algorithm (SMA) is a new swarm intelligence algorithm inspired by the oscillatory behavior of slime moulds during foraging. Numerous researchers have applied the SMA and its variants across a wide range of domains and have demonstrated its value in a substantial body of literature. In this paper, a comprehensive review of the SMA is presented, based on 130 articles obtained from Google Scholar between 2022 and 2023. First, the theory of the SMA is described. Second, the improved SMA variants are presented and categorized according to the approach used to apply them. Finally, we discuss the main application domains of the SMA, such as engineering optimization, energy optimization, machine learning, networks, scheduling optimization, and image segmentation. This review offers some research suggestions for researchers interested in this algorithm, such as conducting additional research on multi-objective and discrete SMAs and extending the algorithm to neural networks and extreme learning machines. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Hybrid Deep Learning and Sensitivity Operator-Based Algorithm for Identification of Localized Emission Sources.
- Author
-
Penenko, Alexey, Emelyanov, Mikhail, Rusin, Evgeny, Tsybenova, Erjena, and Shablyko, Vasily
- Subjects
- *
DEEP learning , *ALGORITHMS , *CONVOLUTIONAL neural networks , *INVERSE problems , *MACHINE learning , *REMOTE sensing - Abstract
Hybrid approaches combining machine learning with traditional inverse problem solution methods represent a promising direction for the further development of inverse modeling algorithms. The paper proposes an approach to emission source identification from measurement data for advection–diffusion–reaction models. The approach combines general-type source identification and post-processing refinement: first, emission source identification by measurement data is carried out by a sensitivity operator-based algorithm, and then refinement is done by incorporating a priori information about unknown sources. A general-type distributed emission source identified at the first stage is transformed into a localized source consisting of multiple point-wise sources. The second, refinement stage consists of two steps: point-wise source localization and emission rate estimation. Emission source localization is carried out using deep learning with convolutional neural networks. Training samples are generated using a sensitivity operator obtained at the source identification stage. The algorithm was tested in regional remote sensing emission source identification scenarios for the Lake Baikal region and was able to refine the emission source reconstruction results. Hence, the aggregates used in traditional inverse problem solution algorithms can be successfully applied within machine learning frameworks to produce hybrid algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Efficient Inverse Fractional Neural Network-Based Simultaneous Schemes for Nonlinear Engineering Applications.
- Author
-
Shams, Mudassir and Carpentieri, Bruno
- Subjects
- *
MACHINE learning , *NONLINEAR equations , *ENGINEERING , *IMAGE encryption , *POLYNOMIAL time algorithms , *LEARNING ability , *ALGORITHMS - Abstract
Finding all the roots of a nonlinear equation is an important and difficult task that arises naturally in numerous scientific and engineering applications. Sequential iterative algorithms frequently use a deflating strategy to compute all the roots of the nonlinear equation, as rounding errors have the potential to produce inaccurate results. On the other hand, simultaneous iterative parallel techniques require an accurate initial estimation of the roots to converge effectively. In this paper, we propose a new class of global neural network-based root-finding algorithms for locating real and complex polynomial roots, which exploit the ability of machine learning techniques to learn from data and make accurate predictions. The approximations computed by the neural network are used to initialize two efficient fractional Caputo-inverse simultaneous algorithms of convergence orders ς + 2 and 2ς + 4, respectively. The results of our numerical experiments on selected engineering applications show that the new inverse parallel fractional schemes have the potential to outperform other state-of-the-art nonlinear root-finding methods in terms of both accuracy and elapsed solution time. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
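The "simultaneous" schemes in the abstract above refine all polynomial roots in parallel from a set of initial guesses, which is why a good neural-network initialization matters. The classic Weierstrass/Durand–Kerner iteration (a standard textbook scheme, not the paper's fractional Caputo-inverse methods) illustrates the idea:

```python
import numpy as np

def durand_kerner(coeffs, iters=100, tol=1e-12):
    """Weierstrass/Durand–Kerner simultaneous iteration for all roots of
    a monic polynomial. `coeffs` are highest-degree-first coefficients.
    Shown only to illustrate simultaneous root-finding; the neural
    network in the paper would replace the circular starting guesses."""
    coeffs = np.asarray(coeffs, dtype=complex)
    n = len(coeffs) - 1
    # classic non-real starting guesses spread on a circle
    roots = (0.4 + 0.9j) ** np.arange(n)
    for _ in range(iters):
        new = roots.copy()
        for i in range(n):
            p = np.polyval(coeffs, roots[i])
            denom = np.prod([roots[i] - new[j] for j in range(n) if j != i])
            new[i] = roots[i] - p / denom
        if np.max(np.abs(new - roots)) < tol:
            roots = new
            break
        roots = new
    return roots
```

Because every root's update uses the current estimates of all the others, no deflation is needed and rounding errors do not accumulate root by root.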
29. A Learnheuristic Algorithm for the Capacitated Dispersion Problem under Dynamic Conditions.
- Author
-
Gomez, Juan F., Uguina, Antonio R., Panadero, Javier, and Juan, Angel A.
- Subjects
- *
MACHINE learning , *REINFORCEMENT learning , *ALGORITHMS , *TELECOMMUNICATION systems , *DISPERSION (Chemistry) - Abstract
The capacitated dispersion problem, which is a variant of the maximum diversity problem, aims to determine a set of elements within a network. These elements could symbolize, for instance, facilities in a supply chain or transmission nodes in a telecommunication network. While each element typically has a bounded service capacity, in this research, we introduce a twist. The capacity of each node might be influenced by a random Bernoulli component, thereby rendering the possibility of a node having zero capacity, which is contingent upon a black box mechanism that accounts for environmental variables. Recognizing the inherent complexity and the NP-hard nature of the capacitated dispersion problem, heuristic algorithms have become indispensable for handling larger instances. In this paper, we introduce a novel approach by hybridizing a heuristic algorithm with reinforcement learning to address this intricate problem variant. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
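The abstract above describes node capacities that may drop to zero under a random Bernoulli component, handled by heuristics. A greedy sketch under stated assumptions: the survival probability `p_up`, the farthest-from-selection rule, and all names are illustrative, not the paper's learnheuristic:

```python
import math
import random

def greedy_capacitated_dispersion(coords, capacities, demand, p_up=0.9, seed=0):
    """Greedy sketch for the capacitated dispersion problem with the
    Bernoulli twist: each node keeps its capacity with probability p_up,
    otherwise it drops to zero (the 'black box'). Nodes are then added
    one at a time, always choosing the candidate farthest from the
    current selection, until total capacity covers the demand."""
    rng = random.Random(seed)
    cap = [c if rng.random() < p_up else 0 for c in capacities]
    alive = [i for i, c in enumerate(cap) if c > 0]
    if sum(cap[i] for i in alive) < demand:
        return None                      # infeasible after the Bernoulli draw
    chosen = [max(alive, key=lambda i: cap[i])]
    while sum(cap[i] for i in chosen) < demand:
        rest = [i for i in alive if i not in chosen]
        # dispersion criterion: maximise the minimum distance to the selection
        nxt = max(rest, key=lambda i: min(math.dist(coords[i], coords[j])
                                          for j in chosen))
        chosen.append(nxt)
    return chosen
```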
30. A Time-Domain Signal Processing Algorithm for Data-Driven Drive-by Inspection Methods: An Experimental Study.
- Author
-
Lan, Yifu, Li, Zhenkun, and Lin, Weiwei
- Subjects
- *
SIGNAL processing , *MACHINE learning , *DETERIORATION of materials , *STRUCTURAL health monitoring , *ALGORITHMS , *MODEL trucks , *AIR filters , *BRIDGES - Abstract
Constructional material deterioration and member damage can cause changes in the dynamic characteristics of bridge structures, and such changes can be tracked in the responses of passing vehicles via the vehicle-bridge interaction (VBI). Though data-driven methods have shown promising results in damage inspection for drive-by methods, there is still much room for improvement in their performance. Given this background, this paper proposes a novel time-domain signal processing algorithm for the raw vehicle acceleration data of data-driven drive-by inspection methods. To achieve the best data processing performance, an optimizing strategy is designed to automatically search for the optimal parameters that tune the algorithm. The proposed method explicitly addresses the difficulties in the application of drive-by methods, such as measurement noise, speed variance, and enormous data volumes. Meanwhile, the use of this method can greatly improve the accuracy and efficiency of Machine Learning (ML) models in vehicle-based damage detection. It consists of a filtering process to denoise the data, a pooling process to reduce data redundancy, and an optimizing procedure to maximize algorithm performance. A dataset was obtained to validate the proposed algorithm through laboratory experiments with a scale truck model and a steel beam. The results show that, compared to using raw data, the present algorithm can increase the average accuracy by 12.2–15.0% and the average efficiency by 35.7–96.7% for different damage cases and ML models. Additionally, the functions of the filtering and pooling operations, the influence of window function parameters, and the performance of different sensor locations are also investigated. The goal is to present a signal processing algorithm for data-driven drive-by inspection methods that improves their detection performance for bridge damage caused by material deterioration or structural change. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
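The abstract above describes two data-reduction stages applied to raw acceleration signals: a denoising filter and a pooling pass. A minimal sketch of that pipeline; the moving-average kernel, max pooling, and window sizes are illustrative (the paper tunes its parameters automatically):

```python
import numpy as np

def filter_and_pool(signal, win=5, pool=4):
    """Denoise a 1D acceleration signal with a moving-average filter,
    then shrink it with non-overlapping max pooling before it reaches
    an ML model. Output length is len(signal) // pool (remainder dropped)."""
    # moving-average filter ('same' length via edge padding)
    kernel = np.ones(win) / win
    padded = np.pad(signal, (win // 2, win - 1 - win // 2), mode="edge")
    smoothed = np.convolve(padded, kernel, mode="valid")
    # non-overlapping max pooling
    n = (len(smoothed) // pool) * pool
    return smoothed[:n].reshape(-1, pool).max(axis=1)
```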
31. A Review on Nature-Inspired Algorithms for Cancer Disease Prediction and Classification.
- Author
-
Yaqoob, Abrar, Aziz, Rabia Musheer, Verma, Navneet Kumar, Lalwani, Praveen, Makrariya, Akshara, and Kumar, Pavan
- Subjects
- *
NOSOLOGY , *ALGORITHMS , *TUMOR classification , *MACHINE learning , *CLASSIFICATION algorithms , *FEATURE selection - Abstract
In the era of healthcare and its related research fields, the dimensionality problem of high-dimensional data is a massive challenge, as it is crucial to identify significant genes while conducting research on diseases like cancer. As a result, studying new Machine Learning (ML) techniques for raw gene expression biomedical data is an important field of research. Disease detection, sample classification, and early disease prediction are all important analyses of high-dimensional biomedical data in the field of bioinformatics. Recently, machine-learning techniques have dramatically improved the analysis of high-dimensional biomedical data sets. Nonetheless, studies on biomedical data face the challenge of vast dimensionality, i.e., a very large number of features (genes) combined with a very small sample size. In this paper, two dimensionality-reduction approaches, feature selection and feature extraction, are introduced, with a systematic comparison of several dimension reduction techniques for the analysis of high-dimensional gene expression biomedical data. We present a systematic review of some of the most popular nature-inspired algorithms and analyze them. The paper is mainly focused on the original principles behind each of the algorithms and their applications for cancer classification and prediction from gene expression data. Lastly, the advantages and disadvantages of nature-inspired algorithms for biomedical data are evaluated. This review may guide researchers in choosing the most effective algorithm for cancer classification and prediction and for the satisfactory analysis of high-dimensional biomedical data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
32. Standardising Breast Radiotherapy Structure Naming Conventions: A Machine Learning Approach.
- Author
-
Haidar, Ali, Field, Matthew, Batumalai, Vikneswary, Cloak, Kirrily, Al Mouiee, Daniel, Chlap, Phillip, Huang, Xiaoshui, Chin, Vicky, Aly, Farhannah, Carolan, Martin, Sykes, Jonathan, Vinod, Shalini K., Delaney, Geoffrey P., and Holloway, Lois
- Subjects
- *
SPECIALTY hospitals , *HUMAN body , *MACHINE learning , *RETROSPECTIVE studies , *ARTIFICIAL intelligence , *CANCER treatment , *TERMS & phrases , *RESEARCH funding , *RADIOTHERAPY , *DATA analysis , *ARTIFICIAL neural networks , *RECEIVER operating characteristic curves , *THREE-dimensional printing , *BREAST tumors , *ONCOLOGY , *ALGORITHMS , *LONGITUDINAL method , *RADIATION dosimetry , *DATA mining - Abstract
Simple Summary: In radiotherapy treatment, organs at risk and target volumes are contoured by clinicians to prepare a dosimetry plan. In retrospective data, these structures are often not standardised to universal names across patients' plans, which is required to enable data mining and analysis. In this paper, a new method was proposed and evaluated to automatically standardise radiotherapy structure names using machine learning algorithms. The proposed approach was deployed over a dataset of 1613 patients collected from Liverpool & Macarthur Cancer Therapy Centres, New South Wales, Australia. It was concluded that machine learning techniques can standardise the dosimetry plan structures, provided multiple modalities representing each structure are integrated during the training process. In progressing the use of big data in health systems, standardised nomenclature is required to enable data pooling and analyses. In many radiotherapy planning systems and their data archives, target volume (TV) and organ-at-risk (OAR) structure nomenclature has not been standardised. Machine learning (ML) has been utilised to standardise volume nomenclature in retrospective datasets; however, only subsets of the structures have been targeted. Within this paper, we propose a new approach for standardising the nomenclature of all structures by using multi-modal artificial neural networks. A cohort consisting of 1613 breast cancer patients treated with radiotherapy was identified from Liverpool & Macarthur Cancer Therapy Centres, NSW, Australia. Four types of volume characteristics were generated to represent each target and OAR volume: textual features, geometric features, dosimetry features, and imaging data. Five datasets were created from the original cohort: the first four represented different subsets of volumes, and the last represented the whole list of volumes. For each dataset, 15 combinations of features were generated to investigate the effect of using different characteristics on the standardisation performance. The best model reported 99.416% classification accuracy over the hold-out sample when used to standardise all the nomenclature in a breast cancer radiotherapy plan into 21 classes. Our results showed that ML-based automation methods can be used for standardising naming conventions in a radiotherapy plan, taking into consideration the inclusion of multiple modalities to better represent each volume. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
33. Path Planning Algorithm for Dual-Arm Robot Based on Depth Deterministic Gradient Strategy Algorithm.
- Author
-
Zhang, Xiaomei, Yang, Fan, Jin, Qiwen, Lou, Ping, and Hu, Jiwei
- Subjects
- *
POTENTIAL field method (Robotics) , *REINFORCEMENT learning , *MACHINE learning , *ROBOTIC path planning , *ALGORITHMS , *ROBOTS , *ROBOTICS competitions - Abstract
In recent years, the utilization of dual-arm robots has gained substantial prominence across various industries owing to their collaborative operational capabilities. In order to achieve collision avoidance and facilitate cooperative task completion, efficient path planning plays a pivotal role. The high dimensionality associated with collaborative task execution in dual-arm robots renders existing path planning methods ineffective for conducting efficient exploration. This paper introduces a multi-agent path planning reinforcement learning algorithm that integrates an experience replay strategy, a shortest-path constraint, and the policy gradient method. To foster collaboration and avoid competition between the robot arms, the proposed approach incorporates a mechanism known as "reward cooperation, punishment competition" during the training process. Our algorithm demonstrates strong performance in the control of dual-arm robots and exhibits the potential to mitigate the challenge of reward sparsity encountered during the training process. The effectiveness of the proposed algorithm is validated through simulations and experiments, comparing the results with existing methods and showcasing its superiority in dual-arm robot path planning. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
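The dual-arm abstract above mentions a "reward cooperation, punishment competition" mechanism during training. A toy illustration of that kind of reward shaping: joint progress is rewarded, crowding between the arms is penalised, and every threshold and weight here is hypothetical, not from the paper:

```python
def shaped_reward(dist_to_goal_a, dist_to_goal_b, arm_distance,
                  safe_gap=0.1, coop_bonus=1.0, compete_penalty=1.0):
    """Illustrative 'reward cooperation, punishment competition' shaping
    for a dual-arm agent: progress of both arms is rewarded jointly,
    a bonus is paid only when both arms reach their goals, and the pair
    is penalised when the arms crowd each other (a proxy for competing
    over the same workspace)."""
    progress = -(dist_to_goal_a + dist_to_goal_b)   # closer to goals is better
    penalty = compete_penalty if arm_distance < safe_gap else 0.0
    bonus = (coop_bonus
             if dist_to_goal_a < safe_gap and dist_to_goal_b < safe_gap
             else 0.0)
    return progress + bonus - penalty
```

Because the bonus requires both arms to succeed, neither arm can profit by blocking the other, which is the cooperative incentive the abstract describes.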
34. Analysis of Artificial Intelligence-Based Approaches Applied to Non-Invasive Imaging for Early Detection of Melanoma: A Systematic Review.
- Author
-
Patel, Raj H., Foltz, Emilie A., Witkowski, Alexander, and Ludzik, Joanna
- Subjects
- *
MELANOMA diagnosis , *ONLINE information services , *MEDICAL databases , *DERMATOLOGISTS , *DEEP learning , *MEDICAL information storage & retrieval systems , *IN vivo studies , *MICROSCOPY , *SYSTEMATIC reviews , *EARLY detection of cancer , *ARTIFICIAL intelligence , *MACHINE learning , *DIAGNOSTIC imaging , *OPTICAL coherence tomography , *DERMOSCOPY , *DESCRIPTIVE statistics , *MEDLINE , *SENSITIVITY & specificity (Statistics) , *ARTIFICIAL neural networks , *ALGORITHMS - Abstract
Simple Summary: Melanoma is the most dangerous type of skin cancer worldwide. Early detection of melanoma is crucial for better outcomes, but this often can be challenging. This research explores the use of artificial intelligence (AI) techniques combined with non-invasive imaging methods to improve melanoma detection. The authors aim to evaluate the current state of AI-based techniques using tools including dermoscopy, optical coherence tomography (OCT), and reflectance confocal microscopy (RCM). The findings demonstrate that several AI algorithms perform as well as or better than dermatologists in detecting melanoma, particularly in the analysis of dermoscopy images. This research highlights the potential of AI to enhance diagnostic accuracy, leading to improved patient outcomes. Further studies are needed to address limitations and ensure the reliability and effectiveness of AI-based techniques. Background: Melanoma, the deadliest form of skin cancer, poses a significant public health challenge worldwide. Early detection is crucial for improved patient outcomes. Non-invasive skin imaging techniques allow for improved diagnostic accuracy; however, their use is often limited due to the need for skilled practitioners trained to interpret images in a standardized fashion. Recent innovations in artificial intelligence (AI)-based techniques for skin lesion image interpretation show potential for the use of AI in the early detection of melanoma. Objective: The aim of this study was to evaluate the current state of AI-based techniques used in combination with non-invasive diagnostic imaging modalities including reflectance confocal microscopy (RCM), optical coherence tomography (OCT), and dermoscopy. We also aimed to determine whether the application of AI-based techniques can lead to improved diagnostic accuracy of melanoma. Methods: A systematic search was conducted via the Medline/PubMed, Cochrane, and Embase databases for eligible publications between 2018 and 2022. 
Screening methods adhered to the 2020 version of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Included studies utilized AI-based algorithms for melanoma detection and directly addressed the review objectives. Results: We retrieved 40 papers across the three databases. All studies directly comparing the performance of AI-based techniques with dermatologists reported superior or equivalent performance of the AI-based techniques in improving the detection of melanoma. In studies directly comparing algorithm performance on dermoscopy images to dermatologists, AI-based algorithms achieved a higher area under the ROC curve (>80%) in the detection of melanoma. In these comparative studies using dermoscopic images, the mean algorithm sensitivity was 83.01% and the mean algorithm specificity was 85.58%. Studies evaluating machine learning in conjunction with OCT reported an accuracy of 95%, while studies evaluating RCM reported a mean accuracy rate of 82.72%. Conclusions: Our results demonstrate the robust potential of AI-based techniques to improve diagnostic accuracy and patient outcomes through the early identification of melanoma. Further studies are needed to assess the generalizability of these AI-based techniques across different populations and skin types, improve standardization in image processing, and further compare the performance of AI-based techniques with board-certified dermatologists to evaluate clinical applicability. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
35. Indoor Localization Algorithm Based on a High-Order Graph Neural Network.
- Author
-
Kang, Xiaofei, Liang, Xian, and Liang, Qiyue
- Subjects
- *
MACHINE learning , *CUMULATIVE distribution function , *SUPERVISED learning , *K-nearest neighbor classification , *ALGORITHMS , *LOCALIZATION (Mathematics) - Abstract
Given that fingerprint localization methods can be effectively modeled as supervised learning problems, machine learning has been employed for indoor localization tasks based on fingerprint methods. However, it is often challenging for popular machine learning models to effectively capture the unstructured data features inherent in fingerprint data that are generated in diverse propagation environments. In this paper, we propose an indoor localization algorithm based on a high-order graph neural network (HoGNNLoc) to enhance the accuracy of indoor localization and improve localization stability in dynamic environments. The algorithm first designs an adjacency matrix based on the spatial relative locations of access points (APs) to obtain a graph structure; on this basis, a high-order graph neural network is constructed to extract and aggregate the features; finally, the designed fully connected network is used to regress the target's location. The experimental results on our self-built dataset show that the proposed algorithm achieves localization accuracy within 1.29 m at 80% of the cumulative distribution function (CDF) points. The improvements are 59.2%, 51.3%, 36.1%, and 22.7% compared to the K-nearest neighbors (KNN), deep neural network (DNN), simple graph convolutional network (SGC), and graph attention network (GAT) methods, respectively. Moreover, even with a 30% reduction in fingerprint data, the proposed algorithm exhibits stable localization performance. On a public dataset, our proposed localization algorithm also shows better performance. [ABSTRACT FROM AUTHOR]
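The classical KNN baseline that HoGNNLoc is compared against can be illustrated with a short sketch (hypothetical data and function names; the paper's own implementation is not shown): each fingerprint pairs an RSSI vector with a known position, and a query is located at the distance-weighted average of its K nearest fingerprints.

```python
import math

def knn_locate(fingerprints, query, k=3):
    """fingerprints: list of (rssi_vector, (x, y)); query: rssi_vector."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    # Take the k fingerprints closest to the query in RSSI space.
    nearest = sorted(fingerprints, key=lambda fp: dist(fp[0], query))[:k]
    # Inverse-distance weighting; epsilon avoids division by zero on exact matches.
    weights = [1.0 / (dist(fp[0], query) + 1e-9) for fp in nearest]
    total = sum(weights)
    x = sum(w * fp[1][0] for w, fp in zip(weights, nearest)) / total
    y = sum(w * fp[1][1] for w, fp in zip(weights, nearest)) / total
    return x, y

# Toy radio map: RSSI from two APs, recorded at three known positions.
fps = [([-40, -70], (0.0, 0.0)), ([-70, -40], (4.0, 0.0)), ([-55, -55], (2.0, 2.0))]
print(knn_locate(fps, [-55, -55], k=1))
```

Graph-based methods such as HoGNNLoc go beyond this by modeling the spatial relations between APs instead of treating the RSSI vector as unstructured.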
- Published
- 2023
- Full Text
- View/download PDF
36. A Novel Feature-Selection Algorithm in IoT Networks for Intrusion Detection.
- Author
-
Nazir, Anjum, Memon, Zulfiqar, Sadiq, Touseef, Rahman, Hameedur, and Khan, Inam Ullah
- Subjects
- *
INTRUSION detection systems (Computer security) , *COMPUTER network traffic , *ALGORITHMS , *INTERNET of things , *CYBERTERRORISM , *SMART devices - Abstract
The Internet of Things (IoT) and network-enabled smart devices are crucial to the digitally interconnected society of the present day. However, the increased reliance on IoT devices increases their susceptibility to malicious activities within network traffic, posing significant challenges to cybersecurity. As a result, both system administrators and end users are negatively affected by these malevolent behaviours. Intrusion-detection systems (IDSs) are commonly deployed as a cyber attack defence mechanism to mitigate such risks. IDS plays a crucial role in identifying and preventing cyber hazards within IoT networks. However, the development of an efficient and rapid IDS system for the detection of cyber attacks remains a challenging area of research. Moreover, IDS datasets contain multiple features, so the implementation of feature selection (FS) is required to design an effective and timely IDS. The FS procedure seeks to eliminate irrelevant and redundant features from large IDS datasets, thereby improving the intrusion-detection system's overall performance. In this paper, we propose a hybrid wrapper-based feature-selection algorithm that is based on the concepts of the Cellular Automata (CA) engine and Tabu Search (TS)-based aspiration criteria. We used a Random Forest (RF) ensemble learning classifier to evaluate the fitness of the selected features. The proposed algorithm, CAT-S, was tested on the TON_IoT dataset. The simulation results demonstrate that the proposed algorithm, CAT-S, enhances classification accuracy while simultaneously reducing the number of features and the false positive rate. [ABSTRACT FROM AUTHOR]
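The wrapper loop with tabu-based aspiration can be sketched roughly as follows (a simplified, hypothetical illustration: a toy fitness function stands in for the paper's Random Forest evaluator, and the Cellular Automata engine is reduced to single-feature flips). A tabu move is accepted only under the aspiration criterion, i.e. when it beats the best score seen so far.

```python
def tabu_wrapper_fs(n_features, fitness, iters=50, tabu_len=3):
    """Greedy single-flip tabu search over boolean feature masks."""
    current = [False] * n_features
    best, best_score = current[:], fitness(current)
    tabu = []
    for _ in range(iters):
        # Evaluate all single-feature flips of the current subset.
        moves = []
        for f in range(n_features):
            cand = current[:]
            cand[f] = not cand[f]
            s = fitness(cand)
            # Tabu moves pass only via the aspiration criterion.
            if f not in tabu or s > best_score:
                moves.append((s, f, cand))
        if not moves:
            break
        s, f, cand = max(moves)
        current = cand
        tabu = (tabu + [f])[-tabu_len:]  # fixed-length tabu memory
        if s > best_score:
            best, best_score = cand[:], s
    return best, best_score

# Toy fitness: reward including features 0-2, penalize subset size.
toy = lambda mask: sum(mask[:3]) - 0.1 * sum(mask)
subset, score = tabu_wrapper_fs(8, toy)
print(subset, score)
```

In the paper the fitness of a candidate subset would instead be the cross-validated performance of a Random Forest trained on those features.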
- Published
- 2023
- Full Text
- View/download PDF
37. An Adaptive Bandwidth Management Algorithm for Next-Generation Vehicular Networks.
- Author
-
Huang, Chenn-Jung, Hu, Kai-Wen, and Cheng, Hao-Wen
- Subjects
- *
TERAHERTZ technology , *BANDWIDTH allocation , *IN-vehicle entertainment equipment , *OPTICAL communications , *BANDWIDTHS , *ALGORITHMS , *VEHICULAR ad hoc networks , *NEXT generation networks - Abstract
The popularity of video services such as video call or video on-demand has made it impossible for people to live without them in their daily lives. It can be anticipated that the explosive growth of vehicular communication owing to the widespread use of in-vehicle video infotainment applications in the future will result in increasing fragmentation and congestion of the wireless transmission spectrum. Accordingly, effective bandwidth management algorithms are demanded to achieve efficient communication and stable scalability in next-generation vehicular networks. To the best of our current knowledge, a noticeable gap remains in the existing literature regarding the application of the latest advancements in network communication technologies. Specifically, this gap is evident in the lack of exploration regarding how cutting-edge technologies can be effectively employed to optimize bandwidth allocation, especially in the realm of video service applications within the forthcoming vehicular networks. In light of this void, this paper presents a seamless integration of cutting-edge 6G communication technologies, such as terahertz (THz) and visible light communication (VLC), with the existing 5G millimeter-wave and sub-6 GHz base stations. This integration facilitates the creation of a network environment characterized by high transmission rates and extensive coverage. Our primary aim is to ensure the uninterrupted playback of real-time video applications for vehicle users. These video applications encompass video conferencing, live video, and on-demand video services. The outcomes of our simulations convincingly indicate that the proposed strategy adeptly addresses the challenge of bandwidth competition among vehicle users. Moreover, it notably boosts the efficient utilization of bandwidth from less crowded base stations, optimizes the fulfillment of bandwidth prerequisites for various video applications, and elevates the overall video quality experienced by users. 
Consequently, our findings serve as a successful validation of the practicality and effectiveness of the proposed methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. Improving Prediction of Cervical Cancer Using KNN Imputed SMOTE Features and Multi-Model Ensemble Learning Approach.
- Author
-
Karamti, Hanen, Alharthi, Raed, Anizi, Amira Al, Alhebshi, Reemah M., Eshmawi, Ala' Abdulmajid, Alsubai, Shtwai, and Umer, Muhammad
- Subjects
- *
MACHINE learning , *AUTOMATION , *DESCRIPTIVE statistics , *RESEARCH funding , *PREDICTION models , *SENSITIVITY & specificity (Statistics) , *ALGORITHMS ,CERVIX uteri tumors - Abstract
Simple Summary: This paper presents a cervical cancer detection approach in which the KNN Imputer technique is used to fill missing values, after which SMOTE-upsampled features are used to train a multi-model ensemble learning approach. Results demonstrate that the use of KNN-imputed SMOTE features yields better results than the original features in classifying cancerous and normal patients. Objective: Cervical cancer ranks among the top causes of death among females in developing countries. Early identification and treatment under sound medical guidance are the most important steps for minimizing the aftereffects of cervical cancer. One of the best ways to detect this sort of malignancy is by examining a Pap smear image. For automated detection of cervical cancer, the available datasets often have missing values, which can significantly affect the performance of machine learning models. Methods: To address these challenges, this study proposes an automated system for predicting cervical cancer that efficiently handles missing values and class imbalance to achieve high accuracy. The proposed system employs a stacked ensemble voting classifier model that combines three machine learning models, along with KNN Imputer and SMOTE up-sampled features for handling missing values. Results: The proposed model achieves 99.99% accuracy, 99.99% precision, 99.99% recall, and 99.99% F1 score when using KNN-imputed SMOTE features. The study compares the performance of the proposed model with multiple other machine learning algorithms under four scenarios: with missing values removed, with KNN imputation, with SMOTE features, and with KNN-imputed SMOTE features. The study validates the efficacy of the proposed model against existing state-of-the-art approaches.
Conclusions: This study investigates the issue of missing values and class imbalance in the data collected for cervical cancer detection and might aid medical practitioners in the timely detection of cervical cancer and in providing patients with better care. [ABSTRACT FROM AUTHOR]
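The KNN imputation step can be illustrated with a minimal pure-Python sketch (the study itself uses a KNN Imputer in the scikit-learn style; SMOTE and the voting ensemble are omitted here): each missing entry is replaced by the mean of that column over the k nearest rows, with distance computed on the jointly observed features.

```python
import math

def knn_impute(rows, k=2):
    """Replace None entries with the mean over the k nearest donor rows."""
    def dist(a, b):
        # Distance over features observed in both rows.
        shared = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
        if not shared:
            return float("inf")
        return math.sqrt(sum((x - y) ** 2 for x, y in shared))
    out = []
    for i, row in enumerate(rows):
        filled = row[:]
        for j, v in enumerate(row):
            if v is None:
                donors = sorted(
                    (r for m, r in enumerate(rows) if m != i and r[j] is not None),
                    key=lambda r: dist(row, r),
                )[:k]
                filled[j] = sum(r[j] for r in donors) / len(donors)
        out.append(filled)
    return out

data = [[1.0, 2.0], [1.0, None], [3.0, 4.0], [1.2, 2.2]]
print(knn_impute(data, k=2))
```

After imputation, a SMOTE-style oversampler would synthesize minority-class samples before the ensemble is trained.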
- Published
- 2023
- Full Text
- View/download PDF
39. The Suitability of Machine-Learning Algorithms for the Automatic Acoustic Seafloor Classification of Hard Substrate Habitats in the German Bight.
- Author
-
Breyer, Gavin, Bartholomä, Alexander, and Pesch, Roland
- Subjects
- *
MACHINE learning , *SUPPORT vector machines , *CONVOLUTIONAL neural networks , *SONAR imaging , *ALGORITHMS - Abstract
The automatic calculation of sediment maps from hydroacoustic data is of great importance for habitat and sediment mapping as well as monitoring tasks. For this reason, numerous papers have been published that are based on a variety of algorithms and different kinds of input data. However, the current literature lacks comparative studies that investigate the performance of different approaches in depth. Therefore, this study aims to provide recommendations for suitable approaches for the automatic classification of side-scan sonar data that can be applied by agencies and researchers. With random forests, support vector machines, and convolutional neural networks, both traditional machine-learning methods and novel deep learning techniques have been implemented to evaluate their performance regarding the classification of backscatter data from two study sites located in the Sylt Outer Reef in the German Bight. Simple statistical values, textural features, and Weyl coefficients were calculated for different patch sizes as well as levels of quantization and then utilized in the machine-learning algorithms. It is found that large image patches of 32 px size and the combined use of different feature groups lead to the best classification performances. Further, the neural network and support vector machines generated visually more appealing sediment maps than random forests, despite scoring lower overall accuracy. Based on these findings, we recommend classifying side-scan sonar data with image patches of 32 px size and 6-bit quantization either directly in neural networks or with the combined use of multiple feature groups in support vector machines. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
40. Unsupervised Attribute Reduction Algorithm for Mixed Data Based on Fuzzy Optimal Approximation Set.
- Author
-
Wen, Haotong, Zhao, Shixin, and Liang, Meishe
- Subjects
- *
ROUGH sets , *FUZZY sets , *MACHINE learning , *ALGORITHMS , *APPROXIMATION theory , *APPROXIMATION algorithms - Abstract
Fuzzy rough set theory has been successfully applied to many attribute reduction methods, in which the lower approximation set plays a pivotal role. However, the definition of lower approximation used has ignored the information conveyed by the upper approximation and the boundary region. This oversight has resulted in an unreasonable relation representation of the target set. Despite the fact that scholars have proposed numerous enhancements to rough set models, such as the variable precision model, none have successfully resolved the issues inherent in the classical models. To address this limitation, this paper proposes an unsupervised attribute reduction algorithm for mixed data based on an improved optimal approximation set. Firstly, the theory of an improved optimal approximation set and its associated algorithm are proposed. Subsequently, we extend the classical theory of optimal approximation sets to fuzzy rough set theory, leading to the development of a fuzzy improved approximation set method. Finally, building on the proposed theory, we introduce a novel, fuzzy optimal approximation-set-based unsupervised attribute reduction algorithm (FOUAR). Comparative experiments conducted with all the proposed algorithms indicate the efficacy of FOUAR in selecting fewer attributes while maintaining and improving the performance of the machine learning algorithm. Furthermore, they highlight the advantage of the improved optimal approximation set algorithm, which offers higher similarity to the target set and provides a more concise expression. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
41. Application of the Fuzzy Approach for Evaluating and Selecting Relevant Objects, Features, and Their Ranges.
- Author
-
Paja, Wiesław
- Subjects
- *
MACHINE learning , *SONAR , *ALGORITHMS , *FEATURE selection , *TRACKING algorithms - Abstract
Relevant attribute selection in machine learning is a key aspect aimed at simplifying the problem, reducing its dimensionality, and consequently accelerating computation. This paper proposes new algorithms for selecting relevant features and evaluating and selecting a subset of relevant objects in a dataset. Both algorithms are mainly based on the use of a fuzzy approach. The research presented here yielded preliminary results of a new approach to the problem of selecting relevant attributes and objects and selecting appropriate ranges of their values. Detailed results obtained on the Sonar dataset show the positive effects of this approach. Moreover, the observed results may suggest the effectiveness of the proposed method in terms of identifying a subset of truly relevant attributes from among those identified by traditional feature selection methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
42. Distance Metric Optimization-Driven Neural Network Learning Framework for Pattern Classification.
- Author
-
Jiang, Yimeng, Yu, Guolin, and Ma, Jun
- Subjects
- *
MACHINE learning , *STATISTICS , *ALGORITHMS , *CLASSIFICATION , *GENERALIZATION , *COMPUTATIONAL complexity - Abstract
As a novel neural network learning framework, the Twin Extreme Learning Machine (TELM) has received extensive attention and research in the field of machine learning. However, TELM is affected by noise and outliers in practical applications, which reduces its generalization performance compared to robust learning algorithms. In this paper, we propose two novel distance metric optimization-driven robust twin extreme learning machine learning frameworks for pattern classification, namely, CWTELM and FCWTELM. By introducing the robust Welsch loss function and a capped L2,p-distance metric, our methods reduce the effect of outliers and improve the generalization performance of the model compared to TELM. In addition, two efficient iterative algorithms are designed to solve the challenges brought by the non-convex optimization problems CWTELM and FCWTELM, and we theoretically guarantee their convergence, local optimality, and computational complexity. Then, the proposed algorithms are compared with five other classical algorithms under different noise levels and different datasets, and statistical analysis is performed. Finally, we conclude that our algorithms have excellent robustness and classification performance. [ABSTRACT FROM AUTHOR]
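The Welsch loss that provides the robustness can be written out concretely (a sketch; the sigma parameter name is illustrative): L_sigma(r) = (sigma^2 / 2)(1 - exp(-r^2 / sigma^2)). It behaves like the squared loss for small residuals but saturates for outliers.

```python
import math

def welsch(r, sigma=1.0):
    # Bounded robust loss: saturates at sigma**2 / 2, so a single large
    # residual (outlier) cannot dominate the training objective.
    return (sigma ** 2 / 2) * (1 - math.exp(-(r ** 2) / sigma ** 2))

print(welsch(0.1))   # small residuals behave like 0.5 * r**2
print(welsch(100.0)) # large residuals saturate near sigma**2 / 2 = 0.5
```

Compare with the squared loss, which grows without bound and lets outliers dominate the fit.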
- Published
- 2023
- Full Text
- View/download PDF
43. The Improved Stochastic Fractional Order Gradient Descent Algorithm.
- Author
-
Yang, Yang, Mo, Lipo, Hu, Yusen, and Long, Fei
- Subjects
- *
STOCHASTIC orders , *ALGORITHMS , *ONLINE shopping , *FRACTIONAL calculus , *ONLINE algorithms - Abstract
This paper proposes improved stochastic gradient descent (SGD) algorithms with a fractional-order gradient for online optimization problems. For three scenarios, including standard learning rate, adaptive gradient learning rate, and momentum learning rate, three new SGD algorithms are designed that combine a fractional-order gradient, and it is shown that the corresponding regret functions converge at a sub-linear rate. Then we discuss the impact of the fractional order on convergence and monotonicity and prove that better performance can be obtained by adjusting the order of the fractional gradient. Finally, several practical examples are given to verify the superiority and validity of the proposed algorithms. [ABSTRACT FROM AUTHOR]
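One common way to combine gradient descent with a fractional-order gradient (a hedged sketch under assumed conventions, not necessarily the paper's exact scheme) scales the gradient by |x_k - x_{k-1}|^(1-alpha) / Gamma(2 - alpha), recovering ordinary gradient descent as alpha approaches 1:

```python
import math

def frac_gd(grad, x0, lr=0.1, alpha=0.9, steps=100):
    """Scalar fractional-order gradient descent (illustrative scheme)."""
    x_prev, x = x0, x0 - lr * grad(x0)  # bootstrap with one ordinary step
    for _ in range(steps):
        # Fractional correction term built from the previous iterate.
        scale = abs(x - x_prev) ** (1 - alpha) / math.gamma(2 - alpha)
        x_prev, x = x, x - lr * grad(x) * scale
    return x

# Minimize f(x) = (x - 3)**2, so grad(x) = 2 * (x - 3).
x_star = frac_gd(lambda x: 2 * (x - 3), x0=0.0)
print(x_star)
```

A stochastic variant would replace `grad` with a noisy minibatch gradient; the regret analysis in the paper covers that setting.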
- Published
- 2023
- Full Text
- View/download PDF
44. Comparative Study of Random Forest and Support Vector Machine Algorithms in Mineral Prospectivity Mapping with Limited Training Data.
- Author
-
Lachaud, Alix, Adam, Marcus, and Mišković, Ilija
- Subjects
- *
RANDOM forest algorithms , *KERNEL functions , *RADIAL basis functions , *SUPPORT vector machines , *MINERALS , *ALGORITHMS , *COMPARATIVE studies - Abstract
This paper employs two data-driven methods, Random Forest (RF) and Support Vector Machines (SVM), to develop mineral prospectivity models for an epithermal Au deposit. Four distinct models are presented for comparison: one employing RF and three using SVM with different kernel functions—namely linear, Radial Basis Function (RBF), and polynomial. The analysis leverages a compact training dataset, encompassing just 20 deposits, with deposit and non-deposit locations chosen from known mineral occurrences. Fourteen predictor maps are constructed based on the available data and the exploration model. The findings indicate that RF is more stable and robust than SVM, regardless of the kernel function implemented. While all SVM models outperformed the RF model in terms of classification capability on the training dataset achieving an accuracy exceeding 89% versus 78% for the RF model, the success rate curves suggest superior predictive abilities of RF over SVM models. This implies that the SVM models may be overfitting the training data due to the limited quantity of training deposits. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
45. A Method for UWB Localization Based on CNN-SVM and Hybrid Locating Algorithm.
- Author
-
Gao, Zefu, Jiao, Yiwen, Yang, Wenge, Li, Xuejian, and Wang, Yuxin
- Subjects
- *
LOCALIZATION (Mathematics) , *MACHINE learning , *SIGNAL classification , *CLASSIFICATION algorithms , *ERROR probability , *ALGORITHMS - Abstract
In this paper, aiming at the severe problems of UWB positioning in NLOS-interference circumstances, a complete method is proposed for NLOS/LOS classification, NLOS identification and mitigation, and a final accurate UWB coordinate solution through the integration of two machine learning algorithms and a hybrid localization algorithm, which is called the C-T-CNN-SVM algorithm. This algorithm consists of three basic processes: an LOS/NLOS signal classification method based on SVM, an NLOS signal recognition and error elimination method based on CNN, and an accurate coordinate solution based on the hybrid weighting of the Chan–Taylor method. Finally, the validity and accuracy of the C-T-CNN-SVM algorithm are proved through a comparison with traditional and state-of-the-art methods. (i) Focusing on four main prediction errors (range measurements, maxNoise, stdNoise and rangeError), the standard deviation decreases from 13.65 cm to 4.35 cm, while the mean error decreases from 3.65 cm to 0.27 cm, and the errors are approximately normally distributed, demonstrating that after training an SVM for LOS/NLOS signal classification and a CNN for NLOS recognition and mitigation, the accuracy of UWB range measurements may be greatly increased. (ii) After target positioning, the proposed method can realize a one-dimensional X-axis and Y-axis accuracy within 175 mm, and a Z-axis accuracy within 200 mm; a 2D (X, Y) accuracy within 200 mm; and a 3D accuracy within 200 mm, most of which fall within (100 mm, 100 mm, 100 mm). (iii) Compared with the traditional algorithms, the proposed C-T-CNN-SVM algorithm performs better in location accuracy, cumulative error distribution (CDF), and root-mean-square error (RMSE): the 1D, 2D, and 3D accuracy of the proposed method is 2.5 times that of the traditional methods.
When the location error is less than 10 cm, the CDF of the proposed algorithm only reaches a value of 0.17; when the positioning error reaches 30 cm, only the CDF of the proposed algorithm remains in an acceptable range. The RMSE of the proposed algorithm remains ideal when the distance error is greater than 30 cm. The results of this paper and the idea of a combination of machine learning methods with the classical locating algorithms for improved UWB positioning under NLOS interference could meet the growing need for wireless indoor locating and communication, which indicates the possibility for the practical deployment of such a method in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
46. A Context-Aware Android Malware Detection Approach Using Machine Learning.
- Author
-
AlJarrah, Mohammed N., Yaseen, Qussai M., and Mustafa, Ahmad M.
- Subjects
- *
MACHINE learning , *MALWARE , *APPLICATION program interfaces , *MOBILE apps , *MOBILE operating systems , *ALGORITHMS - Abstract
The Android platform has become the most popular smartphone operating system, which makes it a target for malicious mobile apps. This paper proposes a machine learning-based approach for Android malware detection based on application features. Unlike much prior research, which focused exclusively on API Call and permission features to improve detection efficiency and accuracy, this paper incorporates applications' contextual features alongside API Call and permission features. Moreover, the proposed approach extracted a new dataset of static API Call and permission features using a large dataset of malicious and benign Android APK samples. Furthermore, the proposed approach used the Information Gain algorithm to reduce the API and permission feature space from 527 features to the 50 most relevant. Several combinations of API Calls, permissions, and contextual features were used. These combinations were fed into different machine-learning algorithms to show the significance of using the selected contextual features in detecting Android malware. The experiments show that the proposed model achieved a very high accuracy of about 99.4% when using contextual features in comparison to 97.2% without using contextual features. Moreover, the paper shows that the proposed approach outperformed the state-of-the-art models considered in this work. [ABSTRACT FROM AUTHOR]
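The Information Gain criterion used to shrink the feature space can be sketched for binary (0/1) features with toy data (a minimal illustration, not the paper's pipeline): IG(feature) = H(label) - H(label | feature), and the 50 highest-scoring features would be kept.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a label list, in bits.
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature_col, labels):
    # IG = H(label) - sum over feature values of p(value) * H(label | value).
    h = entropy(labels)
    for value in (0, 1):
        subset = [l for f, l in zip(feature_col, labels) if f == value]
        if subset:
            h -= len(subset) / len(labels) * entropy(subset)
    return h

labels = [1, 1, 0, 0]
perfect = [1, 1, 0, 0]   # splits the classes exactly: IG = 1 bit
useless = [1, 0, 1, 0]   # independent of the label: IG = 0
print(info_gain(perfect, labels), info_gain(useless, labels))
```

Ranking all 527 features by this score and truncating to 50 is the filter step described in the abstract.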
- Published
- 2022
- Full Text
- View/download PDF
47. Machine Learning in the Classification of Pediatric Posterior Fossa Tumors: A Systematic Review.
- Author
-
Yearley, Alexander G., Blitz, Sarah E., Patel, Ruchit V., Chan, Alvin, Baird, Lissa C., Friedman, Gregory K., Arnaout, Omar, Smith, Timothy R., and Bernstock, Joshua D.
- Subjects
- *
DIGITAL image processing , *ONLINE information services , *META-analysis , *MEDICAL information storage & retrieval systems , *SYSTEMATIC reviews , *MACHINE learning , *GLIOMAS , *MEDLINE , *INFRATENTORIAL brain tumors , *ALGORITHMS - Abstract
Simple Summary: Diagnosis of posterior fossa tumors is challenging yet proper classification is imperative given that treatment decisions diverge based on tumor type. The aim of this systematic review is to summarize the current state of machine learning methods developed as diagnostic tools for these pediatric brain tumors. We found that, while individual algorithms were quite efficacious, the field is limited by its heterogeneity in methods, outcome reporting, and study populations. We identify common limitations in the study and development of these algorithms and make recommendations as to how they can be overcome. If incorporated into algorithm design, the practical guidelines outlined in this review could help to bridge the gap between theoretical algorithm diagnostic testing and practical clinical application for a wide variety of pathologies. Background: Posterior fossa tumors (PFTs) are a morbid group of central nervous system tumors that most often present in childhood. While early diagnosis is critical to drive appropriate treatment, definitive diagnosis is currently only achievable through invasive tissue collection and histopathological analyses. Machine learning has been investigated as an alternative means of diagnosis. In this systematic review and meta-analysis, we evaluated the primary literature to identify all machine learning algorithms developed to classify and diagnose pediatric PFTs using imaging or molecular data. Methods: Of the 433 primary papers identified in PubMed, EMBASE, and Web of Science, 25 ultimately met the inclusion criteria. The included papers were extracted for algorithm architecture, study parameters, performance, strengths, and limitations. Results: The algorithms exhibited variable performance based on sample size, classifier(s) used, and individual tumor types being investigated. Ependymoma, medulloblastoma, and pilocytic astrocytoma were the most studied tumors with algorithm accuracies ranging from 37.5% to 94.5%. 
A minority of studies compared the developed algorithm to a trained neuroradiologist, with three imaging-based algorithms yielding superior performance. Common algorithm and study limitations included small sample sizes, uneven representation of individual tumor types, inconsistent performance reporting, and a lack of application in the clinical environment. Conclusions: Artificial intelligence has the potential to improve the speed and accuracy of diagnosis in this field if the right algorithm is applied to the right scenario. Work is needed to standardize outcome reporting and facilitate additional trials to allow for clinical uptake. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
48. PSG-Yolov5: A Paradigm for Traffic Sign Detection and Recognition Algorithm Based on Deep Learning.
- Author
-
Hu, Jie, Wang, Zhanbin, Chang, Minjie, Xie, Lihao, Xu, Wencai, and Chen, Nan
- Subjects
- *
TRAFFIC monitoring , *DEEP learning , *TRAFFIC signs & signals , *MACHINE learning , *ALGORITHMS , *FEATURE extraction - Abstract
With the gradual popularization of autonomous driving technology, how to obtain traffic sign information efficiently and accurately is very important for subsequent decision-making and planning tasks. Traffic sign detection and recognition (TSDR) algorithms include color-based, shape-based, and machine-learning-based approaches. However, the algorithms mentioned above are insufficient for traffic sign detection tasks in complex environments. In this paper, we propose a traffic sign detection and recognition paradigm based on deep learning algorithms. First, to solve the problem of insufficient spatial information in high-level features of small traffic signs, the parallel deformable convolution module (PDCM) is proposed in this paper. PDCM adaptively acquires the corresponding receptive field through symmetrical branches while preserving the integrity of the abstract information, thereby improving the feature extraction capability. Simultaneously, we propose a sub-pixel convolution attention module (SCAM) based on the attention mechanism to alleviate the influence of scale distribution. Unlike other feature fusion methods, our proposed method can better focus on the information of scale distribution through the attention module. Eventually, we introduce GSConv to further reduce the computational complexity of our proposed algorithm, better satisfying the requirements of industrial application. Experimental results demonstrate that our proposed methods can effectively improve performance in both detection accuracy and mAP@0.5. Specifically, when the proposed PDCM, SCAM, and GSConv are applied to Yolov5, it achieves 89.2% mAP@0.5 on TT100K, which exceeds the benchmark network by 4.9%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
49. The Potential of MicroRNAs as Non-Invasive Prostate Cancer Biomarkers: A Systematic Literature Review Based on a Machine Learning Approach.
- Author
-
Bevacqua, Emilia, Ammirato, Salvatore, Cione, Erika, Curcio, Rosita, Dolce, Vincenza, and Tucci, Paola
- Subjects
- *
PROSTATE tumors treatment , *DISEASE progression , *SYSTEMATIC reviews , *MICRORNA , *EARLY detection of cancer , *MACHINE learning , *TUMOR markers , *SOFTWARE analytics , *PROSTATE tumors , *DATA mining , *ALGORITHMS - Abstract
Simple Summary: Prostate cancer (PCa) is the most common cancer in men worldwide. Screening and diagnosis are based on prostate-specific antigen (PSA) blood testing and digital rectal examination. Nevertheless, these methods are not specific and carry a high risk of erroneous results. This has led to overtreatment and unnecessary radical therapy; thus, better prognostic tools are urgently needed. In this view, microRNAs (miRs) appear as potential non-invasive biomarkers for PCa diagnosis, prognosis, and therapy. As the scientific literature available in this field is huge and very often controversial, we identified and discussed three topics that characterize the investigated research area by combining the big data from the literature with a novel machine learning approach. By analyzing the papers clustered into these topics, we have offered a deeper understanding of the current research, which contributes to the advancement of this research field. Background: Prostate cancer (PCa) is the second leading cause of cancer-related deaths in men. Although the prostate-specific antigen (PSA) test is used in clinical practice for screening and/or early detection of PCa, it is not specific, thus resulting in high false-positive rates. MicroRNAs (miRs) provide an opportunity as biomarkers for diagnosis, prognosis, and recurrence of PCa. Because the literature on this topic is growing and often controversial, this study aims to consolidate the state of the art of relevant published research. Methods: A Systematic Literature Review (SLR) approach was applied to analyze a set of 213 scientific publications through a text mining method that makes use of the Latent Dirichlet Allocation (LDA) algorithm. Results and Conclusions: The result of this activity, performed through the MySLR digital platform, allowed us to identify a set of three relevant topics characterizing the investigated research area. We analyzed and discussed all the papers clustered into them.
We highlighted that several miRs are associated with PCa progression, and that their detection in patients' urine seems to be the more reliable and promising non-invasive tool for PCa diagnosis. Finally, we proposed some future research directions to help future scientists advance the field further. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
50. MLP-PSO Hybrid Algorithm for Heart Disease Prediction.
- Author
-
Al Bataineh, Ali and Manacek, Sarah
- Subjects
- *
HEART diseases , *MEDICAL personnel , *PARTICLE swarm optimization , *ALGORITHMS , *MACHINE learning - Abstract
Background: Machine Learning (ML) is becoming increasingly popular in healthcare, particularly for improving the timing and accuracy of diagnosis. ML can provide disease prediction by analyzing vast amounts of healthcare data, thereby empowering patients and healthcare providers with information to make informed decisions about disease prevention. Due to the rising cost of treatment, one of the most important topics in clinical data analysis is the prediction and prevention of cardiovascular disease. It is difficult to manually calculate the chances of developing heart disease due to a myriad of contributing factors. Objective: The aim of this paper is to develop and compare various intelligent systems built with ML algorithms for predicting whether a person is likely to develop heart disease using the publicly available Cleveland Heart Disease dataset. This paper describes an alternative multilayer perceptron (MLP) training technique that utilizes a particle swarm optimization (PSO) algorithm for heart disease detection. Methods: The proposed MLP-PSO hybrid algorithm and ten different ML algorithms are used in this study to predict heart disease. Various classification metrics are used to evaluate the performance of the algorithms. Results: The proposed MLP-PSO outperforms all other algorithms, obtaining an accuracy of 84.61%. Conclusions: According to our findings, the current MLP-PSO classifier enables practitioners to diagnose heart disease earlier, more accurately, and more effectively. [ABSTRACT FROM AUTHOR]
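The PSO-based training idea can be sketched on a toy model (hypothetical parameter values; a one-weight linear model stands in for the MLP): each particle is a candidate weight vector, moved by inertia plus random pulls toward its personal best and the swarm's global best, replacing backpropagation entirely.

```python
import random

def pso_train(loss, dim, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize loss(weights) over R^dim with a basic particle swarm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                # Inertia + cognitive pull (personal best) + social pull (global best).
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = loss(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Fit y = 2x on toy data by minimizing squared error over the single weight.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
mse = lambda wts: sum((wts[0] * x - y) ** 2 for x, y in data) / len(data)
weights, err = pso_train(mse, dim=1)
print(weights, err)
```

For an actual MLP, `dim` would be the total count of network weights and biases, and `loss` would run a forward pass over the training set.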
- Published
- 2022
- Full Text
- View/download PDF