Search Results (63 results)
2. Channel Prediction for Underwater Acoustic Communication: A Review and Performance Evaluation of Algorithms.
- Author
- Liu, Haotian, Ma, Lu, Wang, Zhaohui, and Qiao, Gang
- Subjects
- DEEP learning; UNDERWATER acoustic communication; MACHINE learning; ALGORITHMS; TELECOMMUNICATION systems; FORECASTING
- Abstract
Underwater acoustic (UWA) channel prediction technology, as an important topic in UWA communication, has played an important role in UWA adaptive communication networks and underwater target perception. Although many significant advancements have been achieved in underwater acoustic channel prediction over the years, a comprehensive summary and introduction is still lacking. As the first comprehensive overview of UWA channel prediction, this paper introduces past works and algorithm implementation methods of channel prediction from the perspective of linear, kernel-based, and deep learning approaches. Importantly, based on available at-sea experiment datasets, this paper compares the performance of current primary UWA channel prediction algorithms under a unified system framework, providing researchers with a comprehensive and objective understanding of UWA channel prediction. Finally, it discusses the directions and challenges for future research. The survey finds that linear prediction algorithms are the most widely applied, and deep learning, as the most advanced type of algorithm, has moved this field into a new stage. The experimental results show that the linear algorithms have the lowest computational complexity, and when the training samples are sufficient, deep learning algorithms have the best prediction performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
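The linear prediction algorithms this survey identifies as the most widely applied can be illustrated with a textbook one-step-ahead LMS (least-mean-squares) adaptive predictor. This is a generic sketch, not code from the paper; the filter order, step size, and test signal below are arbitrary assumptions.

```python
import math

def lms_predict(series, order=2, mu=0.1):
    """One-step-ahead linear prediction with the LMS adaptive filter.

    Returns the per-sample absolute prediction errors; the tap weights
    adapt online, so the error shrinks as the filter learns the
    signal's linear structure.
    """
    w = [0.0] * order                          # adaptive filter taps
    errors = []
    for n in range(order, len(series)):
        x = series[n - order:n][::-1]          # most recent sample first
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        e = series[n] - y_hat                  # prediction error
        for i in range(order):                 # LMS update: w += mu * e * x
            w[i] += mu * e * x[i]
        errors.append(abs(e))
    return errors

# A noiseless sinusoid obeys a 2-tap linear recurrence, so a 2-tap
# predictor can learn it; real UWA channel taps are far less benign.
signal = [math.cos(0.2 * n) for n in range(500)]
errs = lms_predict(signal, order=2, mu=0.1)
```

In a channel-prediction setting, `series` would be the time series of a single estimated channel tap rather than a raw signal.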
3. Anomaly Detection in Blockchain Networks Using Unsupervised Learning: A Survey.
- Author
- Cholevas, Christos, Angeli, Eftychia, Sereti, Zacharoula, Mavrikos, Emmanouil, and Tsekouras, George E.
- Subjects
- DATA structures; MACHINE learning; PRIVATE networks; BLOCKCHAINS; ALGORITHMS
- Abstract
In decentralized systems, the quest for heightened security and integrity within blockchain networks becomes an issue. This survey investigates anomaly detection techniques in blockchain ecosystems through the lens of unsupervised learning, delving into the intricacies and going through the complex tapestry of abnormal behaviors by examining avant-garde algorithms to discern deviations from normal patterns. By seamlessly blending technological acumen with a discerning gaze, this survey offers a perspective on the symbiotic relationship between unsupervised learning and anomaly detection by reviewing this problem with a categorization of algorithms that are applied to a variety of problems in this field. We propose that the use of unsupervised algorithms in blockchain anomaly detection should be viewed not only as an implementation procedure but also as an integration procedure, where the merits of these algorithms can effectively be combined in ways determined by the problem at hand. In that sense, the main contribution of this paper is a thorough study of the interplay between various unsupervised learning algorithms and how this can be used in facing malicious activities and behaviors within public and private blockchain networks. The result is the definition of three categories, the characteristics of which are recognized in terms of the way the respective integration takes place. When implementing unsupervised learning, the structure of the data plays a pivotal role. Therefore, this paper also provides an in-depth presentation of the data structures commonly used in unsupervised learning-based blockchain anomaly detection. The above analysis is encircled by a presentation of the typical anomalies that have occurred so far along with a description of the general machine learning frameworks developed to deal with them. Finally, the paper spotlights challenges and directions that can serve as a comprehensive compendium for future research efforts. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. VIS-SLAM: A Real-Time Dynamic SLAM Algorithm Based on the Fusion of Visual, Inertial, and Semantic Information.
- Author
- Wang, Yinglong, Liu, Xiaoxiong, Zhao, Minkun, and Xu, Xinlong
- Subjects
- MOBILE robots; MACHINE learning; MOBILE learning; DEEP learning; ALGORITHMS; INFORMATION measurement; PROBABILITY theory; GEOMETRY
- Abstract
A deep learning-based Visual Inertial SLAM technique is proposed in this paper to ensure accurate autonomous localization of mobile robots in environments with dynamic objects, addressing the limitations of real-time performance in deep learning algorithms and the poor robustness of pure visual geometry algorithms. Firstly, a non-blocking model is designed to extract semantic information from images. Then, a motion probability hierarchy model is proposed to obtain prior motion probabilities of feature points. For image frames without semantic information, a motion probability propagation model is designed to determine the prior motion probabilities of feature points. Furthermore, considering that the output of inertial measurements is unaffected by dynamic objects, this paper integrates inertial measurement information to improve the estimation accuracy of feature point motion probabilities. An adaptive threshold-based motion probability estimation method is proposed, and finally, the positioning accuracy is enhanced by eliminating feature points with excessively high motion probabilities. Experimental results demonstrate that the proposed algorithm achieves accurate localization in dynamic environments while maintaining real-time performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Improved minority attack detection in Intrusion Detection System using efficient feature selection algorithms.
- Author
- Rejimol Robinson, R. R., Anagha Madhav, K. P., and Thomas, Ciza
- Subjects
- FEATURE selection; MACHINE learning; INTRUSION detection systems (Computer security); COMPUTER network traffic; SUPERVISED learning; ALGORITHMS
- Abstract
Machine Learning and Data Mining algorithms are used extensively to enhance the performance of Intrusion Detection Systems. The number of training instances and the dimensionality of data are crucial factors affecting the performance of the model built during the training of any supervised learning algorithm. A sufficient proportion of instances having relevant features from all classes of attacks and normal traffic is considered most desirable while building the classification model that classifies the network traffic into attack and normal. This paper proposes a methodology to improve the accuracy of the model by giving importance to the relevant features that can contribute to model building. The feature selection using correlation-based and information gain-based techniques during training and testing contributes much to the detection of stealthier attacks and minority attacks. Then the features of the less detected attacks are identified as the second phase of the filter that is used to improve the performance. The relevant features of stealthy attacks are identified based on the correlation of corresponding features of the attack and normal data, as the attacks are made stealthy mostly by making them resemble the normal traffic. Finally, the attacks that are rarely found in the training data are oversampled to improve their detection. The CICIDS 2017 data set is employed as it comprises stealthier attacks generated using modern tools. The NSL KDD data set is also used for evaluation to compare the proposed work with the existing literature, as it is used in most of the available literature. The results show superior performance with an accuracy of 99.8%, a false positive rate of 0.2%, and a detection rate of 99.8%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. A Semi-Automatic Magnetic Resonance Imaging Annotation Algorithm Based on Semi-Weakly Supervised Learning.
- Author
- Chen, Shaolong and Zhang, Zhiyong
- Subjects
- MAGNETIC resonance imaging; SUPERVISED learning; MACHINE learning; ITERATIVE learning control; ALGORITHMS; ANNOTATIONS; DEEP learning
- Abstract
The annotation of magnetic resonance imaging (MRI) images plays an important role in deep learning-based MRI segmentation tasks. Semi-automatic annotation algorithms are helpful for improving the efficiency and reducing the difficulty of MRI image annotation. However, the existing semi-automatic annotation algorithms based on deep learning have poor pre-annotation performance in the case of insufficient segmentation labels. In this paper, we propose a semi-automatic MRI annotation algorithm based on semi-weakly supervised learning. In order to achieve a better pre-annotation performance in the case of insufficient segmentation labels, semi-supervised and weakly supervised learning were introduced, and a semi-weakly supervised learning segmentation algorithm based on sparse labels was proposed. In addition, in order to improve the contribution rate of a single segmentation label to the performance of the pre-annotation model, an iterative annotation strategy based on active learning was designed. The experimental results on public MRI datasets show that the proposed algorithm achieved an equivalent pre-annotation performance when the number of segmentation labels was much less than that of the fully supervised learning algorithm, which proves the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. A Cross-View Geo-Localization Algorithm Using UAV Image and Satellite Image.
- Author
- Fan, Jiqi, Zheng, Enhui, He, Yufei, and Yang, Jianxing
- Subjects
- REMOTE-sensing images; TRANSFORMER models; ALGORITHMS; TECHNOLOGY transfer; MACHINE learning
- Abstract
Within research on the cross-view geolocation of UAVs, differences in image sources and interference from similar scenes pose huge challenges. Inspired by multimodal machine learning, in this paper, we design a single-stream pyramid transformer network (SSPT). The backbone of the model uses the self-attention mechanism to enrich its own internal features in the early stage and uses the cross-attention mechanism in the later stage to refine and interact with different features to eliminate irrelevant interference. In addition, in the post-processing part of the model, a header module is designed for upsampling to generate heat maps, and a Gaussian weight window is designed to assign label weights to make the model converge better. Together, these methods improve the positioning accuracy of UAV images in satellite images. Finally, we also use style transfer technology to simulate various environmental changes in order to expand the experimental data, further proving the environmental adaptability and robustness of the method. The final experimental results show that our method yields significant performance improvement: The relative distance score (RDS) of the SSPT-384 model on the benchmark UL14 dataset is significantly improved from 76.25% to 84.40%, while the meter-level accuracy (MA) of 3 m, 5 m, and 20 m is increased by 12%, 12%, and 10%, respectively. For the SSPT-256 model, the RDS has been increased to 82.21%, and the meter-level accuracy (MA) of 3 m, 5 m, and 20 m has increased by 5%, 5%, and 7%, respectively. It still shows strong robustness on the extended thermal infrared (TIR), nighttime, and rainy day datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. ULG-SLAM: A Novel Unsupervised Learning and Geometric Feature-Based Visual SLAM Algorithm for Robot Localizability Estimation.
- Author
- Huang, Yihan, Xie, Fei, Zhao, Jing, Gao, Zhilin, Chen, Jun, Zhao, Fei, and Liu, Xixiang
- Subjects
- MACHINE learning; VISUAL learning; ALGORITHMS; ROBOTS; FEATURE extraction; WALKING speed
- Abstract
Indoor localization has long been a challenging task due to the complexity and dynamism of indoor environments. This paper proposes ULG-SLAM, a novel unsupervised learning and geometric-based visual SLAM algorithm for robot localizability estimation to improve the accuracy and robustness of visual SLAM. Firstly, a dynamic feature filtering based on unsupervised learning and moving consistency checks is developed to eliminate the features of dynamic objects. Secondly, an improved line feature extraction algorithm based on LSD is proposed to optimize the effect of geometric feature extraction. Thirdly, geometric features are used to optimize localizability estimation, and an adaptive weight model and attention mechanism are built using the method of region delimitation and region growth. Finally, to verify the effectiveness and robustness of localizability estimation, multiple indoor experiments using the EuRoC dataset and TUM RGB-D dataset are conducted. Compared with ORBSLAM2, the experimental results demonstrate that absolute trajectory accuracy can be improved by 95% for equivalent processing speed in walking sequences. In fr3/walking_xyz and fr3/walking_half, ULG-SLAM tracks more trajectories than DS-SLAM, and the ATE RMSE is improved by 36% and 6%, respectively. Furthermore, the improvement in robot localizability over DynaSLAM is noteworthy, coming in at about 11% and 3%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. An Intelligent Decision Algorithm for a Greenhouse System Based on a Rough Set and D-S Evidence Theory.
- Author
- Wang, Lina, Xu, Mengjie, and Zhang, Ying
- Subjects
- GREENHOUSES; MACHINE learning; ROUGH sets; EXPERT evidence; SUPPORT vector machines; THEORY of knowledge; ALGORITHMS; SOFT sets
- Abstract
This paper presents a decision-making approach grounded in rough set theory and evidential reasoning to address the demand for expert decision-making in greenhouse environmental control systems. Furthermore, a decision-making model is developed by integrating the D-S evidence theory with an expert knowledge table for greenhouse environmental control systems. The model's reasoning process encompasses continuous attribute discretization, expert decision table formation, attribute reduction, and evidence combination reasoning. Firstly, the fuzzy C-means clustering algorithm is employed to discretize the original environmental data and cluster it. Subsequently, an attribute reduction algorithm based on information entropy is utilized to optimize the decision table by eliminating unnecessary conditional attributes in expert knowledge. The reduced indicators are then combined using evidential theory. Finally, suitable greenhouse control methods are determined by the confidence decision proposed by the D-S evidence theory. To assess the efficacy of this intelligent decision-making algorithm based on rough set and D-S evidence theory, its performance is compared with traditional SVM algorithms and small-shot learning algorithms. The results indicate that this proposed method significantly enhances the credibility of control decision-making processes, with an average running time of 0.002378s for the fusion decision algorithm and 0.017939s for the support vector machine (SVM) algorithm, respectively. The SVM accuracy rate after testing and training stands at 90.34%. Moreover, retraining based on information entropy attribute reduction leads to a correct decision rate increase of up to 100%. This method notably improves confidence levels in decision-making processes while reducing uncertainty and demonstrates reliability when applied in making decisions regarding greenhouse environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
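The evidence-combination step this abstract describes follows the standard Dempster's rule of combination. The sketch below is a generic implementation of that rule, not the paper's fusion pipeline; the two "sensor" mass functions and the greenhouse actions A/B are invented for illustration.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule: intersect focal elements, discard the mass
    assigned to conflicting (empty) intersections, and renormalize."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Two hypothetical evidence sources weighing greenhouse actions
# A (ventilate) and B (heat); Theta = {A, B} is the "don't know" mass.
A, B = frozenset({"A"}), frozenset({"B"})
Theta = frozenset({"A", "B"})
m1 = {A: 0.6, B: 0.3, Theta: 0.1}
m2 = {A: 0.5, B: 0.2, Theta: 0.3}
fused = dempster_combine(m1, m2)
```

Because both sources lean toward A, the fused mass on A exceeds either input's, which is the confidence-boosting behavior the decision model relies on.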
10. Early Breast Cancer Risk Assessment: Integrating Histopathology with Artificial Intelligence.
- Author
- Ivanova, Mariia, Pescia, Carlo, Trapani, Dario, Venetis, Konstantinos, Frascarelli, Chiara, Mane, Eltjona, Cursano, Giulia, Sajjadi, Elham, Scatena, Cristian, Cerbelli, Bruna, d'Amati, Giulia, Porta, Francesca Maria, Guerini-Rocco, Elena, Criscitiello, Carmen, Curigliano, Giuseppe, and Fusco, Nicola
- Subjects
- BREAST tumor risk factors; RISK assessment; MEDICAL protocols; CANCER relapse; ARTIFICIAL intelligence; EARLY detection of cancer; CYTOCHEMISTRY; TUMOR markers; DECISION making in clinical medicine; IMMUNOHISTOCHEMISTRY; PATIENT-centered care; DEEP learning; ARTIFICIAL neural networks; MACHINE learning; ONCOLOGISTS; INDIVIDUALIZED medicine; MOLECULAR pathology; HEALTH care teams; ALGORITHMS; DISEASE risk factors
- Abstract
Simple Summary: Risk assessment in early breast cancer is critical for clinical decisions, but defining risk categories poses a significant challenge. The integration of conventional histopathology and biomarkers with artificial intelligence (AI) techniques, including machine learning and deep learning, has the potential to offer more precise information. AI applications extend beyond detection to histological subtyping, grading, and molecular feature identification. The successful integration of AI into clinical practice requires collaboration between histopathologists, molecular pathologists, computational pathologists, and oncologists to optimize patient outcomes. Effective risk assessment in early breast cancer is essential for informed clinical decision-making, yet consensus on defining risk categories remains challenging. This paper explores evolving approaches in risk stratification, encompassing histopathological, immunohistochemical, and molecular biomarkers alongside cutting-edge artificial intelligence (AI) techniques. Leveraging machine learning, deep learning, and convolutional neural networks, AI is reshaping predictive algorithms for recurrence risk, thereby revolutionizing diagnostic accuracy and treatment planning. Beyond detection, AI applications extend to histological subtyping, grading, lymph node assessment, and molecular feature identification, fostering personalized therapy decisions. With rising cancer rates, it is crucial to implement AI to accelerate breakthroughs in clinical practice, benefiting both patients and healthcare providers. However, it is important to recognize that while AI offers powerful automation and analysis tools, it lacks the nuanced understanding, clinical context, and ethical considerations inherent to human pathologists in patient care. Hence, the successful integration of AI into clinical practice demands collaborative efforts between medical experts and computational pathologists to optimize patient outcomes. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Novel Imaging Approaches for Glioma Classification in the Era of the World Health Organization 2021 Update: A Scoping Review.
- Author
- Richter, Vivien, Ernemann, Ulrike, and Bender, Benjamin
- Subjects
- GLIOMAS; RADIOMICS; MAGNETIC resonance imaging; DESCRIPTIVE statistics; SYSTEMATIC reviews; LITERATURE reviews; DEEP learning; GENETIC mutation; NEURORADIOLOGY; MACHINE learning; DATA analysis software; ALGORITHMS
- Abstract
Simple Summary: The 2021 WHO classification of central nervous system (CNS) tumors is challenging for neuroradiologists due to the central role of the molecular profile of tumors. We performed a scoping review of recent literature to assess the existing data on the power of novel data analysis tools to predict new tumor classes by imaging. We found room for performance improvement for subgroups with lower incidence (e.g., 1p/19q codeleted or IDH1/2 mutated gliomas) and patients with rare diagnoses (e.g., pediatric gliomas, midline gliomas). More data regarding functional MRI techniques need to be collected. Studies explicitly designed to assess the generalizability of AI-aided tools for predicting molecular tumor subgroups are lacking. The 2021 WHO classification of CNS tumors is a challenge for neuroradiologists due to the central role of the molecular profile of tumors. The potential of novel data analysis tools in neuroimaging must be harnessed to maintain its role in predicting tumor subgroups. We performed a scoping review to determine current evidence and research gaps. A comprehensive literature search was conducted regarding glioma subgroups according to the 2021 WHO classification and the use of MRI, radiomics, machine learning, and deep learning algorithms. Sixty-two original articles were included and analyzed by extracting data on the study design and results. Only 8% of the studies included pediatric patients. Low-grade gliomas and diffuse midline gliomas were represented in one-third of the research papers. Public datasets were utilized in 22% of the studies. Conventional imaging sequences prevailed; data on functional MRI (DWI, PWI, CEST, etc.) are underrepresented. Multiparametric MRI yielded the best prediction results. IDH mutation and 1p/19q codeletion status prediction remain in focus with limited data on other molecular subgroups. Reported AUC values range from 0.6 to 0.98. Studies designed to assess generalizability are scarce. 
Performance is worse for smaller subgroups (e.g., 1p/19q codeleted or IDH1/2 mutated gliomas). More high-quality study designs with diversity in the analyzed population and techniques are needed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. A Multi-Agent RL Algorithm for Dynamic Task Offloading in D2D-MEC Network with Energy Harvesting †.
- Author
- Mi, Xin, He, Huaiwen, and Shen, Hong
- Subjects
- ENERGY harvesting; MACHINE learning; ALGORITHMS; INTEGER programming; DYNAMIC loads; MOBILE computing; NONLINEAR programming
- Abstract
Delay-sensitive task offloading in a device-to-device assisted mobile edge computing (D2D-MEC) system with energy harvesting devices is a critical challenge due to the dynamic load level at edge nodes and the variability in harvested energy. In this paper, we propose a joint dynamic task offloading and CPU frequency control scheme for delay-sensitive tasks in a D2D-MEC system, taking into account the intricacies of multi-slot tasks, characterized by diverse processing speeds and data transmission rates. Our methodology involves meticulous modeling of task arrival and service processes using queuing systems, coupled with the strategic utilization of D2D communication to alleviate edge server load and prevent network congestion effectively. Central to our solution is the formulation of average task delay optimization as a challenging nonlinear integer programming problem, requiring intelligent decision making regarding task offloading for each generated task at active mobile devices and CPU frequency adjustments at discrete time slots. To navigate the intricate landscape of the extensive discrete action space, we design an efficient multi-agent DRL learning algorithm named MAOC, which is based on MAPPO, to minimize the average task delay by dynamically determining task-offloading decisions and CPU frequencies. MAOC operates within a centralized training with decentralized execution (CTDE) framework, empowering individual mobile devices to make decisions autonomously based on their unique system states. Experimental results demonstrate its swift convergence and operational efficiency, and it outperforms other baseline algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Personalized Treatment Policies with the Novel Buckley-James Q-Learning Algorithm.
- Author
- Lee, Jeongjin and Kim, Jong-Min
- Subjects
- MACHINE learning; ALGORITHMS; SURVIVAL analysis (Biometry); TIME management; PATIENT care; REINFORCEMENT learning
- Abstract
This research paper presents the Buckley-James Q-learning (BJ-Q) algorithm, a cutting-edge method designed to optimize personalized treatment strategies, especially in the presence of right censoring. We critically assess the algorithm's effectiveness in improving patient outcomes and its resilience across various scenarios. Central to our approach is the innovative use of the survival time to impute the reward in Q-learning, employing the Buckley-James method for enhanced accuracy and reliability. Our findings highlight the significant potential of personalized treatment regimens and introduce the BJ-Q learning algorithm as a viable and promising approach. This work marks a substantial advancement in our comprehension of treatment dynamics and offers valuable insights for augmenting patient care in the ever-evolving clinical landscape. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
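The core idea this abstract names is using an imputed survival-time reward inside a standard tabular Q-learning update. The sketch below shows only that generic structure on a toy two-state chain MDP; the Buckley-James estimator itself is not reproduced, and `impute_reward` is a hypothetical stand-in that substitutes a fixed conditional-mean guess for censored rewards.

```python
def impute_reward(observed, censored, fallback):
    """Hypothetical stand-in for Buckley-James imputation: when the
    survival reward is right-censored, substitute a model-based
    estimate (here simply a fixed fallback value)."""
    return fallback if censored else observed

def q_learning(episodes, alpha=0.5, gamma=0.9):
    # Tiny deterministic chain MDP: state 0 -> state 1 -> terminal.
    # Reaching the terminal state yields reward 1; the first step yields 0.
    Q = {(0, "go"): 0.0, (1, "go"): 0.0}
    for _ in range(episodes):
        for s, s_next, r, done in [(0, 1, 0.0, False), (1, None, 1.0, True)]:
            r = impute_reward(r, censored=False, fallback=0.0)
            target = r if done else r + gamma * Q[(s_next, "go")]
            Q[(s, "go")] += alpha * (target - Q[(s, "go")])  # Q-update
    return Q

Q = q_learning(100)
```

In the censored-survival setting, `r` would be a patient's observed survival time and `censored` would flag follow-up loss, at which point the imputation model supplies the expected residual survival instead.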
14. Reweighted Extreme Learning Machine-Based Clutter Suppression and Range Compensation Algorithm for Non-Side-Looking Airborne Radar.
- Author
- Liu, Jing, Liao, Guisheng, Zeng, Cao, Tao, Haihong, Xu, Jingwei, Zhu, Shengqi, and Juwono, Filbert H.
- Subjects
- RADAR in aeronautics; MACHINE learning; ALGORITHMS; MATHEMATICAL complexes
- Abstract
Non-side-looking airborne radar provides important applications on account of its all-round multi-angle airspace coverage. However, it suffers clutter range dependence that makes the samples fail to satisfy the condition of being independent and identically distributed (IID), and it severely degrades traditional approaches to clutter suppression and target detection. In this paper, a novel reweighted extreme learning machine (ELM)-based clutter suppression and range compensation algorithm is proposed for non-side-looking airborne radar. The proposed method involves first designing the pre-processing stage, the special reweighted complex-valued activation function containing an unknown range compensation matrix, and two new objective outputs for constructing an initial reweighted ELM-based network with its training. Then, two other objective outputs, a new loss function, and a reverse feedback framework driven by the specifically designed objectives are proposed for the unknown range compensation matrix. Finally, aiming to estimate and reconstruct the unknown compensation matrix, special processes of the complex-valued structures and the theoretical derivations are designed and analyzed in detail. Consequently, with the updated and compensated samples, further processing including space–time adaptive processing (STAP) can be performed for clutter suppression and target detection. Compared with the classic relevant methods, the proposed algorithm achieves significantly superior performance with reasonable computation time. It provides an obviously higher detection probability and better improvement factor (IF). The simulation results verify that the proposed algorithm is effective and has many advantages. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. A hybrid feature selection algorithm combining information gain and grouping particle swarm optimization for cancer diagnosis.
- Author
- Yang, Fangyuan, Xu, Zhaozhao, Wang, Hong, Sun, Lisha, Zhai, Mengjiao, and Zhang, Juan
- Subjects
- FEATURE selection; PARTICLE swarm optimization; MACHINE learning; CANCER diagnosis; ALGORITHMS; SUPPORT vector machines
- Abstract
Background: Cancer diagnosis based on machine learning has become a popular application direction. Support vector machine (SVM), as a classical machine learning algorithm, has been widely used in cancer diagnosis because of its advantages in high-dimensional and small sample data. However, due to the high-dimensional feature space and high feature redundancy of gene expression data, SVM faces the problem of poor classification effect when dealing with such data. Methods: Based on this, this paper proposes a hybrid feature selection algorithm combining information gain and grouping particle swarm optimization (IG-GPSO). The algorithm firstly calculates the information gain values of the features and ranks them in descending order according to the value. Then, ranked features are grouped according to the information index, so that the features in the group are close, and the features outside the group are sparse. Finally, grouped features are searched using grouping PSO and evaluated according to in-group and out-group. Results: Experimental results show that the average accuracy (ACC) of the SVM on the feature subset selected by the IG-GPSO is 98.50%, which is significantly better than that of traditional feature selection algorithms. Compared with KNN, the classification effect of the feature subset selected by the IG-GPSO is still optimal. In addition, the results of multiple comparison tests show that the feature selection effect of the IG-GPSO is significantly better than that of traditional feature selection algorithms. Conclusion: The feature subset selected by IG-GPSO not only has the best classification effect, but also has the smallest feature scale (FS). More importantly, the IG-GPSO significantly improves the ACC of SVM in cancer diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
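The first stage this abstract describes, computing information gain per feature and ranking in descending order, is the standard entropy-based definition. The sketch below implements only that ranking step on invented toy data; the grouping and the grouping-PSO search are not reproduced.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - sum_x p(x) * H(Y | X = x)."""
    n = len(labels)
    cond = 0.0
    for value in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == value]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# Toy data: one feature perfectly predicts the class label,
# the other is independent of it.
y           = [0, 0, 1, 1]
informative = [0, 0, 1, 1]
irrelevant  = [0, 1, 0, 1]

ranked = sorted(
    [("informative", information_gain(informative, y)),
     ("irrelevant", information_gain(irrelevant, y))],
    key=lambda kv: kv[1], reverse=True)
```

The perfectly predictive feature scores the full label entropy (1 bit here) while the independent feature scores 0, which is exactly the ordering the descending rank exploits before the PSO search begins.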
16. Artificial Intelligence in Pediatrics: Learning to Walk Together.
- Author
- Demirbaş, Kaan Can, Yıldız, Mehmet, Saygılı, Seha, Canpolat, Nur, and Kasapçopur, Özgür
- Subjects
- GENOME editing; COMPUTER assisted instruction; ARTIFICIAL intelligence; PEDIATRICS; MACHINE learning; LEARNING strategies; ROBOTICS; RISK assessment; CHILD health services; EDUCATIONAL technology; DECISION making in clinical medicine; PREDICTION models; ALGORITHMS; EVALUATION
- Abstract
In this era of rapidly advancing technology, artificial intelligence (AI) has emerged as a transformative force, even being called the Fourth Industrial Revolution, along with gene editing and robotics. While it has undoubtedly become an increasingly important part of our daily lives, it must be recognized that it is not an additional tool, but rather a complex concept that poses a variety of challenges. AI, with considerable potential, has found its place in both medical care and clinical research. Within the vast field of pediatrics, it stands out as a particularly promising advancement. As pediatricians, we are indeed witnessing the impactful integration of AI-based applications into our daily clinical practice and research efforts. These tools are being used for simple to more complex tasks such as diagnosing clinically challenging conditions, predicting disease outcomes, creating treatment plans, educating both patients and healthcare professionals, and generating accurate medical records or scientific papers. In conclusion, the multifaceted applications of AI in pediatrics will increase efficiency and improve the quality of healthcare and research. However, there are certain risks and threats accompanying this advancement, including biases that may contribute to health disparities and inaccuracies. Therefore, it is crucial to recognize and address the technical, ethical, and legal challenges as well as explore the benefits in both clinical and research fields. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. YOLOv7oSAR: A Lightweight High-Precision Ship Detection Model for SAR Images Based on the YOLOv7 Algorithm.
- Author
- Liu, Yilin, Ma, Yong, Chen, Fu, Shang, Erping, Yao, Wutao, Zhang, Shuyan, and Yang, Jin
- Subjects
- SHIP models; SYNTHETIC aperture radar; MACHINE learning; SOLID state drives; ALGORITHMS; DEEP learning
- Abstract
Researchers have explored various methods to fully exploit the all-weather characteristics of Synthetic aperture radar (SAR) images to achieve high-precision, real-time, computationally efficient, and easily deployable ship target detection models. These methods include Constant False Alarm Rate (CFAR) algorithms and deep learning approaches such as RCNN, YOLO, and SSD, among others. While these methods outperform traditional algorithms in SAR ship detection, challenges still exist in handling the arbitrary ship distributions and small target features in SAR remote sensing images. Existing models are complex, with a large number of parameters, hindering effective deployment. This paper introduces a YOLOv7 oriented bounding box SAR ship detection model (YOLOv7oSAR). The model employs a rotation box detection mechanism, uses the KLD loss function to enhance accuracy, and introduces a Bi-former attention mechanism to improve small target detection. By redesigning the network's width and depth and incorporating a lightweight P-ELAN structure, the model effectively reduces its size and computational requirements. The proposed model achieves high-precision detection results on the public RSDD dataset (94.8% offshore, 66.6% nearshore), and its generalization ability is validated on a custom dataset (94.2% overall detection accuracy). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Sea Ice Extraction via Remote Sensing Imagery: Algorithms, Datasets, Applications and Challenges.
- Author
-
Huang, Wenjun, Yu, Anzhu, Xu, Qing, Sun, Qun, Guo, Wenyue, Ji, Song, Wen, Bowei, and Qiu, Chunping
- Subjects
- *
SEA ice , *DEEP learning , *REMOTE sensing , *IMAGE recognition (Computer vision) , *GEOGRAPHIC information systems , *ALGORITHMS - Abstract
Deep learning, a dominant technique in artificial intelligence, has completely changed image understanding over the past decade. As a consequence, the sea ice extraction (SIE) problem has entered a new era. We present a comprehensive review of four important aspects of SIE: algorithms, datasets, applications, and future trends. Our review covers research published from 2016 to the present, with a specific focus on deep-learning-based approaches in the last five years. We divided all related algorithms into three categories: the conventional image classification approach, machine-learning-based approaches, and deep-learning-based methods. We reviewed the accessible sea ice datasets, including SAR-based datasets, optical datasets, and others. The applications are presented in four aspects: climate research, navigation, geographic information systems (GIS) production, and others. This paper also provides insightful observations and inspiring future research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Dendritic Growth Optimization: A Novel Nature-Inspired Algorithm for Real-World Optimization Problems.
- Author
-
Priyadarshini, Ishaani
- Subjects
- *
OPTIMIZATION algorithms , *BIOLOGICALLY inspired computing , *DEEP learning , *MACHINE learning , *METAHEURISTIC algorithms , *PROBLEM solving , *ALGORITHMS - Abstract
In numerous scientific disciplines and practical applications, addressing optimization challenges is a common imperative. Nature-inspired optimization algorithms represent a highly valuable and pragmatic approach to tackling these complexities. This paper introduces Dendritic Growth Optimization (DGO), a novel algorithm inspired by natural branching patterns. DGO offers a novel solution for intricate optimization problems and demonstrates its efficiency in exploring diverse solution spaces. The algorithm has been extensively tested with a suite of machine learning algorithms, deep learning algorithms, and metaheuristic algorithms, and the results, both before and after optimization, unequivocally support the proposed algorithm's feasibility, effectiveness, and generalizability. Through empirical validation using established datasets like diabetes and breast cancer, the algorithm consistently enhances model performance across various domains. Beyond its working and experimental analysis, DGO's wide-ranging applications in machine learning, logistics, and engineering for solving real-world problems have been highlighted. The study also considers the challenges and practical implications of implementing DGO in multiple scenarios. As optimization remains crucial in research and industry, DGO emerges as a promising avenue for innovation and problem solving. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Artificial Intelligence Algorithms for Healthcare.
- Author
-
Chumachenko, Dmytro and Yakovlev, Sergiy
- Subjects
- *
ARTIFICIAL intelligence , *DEEP learning , *ALGORITHMS , *MACHINE learning , *INFORMATION technology , *MEDICAL care , *MOTION capture (Human mechanics) , *MEDICAL technology - Abstract
Artificial intelligence (AI) algorithms are playing a crucial role in transforming healthcare by enhancing the quality, accessibility, and efficiency of medical care, research, and operations. These algorithms enable healthcare providers to offer more accurate diagnoses, predict outcomes, and customize treatments to individual patient needs. AI also improves operational efficiency by automating routine tasks and optimizing resource management. However, there are challenges to adopting AI in healthcare, such as data privacy concerns and potential biases in algorithms. Collaboration among stakeholders is necessary to ensure ethical use of AI and its positive impact on the field. AI also has applications in medical research, preventive medicine, and public health. It is important to recognize that AI should augment, not replace, the expertise and compassionate care provided by healthcare professionals. The ethical implications and societal impact of AI in healthcare must be carefully considered, guided by fairness, transparency, and accountability principles. Several research papers in this special issue explore the application of AI algorithms in various aspects of healthcare, such as gait analysis for Parkinson's disease diagnosis, human activity recognition, heart disease prediction, compliance assessment with clinical protocols, epidemic management, neurological complications identification, fall prevention, leukemia diagnosis, and genetic clinical pathways. These studies demonstrate the potential of AI in improving medical diagnostics, patient monitoring, and personalized care. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
21. Formulation of Feature and Label Space Using Modified Delphi in Support of Developing a Machine-Learning Algorithm to Automate Clash Resolution.
- Author
-
Harode, Ashit, Thabet, Walid, and Leite, Fernanda
- Subjects
- *
LITERATURE reviews , *ALGORITHMS , *EVIDENCE gaps , *MACHINE learning , *CONSTRUCTION projects - Abstract
To improve the current manual and iterative nature of clash resolution on construction projects, research efforts continue to explore and test the utilization of machine-learning algorithms to automate the process. Though current research shows significant accuracy in automating clash resolution, many studies have failed to provide a clear explanation and justification for the selection of their feature and label space. Since this is critical to developing an effective and explainable machine-learning solution, it is crucial to address this research gap. In this paper, the authors utilize an in-depth literature review and industry interviews to capture domain knowledge on how design clashes are resolved by industry experts. From analysis of the knowledge captured, we identified 23 factors considered by experts when resolving clashes and five alternative solutions/options to resolve a clash. Using a pool of industry experts, a modified Delphi approach was conducted to validate the factors and options and to determine a priority ranking. The authors identified 94 industry experts based on a predetermined qualification matrix to take part in the modified Delphi. Twelve participants responded and took part in the first round, and 11 completed the second round. A consensus was reached on all clash factors and resolution options. Factors including "clashing elements type," "constrained slope," "critical element in the clash," "location of the clash," "code compliance," and "project stage clashing element is in" were ranked as the most important factors, while "clashing element material" and "insulation type" were considered the least important. Participants also showed a stronger preference for the "moving the clashing element with low priority in/along x-y-z directions" option to resolve clashes. 
These identified factors and options will be utilized to collect specific clash data to train and test effective and explainable machine-learning algorithms toward automating clash resolution. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Efficient improvement of energy detection technique in cognitive radio networks using K-nearest neighbour (KNN) algorithm.
- Author
-
Musuvathi, Aneesh Sarjit S., Archbald, Jofin F., Velmurugan, T., Sumathi, D., Renuga Devi, S., and Preetha, K. S.
- Subjects
- *
COGNITIVE radio , *RADIO networks , *MACHINE learning , *WIRELESS channels , *ALGORITHMS , *RESOURCE allocation - Abstract
With the advent of the IoT era, it is evident that the number of connected devices is going to rise exponentially. Any two devices will communicate with each other using the same frequency band, which has limited availability. Therefore, it is of vital importance that this frequency band be used efficiently to accommodate the maximum number of devices with the available radio resources. Cognitive radio (CR) technology serves this exact purpose. A cognitive radio is an intelligent radio designed to automatically identify the optimal wireless channel in the available wireless spectrum at a given instant. An important functionality of CR is spectrum sensing. Energy detection is a very popular algorithm used for spectrum sensing in CR technology for efficient allocation of radio resources to the devices intended to communicate with each other. Energy detection detects the presence of a primary user (PU) signal by continuously monitoring a selected frequency bandwidth. The conventional energy detection technique is known to perform poorly in lower SNR ranges. This paper works towards the improvement of the energy detection algorithm with the help of machine learning (ML). The ML model uses the general properties of the signal as training data and classifies between a PU signal and noise at very low SNR ranges (−25 to −10 dB). In this research, a K-nearest neighbours (KNN) model is selected for its versatility and simplicity. Upon testing the model with an out-of-sample dataset, the KNN model produced a detection accuracy of 94.5%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
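The energy-detection-plus-KNN idea described in the abstract above can be sketched on synthetic data; the window length, signal amplitude, and k below are arbitrary choices for the illustration, not the parameters or features used in the paper (which trains on several general signal properties, not a single energy statistic):

```python
import math
import random

def energy(samples):
    """Test statistic of a classical energy detector: mean squared amplitude."""
    return sum(s * s for s in samples) / len(samples)

def knn_predict(train, query, k=5):
    """Plain K-nearest-neighbours majority vote on a 1-D energy feature."""
    nearest = sorted(train, key=lambda fl: abs(fl[0] - query))[:k]
    return 1 if sum(label for _, label in nearest) * 2 > k else 0

random.seed(0)
N = 128    # samples per sensing window
AMP = 0.6  # illustrative PU amplitude (a relatively low-SNR regime)

def window(has_pu):
    """One sensing window: Gaussian noise, plus a weak tone if the PU is active."""
    return [random.gauss(0.0, 1.0) + (AMP if has_pu else 0.0) * math.sin(0.3 * t)
            for t in range(N)]

# Label 1 = primary user present, 0 = noise only.
train = [(energy(window(lbl)), lbl) for lbl in [0, 1] * 200]
test = [(energy(window(lbl)), lbl) for lbl in [0, 1] * 50]

acc = sum(knn_predict(train, e) == lbl for e, lbl in test) / len(test)
print(f"KNN detection accuracy on synthetic windows: {acc:.2f}")
```

The point of the sketch is the design choice: the learned classifier replaces the fixed detection threshold of conventional energy detection, which is what degrades at low SNR.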
24. Custom Loss Functions in XGBoost Algorithm for Enhanced Critical Error Mitigation in Drill-Wear Analysis of Melamine-Faced Chipboard.
- Author
-
Bukowski, Michał, Kurek, Jarosław, Świderski, Bartosz, and Jegorowa, Albina
- Subjects
- *
MELAMINE , *MACHINE learning , *ALGORITHMS , *INDUSTRIAL efficiency - Abstract
The advancement of machine learning in industrial applications has necessitated the development of tailored solutions to address specific challenges, particularly in multi-class classification tasks. This study delves into the customization of loss functions within the eXtreme Gradient Boosting (XGBoost) algorithm, which is a critical step in enhancing the algorithm's performance for specific applications. Our research is motivated by the need for precision and efficiency in the industrial domain, where the implications of misclassification can be substantial. We focus on the drill-wear analysis of melamine-faced chipboard, a common material in furniture production, to demonstrate the impact of custom loss functions. The paper explores several variants of Weighted Softmax Loss Functions, including Edge Penalty and Adaptive Weighted Softmax Loss, to address the challenges of class imbalance and the heightened importance of accurately classifying edge classes. Our findings reveal that these custom loss functions significantly reduce critical errors in classification without compromising the overall accuracy of the model. This research not only contributes to the field of industrial machine learning by providing a nuanced approach to loss function customization but also underscores the importance of context-specific adaptations in machine learning algorithms. The results showcase the potential of tailored loss functions in balancing precision and efficiency, ensuring reliable and effective machine learning solutions in industrial settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
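The core of a class-weighted softmax objective of the kind explored in the entry above can be sketched as the gradient/Hessian pair an XGBoost custom multi-class objective returns; the class weights and the diagonal Hessian approximation below are illustrative assumptions, not the paper's Edge Penalty or Adaptive Weighted variants:

```python
import numpy as np

def weighted_softmax_grad_hess(logits, labels, class_weights):
    """Gradient and (diagonal) Hessian of a class-weighted softmax
    cross-entropy, shaped as one value per sample and class, the way
    an XGBoost custom multi-class objective callback returns them."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    w = class_weights[labels][:, None]              # per-sample weight from its true class
    onehot = np.eye(logits.shape[1])[labels]
    grad = w * (p - onehot)
    hess = w * p * (1.0 - p)
    return grad, hess

# Upweighting class 2 (think: a critical drill-wear "edge" class) makes its
# misclassification cost dominate the boosting updates.
logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.0, 3.0]])
labels = np.array([0, 2])
grad, hess = weighted_softmax_grad_hess(logits, labels, np.array([1.0, 1.0, 4.0]))
print(grad)
```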
25. A penalized variable selection ensemble algorithm for high-dimensional group-structured data.
- Author
-
Li, Dongsheng, Pan, Chunyan, Zhao, Jing, and Luo, Anfei
- Subjects
- *
LOW birth weight , *STANDARD deviations , *HIGH-dimensional model representation , *MATHEMATICAL variables , *MACHINE learning , *ALGORITHMS - Abstract
This paper presents a multi-algorithm fusion model (StackingGroup) based on the Stacking ensemble learning framework to address the variable selection problem in high-dimensional group-structured data. The proposed algorithm takes into account the differences in data observation and training principles of different algorithms. It leverages the strengths of each model and incorporates Stacking ensemble learning with multiple group-structure regularization methods. The main approach involves evenly dividing the dataset into K parts, using more than 10 algorithms as base learning models, and selecting the base learners based on low correlation, strong prediction ability, and small model error. Finally, we selected the grSubset + grLasso, grLasso, and grSCAD algorithms as the base learners for the Stacking algorithm. The Lasso algorithm was used as the meta-learner to create a comprehensive algorithm called StackingGroup. This algorithm is designed to handle high-dimensional group-structured data. Simulation experiments showed that the proposed method outperformed competing prediction methods in terms of R², RMSE, and MAE. Lastly, we applied the proposed algorithm to investigate the risk factors of low birth weight in infants and young children. The final results demonstrate that the proposed method achieves a mean absolute error (MAE) of 0.508 and a root mean square error (RMSE) of 0.668. These values are smaller than those obtained from a single model, indicating that the proposed method surpasses other algorithms in terms of prediction accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
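The Stacking construction itself is easy to sketch with scikit-learn. The plain Lasso/Ridge/ElasticNet base learners below are stand-ins for the paper's grouped penalties (grSubset + grLasso, grLasso, grSCAD), which require a dedicated group-regularization package, while the Lasso meta-learner matches the paper's choice; the data are synthetic:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import ElasticNet, Lasso, Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=40, n_informative=10,
                       noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learners are fit on K out-of-fold splits; their out-of-fold
# predictions become the meta-learner's inputs, as in the StackingGroup design.
stack = StackingRegressor(
    estimators=[("lasso", Lasso(alpha=0.5)),
                ("ridge", Ridge(alpha=1.0)),
                ("enet", ElasticNet(alpha=0.5))],
    final_estimator=Lasso(alpha=0.1),
    cv=5)
stack.fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, stack.predict(X_te))
print(f"stacked MAE on held-out synthetic data: {mae:.3f}")
```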
26. Fractional-Order Control Method Based on Twin-Delayed Deep Deterministic Policy Gradient Algorithm.
- Author
-
Jiao, Guangxin, An, Zhengcai, Shao, Shuyi, and Sun, Dong
- Subjects
- *
RADIAL basis functions , *SLIDING mode control , *REINFORCEMENT learning , *ALGORITHMS , *MACHINE learning , *CLOSED loop systems - Abstract
In this paper, a fractional-order control method based on the twin-delayed deep deterministic policy gradient (TD3) algorithm in reinforcement learning is proposed. A fractional-order disturbance observer is designed to estimate the disturbances, and the radial basis function network is selected to approximate system uncertainties in the system. Then, a fractional-order sliding-mode controller is constructed to control the system, and the parameters of the controller are tuned using the TD3 algorithm, which can optimize the control effect. The results show that the fractional-order control method based on the TD3 algorithm can not only improve the closed-loop system performance under different operating conditions but also enhance the signal tracking capability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Autonomous Parameter Balance in Population-Based Approaches: A Self-Adaptive Learning-Based Strategy.
- Author
-
Vega, Emanuel, Lemus-Romani, José, Soto, Ricardo, Crawford, Broderick, Löffler, Christoffer, Peña, Javier, and Talbi, El-Ghazali
- Subjects
- *
SELF-adaptive software , *METAHEURISTIC algorithms , *MANUFACTURING cells , *KNAPSACK problems , *ALGORITHMS - Abstract
Population-based metaheuristics can be seen as a set of agents that smartly explore the space of solutions of a given optimization problem. These agents are commonly governed by movement operators that decide how the exploration is driven. Although metaheuristics have successfully been used for more than 20 years, performing rapid and high-quality parameter control is still a main concern. For instance, deciding the proper population size yielding a good balance between quality of results and computing time is constantly a hard task, even more so in the presence of an unexplored optimization problem. In this paper, we propose a self-adaptive strategy based on on-line population balance, which aims for improvements in the performance and search process of population-based algorithms. The design behind the proposed approach relies on three different components. Firstly, an optimization-based component defines all metaheuristic tasks related to carrying out the resolution of the optimization problems. Secondly, a learning-based component focuses on transforming dynamic data into knowledge in order to influence the search in the solution space. Thirdly, a probabilistic-based selector component is designed to dynamically adjust the population. We illustrate an extensive experimental process on large instance sets from three well-known discrete optimization problems: the Manufacturing Cell Design Problem, the Set Covering Problem, and the Multidimensional Knapsack Problem. The proposed approach is able to compete against classic, autonomous, as well as IRace-tuned metaheuristics, yielding interesting results and potential future work regarding dynamically adjusting the number of solutions interacting at different times within the search process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Ship-Fire Net: An Improved YOLOv8 Algorithm for Ship Fire Detection.
- Author
-
Zhang, Ziyang, Tan, Lingye, and Tiong, Robert Lee Kong
- Subjects
- *
FIRE detectors , *MACHINE learning , *OBJECT recognition (Computer vision) , *DEEP learning , *ALGORITHMS , *COMPUTATIONAL complexity , *SHIPS - Abstract
Ship fires may result in significant structural damage and large economic losses. Hence, the prompt identification of fires is essential in order to enable rapid reactions and effective mitigation strategies. However, conventional detection systems exhibit limited efficacy and accuracy in detecting targets, mostly attributed to limitations imposed by distance constraints and the motion of ships. Although the development of deep learning algorithms provides a potential solution, the computational complexity of ship fire detection algorithms poses significant challenges. To address this, this paper proposes a lightweight ship fire detection algorithm based on YOLOv8n. Initially, a dataset including more than 4000 unduplicated images and their labels is established before training. In order to ensure the performance of the algorithm, both fires inside ship rooms and fires on board are considered. After testing, YOLOv8n is selected as the model with the best performance and fastest speed from among several advanced object detection algorithms. GhostnetV2-C2F is then inserted in the backbone of the algorithm for long-range attention with inexpensive operation. In addition, spatial and channel reconstruction convolution (SCConv) is used to reduce redundant features with significantly lower complexity and computational costs for real-time ship fire detection. For the neck part, omni-dimensional dynamic convolution is used for the multi-dimensional attention mechanism, which also lowers the parameter count. After these improvements, a lighter and more accurate YOLOv8n algorithm, called Ship-Fire Net, is proposed. The proposed method exceeds 0.93 in both precision and recall for fire and smoke detection on ships. In addition, the mAP@0.5 reaches about 0.9. Despite the improvement in accuracy, Ship-Fire Net also has fewer parameters and lower FLOPs compared to the original, which accelerates its detection speed. 
The FPS of Ship-Fire Net also reaches 286, which is helpful for real-time ship fire monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Geometric Deep Learning sub-network extraction for Maximum Clique Enumeration.
- Author
-
Carchiolo, Vincenza, Grassia, Marco, Malgeri, Michele, and Mangioni, Giuseppe
- Subjects
- *
DEEP learning , *NP-hard problems , *MACHINE learning , *ALGORITHMS - Abstract
The paper presents an algorithm to approach the problem of Maximum Clique Enumeration, a well-known NP-hard problem that has several real-world applications. The proposed solution, called LGP-MCE, exploits Geometric Deep Learning, a machine learning technique on graphs, to filter out nodes that do not belong to maximum cliques, and then applies an exact algorithm to the pruned network. To assess LGP-MCE, we conducted multiple experiments using a substantial dataset of real-world networks varying in size, density, and other characteristics. We show that LGP-MCE is able to drastically reduce the running time while retaining all the maximum cliques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
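The prune-then-solve pattern behind LGP-MCE can be sketched without the learning component: below, a cheap (k-1)-core peeling plays the role of the GNN filter (any clique of size k is guaranteed to survive it), followed by a brute-force exact enumerator that is only viable on the small pruned graph. The toy graph and the choice k = 4 are illustrative, not from the paper:

```python
from itertools import combinations

def prune_core(adj, k):
    """Peel to the (k-1)-core: every clique of size k survives this filter.
    (In LGP-MCE, a learned per-node score replaces this cheap degree rule.)"""
    adj = {u: set(vs) for u, vs in adj.items()}
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if u in adj and len(adj[u]) < k - 1:
                for v in adj.pop(u):
                    adj[v].discard(u)
                changed = True
    return adj

def maximum_cliques(adj):
    """Exact maximum-clique enumeration by brute force (tiny graphs only)."""
    nodes = sorted(adj)
    for r in range(len(nodes), 0, -1):
        found = [set(c) for c in combinations(nodes, r)
                 if all(v in adj[u] for u, v in combinations(c, 2))]
        if found:
            return found
    return [set()]

# Toy graph: a triangle {0,1,2} with a pendant node 3, bridged to a 4-clique {4,5,6,7}.
edges = [(0, 1), (0, 2), (1, 2), (0, 3), (2, 4),
         (4, 5), (4, 6), (4, 7), (5, 6), (5, 7), (6, 7)]
adj = {u: set() for u in range(8)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

core = prune_core(adj, 4)  # assumes a clique of size >= 4 exists
print(sorted(core), maximum_cliques(core))
```

The safety property is the reason the pattern works: as long as the filter never removes a node of a maximum clique, the exact solver on the pruned graph returns exactly the same cliques, only faster.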
30. Comparative Analysis of Time Series Prediction Algorithms on Multiple Network Function Data of NWDAF.
- Author
-
Chen, Dasheng, Song, Qi, Zhang, Yinbin, Li, Ling, Yang, Zhiming, and Perš, Janez
- Subjects
- *
TIME series analysis , *DEEP learning , *COMPUTER network traffic , *MACHINE learning , *NETWORK performance , *ALGORITHMS , *COMPARATIVE studies - Abstract
With the emergence and vigorous development of 5G technology, there is a significant surge in network usage and traffic, resulting in heightened complexity within network and IT environments. This exponential increase in activity produces a plethora of events, making conventional systems inadequate for the efficient management of 5G networks. In comparison to 4G technology, 5G technology brings forth a host of new features, one of which is the network data analytics function (NWDAF). This function grants network operators the flexibility to integrate their own data analytics methodologies, based on machine learning (ML) and deep learning (DL), into their networks. In this paper, we present a dataset named "NWDAF-NFPP" for network function performance time series prediction, collected from a laboratory at China Telecom. The dataset is carefully anonymized to ensure maximum realism and comprehensiveness while safeguarding sensitive information. It encompasses eight categories of network functions, with data collected at five-minute intervals. The availability of this dataset provides valuable resources for researchers to conduct time series prediction research on network element performance. Following data collection, a total of six models were employed for network element performance prediction, encompassing both machine learning and deep learning approaches. This diverse set of models was carefully chosen to ensure comprehensive coverage of different techniques and algorithms. Through the comparison and analysis of these models, we aim to evaluate their predictive capabilities and identify the most effective approach for network element performance prediction. This comparative analysis will provide valuable insights into the strengths and limitations of each model, aiding informed decision-making for network optimization and management strategies in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
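A baseline of the kind such time-series comparisons typically include, a least-squares AR(p) one-step predictor, can be sketched in a few lines; the sinusoidal trace and order p = 4 below are illustrative stand-ins for the NWDAF-NFPP measurements, not the paper's models or data:

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares AR(p) with intercept: predict x[t] from the previous p values."""
    series = np.asarray(series, dtype=float)
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    A = np.column_stack([X, np.ones(len(series) - p)])  # design matrix + bias column
    coef, *_ = np.linalg.lstsq(A, series[p:], rcond=None)
    return coef

def forecast_next(series, coef):
    """One-step-ahead forecast from the last p observed values."""
    p = len(coef) - 1
    window = np.asarray(series[-p:], dtype=float)
    return float(window @ coef[:p] + coef[-1])

# Synthetic stand-in for a 5-minute KPI trace: a smooth oscillation.
t = np.arange(200)
kpi = np.sin(0.2 * t)
coef = fit_ar(kpi, p=4)
pred = forecast_next(kpi, coef)
print(f"one-step forecast: {pred:.3f}, truth: {np.sin(0.2 * 200):.3f}")
```

In a comparison like the paper's, a model of this sort serves as the floor that ML and DL predictors must beat to justify their extra complexity.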
31. A new hierarchical algorithm based on CapsGAN for imbalanced image classification.
- Author
-
Jabbari, Hamed and Bigdeli, Nooshin
- Subjects
- *
IMAGE recognition (Computer vision) , *GENERATIVE adversarial networks , *MACHINE learning , *CAPSULE neural networks , *ALGORITHMS - Abstract
Imbalanced image datasets are image datasets in which there is a significant disparity in the number of samples across different classes. With imbalanced image datasets, learning algorithms often tend to be biased toward the majority class samples. This leads to poor classification of minority class samples, as their training is not properly conducted. The problem becomes more complicated when the number of samples in the minority class is very low. In this paper, a novel hierarchical algorithm is proposed for generating new data using Capsule Generative Adversarial Networks (CapsGAN) to address the class imbalance problem in imbalanced image datasets. Unlike common GAN models, the proposed method incorporates an auxiliary CapsNet to identify high-value images in both minority and majority classes. This identification is based on the ability of capsule networks to detect complex relationships between low-level and high-level features. Furthermore, the proposed CapsGAN model is conditioned to generate minority class samples based on feature vectors of the last capsule layer to achieve a more balanced image dataset. For evaluating the performance of the proposed model, an image dataset called CICS was collected and introduced. Extensive experiments were also conducted using various online image datasets from different domains, with varying numbers of classes and data sizes. The experimental results demonstrated that the proposed model can generate high-quality samples in cases where the image dataset or the number of minority class samples is relatively small. Furthermore, the proposed model maintained an accuracy of over 80% at an imbalance ratio of 1:60. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Advances in Slime Mould Algorithm: A Comprehensive Survey.
- Author
-
Wei, Yuanfei, Othman, Zalinda, Daud, Kauthar Mohd, Luo, Qifang, and Zhou, Yongquan
- Subjects
- *
ALGORITHMS , *MACHINE learning , *IMAGE segmentation , *RESEARCH personnel - Abstract
The slime mould algorithm (SMA) is a new swarm intelligence algorithm inspired by the oscillatory behavior of slime moulds during foraging. Numerous researchers have widely applied the SMA and its variants in various domains and demonstrated its value in a substantial body of literature. In this paper, a comprehensive review of the SMA is introduced, based on 130 articles obtained from Google Scholar between 2022 and 2023. In this study, firstly, the SMA theory is described. Secondly, the improved SMA variants are provided and categorized according to the approach used to apply them. Finally, we also discuss the main application domains of the SMA, such as engineering optimization, energy optimization, machine learning, networks, scheduling optimization, and image segmentation. This review presents some research suggestions for researchers interested in this algorithm, such as conducting additional research on multi-objective and discrete SMAs and extending them to neural networks and extreme learning machines. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Hybrid Deep Learning and Sensitivity Operator-Based Algorithm for Identification of Localized Emission Sources.
- Author
-
Penenko, Alexey, Emelyanov, Mikhail, Rusin, Evgeny, Tsybenova, Erjena, and Shablyko, Vasily
- Subjects
- *
DEEP learning , *ALGORITHMS , *CONVOLUTIONAL neural networks , *INVERSE problems , *MACHINE learning , *REMOTE sensing - Abstract
Hybrid approaches combining machine learning with traditional inverse problem solution methods represent a promising direction for the further development of inverse modeling algorithms. The paper proposes an approach to emission source identification from measurement data for advection–diffusion–reaction models. The approach combines general-type source identification and post-processing refinement: first, emission source identification by measurement data is carried out by a sensitivity operator-based algorithm, and then refinement is done by incorporating a priori information about unknown sources. A general-type distributed emission source identified at the first stage is transformed into a localized source consisting of multiple point-wise sources. The second, refinement stage consists of two steps: point-wise source localization and emission rate estimation. Emission source localization is carried out using deep learning with convolutional neural networks. Training samples are generated using a sensitivity operator obtained at the source identification stage. The algorithm was tested in regional remote sensing emission source identification scenarios for the Lake Baikal region and was able to refine the emission source reconstruction results. Hence, the aggregates used in traditional inverse problem solution algorithms can be successfully applied within machine learning frameworks to produce hybrid algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. A survey on semi-supervised graph clustering.
- Author
-
Daneshfar, Fatemeh, Soleymanbaigi, Sayvan, Yamini, Pedram, and Amini, Mohammad Sadra
- Subjects
- *
SUPERVISED learning , *INTERSECTION graph theory , *DATA analysis , *MACHINE learning , *ALGORITHMS - Abstract
Semi-Supervised Graph Clustering (SSGC) has emerged as a pivotal field at the intersection of graph clustering and semi-supervised learning (SSL), offering innovative solutions to intricate data analysis problems. However, despite its significance and wide-ranging applications, there exists a notable void in the literature—a comprehensive survey specifically dedicated to SSGC techniques and their diverse applications remains conspicuously absent. Addressing this gap, this paper presents a systematic and comprehensive review of SSGC methodologies, spanning from well-established approaches to cutting-edge developments. Through a meticulous categorization, critical examination, and insightful discussion of these techniques, this survey not only illuminates the current landscape of SSGC but also identifies unexplored avenues for exploration and innovation. In this paper we present a comprehensive survey of conventional graph construction, graph clustering, SSL methods, evaluation metrics and their primary development process. We then provide a taxonomy of SSGC techniques based on their structures and principles and thoroughly analyze and discuss the related models. Furthermore, we review the applications of SSGC in various fields. Lastly, we summarize the limitations of current SSGC algorithms and discuss the future directions of the field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Ensemble methods and semi-supervised learning for information fusion: A review and future research directions.
- Author
-
Garrido-Labrador, José Luis, Serrano-Mamolar, Ana, Maudes-Raedo, Jesús, Rodríguez, Juan J., and García-Osorio, César
- Subjects
- *
LITERATURE reviews , *MACHINE learning , *ALGORITHMS - Abstract
This paper investigates advances over the past decade at the intersection of information fusion methods and Semi-Supervised Learning (SSL) that grapple with challenges related to limited labelled data. To do so, a bibliographic review of papers published since 2013 is presented, in which ensemble methods are combined with new machine learning algorithms. A total of 128 new proposals using SSL algorithms for ensemble construction are identified and classified. All the methods are categorised by approach, ensemble type, and base classifier. Experimental protocols, pre-processing, dataset usage, unlabelled ratios, and statistical tests are also assessed, underlining the major trends and some shortcomings of particular studies. It is evident from this literature review that foundational algorithms such as self-training and co-training are influencing current developments, and that innovative ensemble techniques are continuing to emerge. Additionally, valuable guidelines are identified in the review for improving research into intrinsically semi-supervised and unsupervised pre-processing methods, especially for regression tasks. • A set of 128 recent semi-supervised ensemble methods is identified and analysed. • Semi-supervised approach, ensemble, base and preprocessing methods are identified. • The experimental protocol, settings, and data set properties are also categorised. • Some trends and shortcomings are identified in these methods and studies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. A robust distribution network state estimation method based on enhanced clustering Algorithm: Accounting for multiple DG output modes and data loss.
- Author
-
Yu, Yue, Jin, Zhaoyang, Ćetenović, Dragan, Ding, Lei, Levi, Victor, and Terzija, Vladimir
- Subjects
- *
DATA recovery , *PARTICLE swarm optimization , *ALGORITHMS , *DISTRIBUTED power generation , *RADIAL distribution function , *KALMAN filtering - Abstract
• Distinct DG Output Modes Validation: The simulation validates distinct DG output modes, impacting prediction. • Mode-Based State Estimation: The paper proposes an innovative mode-based state estimation using IPSO-DBSCAN clustering, overcoming k-means limitations. • Enhanced Data Recovery using DBSCAN: Enhanced data recovery employs DBSCAN with improved secondary clustering, guided by similarity and homogeneity, outperforming traditional methods. • BiGRU-PF State Estimation: BiGRU-PF enhances state estimation accuracy and robustness in distribution networks by detecting DG output modes, addressing k-means limitations, and improving data recovery. This paper proposes a new forecasting-aided state estimation (FASE) method for distribution systems that mitigates issues with uncertain distributed generation (DG) and lost measurements. We utilize an Improved Particle Swarm Optimization (IPSO)-optimized Density-Based Spatial Clustering of Applications with Noise (DBSCAN) for examination of historical DG output data. Based on identified DG output modes, it facilitates precise state prediction and data reconstruction. The proposed method employs a Bidirectional Gated Recurrent Unit (BiGRU) neural network for state prediction and a particle filter (PF) for final state filtering. The method verification is provided through Python simulations of the Distribution Transformer Unit (DTU)7k distribution network system, demonstrating improved accuracy and robustness against sudden load change and bad data in measurements. [ABSTRACT FROM AUTHOR]
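The clustering stage this abstract describes can be made concrete with a minimal DBSCAN over synthetic daily DG output profiles. This is a plain re-implementation for illustration only; the paper's IPSO-optimized variant tunes the eps and min_samples parameters automatically, and all data and names below are invented:

```python
import numpy as np

def dbscan(X, eps, min_samples):
    """Naive DBSCAN: -1 marks noise/unvisited, clusters are numbered from 0."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(row <= eps) for row in dist]
    labels = np.full(len(X), -1)
    cluster = 0
    for i in range(len(X)):
        if labels[i] != -1 or len(neighbors[i]) < min_samples:
            continue  # already assigned, or not a core point
        labels[i] = cluster
        queue = list(neighbors[i])
        while queue:  # expand the cluster through density-reachable points
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_samples:
                    queue.extend(neighbors[j])
        cluster += 1
    return labels

# Two synthetic DG "output modes": high-output vs low-output 24-hour profiles.
rng = np.random.default_rng(0)
sunny = rng.normal(1.0, 0.05, size=(20, 24))
cloudy = rng.normal(0.2, 0.05, size=(20, 24))
modes = dbscan(np.vstack([sunny, cloudy]), eps=0.6, min_samples=4)
```

With this well-separated toy data the two synthetic modes come out as clusters 0 and 1; real DG data would need the adaptive parameter search the paper proposes.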
- Published
- 2024
- Full Text
- View/download PDF
38. An improved binary dandelion algorithm using sine cosine operator and restart strategy for feature selection.
- Author
-
Dong, Junwei, Li, Xiaobo, Zhao, Yuxin, Ji, Jingchao, Li, Shaolang, and Chen, Hui
- Subjects
- *
METAHEURISTIC algorithms , *FEATURE selection , *ALGORITHMS , *DANDELIONS , *MACHINE learning , *DATABASES , *CLASSIFICATION algorithms , *DATA mining - Abstract
• A feature selection method based on binary dandelion algorithm is proposed. • The algorithm applies a sine-cosine operator and a restart strategy. • Mutual information and quick bit mutation improve the performance of the algorithm. • The algorithm performs well on datasets with larger dimensions. Feature selection (FS) is an important data preprocessing technology for machine learning and data mining. Metaheuristic algorithm (MH) has been widely used in feature selection because of its powerful search function. This paper presents an improved Binary Dandelion Algorithm using Sine Cosine operator and Restart strategy (SCRBDA) for feature selection. First, the sine cosine operator is used in the radius formula of the core dandelions (CD), which significantly enhances the ability of algorithm development and exploration. Secondly, the algorithm uses a restart strategy to increase its ability to get rid of local optimum. Thirdly, mutual information is used to guide the generation of some dandelions, which pays more attention to the correlation between the selected features and categories. Finally, quick bit mutation is used as the mutation strategy to improve the diversity of the population. The SCRBDA proposed in this paper was tested on 18 datasets of different sizes from UCI machine learning database. The SCRBDA was compared with 8 other classical feature selection algorithms, and the performance of the proposed algorithm was evaluated through feature subset size, classification accuracy, fitness value, and F1-score. The experimental results show that SCRBDA achieves the best performance, which has stronger feature reduction ability and achieves better overall performance on most datasets. Especially on large-scale datasets, SCRBDA can obtain extremely smaller feature subsets while maintaining much higher classification accuracy, and satisfactory F1-score. [ABSTRACT FROM AUTHOR]
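Two of the building blocks named above, a sine-based binarization operator and a wrapper-style feature-selection fitness, can be sketched as follows. The |sin(x)| transfer function is one common choice in binary metaheuristics and stands in for SCRBDA's exact operator; the alpha weighting is likewise illustrative:

```python
import numpy as np

def binarize(position, rng):
    # Map a continuous search position to a 0/1 feature mask: |sin(x)|
    # serves as the per-bit selection probability (an illustrative
    # transfer function; SCRBDA's exact operators differ in detail).
    return (rng.random(position.shape) < np.abs(np.sin(position))).astype(int)

def fs_fitness(mask, accuracy, alpha=0.99):
    # Standard wrapper-FS objective: trade classification accuracy
    # against the fraction of features kept.
    if mask.sum() == 0:
        return 0.0
    return alpha * accuracy + (1 - alpha) * (1 - mask.mean())

rng = np.random.default_rng(1)
mask = binarize(rng.normal(size=8), rng)
score = fs_fitness(mask, accuracy=0.90)
```

The restart strategy and quick bit mutation from the paper would then perturb or reinitialize such masks whenever the search stagnates.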
- Published
- 2024
- Full Text
- View/download PDF
39. A fast local citation recommendation algorithm scalable to multi-topics.
- Author
-
Yin, Maxwell J., Wang, Boyu, and Ling, Charles
- Subjects
- *
NATURAL language processing , *VECTOR spaces , *ALGORITHMS , *RESEARCH personnel - Abstract
In the era of rapid paper publications in various venues, automatic citation recommendations would be highly useful to researchers when they write papers. Local citation recommendation aims to recommend possible papers to cite given local citation contexts. Previous work mainly computes the similarity score between citation contexts and cited papers on a one-to-one basis, which is quite time-consuming. We train a pair of neural network encoders that map citation contexts and all possible cited papers to the same vector space, respectively. After that, we index the positions of all cited papers in the vector space. This makes our process for searching recommended papers considerably faster. On the other hand, existing methods tend to recommend papers that are highly similar to each other, which makes recommendations lack diversity. Therefore, we extend our algorithm to perform multi-topic recommendations. We generate multi-topic training examples based on the index we mentioned earlier. Furthermore, we specially design a multi-group contrastive learning method to train our model so that it can distinguish different topics. Empirical experiments show that our model outperforms previous methods by a wide margin. Our model is also lightweight and has been deployed online so that researchers can use it to obtain recommended citations for their own paper in real time. • Proposed FLCR algorithm with sentence-transformer & k-d tree. • Introduced multi-topic citation recommendation for diverse contexts. • Developed large-scale dataset with 1.7 million citation contexts for evaluation. • Demonstrated significant performance improvement over previous methods. • Deployed demo for real-time citation recommendations. [ABSTRACT FROM AUTHOR]
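The retrieval idea, embedding papers into a shared space offline and answering queries with a fast nearest-neighbour lookup, can be sketched as below. Random vectors stand in for the trained encoders, and a brute-force top-k stands in for the k-d tree the authors use; everything else (class name, sizes) is invented:

```python
import numpy as np

class PaperIndex:
    # Papers are embedded once and stored; at query time a citation
    # context is embedded into the same space and matched by cosine
    # similarity instead of one-to-one scoring against every pair.
    def __init__(self, paper_vecs):
        self.P = paper_vecs / np.linalg.norm(paper_vecs, axis=1, keepdims=True)

    def top_k(self, query_vec, k=3):
        q = query_vec / np.linalg.norm(query_vec)
        scores = self.P @ q
        top = np.argpartition(-scores, k - 1)[:k]  # O(n) candidate cut
        return top[np.argsort(-scores[top])]       # sort only the k hits

rng = np.random.default_rng(2)
papers = rng.normal(size=(1000, 64))            # stand-in paper embeddings
index = PaperIndex(papers)
query = papers[42] + 0.01 * rng.normal(size=64)  # context near paper 42
hits = index.top_k(query, k=3)
```

A k-d tree or approximate-nearest-neighbour index would replace the brute-force scan once the collection grows beyond what a single matrix product handles comfortably.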
- Published
- 2024
- Full Text
- View/download PDF
40. Development of Intelligent Fault-Tolerant Control Systems with Machine Learning, Deep Learning, and Transfer Learning Algorithms: A Review.
- Author
-
Amin, Arslan Ahmed, Sajid Iqbal, Muhammad, and Hamza Shahbaz, Muhammad
- Subjects
- *
DEEP learning , *MACHINE learning , *FAULT-tolerant control systems , *INTELLIGENT control systems , *TRANSFER of training , *DATABASES - Abstract
Intelligent Fault-Tolerant Control (IFTC) refers to the applications of machine learning algorithms for fault diagnosis and design of Fault-Tolerant Control (FTC). The overall goal of the FTC is to accommodate defects in the system components while they are in use and maintain stability with little to no performance reduction. These systems are crucial for mission-critical and safety-related applications where the safety of people is at stake and service continuity is crucial. In this review paper, a systematic study has been done for the development of FTC with machine learning, deep learning, and transfer learning algorithms. The challenges and limitations faced with their possible solutions through machine learning theories for the IFTC model are lined up. This paper guides researchers on the different possible types of machine learning algorithms and their advanced forms like deep learning and transfer learning. The differences among these are highlighted by the challenges and limitations of each. The paper is significant such that most of the important literature references from the Scopus database particularly related to important electrical and mechanical industrial problems have been discussed to guide the researchers who want to apply IFTC for specific industrial problems, being the research gap. Finally, future research directions for the development of IFTC are highlighted. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. FedMMD: A Federated weighting algorithm considering Non-IID and Local Model Deviation.
- Author
-
Hu, Kai, Li, Yaogen, Zhang, Shuai, Wu, Jiasheng, Gong, Sheng, Jiang, Shanshan, and Weng, Liguo
- Subjects
- *
ERROR analysis in mathematics , *ALGORITHMS , *MACHINE learning , *ENTROPY - Abstract
Federated learning (FL) is a distributed machine learning method to protect users' privacy and security. It currently faces the following two problems (McMahan et al., 2017): (1) The performance of the global model degrades when dealing with Non-Independent Identically Distributed (Non-IID) data. Other existing classical methods do not have a rigorous theory based on error processing; (2) In the process of global model aggregation, the classical FL algorithm either directly averages local models or solely considers the proportion of local data to assign weights to the models, without accounting for the discrepancies between the local models. In this paper, a federated aggregation algorithm, Federated Maximum Mean Discrepancy (FedMMD), is proposed to address the deviation between the local models and Non-IID data. First of all, this paper utilizes the Dilated Convolution Meet Transformer (DCMT) model for local model feature extraction. This approach aims to capture more feature information and minimize the impact of Non-IID scenarios. Secondly, the learning stability of the local model data participating in the aggregation is compared with Maximum Mean Discrepancy (MMD). The weights of the local models involved in the aggregation are determined using the SKNQ (Student–Keuls–Newman-Q) method and the entropy weight method. The SKNQ method calculates and compares the MMD across multiple clients, while the entropy weight method assigns different weights to clients based on their deviations. On the standard dataset, the FedAvg algorithm (McMahan et al., 2017), the FedProx algorithm (Li et al., 2020) and the FedMMD algorithm are compared. The experimental results demonstrate that the FedMMD algorithm, used in training the global model, not only enhances the accuracy of learning IID and Non-IID data but also improves the generalization capability of the global model. • A DCMT model is proposed to alleviate the model fluctuations caused by Non-IID data.
• An MMD method is proposed to compare local models horizontally before they are aggregated. • The entropy weight method and the SKNQ method are combined to allocate local models' weights when they are fused into the global model. [ABSTRACT FROM AUTHOR]
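The quantity at the heart of FedMMD can be sketched in a few lines: a kernel MMD between two clients' data (a biased V-statistic estimate with an RBF kernel) and a deviation-based weighting. The softmax weighting below is an illustrative stand-in for the paper's SKNQ-plus-entropy scheme, and the data is synthetic:

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    # Squared Maximum Mean Discrepancy with an RBF kernel (biased,
    # V-statistic form): E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def aggregation_weights(mmds):
    # Larger deviation -> smaller aggregation weight; a softmax over
    # negative MMDs is one simple, illustrative choice.
    e = np.exp(-np.asarray(mmds, dtype=float))
    return e / e.sum()

rng = np.random.default_rng(3)
ref = rng.normal(0.0, 1.0, size=(50, 2))      # reference client samples
similar = rng.normal(0.0, 1.0, size=(50, 2))  # IID-looking client
shifted = rng.normal(3.0, 1.0, size=(50, 2))  # Non-IID client
w = aggregation_weights([mmd2_rbf(ref, similar), mmd2_rbf(ref, shifted)])
```

The IID-looking client ends up with the larger weight, which is the qualitative behaviour the abstract describes.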
- Published
- 2024
- Full Text
- View/download PDF
42. Intelligent adaptive lighting algorithm: Integrating reinforcement learning and fuzzy logic for personalized interior lighting.
- Author
-
Vashishtha, Kritika, Saad, Anas, Faieghi, Reza, and Xi, Fengfeng
- Subjects
- *
INTERIOR lighting , *FUZZY logic , *MACHINE learning , *AIRCRAFT cabins , *ALGORITHMS , *INTELLIGENT tutoring systems , *REINFORCEMENT learning , *ONLINE algorithms - Abstract
Lighting requirements are subjective, and one light setting cannot work for all. However, there is little work on developing smart lighting algorithms that can adapt to user preferences. To address this gap, this paper uses fuzzy logic and reinforcement learning to develop an adaptive lighting algorithm. In particular, we develop a baseline fuzzy inference system (FIS) using domain knowledge, generating light recommendations based on a set of intuitive rules. These rules, derived from existing literature, are based on environmental conditions, i.e., the daily glare index, and user information including age, activity, and chronotype. Through a feedback mechanism, the user interacts with the algorithm, correcting the algorithm output to their preferences. We interpret these corrections as rewards to a Q-learning algorithm, which tunes the FIS parameters online to match the user preferences. Q-learning is a model-free learning algorithm that learns to act optimally by interacting with the user and the rewards it receives. This allows the proposed algorithm to work in a model-free manner, effectively handling the uncertainties that might arise from the individualistic preferences of users. To the authors' best knowledge, this algorithm is pioneering work in designing intelligent algorithms for personalized lighting control, featuring several elements of novelty, including the number of environmental and user-related inputs, the continuous control of light intensity as opposed to common on/off control, and the ability to learn user preferences. The algorithm is implemented in a real aircraft cabin and is evaluated in an extensive user study. The implementation results demonstrate that the developed algorithm possesses the capability to learn user preferences while successfully adapting to a wide range of environmental conditions and user characteristics.
This underscores its viability as a potent solution for intelligent light management, featuring advanced learning capabilities. • Intelligent lighting algorithm with the ability to learn from and adapt to user preferences. • A fuzzy inference system tunes lighting based on environment and user traits. • The developed algorithm is tested and verified via an in-depth user study. [ABSTRACT FROM AUTHOR]
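The learning mechanism described, user corrections interpreted as rewards that tune the fuzzy controller, reduces to a standard tabular Q-learning update. The state and action encodings below are invented for illustration, not taken from the paper:

```python
# Standard tabular Q-learning step. In the paper's setting, a "state"
# would bin the inputs (glare index, activity, chronotype, ...) and an
# "action" would be a correction to the fuzzy controller's light output.
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# One illustrative interaction: the user brightened the light, so the
# "dim_less" action in the current state receives reward +1.
Q = {s: {"dim_more": 0.0, "dim_less": 0.0} for s in range(2)}
q_update(Q, state=0, action="dim_less", reward=1.0, next_state=1)
```

Repeated over many interactions, the Q-table comes to favour the corrections a particular user keeps making, which is the personalization effect the study evaluates.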
- Published
- 2024
- Full Text
- View/download PDF
43. Comparative analysis of grid-interactive building control algorithms: From model-based to learning-based approaches.
- Author
-
Biagioni, David, Zhang, Xiangyu, Adcock, Christiane, Sinner, Michael, Graf, Peter, and King, Jennifer
- Subjects
- *
ARTIFICIAL intelligence , *SOFTWARE frameworks , *COMPARATIVE studies , *RESEARCH personnel , *ALGORITHMS , *KNOWLEDGE gap theory , *MACHINE learning - Abstract
Grid-interactive building control poses a critical challenge in the context of grid modernization and decarbonization. Recently, various artificial intelligence-based optimal control approaches have been proposed, providing innovative solutions to optimal building control problems. However, researchers and practitioners are now confronted with a dilemma when selecting appropriate control strategies, ranging from model-based (knowledge-incorporating) to learning-based (data-driven) approaches, and hybrid methods combining them. Although each algorithm in existing literature claims superiority over specific baselines, their performance has never been systematically compared and analyzed, owing to the absence of a unified platform for comprehensive evaluation. To fill this knowledge gap, we identify and implement all state-of-the-art approaches within a modular training and evaluation framework, assessing their efficacy in a grid-interactive building control problem. In this paper, we also introduce a streamlined hybrid method that complements existing hybrid approaches. Our comparative study reveals and quantifies the advantages of hybrid methods: on average, they achieve near-optimal control while requiring less than 14% of the online computation of traditional model-based methods. To achieve this performance, they need as few as 2% of the training samples of purely learning-based methods. Finally, we provide insights into the merits, limitations, and implementation of each method to help researchers better understand the state of the art and future directions. The software framework implemented in this study is open-sourced to facilitate future research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. An efficient skeleton learning approach-based hybrid algorithm for identifying Bayesian network structure.
- Author
-
Wang, Niantai, Liu, Haoran, Zhang, Liyue, Cai, Yanbin, and Shi, Qianrui
- Subjects
- *
BAYESIAN analysis , *BLENDED learning , *MACHINE learning , *SKELETON , *ALGORITHMS , *LEARNING strategies - Abstract
Bayesian network (BN) structure learning is the basis of BN applications and plays a pivotal role in many machine learning tasks. Whereas remarkable progress in structure learning has been achieved in the past, making further improvements in the efficiency and accuracy of structure learning is a significant challenge. In this paper, we propose an efficient skeleton learning approach-based hybrid algorithm (ESLH), which consists of two phases. In the constraint-based phase, a dynamic threshold (DTH) strategy and a skeleton learning method based on triangle breaking (SLTB) are proposed to learn the skeleton of a BN structure efficiently. DTH designs a dynamic threshold to remove redundant edges in the initial skeleton with little time overhead. By the result of DTH, SLTB first finds, tests and breaks triangles in the initial skeleton to efficiently remove redundant edges and then removes the remaining redundant edges to discover the final skeleton. In the score-and-search phase, ESLH employs the hill-climbing algorithm to find the highest-scored structure. We propose a novel strategy to divide this phase into three steps, both utilizing the learned skeleton to constrain the search space and preventing the errors of the learned skeleton from reducing the quality of the final learned structure. Extensive experiments on benchmark BNs validate the effectiveness of DTH and SLTB and demonstrate that ESLH is more than five times faster than the state-of-the-art structure learning algorithms while maintaining the highest average accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Unmanned Aerial Vehicle-enabled grassland restoration with energy-sensitive of trajectory design and restoration areas allocation via a cooperative memetic algorithm.
- Author
-
Jiao, Dongbin, Wang, Lingyu, Yang, Peng, Yang, Weibo, Peng, Yu, Shang, Zhanhuan, and Ren, Fengyuan
- Subjects
- *
GRASSLAND restoration , *TRAVELING salesman problem , *FOREST restoration , *MATHEMATICAL programming , *MACHINE learning , *TECHNOLOGICAL innovations , *ALGORITHMS , *DRONE aircraft , *VERTICALLY rising aircraft - Abstract
Grassland restoration is a crucial method for preventing ecological degradation in grasslands. Unmanned Aerial Vehicles (UAVs) offer a promising solution to reduce extensive human labor and enhance restoration efficiency, given their fully automatic capabilities, yet their full potential remains untapped. This paper advances this emerging technology for planning grassland restoration. We undertake the first attempt to mathematically model the UAV-enabled restoration process as the maximization of restoration areas problem (MRAP). This model considers factors including limited UAV battery energy, grass seed weight, the number of restored areas, and their sizes. The MRAP is a composite problem involving trajectory design and area allocation, which are highly coupled and conflicting. Consequently, it requires solving two NP-hard subproblems simultaneously: the variant Traveling Salesman Problem (TSP) and the Multidimensional Knapsack Problem (MKP). To address this complex problem, we introduce a novel cooperative memetic algorithm. The algorithm integrates an efficient heuristic algorithm, variant population-based incremental learning (PBIL), and a maximum-residual-energy-based local search (MRELS) strategy, referred to as CHAPBILM. The algorithm solves the two subproblems in an interleaved manner by leveraging the interdependencies and inherent knowledge between them. The simulation results demonstrate that CHAPBILM successfully solves the MRAP on multiple instances in a near-optimal way. It also confirms the conflicts between trajectory design and area allocation. The effectiveness of CHAPBILM is further supported by comparisons with traditional optimization methods that do not exploit the interdependencies between the two subproblems. The proposed model and solution have the potential to be extended to other complex optimization problems in ecological protection and precision agriculture.
• The maximization of restoration areas problem is first presented for the UAV-enabled grassland restoration method. • An energy-sensitive mathematical programming model is formulated for the maximization of restoration areas problem under realistic constraints. • A novel cooperative memetic algorithm, CHAPBILM, is explored to effectively solve the maximization of restoration areas problem without ignoring the dependence between the two stages. • The simulation results demonstrate that CHAPBILM performs significantly better than the noncooperative optimization method for the problem, which also confirms the dependency relationship. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Explainable AI Insights for Symbolic Computation: A case study on selecting the variable ordering for cylindrical algebraic decomposition.
- Author
-
Pickering, Lynn, del Río Almajano, Tereso, England, Matthew, and Cohen, Kelly
- Subjects
- *
MACHINE learning , *ARTIFICIAL intelligence , *SYMBOLIC computation , *COMPUTER systems , *ALGORITHMS - Abstract
In recent years there has been increased use of machine learning (ML) techniques within mathematics, including symbolic computation where it may be applied safely to optimise or select algorithms. This paper explores whether using explainable AI (XAI) techniques on such ML models can offer new insight for symbolic computation, inspiring new implementations within computer algebra systems that do not directly call upon AI tools. We present a case study on the use of ML to select the variable ordering for cylindrical algebraic decomposition. It has already been demonstrated that ML can make the choice well, but here we show how the SHAP tool for explainability can be used to inform new heuristics of a size and complexity similar to those human-designed heuristics currently commonly used in symbolic computation. [ABSTRACT FROM AUTHOR]
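To give a flavour of how model explanations can feed back into heuristic design, the sketch below ranks input features with permutation importance, a cheaper, model-agnostic cousin of the SHAP analysis used in the paper. The data and model are synthetic and the function names are invented:

```python
import numpy as np

def permutation_importance(predict, X, y, rng):
    # Increase in mean squared error when one feature column is shuffled:
    # features the model truly relies on score high, unused ones score ~0.
    base = ((predict(X) - y) ** 2).mean()
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores.append(((predict(Xp) - y) ** 2).mean() - base)
    return np.array(scores)

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0]  # only feature 0 matters in this toy target
imp = permutation_importance(lambda A: 3.0 * A[:, 0], X, y, rng)
```

A ranking like this is the raw material for the kind of human-readable heuristic the paper distils from SHAP values: keep the features the model leans on, drop the rest.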
- Published
- 2024
- Full Text
- View/download PDF
48. Enhancing cell delay accuracy in post-placed netlists using ensemble tree-based algorithms.
- Author
-
Attaoui, Yassine, Chentouf, Mohamed, Ismaili, Zine El Abidine Alaoui, and El Mourabit, Aimad
- Subjects
- *
ALGORITHMS , *RANDOM forest algorithms , *INDUSTRIAL design , *TIME delay estimation , *INFORMATION design - Abstract
Nowadays, ASIC design is increasing in complexity, and PPA targets are pushed to the limit. The lack of physical information at the early design stages hinders precise timing predictions and may lead to design re-spins. In previous work, we successfully improved timing prediction at the post-placement stage using the Random Forest model, achieving 91.25% cell delay accuracy. Building upon this, we further investigate the potential of Ensemble Tree-based algorithms, specifically focusing on "Extremely Randomized Trees" and "Gradient Boosting", to close the gap in cell delay accuracy. In this paper, we enrich the training dataset with new 16 nm industrial designs. The results demonstrate a substantial improvement, with an average cell delay accuracy of 92.01% and 84.26% on unseen data. The average Root-Mean-Square-Error is significantly reduced from 12.11 to 3.23 and 7.76 on unseen data. [ABSTRACT FROM AUTHOR]
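Gradient boosting, one of the two ensembles evaluated, can be illustrated with a minimal squared-loss booster over decision stumps. The data is synthetic and the implementation is deliberately naive; production work such as the paper's would use a tuned library implementation over real netlist features:

```python
import numpy as np

def fit_stump(X, residual):
    # Best single-feature threshold split minimizing squared error.
    best = (np.inf, 0, 0.0, 0.0, 0.0)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:  # skip max so both sides are nonempty
            left = X[:, j] <= t
            lv, rv = residual[left].mean(), residual[~left].mean()
            err = ((residual[left] - lv) ** 2).sum() + ((residual[~left] - rv) ** 2).sum()
            if err < best[0]:
                best = (err, j, t, lv, rv)
    return best[1:]

def gradient_boost(X, y, n_rounds=50, lr=0.1):
    # Squared-loss boosting: each stump fits the current residuals and is
    # added with a shrinkage factor lr.
    pred = np.full(len(y), y.mean())
    for _ in range(n_rounds):
        j, t, lv, rv = fit_stump(X, y - pred)
        pred = pred + lr * np.where(X[:, j] <= t, lv, rv)
    return pred

rng = np.random.default_rng(5)
X = rng.uniform(size=(200, 3))
y = np.where(X[:, 0] > 0.5, 2.0, -1.0)  # a step function of feature 0
pred = gradient_boost(X, y)
rmse = float(np.sqrt(((pred - y) ** 2).mean()))
```

Each round shrinks the remaining residual, so the training RMSE decays geometrically on this separable toy target; delay prediction adds noise and far richer features, but the mechanics are the same.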
- Published
- 2024
- Full Text
- View/download PDF
49. Knowledge graph-based image classification.
- Author
-
Mbiaya, Franck Anaël, Vrain, Christel, Ros, Frédéric, Dao, Thi-Bich-Hanh, and Lucas, Yves
- Subjects
- *
IMAGE recognition (Computer vision) , *MACHINE learning , *DEEP learning , *KNOWLEDGE graphs , *IMAGE databases , *ALGORITHMS - Abstract
This paper introduces a deep learning method for image classification that leverages knowledge formalized as a graph created from information represented by attribute/value pairs. The proposed method investigates a loss function that adaptively combines the classical cross-entropy commonly used in deep learning with a novel penalty function. The novel loss function is derived from the representation of nodes after embedding the knowledge graph and incorporates the proximity between class and image nodes. Its formulation enables the model to focus on identifying the boundary between the most challenging classes to distinguish. Experimental results on several image databases demonstrate improved performance compared to state-of-the-art methods, including classical deep learning algorithms and recent algorithms that incorporate knowledge represented by a graph. [ABSTRACT FROM AUTHOR]
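The shape of such a loss, cross-entropy plus a graph-proximity penalty, can be sketched in numpy. This is a simplified stand-in: the paper's penalty and its adaptive combination are more involved, and the embeddings below are toy values:

```python
import numpy as np

def kg_loss(logits, labels, img_emb, class_emb, lam=0.1):
    # Cross-entropy plus a proximity penalty pulling each image embedding
    # toward its class node in the knowledge-graph embedding space.
    z = logits - logits.max(axis=1, keepdims=True)       # stable log-softmax
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_p[np.arange(len(labels)), labels].mean()
    penalty = ((img_emb - class_emb[labels]) ** 2).sum(axis=1).mean()
    return ce + lam * penalty

class_emb = np.array([[0.0, 0.0], [1.0, 1.0]])   # toy class-node embeddings
logits = np.array([[8.0, 0.0], [0.0, 8.0]])      # confident, correct logits
labels = np.array([0, 1])
aligned = kg_loss(logits, labels, class_emb[labels], class_emb)
offset = kg_loss(logits, labels, class_emb[labels] + 1.0, class_emb)
```

Images whose embeddings sit on their class node pay only the cross-entropy; drifting away from the class node adds the proximity penalty, which is what pushes the model to sharpen the boundary between confusable classes.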
- Published
- 2024
- Full Text
- View/download PDF
50. Computationally efficient solution of mixed integer model predictive control problems via machine learning aided Benders Decomposition.
- Author
-
Mitrai, Ilias and Daoutidis, Prodromos
- Subjects
- *
MACHINE learning , *PREDICTION models , *CHEMICAL processes , *ALGORITHMS , *DIOPHANTINE equations - Abstract
Mixed integer Model Predictive Control (MPC) problems arise in the operation of systems where discrete and continuous decisions must be taken simultaneously to compensate for disturbances. The efficient solution of mixed integer MPC problems requires the computationally efficient online solution of mixed integer optimization problems, which are generally difficult to solve. In this paper, we propose a machine learning-based branch and check Generalized Benders Decomposition algorithm for the efficient solution of such problems. We use machine learning to approximate the effect of the complicating variables on the subproblem by approximating the Benders cuts without solving the subproblem, therefore alleviating the need to solve the subproblem multiple times. The proposed approach is applied to a mixed integer economic MPC case study on the operation of chemical processes. We show that the proposed algorithm always finds feasible solutions to the optimization problem, given that the mixed integer MPC problem is feasible, and leads to a significant reduction in solution time (up to 97%, or 50×) while incurring small error (in the order of 1%) compared to the application of standard and accelerated Generalized Benders Decomposition. • A machine learning-aided branch and check algorithm is proposed for mixed integer MPC • Benders cuts are approximated via machine learning-based surrogate models • The returned solution is feasible, given that the optimization problem is feasible • Case studies highlight the computational efficiency of the proposed algorithm [ABSTRACT FROM AUTHOR]
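The cut-approximation idea can be sketched by fitting a cheap surrogate of the subproblem value function and reading Benders-style cuts off its slope. A linear least-squares surrogate is used here purely for illustration; the paper's learned model and problem data are not reproduced:

```python
import numpy as np

def fit_value_surrogate(Y, v):
    # Least-squares linear surrogate v(y) ~= w.y + b of the Benders
    # subproblem value; its slope doubles as an approximate subgradient.
    A = np.hstack([Y, np.ones((len(Y), 1))])
    coef, *_ = np.linalg.lstsq(A, v, rcond=None)
    return coef[:-1], coef[-1]

def approx_cut(w, b, y_star):
    # Benders-style optimality cut at y*: theta >= (w.y* + b) + w.(y - y*),
    # generated without re-solving the subproblem.
    return float(w @ y_star + b), w

rng = np.random.default_rng(6)
Y = rng.uniform(size=(100, 2))           # sampled complicating variables
v = 2.0 * Y[:, 0] + 3.0 * Y[:, 1] + 1.0  # pretend subproblem optimal values
w, b = fit_value_surrogate(Y, v)
intercept, slope = approx_cut(w, b, np.array([0.5, 0.5]))
```

In the real algorithm these approximate cuts are added inside branch and check, with exact subproblem solves reserved for certifying the feasibility of the returned solution.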
- Published
- 2024
- Full Text
- View/download PDF