102 results
Search Results
2. Anomaly Detection in Blockchain Networks Using Unsupervised Learning: A Survey.
- Author
- Cholevas, Christos, Angeli, Eftychia, Sereti, Zacharoula, Mavrikos, Emmanouil, and Tsekouras, George E.
- Subjects
- DATA structures, MACHINE learning, PRIVATE networks, BLOCKCHAINS, ALGORITHMS
- Abstract
In decentralized systems, heightened security and integrity within blockchain networks become pressing concerns. This survey investigates anomaly detection techniques in blockchain ecosystems through the lens of unsupervised learning, examining state-of-the-art algorithms that discern deviations from normal patterns of behavior. It reviews the relationship between unsupervised learning and anomaly detection by categorizing the algorithms applied to the variety of problems in this field. We propose that the use of unsupervised algorithms in blockchain anomaly detection should be viewed not only as an implementation procedure but also as an integration procedure, in which the merits of these algorithms can effectively be combined in ways determined by the problem at hand. In that sense, the main contribution of this paper is a thorough study of the interplay between various unsupervised learning algorithms and how it can be used to confront malicious activities and behaviors within public and private blockchain networks. The result is the definition of three categories, each characterized by the way the respective integration takes place. Because the structure of the data plays a pivotal role in unsupervised learning, this paper also provides an in-depth presentation of the data structures commonly used in unsupervised learning-based blockchain anomaly detection. This analysis is framed by a presentation of the typical anomalies that have occurred so far, along with a description of the general machine learning frameworks developed to deal with them. Finally, the paper spotlights challenges and directions that can serve as a comprehensive compendium for future research efforts. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
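The survey above stays at the conceptual level; as a minimal illustration of the unsupervised idea of flagging deviations from normal patterns, the sketch below applies a plain z-score detector to hypothetical per-transaction features. The features (value, fee, input count) are made up for illustration, and this is far simpler than the clustering- and autoencoder-style algorithms such surveys cover.

```python
import numpy as np

def zscore_anomalies(X, threshold=3.0):
    """Flag rows whose largest per-feature z-score exceeds the threshold.

    A deliberately simple distance-from-the-mean detector, standing in
    for the richer unsupervised families the survey categorizes.
    """
    mu = X.mean(axis=0)
    sigma = X.std(axis=0) + 1e-12          # avoid division by zero
    z = np.abs((X - mu) / sigma)
    return np.where(z.max(axis=1) > threshold)[0]

# Hypothetical per-transaction features: [value, fee, n_inputs]
rng = np.random.default_rng(0)
X = rng.normal(loc=[1.0, 0.01, 2.0], scale=[0.2, 0.002, 0.5], size=(500, 3))
X[42] = [25.0, 0.5, 40.0]                  # inject one obvious outlier
print(zscore_anomalies(X))                 # index 42 should be flagged
```

Real blockchain pipelines would replace the Gaussian toy data with features extracted from transaction graphs.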
3. Cognitive decline assessment using semantic linguistic content and transformer deep learning architecture.
- Author
- PL, Rini and KS, Gayathri
- Subjects
- DIAGNOSIS of dementia, COGNITION disorders diagnosis, SPEECH evaluation, CROSS-sectional method, PREDICTION models, TASK performance, DESCRIPTIVE statistics, NATURAL language processing, LINGUISTICS, EXPERIMENTAL design, DEEP learning, COMPUTER-aided diagnosis, LATENT semantic analysis, NEUROPSYCHOLOGICAL tests, RESEARCH, SEMANTIC memory, EARLY diagnosis, COMPARATIVE studies, MACHINE learning, FACTOR analysis, ALGORITHMS, DEMENTIA patients
- Abstract
Background: Dementia is a cognitive decline that leads to the progressive deterioration of an individual's ability to perform daily activities independently. As a result, a considerable amount of time and resources is spent on caretaking. Early detection of dementia can significantly reduce the effort and resources needed for caretaking. Aims: This research proposes an approach for assessing cognitive decline by analysing speech data, specifically focusing on speech relevance as a crucial indicator for memory recall. Methods & Procedures: This is a cross-sectional, online, self-administered study. The proposed method used a deep learning architecture based on transformers, with BERT (Bidirectional Encoder Representations from Transformers) and Sentence-Transformer models deriving encoded representations of speech transcripts. These representations provide contextually descriptive information that is used to analyse the relevance of sentences in their respective contexts. The encoded information is then compared using cosine similarity metrics to measure the relevance of uttered sequences of sentences. The study uses the Pitt Corpus Dementia dataset for experimentation, which consists of speech data from individuals with and without dementia. The accuracy of the proposed multi-QA-MPNet (Multi-Query Maximum Inner Product Search Pretraining) model is compared with other pretrained transformer models of Sentence-Transformer. Outcomes & Results: The results show that the proposed approach outperforms the other models in capturing context-level information, particularly semantic memory. Additionally, the study explores the suitability of different similarity measures for evaluating the relevance of uttered sequences of sentences. The experimentation reveals that cosine similarity is the most appropriate measure for this task. 
Conclusions & Implications: This finding has significant implications for identifying the early warning signs of dementia, as it suggests that cosine similarity metrics can effectively capture the semantic relevance of spoken language. Persistent cognitive decline over time acts as one of the indicators of the prevalence of dementia. Additionally, early dementia could be recognised by analysis of other modalities such as speech and brain images. WHAT THIS PAPER ADDS: What is already known on this subject: It is already known that speech- and language-based detection methods can be useful for dementia diagnosis, as language difficulties are often early signs of the disease. Additionally, deep learning algorithms have shown promise in detecting and diagnosing dementia through analysing large datasets, particularly in speech- and language-based detection methods. However, further research is needed to validate the performance of these algorithms on larger and more diverse datasets and to address potential biases and limitations. What this paper adds to existing knowledge: This study presents a unique and effective approach to cognitive decline assessment through analysing speech data. The study provides valuable insights into the importance of context and semantic memory in accurately detecting potential dementia and demonstrates the applicability of deep learning models for this purpose. The findings of this study have important clinical implications and can inform future research and development in the field of dementia detection and care. What are the potential or actual clinical implications of this work?: The proposed approach to cognitive decline assessment using speech data and deep learning models has significant clinical implications. It has the potential to improve the accuracy and efficiency of dementia diagnosis, leading to earlier detection and more effective treatments, which can improve patient outcomes and quality of life. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
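The abstract above measures sentence relevance by comparing transformer embeddings with cosine similarity. A minimal sketch of that comparison step, using made-up three-dimensional vectors in place of real Sentence-Transformer embeddings (a real pipeline would obtain high-dimensional vectors from a model such as multi-qa-mpnet):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical sentence embeddings, not outputs of an actual model.
current  = np.array([0.2, 0.9, 0.1])    # the sentence being assessed
context  = np.array([0.25, 0.85, 0.05]) # embedding of its context
offtopic = np.array([0.9, -0.1, 0.4])   # an unrelated utterance

print(cosine_similarity(current, context))   # close to 1: on-topic
print(cosine_similarity(current, offtopic))  # much lower: off-topic
```

Scoring each uttered sentence against its context this way is what lets the study quantify "relevance" as a continuous signal.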
4. Channel Prediction for Underwater Acoustic Communication: A Review and Performance Evaluation of Algorithms.
- Author
- Liu, Haotian, Ma, Lu, Wang, Zhaohui, and Qiao, Gang
- Subjects
- DEEP learning, UNDERWATER acoustic communication, MACHINE learning, ALGORITHMS, TELECOMMUNICATION systems, FORECASTING
- Abstract
Underwater acoustic (UWA) channel prediction technology, an important topic in UWA communication, has played an important role in UWA adaptive communication networks and underwater target perception. Although many significant advancements have been achieved in underwater acoustic channel prediction over the years, a comprehensive summary and introduction are still lacking. As the first comprehensive overview of UWA channel prediction, this paper introduces past works and algorithm implementation methods of channel prediction from the perspective of linear, kernel-based, and deep learning approaches. Importantly, based on available at-sea experiment datasets, this paper compares the performance of the current primary UWA channel prediction algorithms under a unified system framework, providing researchers with a comprehensive and objective understanding of UWA channel prediction. Finally, it discusses the directions and challenges for future research. The survey finds that linear prediction algorithms are the most widely applied, and that deep learning, as the most advanced type of algorithm, has moved this field into a new stage. The experimental results show that the linear algorithms have the lowest computational complexity, and that when the training samples are sufficient, deep learning algorithms have the best prediction performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
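Since the survey finds linear prediction the most widely applied family, a least-squares autoregressive predictor gives a feel for that baseline. The toy sinusoidal "channel" below stands in for real at-sea channel measurements, which is purely an illustrative assumption:

```python
import numpy as np

def fit_linear_predictor(series, order):
    """Least-squares AR(order) coefficients: x[t] ~ sum_k c[k] * x[t-1-k]."""
    rows = [series[t - order:t][::-1] for t in range(order, len(series))]
    A = np.array(rows)
    y = series[order:]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_next(series, coef):
    """One-step-ahead prediction from the most recent `order` samples."""
    order = len(coef)
    return float(np.dot(coef, series[-order:][::-1]))

# Toy slowly varying "channel gain"; real UWA taps would come from sea trials.
t = np.arange(200)
x = np.sin(2 * np.pi * t / 50)
coef = fit_linear_predictor(x[:150], order=4)
pred = predict_next(x[:150], coef)
print(pred, x[150])   # the prediction should track the true next sample
```

A pure sinusoid satisfies an exact two-term linear recurrence, so the AR(4) fit recovers it almost perfectly; real channels add noise and time variation, which is where kernel and deep methods enter.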
5. VIS-SLAM: A Real-Time Dynamic SLAM Algorithm Based on the Fusion of Visual, Inertial, and Semantic Information.
- Author
- Wang, Yinglong, Liu, Xiaoxiong, Zhao, Minkun, and Xu, Xinlong
- Subjects
- MOBILE robots, MACHINE learning, MOBILE learning, DEEP learning, ALGORITHMS, INFORMATION measurement, PROBABILITY theory, GEOMETRY
- Abstract
To ensure accurate autonomous localization of mobile robots in environments with dynamic objects, and to address the limited real-time performance of deep learning algorithms and the poor robustness of purely visual geometric algorithms, this paper presents a deep learning-based Visual Inertial SLAM technique. Firstly, a non-blocking model is designed to extract semantic information from images. Then, a motion probability hierarchy model is proposed to obtain prior motion probabilities of feature points. For image frames without semantic information, a motion probability propagation model is designed to determine the prior motion probabilities of feature points. Furthermore, considering that the output of inertial measurements is unaffected by dynamic objects, this paper integrates inertial measurement information to improve the estimation accuracy of feature point motion probabilities. An adaptive threshold-based motion probability estimation method is proposed, and finally, positioning accuracy is enhanced by eliminating feature points with excessively high motion probabilities. Experimental results demonstrate that the proposed algorithm achieves accurate localization in dynamic environments while maintaining real-time performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. An Algorithm of Complete Coverage Path Planning for Deep‐Sea Mining Vehicle Clusters Based on Reinforcement Learning.
- Author
- Xing, Bowen, Wang, Xiao, and Liu, Zhenchong
- Subjects
- DEEP reinforcement learning, MACHINE learning, OCEAN mining, ALGORITHMS
- Abstract
This paper proposes a deep reinforcement learning algorithm to achieve complete coverage path planning for deep‐sea mining vehicle clusters. First, the mining vehicles and the deep‐sea mining environment are modeled. Then, this paper implements a series of algorithm designs and optimizations based on Deep Q Networks (DQN). The map fusion mechanism can integrate the grid matrix data from multiple mining vehicles to get the state matrix of the complete environment. In this paper, a preprocessing method for the state matrix is also designed to provide suitable training data for the neural network. The reward function and action selection mechanism of the algorithm are also optimized according to the requirements of cluster cooperative operation. Furthermore, the algorithm uses distance constraints to prevent the entanglement of underwater hoses. To improve the training efficiency of the neural network, the algorithm filters and extracts training samples for training through the sample quality score. Considering the requirement of cluster complete coverage mission, this paper introduces Long Short‐Term Memory (LSTM) based on the neural network to achieve a better training effect. After completing the above optimization and design, the algorithm proposed in this paper is verified through simulation experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
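The DQN-with-LSTM, multi-vehicle design described above is too large to reproduce here, but its core reward idea (reward newly covered cells, penalize revisits and wall bumps) can be sketched with single-agent tabular Q-learning on a tiny 2x2 grid. The grid size, reward values, and hyperparameters below are all illustrative choices, not the paper's:

```python
import random

SIZE = 2
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(pos, visited, a):
    """Coverage-style reward: +1 for a new cell, -0.1 revisit, -1 wall bump."""
    x, y = pos[0] + a[0], pos[1] + a[1]
    if not (0 <= x < SIZE and 0 <= y < SIZE):
        return pos, -1.0
    return (x, y), (1.0 if (x, y) not in visited else -0.1)

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning over (position, visited-set) states."""
    random.seed(0)
    Q = {}
    for _ in range(episodes):
        pos, visited = (0, 0), {(0, 0)}
        for _ in range(12):
            s = (pos, frozenset(visited))
            if random.random() < eps:
                ai = random.randrange(len(ACTIONS))
            else:
                ai = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
            nxt, r = step(pos, visited, ACTIONS[ai])
            visited.add(nxt)
            s2 = (nxt, frozenset(visited))
            best = max(Q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
            q = Q.get((s, ai), 0.0)
            Q[(s, ai)] = q + alpha * (r + gamma * best - q)
            pos = nxt
    return Q

def greedy_coverage(Q, steps=12):
    """Roll out the greedy policy and count distinct cells covered."""
    pos, visited = (0, 0), {(0, 0)}
    for _ in range(steps):
        s = (pos, frozenset(visited))
        ai = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
        pos, _ = step(pos, visited, ACTIONS[ai])
        visited.add(pos)
    return len(visited)

print(greedy_coverage(train()))   # ideally all 4 cells of the 2x2 grid
```

The paper replaces the Q-table with a DQN (plus LSTM) precisely because the (position, visited-map) state space explodes on realistic seabed grids.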
7. Research on health monitoring and damage recognition algorithm of building structures based on image processing.
- Author
- Tang, Sicong and Wang, Hailong
- Subjects
- IMAGE processing, MACHINE learning, PARAMETER identification, NOISE control, ALGORITHMS, IMAGE encryption, DIGITAL images
- Abstract
As urbanization deepens and science and technology progress, people transform and develop nature on an ever-larger scale, the most iconic transformation being the wide variety of building structures they erect. With the passage of time, building structures exposed to wind and sun year after year show signs of "illness" which, if not treated promptly, can severely affect their stability and safety. On this basis, and according to the characteristics of crack identification on concrete surfaces, this paper selects a background subtraction algorithm for image noise reduction. Through the three steps of digital image noise reduction, crack extraction, and crack parameter identification, quantitative crack recognition is completed and a complete crack parameter identification system is formed. The experimental results show that the machine learning model for building structure health monitoring and damage recognition proposed in this paper has excellent statistical performance, and the relative error of recognition can be controlled within 10%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
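Background subtraction, the noise-reduction step chosen in the abstract above, reduces to thresholding the difference between a frame and a background image. The sketch below uses tiny synthetic grayscale arrays rather than real concrete-surface photographs, and the threshold value is an arbitrary illustrative choice:

```python
import numpy as np

def subtract_background(frame, background, threshold=30):
    """Return a binary mask of pixels that differ from the background.

    Cast to a signed type before subtracting so uint8 arithmetic
    cannot wrap around.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

background = np.full((5, 5), 120, dtype=np.uint8)   # uniform surface
frame = background.copy()
frame[2, 1:4] = 40                                  # dark "crack" pixels
mask = subtract_background(frame, background)
print(mask.sum())   # 3 pixels flagged as crack candidates
```

In the paper's pipeline, a mask like this would feed the subsequent crack extraction and parameter identification stages.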
8. Classification of high-dimensional imbalanced biomedical data based on spectral clustering SMOTE and marine predators algorithm.
- Author
- Qin, Xiwen, Zhang, Siqi, Dong, Xiaogang, Shi, Hongyu, and Yuan, Liping
- Subjects
- LINEAR operators, CLASSIFICATION, ALGORITHMS, LEARNING strategies, FEATURE selection, LOTKA-Volterra equations, MACHINE learning, RANDOM forest algorithms
- Abstract
The research of biomedical data is crucial for disease diagnosis, health management, and medicine development. However, biomedical data are usually characterized by high dimensionality and class imbalance, which increase computational cost and affect the classification performance of the minority class, making accurate classification difficult. In this paper, we propose a biomedical data classification method based on feature selection and data resampling. First, the minimal-redundancy maximal-relevance (mRMR) method is used to select biomedical data features, reducing the feature dimension and computational cost and improving generalization ability. Then, a new SMOTE oversampling method (Spectral-SMOTE) is proposed, which solves the noise sensitivity problem of SMOTE through an improved spectral clustering method. Finally, the marine predators algorithm is improved using piecewise linear chaotic maps and a random opposition-based learning strategy to improve its optimization-seeking ability and convergence speed, and the key parameters of Spectral-SMOTE are optimized using the improved marine predators algorithm, which effectively improves the performance of the oversampling approach. Five real biomedical datasets are selected to test and evaluate the proposed method using four classifiers, and three evaluation metrics are used to compare it with seven data resampling methods. The experimental results show that the method effectively improves the classification performance of biomedical data. Statistical test results also show that the proposed PRMPA-Spectral-SMOTE method outperforms the other data resampling methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
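Plain SMOTE, the method the abstract's Spectral-SMOTE builds on, creates synthetic minority samples by interpolating between a minority point and one of its nearest minority neighbors. A minimal sketch, without the spectral clustering or marine-predators parameter tuning the paper adds:

```python
import numpy as np

def smote_samples(minority, n_new, k=3, seed=0):
    """SMOTE-style interpolation: x_new = x + u * (neighbor - x), u in [0, 1)."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        d = np.linalg.norm(minority - x, axis=1)  # distances to all points
        d[i] = np.inf                             # exclude the point itself
        neighbor = minority[rng.choice(np.argsort(d)[:k])]
        out.append(x + rng.random() * (neighbor - x))
    return np.array(out)

# Toy 2-D minority class at the corners of the unit square.
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new = smote_samples(minority, n_new=5)
print(new.shape)   # (5, 2); every sample lies between two minority points
```

Interpolating only between minority neighbors is exactly the behavior that becomes noise-sensitive when outliers are present, which motivates the paper's spectral clustering refinement.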
9. Data understanding and preparation in business domain: Importance of meta-features characterization.
- Author
- Oreški, Dijana and Pihir, Igor
- Subjects
- MACHINE learning, ALGORITHMS, DEEP learning, EXPERTISE
- Abstract
Various machine learning algorithms are developed with the aim of creating precise and trustworthy models and extracting knowledge from data sources. Choosing the right algorithm for a specific dataset is a challenging task that requires deep expertise in machine learning: no single algorithm outperforms all others across all applications and datasets. The difficulty of choosing an appropriate algorithm for a specific task in a specific domain is related to the properties of the dataset, which are measured through meta-features. Meta-features describe a task and can help explain why one machine learning approach outperforms others on a given dataset. Meta-learning, or learning about the effectiveness of learning algorithms, was developed to deal with this issue. Focused work is required here because previous research has not successfully identified meta-features in particular domains. In this research, we have evaluated various meta-feature characterization methodologies and have concentrated on basic meta-features. Business domain data is the focus of this paper. We computed basic (general) meta-features and illustrated several use cases for their application. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
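Basic (general) meta-features of the kind the paper computes can be illustrated directly. The selection below (instance count, feature count, class count, class entropy) is an assumption about which basic measures are meant; full meta-feature suites also include statistical and information-theoretic measures:

```python
import math
from collections import Counter

def basic_meta_features(rows, labels):
    """A few basic (general) meta-features of a labeled dataset."""
    counts = Counter(labels)
    n = len(labels)
    # Shannon entropy of the class distribution, in bits.
    class_entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {
        "n_instances": n,
        "n_features": len(rows[0]),
        "n_classes": len(counts),
        "class_entropy": class_entropy,
    }

# Tiny made-up business dataset: two numeric features, two classes.
rows = [[5.1, 3.5], [4.9, 3.0], [6.2, 2.9], [5.9, 3.0]]
labels = ["a", "a", "b", "b"]
print(basic_meta_features(rows, labels))
```

In meta-learning, vectors like this one characterize each dataset so that algorithm performance can be related to dataset properties.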
10. A modified fuzzy K-nearest neighbor using sine cosine algorithm for two-classes and multi-classes datasets.
- Author
- Zheng, Chengfeng, Kasihmuddin, Mohd Shareduwan Mohd, Mansor, Mohd. Asyraf, Jamaludin, Siti Zulaikha Mohd, and Zamri, Nur Ezlin
- Subjects
- K-nearest neighbor classification, MACHINE learning, ALGORITHMS, COSINE function
- Abstract
The sine and cosine algorithm has become a widely researched swarm optimization method in recent years due to its simplicity and effectiveness. Building on these advantages, the study in this paper delves deeper into the key parameters that influence the algorithm's performance and implements modifications, such as integrating a reverse learning algorithm and adding an elite opposition solution, to create the modified Sine and Cosine Algorithm (modified SCA). Furthermore, by combining the fuzzy k-nearest neighbor method with the modified SCA, the study simulates numeric datasets with two or multiple classes and analyzes the results. The accuracy rate (ACC) achieved by the modified SCA-FKNN is compared with that of other models, with comparison results and tables presented for each. The modified SCA-FKNN proposed in this paper has a clear advantage in accuracy rate (ACC). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
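For reference, the baseline sine cosine algorithm position update (without the paper's reverse learning or elite opposition modifications) looks like the following. The sphere function is a stand-in objective, and all parameter values are the commonly used defaults rather than the paper's settings:

```python
import math
import random

def sca_minimize(f, dim, iters=200, pop=20, lo=-5.0, hi=5.0, a=2.0, seed=0):
    """Baseline sine cosine algorithm: candidates oscillate around the
    best-so-far solution via sine/cosine terms whose amplitude r1 decays."""
    random.seed(seed)
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=f)[:]
    for t in range(iters):
        r1 = a - t * a / iters          # exploration -> exploitation
        for x in X:
            for j in range(dim):
                r2 = random.uniform(0, 2 * math.pi)
                r3 = random.uniform(0, 2)
                if random.random() < 0.5:
                    x[j] += r1 * math.sin(r2) * abs(r3 * best[j] - x[j])
                else:
                    x[j] += r1 * math.cos(r2) * abs(r3 * best[j] - x[j])
                x[j] = max(lo, min(hi, x[j]))  # keep within bounds
            if f(x) < f(best):
                best = x[:]
    return best, f(best)

sphere = lambda v: sum(c * c for c in v)
best, val = sca_minimize(sphere, dim=3)
print(val)   # should be close to the optimum at 0
```

In the paper, an optimizer of this kind tunes FKNN rather than a benchmark function, with the candidate vector encoding the classifier's parameters.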
11. Comparative analysis of algorithms used for Twitter spam drift detection.
- Author
- Thomas, Libina, Nirvinda, Mona, Mounika, Lalitha, and Hulipalled, Vishwanath
- Subjects
- SPAM email, ALGORITHMS, COMPARATIVE studies, SOCIAL networks, SOCIAL interaction, MACHINE learning
- Abstract
Twitter is known to be one of the most familiar social networking platforms these days, among many others, with a great deal of user engagement. This microblogging site encourages social interaction, allowing users to stay up to date on the latest news and events and share them with others in real time. Tweets are limited to 280 characters and may include links to related websites and tools. With such wide reach, the platform is prone to negative targeting, and spam is one form of it. Spammers use the platform to display malicious content that is inappropriate and harmful to users worldwide. Machine learning offers various approaches that can be used to detect and overcome spam. However, with the advent of recent technologies it has been observed that the properties of tweets vary over time, making spam difficult to detect and leading to the "Twitter Spam Drift" problem. This paper reviews the papers published since 2018 that have focused on the spam drift problem and gives a comparative analysis of the different algorithms applied to various datasets to tackle the problem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Performance analysis of ensemble learning algorithms in intrusion detection systems: A survey.
- Author
- Anitha and Gandhi, Rajiv
- Subjects
- MACHINE learning, INTRUSION detection systems (Computer security), COMPUTER systems, INTERNET security, ALGORITHMS
- Abstract
The quick development of technology not only makes life easier but also raises several security concerns, so cyber security has become a very important and vital research area, indeed an inevitable part of computer systems. Still, considerable research is being done on the development of effective intrusion detection systems (IDS). An IDS monitors for suspicious network activities and is used to identify many types of malicious actions that can undermine a computer system's protection and confidence. Recently, ensemble algorithms have been applied in IDS in order to identify and classify security threats. In this paper the authors present a brief review of the various ensemble learning algorithms in ML that are most frequently used in IDS across several applications, with specific interest in datasets and metrics. Through this broad study and investigation of the current literature, the gaps to be addressed in improving and creating efficient IDS can be determined. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. A review on kidney tumor segmentation and detection using different artificial intelligence algorithms.
- Author
- Patel, Vinitkumar Vasantbhai and Yadav, Arvind R.
- Subjects
- ARTIFICIAL intelligence, KIDNEY tumors, ALGORITHMS, DEEP learning, DATA warehousing, MACHINE learning
- Abstract
The kidney is one of the most significant organs in the human body: it filters the blood, balances fluids, removes waste, and maintains electrolyte and hormone levels. Any disorder or dysfunction of the kidney therefore needs to be detected in time in order to preserve life. Kidney tumor segmentation in the medical field is a critical task, and many conventional methods have been employed for early prediction of kidney abnormalities, but with limitations such as high cost and extended time for computation and analysis of huge amounts of data. Due to such problems, the prediction rate and accuracy have been reduced considerably. To overcome these challenges, Artificial Intelligence (AI) technology has penetrated the field of medicine, particularly the renal department. The evolution of AI in kidney therapies improves the diagnostic process through several Machine Learning (ML) and Deep Learning (DL) algorithms, which have the capability of learning from massive data and applying that learning to differentiate between circumstances. The storage of larger datasets and AI-assisted segmentation are highly helpful for analyzing the occurrence of the disease, and AI algorithms have predicted the severity of tumor stages with good accuracy. Hence, this paper provides a critical review of the different AI-based algorithms used in kidney tumor prognostication. Their numerous benefits in the field of segmentation have been surveyed from existing works, providing an insight into the contribution of AI to kidney disease prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. A term extraction algorithm based on machine learning and comprehensive feature strategy.
- Author
- Gong, Xiuliang, Cheng, Bo, Hu, Xiaomei, and Bo, Wen
- Subjects
- MACHINE learning, NATURAL language processing, ALGORITHMS, RANDOM fields, ONTOLOGIES (Information retrieval), DATABASES, MACHINE translating
- Abstract
Manual term extraction matches its literal meaning: a translator browses text, classifies words, and prepares for translation. Terminology, as a centralized carrier of expertise, dynamically reflects the development and evolution of an industry through its creation, popularization, and disappearance. Automatic term extraction is a key technology for creating a professional terminology database, and it is also a key topic in the field of natural language processing. The purpose of this paper is to study a term extraction algorithm based on machine learning and a comprehensive feature strategy. Focusing on the poor generality and single statistical features of current term extraction algorithms, this paper proposes an improved domain ontology term extraction algorithm based on a comprehensive feature strategy. Moreover, automatic term extraction experiments based on a word-based maximum entropy model and a machine learning-based conditional random field model are conducted; the word-based conditional random field model outperforms the maximum entropy model. The experimental results show that the algorithm based on the comprehensive feature strategy improves accuracy by 8.6% compared with the TF-IDF algorithm and the C-value term extraction algorithm. The algorithm can be used to effectively extract the terms in a text and has good generality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
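TF-IDF, one of the baselines the abstract compares against, scores a term by its in-document frequency weighted by its rarity across documents. A minimal sketch over toy token lists (real term extraction would first tokenize and filter candidate phrases):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Per-document TF-IDF scores: tf(t, d) * log(N / df(t))."""
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(t for doc in docs for t in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        scores.append({t: (c / total) * math.log(n / df[t]) for t, c in tf.items()})
    return scores

docs = [
    ["ontology", "term", "extraction", "term"],
    ["machine", "learning", "model"],
    ["term", "frequency", "statistics"],
]
s = tf_idf(docs)
print(max(s[0], key=s[0].get))   # the most distinctive term of doc 0
```

Note how "term", despite being the most frequent token in doc 0, is down-weighted because it also appears in doc 2; single-statistic behavior like this is what the comprehensive feature strategy aims to improve on.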
15. Machine learning techniques for emotion detection and sentiment analysis: current state, challenges, and future directions.
- Author
- Alslaity, Alaa and Orji, Rita
- Subjects
- SENTIMENT analysis, DEEP learning, USER interfaces, MACHINE learning, TREATMENT effectiveness, BEHAVIORAL objectives (Education), COMPARATIVE studies, COMMUNICATION, FACTOR analysis, RESEARCH funding, EMOTIONS, THEMATIC analysis, BEHAVIOR modification, ALGORITHMS, RESEARCH evaluation
- Abstract
Emotion detection and sentiment analysis techniques are used to understand the polarity or emotions expressed by people in many settings, especially during the use of interactive systems. Recognizing users' emotions is an important topic for human–computer interaction: computers that recognize emotions can provide more natural interactions, and emotion detection helps design human-centred systems that deliver adaptable behaviour-change interventions based on users' emotions. The growing capability of machine learning to analyze big data and extract the emotions therein has led to a surge of research in this domain. With this increased attention, it becomes essential to investigate this research area and provide a comprehensive review of its current state. In this paper, we conduct a systematic review of 123 papers on machine learning-based emotion detection to investigate research trends along many themes, including machine learning approaches, application domain, data, evaluation, and outcome. The results demonstrate that: 1) interest in this domain is increasing; 2) supervised machine learning algorithms (namely, SVM and Naïve Bayes) are the most popular; 3) text datasets in the English language are the most common data source; and 4) most studies use accuracy to evaluate performance. Based on the findings, we suggest future directions and recommendations for developing human-centred systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Predicting Money Laundering Using Machine Learning and Artificial Neural Networks Algorithms in Banks.
- Author
- Lokanan, Mark E.
- Subjects
- ARTIFICIAL neural networks, MONEY laundering, MACHINE learning, ALGORITHMS, RANDOM forest algorithms
- Abstract
This paper aims to build machine learning and neural network models to detect the probability of money laundering in banks. The paper's data came from a simulation of actual transactions flagged for money laundering in Middle Eastern banks. The main findings highlight that criminal networks mainly use the integration stage to integrate money into the financial system. Fraudsters prefer to launder funds in the early hours and morning, followed by the afternoon intervals of the business day. Additionally, the Naïve Bayes and Random Forest classifiers were identified as the two best-performing models for predicting bank money laundering transactions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Early Breast Cancer Risk Assessment: Integrating Histopathology with Artificial Intelligence.
- Author
- Ivanova, Mariia, Pescia, Carlo, Trapani, Dario, Venetis, Konstantinos, Frascarelli, Chiara, Mane, Eltjona, Cursano, Giulia, Sajjadi, Elham, Scatena, Cristian, Cerbelli, Bruna, d'Amati, Giulia, Porta, Francesca Maria, Guerini-Rocco, Elena, Criscitiello, Carmen, Curigliano, Giuseppe, and Fusco, Nicola
- Subjects
- BREAST tumor risk factors, RISK assessment, MEDICAL protocols, CANCER relapse, ARTIFICIAL intelligence, EARLY detection of cancer, CYTOCHEMISTRY, TUMOR markers, DECISION making in clinical medicine, IMMUNOHISTOCHEMISTRY, PATIENT-centered care, DEEP learning, ARTIFICIAL neural networks, MACHINE learning, ONCOLOGISTS, INDIVIDUALIZED medicine, MOLECULAR pathology, HEALTH care teams, ALGORITHMS, DISEASE risk factors
- Abstract
Simple Summary: Risk assessment in early breast cancer is critical for clinical decisions, but defining risk categories poses a significant challenge. The integration of conventional histopathology and biomarkers with artificial intelligence (AI) techniques, including machine learning and deep learning, has the potential to offer more precise information. AI applications extend beyond detection to histological subtyping, grading, and molecular feature identification. The successful integration of AI into clinical practice requires collaboration between histopathologists, molecular pathologists, computational pathologists, and oncologists to optimize patient outcomes. Effective risk assessment in early breast cancer is essential for informed clinical decision-making, yet consensus on defining risk categories remains challenging. This paper explores evolving approaches in risk stratification, encompassing histopathological, immunohistochemical, and molecular biomarkers alongside cutting-edge artificial intelligence (AI) techniques. Leveraging machine learning, deep learning, and convolutional neural networks, AI is reshaping predictive algorithms for recurrence risk, thereby revolutionizing diagnostic accuracy and treatment planning. Beyond detection, AI applications extend to histological subtyping, grading, lymph node assessment, and molecular feature identification, fostering personalized therapy decisions. With rising cancer rates, it is crucial to implement AI to accelerate breakthroughs in clinical practice, benefiting both patients and healthcare providers. However, it is important to recognize that while AI offers powerful automation and analysis tools, it lacks the nuanced understanding, clinical context, and ethical considerations inherent to human pathologists in patient care. Hence, the successful integration of AI into clinical practice demands collaborative efforts between medical experts and computational pathologists to optimize patient outcomes. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. ULG-SLAM: A Novel Unsupervised Learning and Geometric Feature-Based Visual SLAM Algorithm for Robot Localizability Estimation.
- Author
- Huang, Yihan, Xie, Fei, Zhao, Jing, Gao, Zhilin, Chen, Jun, Zhao, Fei, and Liu, Xixiang
- Subjects
- MACHINE learning, VISUAL learning, ALGORITHMS, ROBOTS, FEATURE extraction, WALKING speed
- Abstract
Indoor localization has long been a challenging task due to the complexity and dynamism of indoor environments. This paper proposes ULG-SLAM, a novel unsupervised learning and geometric-based visual SLAM algorithm for robot localizability estimation to improve the accuracy and robustness of visual SLAM. Firstly, a dynamic feature filtering based on unsupervised learning and moving consistency checks is developed to eliminate the features of dynamic objects. Secondly, an improved line feature extraction algorithm based on LSD is proposed to optimize the effect of geometric feature extraction. Thirdly, geometric features are used to optimize localizability estimation, and an adaptive weight model and attention mechanism are built using the method of region delimitation and region growth. Finally, to verify the effectiveness and robustness of localizability estimation, multiple indoor experiments using the EuRoC dataset and TUM RGB-D dataset are conducted. Compared with ORBSLAM2, the experimental results demonstrate that absolute trajectory accuracy can be improved by 95% for equivalent processing speed in walking sequences. In fr3/walking_xyz and fr3/walking_half, ULG-SLAM tracks more trajectories than DS-SLAM, and the ATE RMSE is improved by 36% and 6%, respectively. Furthermore, the improvement in robot localizability over DynaSLAM is noteworthy, coming in at about 11% and 3%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. An Intelligent Decision Algorithm for a Greenhouse System Based on a Rough Set and D-S Evidence Theory.
- Author
-
Lina Wang, Mengjie Xu, and Ying Zhang
- Subjects
- *
GREENHOUSES , *MACHINE learning , *ROUGH sets , *EXPERT evidence , *SUPPORT vector machines , *THEORY of knowledge , *ALGORITHMS , *SOFT sets - Abstract
This paper presents a decision-making approach grounded in rough set theory and evidential reasoning to address the demand for expert decision-making in greenhouse environmental control systems. Furthermore, a decision-making model is developed by integrating the D-S evidence theory with an expert knowledge table for greenhouse environmental control systems. The model's reasoning process encompasses continuous attribute discretization, expert decision table formation, attribute reduction, and evidence combination reasoning. Firstly, the fuzzy C-means clustering algorithm is employed to discretize the original environmental data and cluster it. Subsequently, an attribute reduction algorithm based on information entropy is utilized to optimize the decision table by eliminating unnecessary conditional attributes in expert knowledge. The reduced indicators are then combined using evidential theory. Finally, suitable greenhouse control methods are determined by the confidence decision proposed by the D-S evidence theory. To assess the efficacy of this intelligent decision-making algorithm based on rough set and D-S evidence theory, its performance is compared with traditional SVM algorithms and small-shot learning algorithms. The results indicate that this proposed method significantly enhances the credibility of control decision-making processes, with an average running time of 0.002378 s for the fusion decision algorithm and 0.017939 s for the support vector machine (SVM) algorithm, respectively. The SVM accuracy rate after training and testing stands at 90.34%. Moreover, retraining based on information entropy attribute reduction leads to a correct decision rate increase of up to 100%. This method notably improves confidence levels in decision-making processes while reducing uncertainty and demonstrates reliability when applied in making decisions regarding greenhouse environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
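The evidence-combination step this abstract relies on is Dempster's rule. A minimal sketch, assuming a toy frame of discernment with two hypothetical greenhouse actions (ventilate vs. heat) rather than the paper's actual expert decision table:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses to
    masses) with Dempster's rule, normalizing out the conflict mass."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to contradictory hypotheses
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Two "experts" (reduced indicators) weighing hypothetical actions:
# ventilate (V) or heat (H); VH represents ignorance (either action).
V, H = frozenset({"V"}), frozenset({"H"})
VH = V | H
m1 = {V: 0.6, H: 0.1, VH: 0.3}
m2 = {V: 0.5, H: 0.2, VH: 0.3}
fused = dempster_combine(m1, m2)  # fused belief favours ventilation
```

The fused masses sum to one, and the confidence-based decision step then simply picks the singleton hypothesis with the largest mass.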
20. Time-discrete momentum consensus-based optimization algorithm and its application to Lyapunov function approximation.
- Author
-
Ha, Seung-Yeal, Hwang, Gyuyoung, and Kim, Sungyoon
- Subjects
- *
OPTIMIZATION algorithms , *LYAPUNOV functions , *DISTRIBUTED algorithms , *GLOBAL optimization , *APPROXIMATION algorithms , *MATHEMATICS , *ALGORITHMS - Abstract
In this paper, we study a discrete momentum consensus-based optimization (Momentum-CBO) algorithm which corresponds to a second-order generalization of the discrete first-order CBO [S.-Y. Ha, S. Jin and D. Kim, Convergence of a first-order consensus-based global optimization algorithm, Math. Models Methods Appl. Sci. 30 (2020) 2417–2444]. The proposed algorithm can be understood as the modification of ADAM-CBO, replacing the normalization term by unity. For the proposed Momentum-CBO, we provide a sufficient framework which guarantees the convergence of algorithm toward a global minimum of the objective function. Moreover, we present several experimental results showing that Momentum-CBO has an improved success rate of finding the global minimum compared to vanilla-CBO and show the stability of Momentum-CBO under different initialization schemes. We also show that Momentum-CBO can be used as the alternative of ADAM-CBO which does not have a proper convergence analysis. Finally, we give an application of Momentum-CBO for Lyapunov function approximation using symbolic regression techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
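A rough illustrative sketch of a momentum-style CBO iteration: particles drift toward a Gibbs-weighted consensus point, with a heavy-ball velocity term and drift-scaled noise. The constants, noise model, and test function below are assumptions for illustration, not the paper's exact Momentum-CBO scheme:

```python
import numpy as np

def momentum_cbo(f, dim=2, n=50, steps=200, lam=1.0, sigma=0.5,
                 beta=0.9, alpha=30.0, dt=0.1, seed=0):
    """Sketch of a momentum consensus-based optimizer (illustrative only)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-3, 3, size=(n, dim))   # particle positions
    v = np.zeros_like(x)                    # particle velocities (momentum)
    for _ in range(steps):
        w = np.exp(-alpha * f(x))                        # Gibbs weights favour low f
        xbar = (w[:, None] * x).sum(0) / w.sum()         # consensus point
        drift = xbar - x
        noise = sigma * np.abs(drift) * rng.standard_normal(x.shape) * np.sqrt(dt)
        v = beta * v + (1 - beta) * (lam * drift * dt + noise)
        x = x + v
    return xbar

sphere = lambda x: (x ** 2).sum(axis=-1)   # global minimum at the origin
minimizer = momentum_cbo(sphere)           # consensus lands near the origin
```

Because the noise is proportional to the drift, exploration dies out as the particles collapse onto the consensus point, which is the usual CBO convergence mechanism.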
21. Malnutrition risk assessment using a machine learning‐based screening tool: A multicentre retrospective cohort.
- Author
-
Parchure, Prathamesh, Besculides, Melanie, Zhan, Serena, Cheng, Fu‐yuan, Timsina, Prem, Cheertirala, Satya Narayana, Kersch, Ilana, Wilson, Sara, Freeman, Robert, Reich, David, Mazumdar, Madhu, and Kia, Arash
- Subjects
- *
MALNUTRITION diagnosis , *RISK assessment , *DIETETICS , *MALNUTRITION , *MEDICAL quality control , *HUMAN services programs , *HOSPITAL care , *NUTRITIONAL assessment , *ARTIFICIAL intelligence , *RETROSPECTIVE studies , *DESCRIPTIVE statistics , *LONGITUDINAL method , *PRE-tests & post-tests , *RESEARCH , *METROPOLITAN areas , *MACHINE learning , *QUALITY assurance , *LENGTH of stay in hospitals , *ALGORITHMS , *DISEASE risk factors ,ELECTRONIC health record standards - Abstract
Background: Malnutrition is associated with increased morbidity, mortality, and healthcare costs. Early detection is important for timely intervention. This paper assesses the ability of a machine learning screening tool (MUST‐Plus) implemented in registered dietitian (RD) workflow to identify malnourished patients early in the hospital stay and to improve the diagnosis and documentation rate of malnutrition. Methods: This retrospective cohort study was conducted in a large, urban health system in New York City comprising six hospitals serving a diverse patient population. The study included all patients aged ≥ 18 years, who were not admitted for COVID‐19 and had a length of stay of ≤ 30 days. Results: Of the 7736 hospitalisations that met the inclusion criteria, 1947 (25.2%) were identified as being malnourished by MUST‐Plus‐assisted RD evaluations. The lag between admission and diagnosis improved with MUST‐Plus implementation. The usability of the tool output by RDs exceeded 90%, showing good acceptance by users. When compared pre‐/post‐implementation, the rate of both diagnoses and documentation of malnutrition showed improvement. Conclusion: MUST‐Plus, a machine learning‐based screening tool, shows great promise as a malnutrition screening tool for hospitalised patients when used in conjunction with adequate RD staffing and training about the tool. It performed well across multiple measures and settings. Other health systems can use their electronic health record data to develop, test and implement similar machine learning‐based processes to improve malnutrition screening and facilitate timely intervention. Key points/Highlights: Malnutrition is prevalent among hospitalised patients and frequently goes unrecognised, with the potential for severe sequelae. 
Accurate diagnosis, documentation and treatment of malnutrition have the potential of having a positive impact on morbidity rate, mortality rate, length of inpatient stay, readmission rate and hospital revenue. The tool's successful application highlights its potential to optimise malnutrition screening in healthcare systems, offering potential benefits for patient outcomes and hospital finances. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Novel Imaging Approaches for Glioma Classification in the Era of the World Health Organization 2021 Update: A Scoping Review.
- Author
-
Richter, Vivien, Ernemann, Ulrike, and Bender, Benjamin
- Subjects
- *
GLIOMAS , *RADIOMICS , *MAGNETIC resonance imaging , *DESCRIPTIVE statistics , *SYSTEMATIC reviews , *LITERATURE reviews , *DEEP learning , *GENETIC mutation , *NEURORADIOLOGY , *MACHINE learning , *DATA analysis software , *ALGORITHMS - Abstract
Simple Summary: The 2021 WHO classification of central nervous system (CNS) tumors is challenging for neuroradiologists due to the central role of the molecular profile of tumors. We performed a scoping review of recent literature to assess the existing data on the power of novel data analysis tools to predict new tumor classes by imaging. We found room for performance improvement for subgroups with lower incidence (e.g., 1p/19q codeleted or IDH1/2 mutated gliomas) and patients with rare diagnoses (e.g., pediatric gliomas, midline gliomas). More data regarding functional MRI techniques need to be collected. Studies explicitly designed to assess the generalizability of AI-aided tools for predicting molecular tumor subgroups are lacking. The 2021 WHO classification of CNS tumors is a challenge for neuroradiologists due to the central role of the molecular profile of tumors. The potential of novel data analysis tools in neuroimaging must be harnessed to maintain its role in predicting tumor subgroups. We performed a scoping review to determine current evidence and research gaps. A comprehensive literature search was conducted regarding glioma subgroups according to the 2021 WHO classification and the use of MRI, radiomics, machine learning, and deep learning algorithms. Sixty-two original articles were included and analyzed by extracting data on the study design and results. Only 8% of the studies included pediatric patients. Low-grade gliomas and diffuse midline gliomas were represented in one-third of the research papers. Public datasets were utilized in 22% of the studies. Conventional imaging sequences prevailed; data on functional MRI (DWI, PWI, CEST, etc.) are underrepresented. Multiparametric MRI yielded the best prediction results. IDH mutation and 1p/19q codeletion status prediction remain in focus with limited data on other molecular subgroups. Reported AUC values range from 0.6 to 0.98. Studies designed to assess generalizability are scarce. 
Performance is worse for smaller subgroups (e.g., 1p/19q codeleted or IDH1/2 mutated gliomas). More high-quality study designs with diversity in the analyzed population and techniques are needed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Investigating Machine Learning Techniques for Predicting Risk of Asthma Exacerbations: A Systematic Review.
- Author
-
Darsha Jayamini, Widana Kankanamge, Mirza, Farhaan, Asif Naeem, M., and Chan, Amy Hai Yan
- Subjects
- *
ASTHMA risk factors , *ASTHMA prevention , *DISEASE exacerbation , *RISK assessment , *PREDICTION models , *DECISION making , *SYSTEMATIC reviews , *MACHINE learning , *SOCIODEMOGRAPHIC factors , *ALGORITHMS - Abstract
Asthma, a common chronic respiratory disease among children and adults, affects more than 200 million people worldwide and causes about 450,000 deaths each year. Machine learning is increasingly applied in healthcare to assist health practitioners in decision-making. In asthma management, machine learning excels in performing well-defined tasks, such as diagnosis, prediction, medication, and management. However, there remain uncertainties about how machine learning can be applied to predict asthma exacerbation. This study aimed to systematically review recent applications of machine learning techniques in predicting the risk of asthma attacks to assist asthma control and management. A total of 860 studies were initially identified from five databases. After the screening and full-text review, 20 studies were selected for inclusion in this review. The review considered recent studies published from January 2010 to February 2023. The 20 studies used machine learning techniques to support future asthma risk prediction by using various data sources such as clinical, medical, biological, and socio-demographic data sources, as well as environmental and meteorological data. While some studies considered prediction as a category, other studies predicted the probability of exacerbation. Only a group of studies applied prediction windows. The paper proposes a conceptual model to summarise how machine learning and available data sources can be leveraged to produce effective models for the early detection of asthma attacks. The review also generated a list of data sources that other researchers may use in similar work. Furthermore, we present opportunities for further research and the limitations of the preceding studies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. A Multi-Agent RL Algorithm for Dynamic Task Offloading in D2D-MEC Network with Energy Harvesting †.
- Author
-
Mi, Xin, He, Huaiwen, and Shen, Hong
- Subjects
- *
ENERGY harvesting , *MACHINE learning , *ALGORITHMS , *INTEGER programming , *DYNAMIC loads , *MOBILE computing , *NONLINEAR programming - Abstract
Delay-sensitive task offloading in a device-to-device assisted mobile edge computing (D2D-MEC) system with energy harvesting devices is a critical challenge due to the dynamic load level at edge nodes and the variability in harvested energy. In this paper, we propose a joint dynamic task offloading and CPU frequency control scheme for delay-sensitive tasks in a D2D-MEC system, taking into account the intricacies of multi-slot tasks, characterized by diverse processing speeds and data transmission rates. Our methodology involves meticulous modeling of task arrival and service processes using queuing systems, coupled with the strategic utilization of D2D communication to alleviate edge server load and prevent network congestion effectively. Central to our solution is the formulation of average task delay optimization as a challenging nonlinear integer programming problem, requiring intelligent decision making regarding task offloading for each generated task at active mobile devices and CPU frequency adjustments at discrete time slots. To navigate the intricate landscape of the extensive discrete action space, we design an efficient multi-agent deep reinforcement learning (DRL) algorithm named MAOC, which is based on MAPPO, to minimize the average task delay by dynamically determining task-offloading decisions and CPU frequencies. MAOC operates within a centralized training with decentralized execution (CTDE) framework, empowering individual mobile devices to make decisions autonomously based on their unique system states. Experimental results demonstrate its swift convergence and operational efficiency, and it outperforms other baseline algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Anomaly detection in IoT environment using machine learning.
- Author
-
Bilakanti, Harini, Pasam, Sreevani, Palakollu, Varshini, and Utukuru, Sairam
- Subjects
- *
ANOMALY detection (Computer security) , *MACHINE learning , *INTERNET of things , *ALGORITHMS - Abstract
This research paper delves into the security concerns within Internet of Things (IoT) networks, emphasizing the need to safeguard the extensive data generated by interconnected physical devices. The presence of anomalies and faults in the sensors and devices deployed within IoT networks can significantly impact the functionality and outcomes of IoT systems. The primary focus of this study is the identification of anomalies in IoT devices arising from sensor tampering, with an emphasis on the application of machine learning techniques. While supervised methods like one-class SVM, Gaussian Naive Bayes, and XGBoost have proven effective in anomaly detection, there has been a noticeable scarcity of research employing unsupervised methods. This scarcity is mainly attributed to the absence of well-defined ground truths for model training. This research takes an innovative approach by investigating the utility of unsupervised algorithms, including Isolation Forest and Local Outlier Factor, alongside supervised techniques to enhance the precision of anomaly detection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
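The two unsupervised detectors named in this abstract are available in scikit-learn; a minimal sketch on simulated sensor readings (the normal/tampered distributions below are made up for illustration, not the study's data):

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(42)
# Simulated 2-feature sensor stream: mostly normal readings plus a few
# tampered spikes appended at the end (indices 200-204).
normal = rng.normal(loc=25.0, scale=1.0, size=(200, 2))
tampered = rng.uniform(low=60.0, high=80.0, size=(5, 2))
X = np.vstack([normal, tampered])

# Isolation Forest: anomalies are points isolated by few random splits.
iso = IsolationForest(contamination=5 / 205, random_state=0).fit(X)
iso_labels = iso.predict(X)              # +1 = normal, -1 = anomaly

# Local Outlier Factor: anomalies have much lower local density than
# their k nearest neighbours.
lof = LocalOutlierFactor(n_neighbors=20, contamination=5 / 205)
lof_labels = lof.fit_predict(X)          # +1 = normal, -1 = anomaly

iso_flagged = np.where(iso_labels == -1)[0]
lof_flagged = np.where(lof_labels == -1)[0]
```

Neither detector needs labelled ground truth, which is exactly the property the abstract highlights for IoT settings where tampering labels are unavailable.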
26. Robust and compact maximum margin clustering for high-dimensional data.
- Author
-
Cevikalp, Hakan and Chome, Edward
- Subjects
- *
CLUSTER sampling , *MACHINE learning , *CONJUGATE gradient methods , *ALGORITHMS , *HYPERPLANES - Abstract
In the field of machine learning, clustering has become an increasingly popular research topic due to its critical importance. Many clustering algorithms have been proposed utilizing a variety of approaches. This study focuses on clustering of high-dimensional data using the maximum margin clustering approach. In this paper, two methods are introduced: The first method employs the classical maximum margin clustering approach, which separates data into two clusters with the greatest margin between them. The second method takes cluster compactness into account and searches for two parallel hyperplanes that best fit to the cluster samples while also being as far apart from each other as possible. Additionally, robust variants of these clustering methods are introduced to handle outliers and noise within the data samples. The stochastic gradient algorithm is used to solve the resulting optimization problems, enabling all proposed clustering methods to scale well with large-scale data. Experimental results demonstrate that the proposed methods are more effective than existing maximum margin clustering methods, particularly in high-dimensional clustering problems, highlighting the efficacy of the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
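A common heuristic for the classical maximum margin clustering idea described above alternates between fitting a large-margin classifier and relabeling points by the resulting hyperplane. This is an illustrative stand-in under a two-cluster assumption, not the paper's stochastic-gradient formulation:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def iterative_mmc(X, n_iter=10, C=1.0, seed=0):
    """Heuristic maximum margin clustering sketch: initialize labels with
    k-means, then alternate (1) fit a max-margin linear classifier and
    (2) relabel points by which side of its hyperplane they fall on."""
    y = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(X)
    for _ in range(n_iter):
        if len(np.unique(y)) < 2:      # degenerate split: stop
            break
        svm = LinearSVC(C=C).fit(X, y)
        new_y = svm.predict(X)
        if np.array_equal(new_y, y):   # labels stable: converged
            break
        y = new_y
    return y

rng = np.random.default_rng(1)
# Two well-separated blobs in a 20-dimensional space
X = np.vstack([rng.normal(-2, 0.5, (40, 20)), rng.normal(2, 0.5, (40, 20))])
labels = iterative_mmc(X)
```

The paper's second method additionally rewards cluster compactness (two parallel hyperplanes hugging the clusters), which this plain alternating scheme does not model.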
27. Quantum state clustering algorithm based on variational quantum circuit.
- Author
-
Fang, Pengpeng, Zhang, Cai, and Situ, Haozhen
- Subjects
- *
QUANTUM states , *ALGORITHMS , *MACHINE learning , *LEARNING communities - Abstract
Clustering, a well-studied problem in the machine learning community, becomes even more intriguing with the emergence of quantum machine learning. Specifically, exploring clustering techniques for quantum data, such as quantum states, holds great interest. This paper introduces a quantum state clustering algorithm that utilizes variational quantum circuits. Our algorithm transforms the clustering problem into a parameter optimization task involving parametric quantum circuits. Each cluster is represented by a variational quantum circuit (VQC), which learns to extract the distinctive feature of its corresponding cluster during the optimization process. To guide the optimization of circuit parameters, we design an objective function that encourages each cluster's feature extractor to produce features similar to states within its own cluster and dissimilar to states in other clusters. We construct four quantum state datasets for testing the effectiveness of our algorithm. The numerical results demonstrate that our algorithm can achieve satisfying performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Insider employee-led cyber fraud (IECF) in Indian banks: from identification to sustainable mitigation planning.
- Author
-
Roy, Neha Chhabra and Prabhakaran, Sreeleakha
- Subjects
- *
BANKING laws , *FRAUD prevention , *CORRUPTION , *ORGANIZATIONAL behavior , *RISK assessment , *DATA security , *RANDOM forest algorithms , *COMPUTERS , *FOCUS groups , *DATA security failures , *INTERVIEWING , *DEBT , *QUESTIONNAIRES , *ARTIFICIAL intelligence , *LOGISTIC regression analysis , *IDENTITY theft , *SECURITY systems , *FINANCIAL stress , *RESEARCH methodology , *CONCEPTUAL structures , *JOB stress , *ARTIFICIAL neural networks , *MACHINE learning , *ALGORITHMS - Abstract
This paper explores the different insider employee-led cyber frauds (IECF) based on the recent large-scale fraud events of prominent Indian banking institutions. Examining the different types of fraud and appropriate control measures will protect the banking industry from fraudsters. In this study, we identify and classify Cyber Fraud (CF), map the severity of the fraud on a scale of priority, test the mitigation effectiveness, and propose optimal mitigation measures. The identification and classification of CF losses were based on a literature review and focus group discussions with risk and vigilance officers and cyber cell experts. The CF was analyzed using secondary data. We predicted and prioritized CF based on machine learning-derived Random Forest (RF). An efficient fraud mitigation model was developed based on an offender-victim-centric approach. Mitigation is advised both before and after fraud occurs. Through the findings of this research, banks and fraud investigators can prevent CF by detecting it quickly and controlling it on time. This study proposes a structured, sustainable CF mitigation plan that protects banks, employees, regulators, customers, and the economy, thus saving time, resources, and money. Further, these mitigation measures will improve the reputation of the Indian banking industry and ensure its survival. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. AC-PLT: An algorithm for computer-assisted coding of semantic property listing data.
- Author
-
Ramos, Diego, Moreno, Sebastián, Canessa, Enrique, Chaigneau, Sergio E., and Marchant, Nicolás
- Subjects
- *
NATURAL language processing , *ALGORITHMS , *MACHINE learning - Abstract
In this paper, we present a novel algorithm that uses machine learning and natural language processing techniques to facilitate the coding of feature listing data. Feature listing is a method in which participants are asked to provide a list of features that are typically true of a given concept or word. This method is commonly used in research studies to gain insights into people's understanding of various concepts. The standard procedure for extracting meaning from feature listings is to manually code the data, which can be time-consuming and prone to errors, leading to reliability concerns. Our algorithm aims at addressing these challenges by automatically assigning human-created codes to feature listing data that achieve a quantitatively good agreement with human coders. Our preliminary results suggest that our algorithm has the potential to improve the efficiency and accuracy of content analysis of feature listing data. Additionally, this tool is an important step toward developing a fully automated coding algorithm, which we are currently preliminarily devising. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Personalized Treatment Policies with the Novel Buckley-James Q-Learning Algorithm.
- Author
-
Lee, Jeongjin and Kim, Jong-Min
- Subjects
- *
MACHINE learning , *ALGORITHMS , *SURVIVAL analysis (Biometry) , *TIME management , *PATIENT care , *REINFORCEMENT learning - Abstract
This research paper presents the Buckley-James Q-learning (BJ-Q) algorithm, a cutting-edge method designed to optimize personalized treatment strategies, especially in the presence of right censoring. We critically assess the algorithm's effectiveness in improving patient outcomes and its resilience across various scenarios. Central to our approach is the innovative use of the survival time to impute the reward in Q-learning, employing the Buckley-James method for enhanced accuracy and reliability. Our findings highlight the significant potential of personalized treatment regimens and introduce the BJ-Q learning algorithm as a viable and promising approach. This work marks a substantial advancement in our comprehension of treatment dynamics and offers valuable insights for augmenting patient care in the ever-evolving clinical landscape. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
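The tabular Q-learning backup that BJ-Q builds on can be sketched as follows. The states, actions, and rewards here are hypothetical, and the Buckley-James survival-time imputation itself is only indicated in a comment:

```python
import random

def q_update(Q, s, a, reward, s_next, actions, alpha=0.1, gamma=0.95):
    """One tabular Q-learning backup:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    In the BJ-Q setting, `reward` for a right-censored patient would be an
    imputed survival time (Buckley-James), not the observed censored one."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
    return Q

# Toy two-stage treatment example (hypothetical states/actions/rewards)
states, actions = ["stage1", "stage2", "done"], ["A", "B"]
Q = {(s, a): 0.0 for s in states for a in actions}
random.seed(0)
for _ in range(500):
    s = "stage1"
    for s_next in ["stage2", "done"]:
        a = random.choice(actions)           # explore uniformly
        reward = 2.0 if a == "A" else 1.0    # treatment A: longer survival
        q_update(Q, s, a, reward, s_next, actions)
        s = s_next
best_first_action = max(actions, key=lambda a: Q[("stage1", a)])
```

After training, the greedy policy at each stage recovers the treatment with the larger (imputed) survival reward.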
31. Human activity recognition using ensemble machine learning classifiers.
- Author
-
Henna, Shagufta, Aboga, David, Bilal, Muhammad, and Azeez, Stephen
- Subjects
- *
HUMAN activity recognition , *MACHINE learning , *SPHERICAL coordinates , *MANUFACTURING processes , *ALGORITHMS - Abstract
Activity recognition offers a wide range of applications in various industrial processes and healthcare. This work proposes an approach to collect data from a spherical coordinate system using smartphones, then extract the highly efficient features using advanced preprocessing. The paper also proposes an algorithm to recognize activity using various ensemble machine-learning approaches based on extracted features. These approaches are evaluated under various combinations of features to analyze the accuracy, sensitivity, specificity, and training time. Experimental results reveal that weighted KNN performs best among all models by achieving 96.2% accuracy with 12 features. On the other hand, Bagged tree ensemble classifiers perform better than subspace KNN ensemble classifiers with an accuracy of 95.3% using 12 features. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
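The two best-performing models reported above correspond to standard scikit-learn estimators; a sketch on synthetic stand-in features (the dataset and hyperparameters are assumptions, not the paper's smartphone data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for 12 extracted motion features across 4 activities
X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

# Distance-weighted KNN: closer neighbours get proportionally larger votes
wknn = KNeighborsClassifier(n_neighbors=10, weights="distance").fit(Xtr, ytr)

# Bagged trees: bootstrap-resampled decision trees (the default base
# estimator) combined by majority vote
bag = BaggingClassifier(n_estimators=50, random_state=0).fit(Xtr, ytr)

wknn_acc = wknn.score(Xte, yte)
bag_acc = bag.score(Xte, yte)
```

With real accelerometer features the relative ranking would of course depend on the preprocessing the paper describes; the sketch only shows the two model families.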
32. Reweighted Extreme Learning Machine-Based Clutter Suppression and Range Compensation Algorithm for Non-Side-Looking Airborne Radar.
- Author
-
Liu, Jing, Liao, Guisheng, Zeng, Cao, Tao, Haihong, Xu, Jingwei, Zhu, Shengqi, and Juwono, Filbert H.
- Subjects
- *
RADAR in aeronautics , *MACHINE learning , *ALGORITHMS , *MATHEMATICAL complexes - Abstract
Non-side-looking airborne radar provides important applications on account of its all-round multi-angle airspace coverage. However, it suffers from clutter range dependence, which makes the samples fail to satisfy the condition of being independent and identically distributed (IID) and severely degrades traditional approaches to clutter suppression and target detection. In this paper, a novel reweighted extreme learning machine (ELM)-based clutter suppression and range compensation algorithm is proposed for non-side-looking airborne radar. The proposed method involves first designing the pre-processing stage, the special reweighted complex-valued activation function containing an unknown range compensation matrix, and two new objective outputs for constructing an initial reweighted ELM-based network with its training. Then, two other objective outputs, a new loss function, and a reverse feedback framework driven by the specifically designed objectives are proposed for the unknown range compensation matrix. Finally, aiming to estimate and reconstruct the unknown compensation matrix, special processes of the complex-valued structures and the theoretical derivations are designed and analyzed in detail. Consequently, with the updated and compensated samples, further processing including space–time adaptive processing (STAP) can be performed for clutter suppression and target detection. Compared with the classic relevant methods, the proposed algorithm achieves significantly superior performance with reasonable computation time. It provides an obviously higher detection probability and better improvement factor (IF). The simulation results verify that the proposed algorithm is effective and has many advantages. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. A hybrid feature selection algorithm combining information gain and grouping particle swarm optimization for cancer diagnosis.
- Author
-
Yang, Fangyuan, Xu, Zhaozhao, Wang, Hong, Sun, Lisha, Zhai, Mengjiao, and Zhang, Juan
- Subjects
- *
FEATURE selection , *PARTICLE swarm optimization , *MACHINE learning , *CANCER diagnosis , *ALGORITHMS , *SUPPORT vector machines - Abstract
Background: Cancer diagnosis based on machine learning has become a popular application direction. Support vector machine (SVM), as a classical machine learning algorithm, has been widely used in cancer diagnosis because of its advantages in high-dimensional and small sample data. However, due to the high-dimensional feature space and high feature redundancy of gene expression data, SVM faces the problem of poor classification effect when dealing with such data. Methods: Based on this, this paper proposes a hybrid feature selection algorithm combining information gain and grouping particle swarm optimization (IG-GPSO). The algorithm firstly calculates the information gain values of the features and ranks them in descending order according to the value. Then, ranked features are grouped according to the information index, so that the features in the group are close, and the features outside the group are sparse. Finally, grouped features are searched using grouping PSO and evaluated according to in-group and out-group. Results: Experimental results show that the average accuracy (ACC) of the SVM on the feature subset selected by the IG-GPSO is 98.50%, which is significantly better than the traditional feature selection algorithm. Compared with KNN, the classification effect of the feature subset selected by the IG-GPSO is still optimal. In addition, the results of multiple comparison tests show that the feature selection effect of the IG-GPSO is significantly better than that of traditional feature selection algorithms. Conclusion: The feature subset selected by IG-GPSO not only has the best classification effect, but also has the least feature scale (FS). More importantly, the IG-GPSO significantly improves the ACC of SVM in cancer diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
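The first stage of IG-GPSO, ranking features by information gain, can be sketched in pure Python for discrete features (the toy data below are hypothetical, and the grouping-PSO search stage is omitted):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG(Y; X) = H(Y) - H(Y | X) for one discrete feature."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# Toy expression-like data: f1 determines the class, f2 is pure noise
labels = ["tumor", "tumor", "normal", "normal"]
f1 = ["hi", "hi", "lo", "lo"]     # perfectly informative
f2 = ["a", "b", "a", "b"]         # uninformative

ranked = sorted([("f1", information_gain(f1, labels)),
                 ("f2", information_gain(f2, labels))],
                key=lambda t: t[1], reverse=True)
```

In the full algorithm, this descending-IG ranking is what the grouping step partitions before the grouped PSO searches for the final feature subset.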
34. Formulation of Feature and Label Space Using Modified Delphi in Support of Developing a Machine-Learning Algorithm to Automate Clash Resolution.
- Author
-
Harode, Ashit, Thabet, Walid, and Leite, Fernanda
- Subjects
- *
LITERATURE reviews , *ALGORITHMS , *EVIDENCE gaps , *MACHINE learning , *CONSTRUCTION projects - Abstract
To improve the current manual and iterative nature of clash resolution on construction projects, current research efforts continue to explore and test the utilization of machine-learning algorithms to automate the process. Though current research shows significant accuracy in automating clash resolution, many have failed to provide clear explanation and justification for the selection of their feature and label space. Since this is critical in developing an effective and explainable solution in machine learning, it is crucial to address this research gap. In this paper, the authors utilize an in-depth literature review and industry interviews to capture domain knowledge on how design clashes are resolved by industry experts. From analysis of the knowledge captured, we identified 23 factors considered by experts when resolving clashes and five alternative solutions/options to resolve a clash. Using a pool of industry experts, a modified Delphi approach was conducted to validate the factors and options and to determine a priority ranking. The authors identified 94 industry experts based on a predetermined qualification matrix to take part in the modified Delphi. Twelve participants responded and took part in the first round, and 11 completed the second round. A consensus was reached on all clash factors and resolution options. Factors including "clashing elements type," "constrained slope," "critical element in the clash," "location of the clash," "code compliance," and "project stage clashing element is in" were ranked as the most important factors, while "clashing element material" and "insulation type" were considered the least important. Participants also showed more preference to the "moving the clashing element with low priority in/along x-y-z directions" option to resolve clashes. 
These identified factors and options will be utilized to collect specific clash data to train and test effective and explainable machine-learning algorithms toward automating clash resolution. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Artificial Intelligence Algorithms for Healthcare.
- Author
-
Chumachenko, Dmytro and Yakovlev, Sergiy
- Subjects
- *
ARTIFICIAL intelligence , *DEEP learning , *ALGORITHMS , *MACHINE learning , *INFORMATION technology , *MEDICAL care , *MOTION capture (Human mechanics) , *MEDICAL technology - Abstract
Artificial intelligence (AI) algorithms are playing a crucial role in transforming healthcare by enhancing the quality, accessibility, and efficiency of medical care, research, and operations. These algorithms enable healthcare providers to offer more accurate diagnoses, predict outcomes, and customize treatments to individual patient needs. AI also improves operational efficiency by automating routine tasks and optimizing resource management. However, there are challenges to adopting AI in healthcare, such as data privacy concerns and potential biases in algorithms. Collaboration among stakeholders is necessary to ensure ethical use of AI and its positive impact on the field. AI also has applications in medical research, preventive medicine, and public health. It is important to recognize that AI should augment, not replace, the expertise and compassionate care provided by healthcare professionals. The ethical implications and societal impact of AI in healthcare must be carefully considered, guided by fairness, transparency, and accountability principles. Several research papers in this special issue explore the application of AI algorithms in various aspects of healthcare, such as gait analysis for Parkinson's disease diagnosis, human activity recognition, heart disease prediction, compliance assessment with clinical protocols, epidemic management, neurological complications identification, fall prevention, leukemia diagnosis, and genetic clinical pathways. These studies demonstrate the potential of AI in improving medical diagnostics, patient monitoring, and personalized care. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
36. Dendritic Growth Optimization: A Novel Nature-Inspired Algorithm for Real-World Optimization Problems.
- Author
-
Priyadarshini, Ishaani
- Subjects
- *
OPTIMIZATION algorithms , *BIOLOGICALLY inspired computing , *DEEP learning , *MACHINE learning , *METAHEURISTIC algorithms , *PROBLEM solving , *ALGORITHMS - Abstract
In numerous scientific disciplines and practical applications, addressing optimization challenges is a common imperative. Nature-inspired optimization algorithms represent a highly valuable and pragmatic approach to tackling these complexities. This paper introduces Dendritic Growth Optimization (DGO), a novel algorithm inspired by natural branching patterns. DGO offers a novel solution for intricate optimization problems and demonstrates its efficiency in exploring diverse solution spaces. The algorithm has been extensively tested with a suite of machine learning algorithms, deep learning algorithms, and metaheuristic algorithms, and the results, both before and after optimization, unequivocally support the proposed algorithm's feasibility, effectiveness, and generalizability. Through empirical validation using established datasets like diabetes and breast cancer, the algorithm consistently enhances model performance across various domains. Beyond its working and experimental analysis, DGO's wide-ranging applications in machine learning, logistics, and engineering for solving real-world problems have been highlighted. The study also considers the challenges and practical implications of implementing DGO in multiple scenarios. As optimization remains crucial in research and industry, DGO emerges as a promising avenue for innovation and problem solving. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Artificial Intelligence in Pediatrics: Learning to Walk Together.
- Author
-
Demirbaş, Kaan Can, Yıldız, Mehmet, Saygılı, Seha, Canpolat, Nur, and Kasapçopur, Özgür
- Subjects
- *
GENOME editing , *COMPUTER assisted instruction , *ARTIFICIAL intelligence , *PEDIATRICS , *MACHINE learning , *LEARNING strategies , *ROBOTICS , *RISK assessment , *CHILD health services , *EDUCATIONAL technology , *DECISION making in clinical medicine , *PREDICTION models , *ALGORITHMS , *EVALUATION - Abstract
In this era of rapidly advancing technology, artificial intelligence (AI) has emerged as a transformative force, even being called the Fourth Industrial Revolution, along with gene editing and robotics. While it has undoubtedly become an increasingly important part of our daily lives, it must be recognized that it is not merely an additional tool, but rather a complex concept that poses a variety of challenges. AI, with considerable potential, has found its place in both medical care and clinical research. Within the vast field of pediatrics, it stands out as a particularly promising advancement. As pediatricians, we are indeed witnessing the impactful integration of AI-based applications into our daily clinical practice and research efforts. These tools are being used for simple to more complex tasks such as diagnosing clinically challenging conditions, predicting disease outcomes, creating treatment plans, educating both patients and healthcare professionals, and generating accurate medical records or scientific papers. In conclusion, the multifaceted applications of AI in pediatrics will increase efficiency and improve the quality of healthcare and research. However, there are certain risks and threats accompanying this advancement, including biases that may contribute to health disparities and inaccuracies. Therefore, it is crucial to recognize and address the technical, ethical, and legal challenges as well as explore the benefits in both clinical and research fields. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. YOLOv7oSAR: A Lightweight High-Precision Ship Detection Model for SAR Images Based on the YOLOv7 Algorithm.
- Author
-
Liu, Yilin, Ma, Yong, Chen, Fu, Shang, Erping, Yao, Wutao, Zhang, Shuyan, and Yang, Jin
- Subjects
- *
SHIP models , *SYNTHETIC aperture radar , *MACHINE learning , *SOLID state drives , *ALGORITHMS , *DEEP learning - Abstract
Researchers have explored various methods to fully exploit the all-weather characteristics of synthetic aperture radar (SAR) images to achieve high-precision, real-time, computationally efficient, and easily deployable ship target detection models. These methods include Constant False Alarm Rate (CFAR) algorithms and deep learning approaches such as RCNN, YOLO, and SSD, among others. While these methods outperform traditional algorithms in SAR ship detection, challenges still exist in handling the arbitrary ship distributions and small target features in SAR remote sensing images. Existing models are complex, with a large number of parameters, hindering effective deployment. This paper introduces a YOLOv7-based oriented-bounding-box SAR ship detection model (YOLOv7oSAR). The model employs a rotated-box detection mechanism, uses the KLD loss function to enhance accuracy, and introduces a Bi-former attention mechanism to improve small target detection. By redesigning the network's width and depth and incorporating a lightweight P-ELAN structure, the model effectively reduces its size and computational requirements. The proposed model achieves high-precision detection results on the public RSDD dataset (94.8% offshore, 66.6% nearshore), and its generalization ability is validated on a custom dataset (94.2% overall detection accuracy). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Sea Ice Extraction via Remote Sensing Imagery: Algorithms, Datasets, Applications and Challenges.
- Author
-
Huang, Wenjun, Yu, Anzhu, Xu, Qing, Sun, Qun, Guo, Wenyue, Ji, Song, Wen, Bowei, and Qiu, Chunping
- Subjects
- *
SEA ice , *DEEP learning , *REMOTE sensing , *IMAGE recognition (Computer vision) , *GEOGRAPHIC information systems , *ALGORITHMS - Abstract
Deep learning, which is a dominating technique in artificial intelligence, has completely changed image understanding over the past decade. As a consequence, the sea ice extraction (SIE) problem has reached a new era. We present a comprehensive review of four important aspects of SIE, including algorithms, datasets, applications and future trends. Our review focuses on research published from 2016 to the present, with a specific focus on deep-learning-based approaches in the last five years. We divided all related algorithms into three categories: conventional image classification approaches, machine-learning-based approaches, and deep-learning-based methods. We reviewed the accessible ice datasets, including SAR-based datasets, optical datasets, and others. The applications are presented in four aspects including climate research, navigation, geographic information systems (GIS) production and others. This paper also provides insightful observations and inspiring future research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Reinforcement Learning Algorithms with Selector, Tuner, or Estimator.
- Author
-
Masadeh, Ala'eddin, Wang, Zhengdao, and Kamal, Ahmed E.
- Subjects
- *
MACHINE learning , *INTELLIGENT agents , *ARTIFICIAL intelligence , *REINFORCEMENT learning , *INTUITION , *ALGORITHMS - Abstract
This paper presents a range of novel reinforcement learning algorithms derived from the actor–critic approach. These modified algorithms effectively utilize the available information to enhance performance. Our proposed framework introduces several key components to the traditional actor–critic model, including an underlying model learner, selector, tuner, and estimator. The estimator employs an approximate value function and the learned underlying model to estimate the values of all actions at the next state. The selector approximates the optimal action at the next state, which is then utilized by the actor to optimize its policy. In contrast to the conventional actor–critic algorithm where the actor focuses solely on policy optimization and the critic performs value-function approximation and policy evaluation, our selector–actor–critic algorithm employs a selector to approximate the optimal action at the current state, thereby influencing the actor's policy updates. Furthermore, our tuner–actor–critic algorithm incorporates a critic and a model-learner to approximate the action-value function and the dynamics of the underlying environment, respectively. The tuner then utilizes this information to adjust the value of the current state–action pair. In the estimator–selector–actor–critic algorithm, we develop intelligent agents based on the concepts of lookahead and intuition. Lookahead is utilized in estimating the values of available actions at the next state, while intuition guides the maximization of the probability of selecting the approximate optimal action. Through simulation experiments, we evaluate the performance of these algorithms, and the results demonstrate the superiority of the estimator–selector–actor–critic approach over other existing algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
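The estimator and selector components described in entry 40 can be illustrated with a minimal sketch. The interfaces below (`model`, `value_fn`, and the dictionary-based transition probabilities) are assumptions for illustration, not the paper's implementation: the estimator scores every action at a state using a learned one-step model plus an approximate value function, and the selector picks the approximately optimal one.

```python
def estimate_action_values(state, actions, model, value_fn, gamma=0.99):
    """Estimator: score each action with a learned one-step model.

    `model(state, action)` returns (expected_reward, next_state_probs),
    where next_state_probs maps next states to their probabilities.
    `value_fn(state)` is the current approximate value function.
    """
    q = {}
    for a in actions:
        reward, next_probs = model(state, a)
        # One-step lookahead: r + gamma * E[V(s')]
        q[a] = reward + gamma * sum(p * value_fn(s2) for s2, p in next_probs.items())
    return q


def select_action(state, actions, model, value_fn, gamma=0.99):
    """Selector: approximate the optimal action via the estimator's scores."""
    q = estimate_action_values(state, actions, model, value_fn, gamma)
    return max(q, key=q.get)
```

In the paper's framing, this selected action would then steer the actor's policy update rather than replace it.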
41. Formulation of Feature and Label Space Using Modified Delphi in Support of Developing a Machine-Learning Algorithm to Automate Clash Resolution.
- Author
-
Harode, Ashit, Thabet, Walid, and Leite, Fernanda
- Subjects
- *
MACHINE learning , *LITERATURE reviews , *ALGORITHMS , *EVIDENCE gaps , *CONSTRUCTION projects - Abstract
To improve the current manual and iterative nature of clash resolution on construction projects, current research efforts continue to explore and test the utilization of machine-learning algorithms to automate the process. Though current research shows significant accuracy in automating clash resolution, many have failed to provide clear explanation and justification for the selection of their feature and label space. Since this is critical in developing an effective and explainable solution in machine learning, it is crucial to address this research gap. In this paper, the authors utilize an in-depth literature review and industry interviews to capture domain knowledge on how design clashes are resolved by industry experts. From analysis of the knowledge captured, we identified 23 factors considered by experts when resolving clashes and five alternative solutions/options to resolve a clash. Using a pool of industry experts, a modified Delphi approach was conducted to validate the factors and options and to determine a priority ranking. The authors identified 94 industry experts based on a predetermined qualification matrix to take part in the modified Delphi. Twelve participants responded and took part in the first round, and 11 completed the second round. A consensus was reached on all clash factors and resolution options. Factors including "clashing elements type," "constrained slope," "critical element in the clash," "location of the clash," "code compliance," and "project stage clashing element is in" were ranked as the most important factors, while "clashing element material" and "insulation type" were considered the least important. Participants also showed more preference to the "moving the clashing element with low priority in/along x-y-z directions" option to resolve clashes. 
These identified factors and options will be utilized to collect specific clash data to train and test effective and explainable machine-learning algorithms toward automating clash resolution. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Efficient improvement of energy detection technique in cognitive radio networks using K-nearest neighbour (KNN) algorithm.
- Author
-
Musuvathi, Aneesh Sarjit S., Archbald, Jofin F., Velmurugan, T., Sumathi, D., Renuga Devi, S., and Preetha, K. S.
- Subjects
- *
COGNITIVE radio , *RADIO networks , *MACHINE learning , *WIRELESS channels , *ALGORITHMS , *RESOURCE allocation - Abstract
With the dawn of the IoT era, it is evident that the number of connected devices will rise exponentially. Devices communicate with each other over a shared frequency band of limited availability, so it is vitally important that this band be used efficiently to accommodate the maximum number of devices with the available radio resources. Cognitive radio (CR) technology serves this exact purpose: CR is an intelligent radio designed to automatically identify the optimal wireless channel in the available wireless spectrum at a given instant. An important functionality of CR is spectrum sensing. Energy detection is a very popular spectrum-sensing algorithm in CR technology for the efficient allocation of radio resources to the devices that intend to communicate. It detects the presence of a primary user (PU) signal by continuously monitoring a selected frequency bandwidth. The conventional energy detection technique is known to perform poorly at lower SNR ranges. This paper works towards improving the energy detection algorithm with the help of machine learning (ML). The ML model uses general properties of the signal as training data and classifies between a PU signal and noise at very low SNR ranges (−25 to −10 dB). In this research, a K-nearest neighbours (KNN) model is selected for its versatility and simplicity. Upon testing the model with an out-of-sample dataset, the KNN model produced a detection accuracy of 94.5%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
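As a rough illustration of the approach in entry 42, the sketch below classifies signal windows as PU signal or noise with a plain KNN majority vote. The feature set (window energy plus lag-1 autocorrelation) is an assumption; the abstract only says "general properties of the signal" are used.

```python
import math


def signal_features(samples):
    """Simple per-window features: average energy and lag-1 autocovariance.
    (The paper's exact feature set is not given in the abstract.)"""
    n = len(samples)
    energy = sum(s * s for s in samples) / n
    mean = sum(samples) / n
    acf1 = sum((samples[i] - mean) * (samples[i + 1] - mean)
               for i in range(n - 1)) / n
    return (energy, acf1)


def knn_classify(x, train, k=5):
    """Label feature vector x by majority vote among the k nearest
    training examples (Euclidean distance). `train` is a list of
    (feature_tuple, label) pairs."""
    neighbours = sorted(train, key=lambda tf: math.dist(tf[0], x))[:k]
    votes = [label for _, label in neighbours]
    return max(set(votes), key=votes.count)
```

A training set would pair `signal_features(window)` tuples with "pu"/"noise" labels; at the paper's very low SNRs, the added features are what lift performance over raw energy thresholding.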
43. Custom Loss Functions in XGBoost Algorithm for Enhanced Critical Error Mitigation in Drill-Wear Analysis of Melamine-Faced Chipboard.
- Author
-
Bukowski, Michał, Kurek, Jarosław, Świderski, Bartosz, and Jegorowa, Albina
- Subjects
- *
MELAMINE , *MACHINE learning , *ALGORITHMS , *INDUSTRIAL efficiency - Abstract
The advancement of machine learning in industrial applications has necessitated the development of tailored solutions to address specific challenges, particularly in multi-class classification tasks. This study delves into the customization of loss functions within the eXtreme Gradient Boosting (XGBoost) algorithm, which is a critical step in enhancing the algorithm's performance for specific applications. Our research is motivated by the need for precision and efficiency in the industrial domain, where the implications of misclassification can be substantial. We focus on the drill-wear analysis of melamine-faced chipboard, a common material in furniture production, to demonstrate the impact of custom loss functions. The paper explores several variants of Weighted Softmax Loss Functions, including Edge Penalty and Adaptive Weighted Softmax Loss, to address the challenges of class imbalance and the heightened importance of accurately classifying edge classes. Our findings reveal that these custom loss functions significantly reduce critical errors in classification without compromising the overall accuracy of the model. This research not only contributes to the field of industrial machine learning by providing a nuanced approach to loss function customization but also underscores the importance of context-specific adaptations in machine learning algorithms. The results showcase the potential of tailored loss functions in balancing precision and efficiency, ensuring reliable and effective machine learning solutions in industrial settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
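The custom-loss idea in entry 43 can be sketched as a per-row weighted softmax objective in the gradient/Hessian form that XGBoost expects from a custom multi-class objective. The flat per-class weighting below is an assumed stand-in; the paper's Edge Penalty and Adaptive Weighted Softmax variants are not detailed in the abstract.

```python
import math


def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def weighted_softmax_grad_hess(scores, label, class_weights):
    """Per-row gradient and Hessian of class-weighted softmax cross-entropy,
    the quantities a custom XGBoost objective returns per (row, class).

    Up-weighting a class (e.g. a critical wear state) scales the penalty
    for misclassifying rows of that class.
    """
    p = softmax(scores)
    w = class_weights[label]
    grad = [w * (p_k - (1.0 if k == label else 0.0)) for k, p_k in enumerate(p)]
    # Diagonal Hessian approximation, floored away from zero for stability.
    hess = [max(w * p_k * (1.0 - p_k), 1e-6) for p_k in p]
    return grad, hess
```

In actual use these per-row values would be flattened across the training matrix and returned from the `obj(preds, dtrain)` callback.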
44. Coupling of SME innovation and innovation in regional economic prosperity with machine learning and IoT technologies using XGBoost algorithm.
- Author
-
Wang, Dan and Zhang, Yina
- Subjects
- *
MACHINE learning , *SMALL business , *INTERNET of things , *ALGORITHMS , *ECONOMIC geography , *TECHNOLOGICAL innovations , *TECHNOLOGY convergence - Abstract
In recent years, the convergence of the Internet of Things (IoT) and machine learning technologies has revolutionized SMEs, reshaping their role in regional economic growth. These technologies provide SMEs with innovative tools, fostering connectivity within regional ecosystems. This transformation is crucial for China's economic realignment in its 'new normal' phase, with enterprises driving social and economic progress. This study proposes that the XGBoost algorithm and IoT data can enhance collaboration among SMEs in regional economies, fostering innovation and driving economic development, and examines the role of IoT and machine learning in promoting sustainable regional economic prosperity. Technological innovation is essential for enterprise survival and growth in a competitive landscape. This paper examines the influence of environmental factors and geography on economic development across China's regions. It focuses on SME coupling and coordination, contributing to regional economic progress through performance evaluation. IoT integration provides SMEs with abundant real-time data, enabling deep insights into customer behavior, supply chains, and production. Simultaneously, the XGBoost algorithm efficiently processes and extracts actionable insights from this data. Data from 11 provinces in the Yangtze River economic belt highlight Jiangsu Province's leadership in regional innovation performance for 2015 and 2020. The empirical results, grounded in datasets amalgamating information from these provinces, attest to the potential of this IoT- and XGBoost-driven approach. With an accuracy rate of 91.7%, surpassing alternative machine learning techniques such as SVM, KNN, DT, RF, and LR, this research underscores the effectiveness of the integrated strategy in optimizing SME operations.
Additionally, it computes the coupling coordination degree, innovation environment ranking, mean value, and numerical changes from 2010 to 2020, with an average innovation coupling coordination degree of 0.1624 across 30 Chinese provinces. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
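The abstract of entry 44 reports coupling coordination degrees (e.g. an average of 0.1624) but not their formulation. The widely used two-subsystem form, shown here as an assumed reference point rather than the paper's exact definition, combines a coupling degree C with a comprehensive evaluation T:

```python
import math


def coupling_coordination(u1, u2, alpha=0.5, beta=0.5):
    """Classic two-system coupling coordination degree.

    u1, u2 are normalized subsystem scores in [0, 1] (e.g. SME innovation
    and regional economy). C measures how balanced the two systems are,
    T = alpha*u1 + beta*u2 is their weighted level, and D = sqrt(C*T).
    """
    if u1 + u2 == 0:
        return 0.0
    c = 2.0 * math.sqrt(u1 * u2) / (u1 + u2)   # coupling degree in [0, 1]
    t = alpha * u1 + beta * u2                 # comprehensive evaluation
    return math.sqrt(c * t)
```

Note that D rewards both balance and level: two mid-level but balanced subsystems score higher than an unbalanced pair with the same average.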
45. A penalized variable selection ensemble algorithm for high-dimensional group-structured data.
- Author
-
Li, Dongsheng, Pan, Chunyan, Zhao, Jing, and Luo, Anfei
- Subjects
- *
LOW birth weight , *STANDARD deviations , *HIGH-dimensional model representation , *MATHEMATICAL variables , *MACHINE learning , *ALGORITHMS - Abstract
This paper presents a multi-algorithm fusion model (StackingGroup) based on the Stacking ensemble learning framework to address the variable selection problem in high-dimensional group-structured data. The proposed algorithm takes into account the differences in data observation and training principles of different algorithms. It leverages the strengths of each model and incorporates Stacking ensemble learning with multiple group-structure regularization methods. The main approach involves dividing the data set into K equal parts, using more than 10 algorithms as candidate base learners, and selecting the base learners based on low correlation, strong prediction ability, and small model error. Finally, we selected the grSubset + grLasso, grLasso, and grSCAD algorithms as the base learners for the Stacking algorithm. The Lasso algorithm was used as the meta-learner to create a comprehensive algorithm called StackingGroup, designed to handle high-dimensional group-structured data. Simulation experiments showed that the proposed method outperformed competing prediction methods in terms of R2, RMSE, and MAE. Lastly, we applied the proposed algorithm to investigate the risk factors of low birth weight in infants and young children. The final results demonstrate that the proposed method achieves a mean absolute error (MAE) of 0.508 and a root mean square error (RMSE) of 0.668. These values are smaller than those obtained from any single model, indicating that the proposed method surpasses other algorithms in terms of prediction accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
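The Stacking construction in entry 45 hinges on feeding the meta-learner out-of-fold base-learner predictions, so the meta-features are never produced by a model that saw the row during training. A minimal sketch of that scaffolding follows, with generic `fit` callables standing in for grLasso, grSCAD, and the other base learners (which are not reproduced here):

```python
def k_fold_indices(n, k):
    """Split range(n) into k roughly equal, contiguous folds."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds


def stacking_features(X, y, base_fits, k=5):
    """Build the meta-learner's training matrix: column j holds the
    out-of-fold predictions of base learner j.

    `base_fits` is a list of callables fit(X_train, y_train) -> predict,
    where predict(x) returns a number.
    """
    n = len(X)
    meta = [[None] * len(base_fits) for _ in range(n)]
    for fold in k_fold_indices(n, k):
        hold = set(fold)
        X_tr = [X[i] for i in range(n) if i not in hold]
        y_tr = [y[i] for i in range(n) if i not in hold]
        for j, fit in enumerate(base_fits):
            predict = fit(X_tr, y_tr)       # train on the other folds
            for i in fold:
                meta[i][j] = predict(X[i])  # predict on the held-out fold
    return meta
```

The meta-learner (Lasso in the paper) is then fit on `meta` against `y`, which is what lets it weight base learners by genuine out-of-sample skill.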
46. A Novel Learning Approach to Remove Oscillations in First‐Order Takagi–Sugeno Fuzzy System: Gradient Descent‐Based Neuro‐Fuzzy Algorithm Using Smoothing Group Lasso Regularization.
- Author
-
Liu, Yan, Wang, Rui, Liu, Yuanquan, Shao, Qiang, Lv, Yan, and Yu, Yan
- Subjects
- *
FUZZY systems , *MACHINE learning , *OSCILLATIONS , *FUZZY algorithms , *ALGORITHMS , *NONLINEAR systems - Abstract
As a universal approximator, the first-order Takagi–Sugeno fuzzy system possesses the capability to approximate widespread nonlinear systems through a group of IF–THEN fuzzy rules. Although group lasso regularization has the advantage of inducing group sparsity and handling variable selection issues, it can lead to numerical oscillations and theoretical challenges in calculating the gradient at the origin when employed directly during training. The paper addresses this obstacle by invoking a smoothing function to approximate group lasso regularization. On this basis, a gradient-based neuro-fuzzy learning algorithm with smoothing group lasso regularization for the first-order Takagi–Sugeno fuzzy system is proposed. The convergence of the proposed algorithm is rigorously proved under mild conditions. In addition, experimental outcomes acquired on two approximation and two classification simulations demonstrate that the proposed algorithm outperforms the algorithms with original group lasso regularization and L2 regularization in terms of error, pruned neurons, and accuracy. This is particularly evident in the significant gains in pruned neurons due to group sparsity. In comparison to the algorithm with L2 regularization, the proposed algorithm exhibits improvements of 6.3, 5.3, and 142.6 in pruned neurons for the sin(πx) function, Gabor function, and Sonar benchmark dataset simulations, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
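The smoothing step in entry 46 exists because the group lasso penalty Σ‖w_g‖ has an undefined gradient wherever a whole group is zero. The paper's exact smoothing function is not given in the abstract; a common, assumed choice is to replace ‖w_g‖ with sqrt(‖w_g‖² + ε²), which is everywhere differentiable:

```python
import math


def smoothed_group_lasso(groups, eps=1e-3):
    """Smoothed group lasso penalty and its gradient.

    Each group's norm ||w_g|| is replaced by sqrt(||w_g||^2 + eps^2),
    so the gradient w_i / sqrt(||w_g||^2 + eps^2) is well defined even
    when the whole group sits at the origin (no oscillations there).
    Returns (penalty_value, per-group gradient lists).
    """
    penalty, grads = 0.0, []
    for w in groups:
        sq = sum(wi * wi for wi in w)
        smooth = math.sqrt(sq + eps * eps)
        penalty += smooth
        grads.append([wi / smooth for wi in w])
    return penalty, grads
```

As ε → 0 the penalty recovers plain group lasso, so ε trades gradient smoothness at the origin against fidelity to the original regularizer.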
47. Autonomous Parameter Balance in Population-Based Approaches: A Self-Adaptive Learning-Based Strategy.
- Author
-
Vega, Emanuel, Lemus-Romani, José, Soto, Ricardo, Crawford, Broderick, Löffler, Christoffer, Peña, Javier, and Talbi, El-Gazhali
- Subjects
- *
SELF-adaptive software , *METAHEURISTIC algorithms , *MANUFACTURING cells , *KNAPSACK problems , *ALGORITHMS - Abstract
Population-based metaheuristics can be seen as a set of agents that smartly explore the space of solutions of a given optimization problem. These agents are commonly governed by movement operators that decide how the exploration is driven. Although metaheuristics have successfully been used for more than 20 years, performing rapid and high-quality parameter control is still a main concern. For instance, deciding on a population size that yields a good balance between quality of results and computing time is a constantly hard task, even more so in the presence of an unexplored optimization problem. In this paper, we propose a self-adaptive strategy based on on-line population balance, which aims to improve the performance and search process of population-based algorithms. The design behind the proposed approach relies on three components. Firstly, an optimization-based component defines all metaheuristic tasks related to carrying out the resolution of the optimization problems. Secondly, a learning-based component focuses on transforming dynamic data into knowledge in order to influence the search in the solution space. Thirdly, a probability-based selector component dynamically adjusts the population. We illustrate an extensive experimental process on large instance sets from three well-known discrete optimization problems: the Manufacturing Cell Design Problem, the Set Covering Problem, and the Multidimensional Knapsack Problem. The proposed approach is able to compete against classic, autonomous, and IRace-tuned metaheuristics, yielding interesting results and potential future work on dynamically adjusting the number of solutions interacting at different times within the search process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Bayesian network structure learning based on HC-PSO algorithm.
- Author
-
Gao, Wenlong, Zhi, Minqian, Ke, Yongsong, Wang, Xiaolong, Zhuo, Yun, Liu, Anping, and Yang, Yi
- Subjects
- *
BAYESIAN analysis , *PARTICLE swarm optimization , *MACHINE learning , *HAMMING distance , *ALGORITHMS , *HEURISTIC algorithms , *FUZZY graphs , *GENETIC algorithms - Abstract
Structure learning is the core of Bayesian network learning, and current mainstream single-search algorithms suffer from problems such as poor learning performance, ill-defined initial networks, and a tendency to fall into local optima. In this paper, we propose a heuristic learning algorithm, HC-PSO, combining the HC (Hill Climbing) algorithm and the PSO (Particle Swarm Optimization) algorithm. It first uses the HC algorithm to search for locally optimal network structures and takes these networks as the initial population; it then introduces mutation and crossover operators and uses the PSO algorithm for a global search. Meanwhile, we use a DE (Differential Evolution) strategy to select the mutation and crossover operators. Finally, experiments are conducted on four different datasets to calculate BIC (Bayesian Information Criterion) and HD (Hamming Distance), with comparative analysis against other algorithms; the results show that the HC-PSO algorithm is superior in feasibility and accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. RUCIB: a novel rule-based classifier based on BRADO algorithm.
- Author
-
Morovatian, Iman, Basiri, Alireza, and Rezaei, Samira
- Subjects
- *
SUPERVISED learning , *DATABASES , *ALGORITHMS , *MACHINE learning , *RANDOM forest algorithms - Abstract
Classification is a widely used supervised learning technique that enables models to discover the relationship between a set of features and a specified label using available data. Its applications span various fields such as engineering, telecommunication, astronomy, and medicine. In this paper, we propose a novel rule-based classifier called RUCIB (RUle-based Classifier Inspired by BRADO), which draws inspiration from the socio-inspired swarm intelligence algorithm known as BRADO. RUCIB introduces two key aspects: the ability to accommodate multiple values for features within a rule and the capability to explore all data features simultaneously. To evaluate the performance of RUCIB, we conducted experiments using ten databases sourced from the UCI machine learning database repository. In terms of classification accuracy, we compared RUCIB to ten well-known classifiers. Our results demonstrate that, on average, RUCIB outperforms Naive Bayes, SVM, PART, Hoeffding Tree, C4.5, ID3, Random Forest, CORER, CN2, and RACER by 9.32%, 8.97%, 7.58%, 7.4%, 7.34%, 7.34%, 7.22%, 5.06%, 5.01%, and 1.92%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Ship-Fire Net: An Improved YOLOv8 Algorithm for Ship Fire Detection.
- Author
-
Zhang, Ziyang, Tan, Lingye, and Tiong, Robert Lee Kong
- Subjects
- *
FIRE detectors , *MACHINE learning , *OBJECT recognition (Computer vision) , *DEEP learning , *ALGORITHMS , *COMPUTATIONAL complexity , *SHIPS - Abstract
Ship fire may result in significant damage to a ship's structure and large economic loss. Hence, the prompt identification of fires is essential for rapid reactions and effective mitigation strategies. However, conventional detection systems exhibit limited efficacy and accuracy in detecting targets, mostly attributed to limitations imposed by distance constraints and the motion of ships. Although the development of deep learning algorithms provides a potential solution, the computational complexity of ship fire detection algorithms poses significant challenges. To address this, this paper proposes a lightweight ship fire detection algorithm based on YOLOv8n. Initially, a dataset including more than 4000 unduplicated images and their labels is established before training. To ensure the performance of the algorithms, both fires inside ship rooms and fires on board are considered. After testing, YOLOv8n is selected as the model with the best performance and fastest speed from among several advanced object detection algorithms. GhostnetV2-C2F is then inserted in the backbone of the algorithm for long-range attention with inexpensive operation. In addition, spatial and channel reconstruction convolution (SCConv) is used to reduce redundant features with significantly lower complexity and computational costs for real-time ship fire detection. For the neck part, omni-dimensional dynamic convolution is used for the multi-dimensional attention mechanism, which also lowers the parameter count. After these improvements, a lighter and more accurate YOLOv8n algorithm, called Ship-Fire Net, is proposed. The proposed method exceeds 0.93 in both precision and recall for fire and smoke detection on ships, and the mAP@0.5 reaches about 0.9. Despite the improvement in accuracy, Ship-Fire Net also has fewer parameters and lower FLOPs than the original, which accelerates its detection speed.
The FPS of Ship-Fire Net also reaches 286, which is helpful for real-time ship fire monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF