2,199 results
Search Results
2. An Academic Paper Recommendation Algorithm Fusing Text and Heterogeneous Information Networks via Convolution.
- Author
- 吴俊超, 刘柏嵩, 沈小烽, and 张雪垣
- Subjects
- *INFORMATION networks, *CONVOLUTIONAL neural networks, *MACHINE learning, *PRODUCT design, *ALGORITHMS
- Abstract
In view of the problems of data sparsity and diversity in academic paper recommender systems, this paper proposed, based on ConvNCF, an algorithm of convolution with word and heterogeneous information network features for academic paper recommendation (WN-APR). Firstly, the WN-APR algorithm learned diverse user and paper features from different semantics to alleviate the sparsity problem. Then it designed an outer-product fusion to seamlessly combine user features with paper features. Instead of a 2D CNN, this algorithm applied 3D convolution to mine the influence of different features on performance. Finally, it modified the BPR loss function to enhance diversity in recommendations. Experimental results on the CiteULike-a and CiteULike-t datasets show that WN-APR improves accuracy and diversity over the baseline models. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
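The outer-product fusion step described in the abstract above can be sketched as follows. This is a minimal NumPy illustration under assumed conditions: the embedding dimension, the random embedding values, and the number of semantic views are all hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
d = 8  # embedding dimension (hypothetical)

user = rng.normal(size=d)    # learned user embedding
paper = rng.normal(size=d)   # learned paper embedding

# the outer product turns two d-vectors into a d x d interaction map,
# so every pairwise feature interaction gets its own cell
interaction = np.outer(user, paper)

# stacking interaction maps from several semantic views yields a 3-D
# tensor, the kind of input a 3-D convolution (rather than a 2-D CNN
# on a single map) would consume
views = np.stack([np.outer(rng.normal(size=d), paper) for _ in range(3)])
```

Each cell `interaction[i, j]` equals `user[i] * paper[j]`, which is why a convolution over the map can learn localized feature-pair patterns.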
3. A Machine Learning Model to Predict Citation Counts of Scientific Papers in Otology Field.
- Author
- Alohali, Yousef A., Fayed, Mahmoud S., Mesallam, Tamer, Abdelsamad, Yassin, Almuhawas, Fida, and Hagr, Abdulrahman
- Subjects
- *DECISION trees, *SERIAL publications, *NATURAL language processing, *BIBLIOMETRICS, *MACHINE learning, *REGRESSION analysis, *RANDOM forest algorithms, *CITATION analysis, *DESCRIPTIVE statistics, *PREDICTION models, *ARTIFICIAL neural networks, *MEDICAL research, *MEDICAL specialties & specialists, *ALGORITHMS
- Abstract
One of the most widely used measures of scientific impact is the number of citations. However, due to their heavy-tailed distribution, citation counts are fundamentally difficult to predict, although predictions can be improved. This study was aimed at investigating the factors influencing the citation count of a scientific paper in the otology field. Therefore, this work proposes a new solution that utilizes machine learning and natural language processing to process English text and predict a paper's citation count. Different algorithms are implemented in this solution, such as linear regression, boosted decision tree, decision forest, and neural networks. The application of neural network regression revealed that papers' abstracts have more influence on the citation numbers of otological articles. This new solution has been developed in visual programming using Microsoft Azure machine learning at the back end and Programming Without Coding Technology at the front end. We recommend using machine learning models to improve the abstracts of research articles to get more citations. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
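The linear-regression branch of a citation-count predictor like the one above can be sketched with ordinary least squares. The features (abstract length, title length, author count), their coefficients, and the synthetic data are hypothetical stand-ins, not the paper's actual model or dataset.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300

# hypothetical bibliometric features per article
X = np.column_stack([
    rng.integers(100, 300, n),   # abstract word count
    rng.integers(5, 20, n),      # title word count
    rng.integers(1, 8, n),       # number of authors
]).astype(float)

# synthetic "ground truth": citations depend linearly on the features
true_w = np.array([0.05, 0.2, 1.0])
cites = X @ true_w + rng.normal(0, 2, n)

# ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
w, *_ = np.linalg.lstsq(A, cites, rcond=None)
```

With enough samples the fitted coefficients `w[1:]` recover the generating weights, which is the sense in which such a model can reveal which inputs (e.g., the abstract) drive the prediction.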
4. A BPNN Model-Based AdaBoost Algorithm for Estimating Inside Moisture of Oil–Paper Insulation of Power Transformer.
- Author
- Liu, Jiefeng, Ding, Zheshi, Fan, Xianhao, Geng, Chuhan, Song, Boshu, Wang, Qingyin, and Zhang, Yiyi
- Subjects
- *POWER transformers, *TRANSFORMER insulation, *MOISTURE, *ALGORITHMS, *MACHINE learning, *CLASSIFICATION algorithms
- Abstract
The traditional method for transformer moisture diagnosis is to establish empirical equations between feature parameters extracted from frequency domain spectroscopy (FDS) and the transformer’s moisture content. However, the established empirical equation may not be applicable to a novel testing environment, resulting in an unreliable evaluation result. In this regard, it is acknowledged that FDS combined with machine learning is more suitable for estimating moisture content in a variety of test environments. Nonetheless, the accuracy of the estimation results obtained using the existing method is limited by the algorithm’s inability to generalize. To address this issue, we propose an AdaBoost algorithm-enhanced back-propagation neural network (BP_AdaBoost). This study creates a database by extracting feature parameters from the FDS that characterize the insulation states of the prepared samples. Then, using the BP_AdaBoost algorithm and the newly constructed database, the moisture estimation models are trained. Finally, the results of the estimation are discussed in terms of laboratory and field transformers. By comparing the proposed BP_AdaBoost algorithm to other intelligence algorithms, it is demonstrated that it not only performs better in generalization, but also maintains a high level of accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
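The boosting mechanism behind a BP_AdaBoost-style regressor can be sketched in the AdaBoost.R2 spirit. This is a simplified sketch under stated assumptions: threshold stumps stand in for the back-propagation network base learner, a weighted mean replaces the usual weighted median when combining learners, and the one-feature data set is synthetic rather than FDS measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y, w):
    """Weighted threshold stump on a 1-D feature: predict each side's mean."""
    best = None
    for t in np.unique(X)[:-1]:          # drop the max so both sides are nonempty
        left = X <= t
        yl = np.average(y[left], weights=w[left])
        yr = np.average(y[~left], weights=w[~left])
        err = np.sum(w * np.abs(y - np.where(left, yl, yr)))
        if best is None or err < best[0]:
            best = (err, t, yl, yr)
    return best[1:]

# toy data standing in for an FDS feature vs. moisture content
X = rng.uniform(0, 1, 200)
y = 3 * X + rng.normal(0, 0.1, 200)

w = np.ones_like(y) / len(y)             # sample weights, updated each round
stumps, alphas = [], []
for _ in range(20):
    t, yl, yr = fit_stump(X, y, w)
    abs_err = np.abs(y - np.where(X <= t, yl, yr))
    L = abs_err / abs_err.max()          # normalized loss in [0, 1]
    eps = np.sum(w * L)                  # weighted average loss
    if eps >= 0.5:                       # stop once the learner is too weak
        break
    beta = eps / (1 - eps)
    w = w * beta ** (1 - L)              # down-weight well-fitted samples
    w /= w.sum()
    stumps.append((t, yl, yr))
    alphas.append(np.log(1 / beta))

def predict(x):
    x = np.atleast_1d(x)
    preds = np.array([np.where(x <= t, yl, yr) for t, yl, yr in stumps])
    return np.average(preds, axis=0, weights=alphas)
```

The re-weighting step is the part that drives generalization: samples the current learner fits poorly receive larger weights, so later learners concentrate on them.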
5. SDP-Based Bounds for the Quadratic Cycle Cover Problem via Cutting-Plane Augmented Lagrangian Methods and Reinforcement Learning: INFORMS Journal on Computing Meritorious Paper Awardee.
- Author
- de Meijer, Frank and Sotirov, Renata
- Subjects
- *REINFORCEMENT learning, *COMBINATORIAL optimization, *TRAVELING salesman problem, *ALGORITHMS, *SEMIDEFINITE programming, *MACHINE learning, *DIRECTED graphs
- Abstract
We study the quadratic cycle cover problem (QCCP), which aims to find a node-disjoint cycle cover in a directed graph with minimum interaction cost between successive arcs. We derive several semidefinite programming (SDP) relaxations and use facial reduction to make these strictly feasible. We investigate a nontrivial relationship between the transformation matrix used in the reduction and the structure of the graph, which is exploited in an efficient algorithm that constructs this matrix for any instance of the problem. To solve our relaxations, we propose an algorithm that incorporates an augmented Lagrangian method into a cutting-plane framework by utilizing Dykstra's projection algorithm. Our algorithm is suitable for solving SDP relaxations with a large number of cutting-planes. Computational results show that our SDP bounds and efficient cutting-plane algorithm outperform other QCCP bounding approaches from the literature. Finally, we provide several SDP-based upper bounding techniques, among which is a sequential Q-learning method that exploits a solution of our SDP relaxation within a reinforcement learning environment. Summary of Contribution: The quadratic cycle cover problem (QCCP) is the problem of finding a set of node-disjoint cycles covering all the nodes in a graph such that the total interaction cost between successive arcs is minimized. The QCCP has applications in many fields, among which are robotics, transportation, energy distribution networks, and automatic inspection. Besides this, the problem has a high theoretical relevance because of its close connection to the quadratic traveling salesman problem (QTSP). The QTSP has several applications, for example, in bioinformatics, and is considered to be among the most difficult combinatorial optimization problems nowadays. After removing the subtour elimination constraints, the QTSP boils down to the QCCP. 
Hence, an in-depth study of the QCCP also contributes to the construction of strong bounds for the QTSP. In this paper, we study the application of semidefinite programming (SDP) to obtain strong bounds for the QCCP. Our strongest SDP relaxation is very hard to solve by any SDP solver because of the large number of involved cutting-planes. Because of that, we propose a new approach in which an augmented Lagrangian method is incorporated into a cutting-plane framework by utilizing Dykstra's projection algorithm. We emphasize an efficient implementation of the method and perform an extensive computational study. This study shows that our method is able to handle a large number of cuts and that the resulting bounds are currently the best QCCP bounds in the literature. We also introduce several upper bounding techniques, among which is a distributed reinforcement learning algorithm that exploits our SDP relaxations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
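Dykstra's projection algorithm, which the abstract above uses inside its cutting-plane framework, finds the Euclidean projection onto an intersection of convex sets by cycling through the individual projections with correction terms. A minimal sketch on two toy sets (a box and a hyperplane, chosen for illustration, not the paper's SDP cones):

```python
import numpy as np

def proj_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]^n
    return np.clip(x, lo, hi)

def proj_hyperplane(x, a, b):
    # Euclidean projection onto {x : a @ x = b}
    return x - (a @ x - b) / (a @ a) * a

def dykstra(x0, projections, iters=500):
    """Dykstra's alternating projections onto an intersection of convex sets."""
    x = x0.astype(float).copy()
    increments = [np.zeros_like(x) for _ in projections]
    for _ in range(iters):
        for i, proj in enumerate(projections):
            y = proj(x + increments[i])
            increments[i] = x + increments[i] - y   # correction term
            x = y
    return x

# project (2, 0.5) onto the box [0,1]^2 intersected with {x1 + x2 = 1}
sets = [lambda v: proj_box(v, 0.0, 1.0),
        lambda v: proj_hyperplane(v, np.array([1.0, 1.0]), 1.0)]
x_star = dykstra(np.array([2.0, 0.5]), sets)
```

Unlike plain alternating projections, the correction terms make the limit the actual nearest point of the intersection (here the corner (1, 0)), which is what a projection subroutine inside an augmented Lagrangian method needs.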
6. Critical Appraisal of a Machine Learning Paper: A Guide for the Neurologist.
- Author
- Vinny, Pulikottil W., Garg, Rahul, Srivastava, M. V. Padma, Lal, Vivek, and Vishnu, Venugoapalan Y.
- Subjects
- *DEEP learning, *NEUROLOGISTS, *EVIDENCE-based medicine, *MACHINE learning, *BENCHMARKING (Management), *TERMS & phrases, *ARTIFICIAL neural networks, *PREDICTION models, *ALGORITHMS
- Abstract
Machine learning (ML), a form of artificial intelligence (AI), is being increasingly employed in neurology. Reported performance metrics often match or exceed the efficiency of average clinicians. The neurologist is easily baffled by the underlying concepts and terminologies associated with ML studies. The superlative performance metrics of ML algorithms often hide the opaque nature of their inner workings. Questions regarding an ML model's interpretability and the reproducibility of its results in real-world scenarios need emphasis. Given an abundance of time and information, the expert clinician should be able to deliver predictions comparable to ML models, a useful benchmark while evaluating their performance. Predictive performance metrics of ML models should not be confused with causal inference between their input and output. ML and clinical gestalt should compete in a randomized controlled trial before they can complement each other for screening, triaging, providing second opinions, and modifying treatment. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. Canadian Association of Radiologists White Paper on De-identification of Medical Imaging: Part 2, Practical Considerations.
- Author
- Parker, William, Jaremko, Jacob L., Cicero, Mark, Azar, Marleine, El-Emam, Khaled, Gray, Bruce G., Hurrell, Casey, Lavoie-Cardinal, Flavie, Desjardins, Benoit, Lum, Andrea, Sheremeta, Lori, Lee, Emil, Reinhold, Caroline, Tang, An, and Bromwich, Rebecca
- Subjects
- *ALGORITHMS, *ARTIFICIAL intelligence, *DATA encryption, *DATABASE management, *DIAGNOSTIC imaging, *HEALTH services accessibility, *MACHINE learning, *MEDICAL protocols, *DICOM (Computer network protocol), *COVID-19 pandemic
- Abstract
The application of big data, radiomics, machine learning, and artificial intelligence (AI) algorithms in radiology requires access to large data sets containing personal health information. Because machine learning projects often require collaboration between different sites or data transfer to a third party, precautions are required to safeguard patient privacy. Safety measures are required to prevent inadvertent access to and transfer of identifiable information. The Canadian Association of Radiologists (CAR) is the national voice of radiology committed to promoting the highest standards in patient-centered imaging, lifelong learning, and research. The CAR has created an AI Ethical and Legal standing committee with the mandate to guide the medical imaging community in terms of best practices in data management, access to health care data, de-identification, and accountability practices. Part 2 of this article will inform CAR members on the practical aspects of medical imaging de-identification, strengths and limitations of de-identification approaches, list of de-identification software and tools available, and perspectives on future directions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
8. Physics driven behavioural clustering of free-falling paper shapes.
- Author
- Howison, Toby, Hughes, Josie, Giardina, Fabio, and Iida, Fumiya
- Subjects
- *PHYSICS, *SET functions, *MACHINE learning, *PHENOMENOLOGICAL theory (Physics), *CONTINUUM mechanics
- Abstract
Many complex physical systems exhibit a rich variety of discrete behavioural modes. Often, the system complexity limits the applicability of standard modelling tools. Hence, understanding the underlying physics of different behaviours and distinguishing between them is challenging. Although traditional machine learning techniques could predict and classify behaviour well, typically they do not provide any meaningful insight into the underlying physics of the system. In this paper we present a novel method for extracting physically meaningful clusters of discrete behaviour from limited experimental observations. This method obtains a set of physically plausible functions that both facilitate behavioural clustering and aid in system understanding. We demonstrate the approach on the V-shaped falling paper system, a new falling paper type system that exhibits four distinct behavioural modes depending on a few morphological parameters. Using just 49 experimental observations, the method discovered a set of candidate functions that distinguish behaviours with an error of 2.04%, while also aiding insight into the physical phenomena driving each behaviour. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
9. Citation recommendation using modified HITS algorithm.
- Author
- Kammari, Monachary and Bhavani, S. Durga
- Subjects
- *DEEP learning, *ALGORITHMS, *COMPUTER performance, *WEBSITES, *MACHINE learning
- Abstract
The number of research publications per year has been growing exponentially. Finding high-quality research papers in the massive literature of relevant articles is a challenging and time-consuming task. The approaches in the latest literature address citation recommendation by utilizing large bibliographic information and use machine learning and deep learning methods for the task. These techniques clearly require a large amount of training data as well as machines with high processing power. To overcome these issues, we propose a novel method by modifying the popular hyperlink induced topic search (HITS) web page ranking algorithm as citation recommendation using hyperlink induced topic search (CR-HITS), which works on a directed and weighted heterogeneous bibliographic network containing diverse types of nodes and edges. We define effective scoring schemes for nodes and edges based on basic bibliographic information such as citations of papers, number of publications of an author, etc. Given a few seed papers, the citation recommendation algorithm CR-HITS is run on small neighborhoods of the seed papers, and hence the execution time needed to yield the final recommendations is very small. To the best of our knowledge, HITS has been used for the first time for the citation recommendation problem. We perform extensive experimentation on the DBLP (version-11) and ACM (version-9) datasets and compare the results with many baseline methods in terms of MAP, MRR, and recall@N measures. The performance of the proposed algorithms is superior with respect to the MAP metric and matches the second best for the other two metrics. Since the top two algorithms use deep learning methods and much larger bibliographic information, including abstracts of the papers, we claim that our approach utilizes very low resources yet yields recommendations that are very close to the top recommendations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
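The classic HITS iteration that CR-HITS above builds on can be sketched in a few lines. The four-paper citation graph is a hypothetical toy example (CR-HITS itself adds node/edge weights and a heterogeneous network, which are omitted here):

```python
import numpy as np

# toy citation graph (hypothetical): A[i, j] = 1 if paper i cites paper j
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 0],
], dtype=float)

hubs = np.ones(4)
auth = np.ones(4)
for _ in range(50):            # power iteration until convergence
    auth = A.T @ hubs          # good authorities are cited by good hubs
    auth /= np.linalg.norm(auth)
    hubs = A @ auth            # good hubs cite good authorities
    hubs /= np.linalg.norm(hubs)
```

Paper 2, cited by all others, converges to the top authority score, while paper 3, which cites the most papers, becomes the top hub; a citation recommender would then surface the top authorities around the seed papers.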
10. Channel Prediction for Underwater Acoustic Communication: A Review and Performance Evaluation of Algorithms.
- Author
- Liu, Haotian, Ma, Lu, Wang, Zhaohui, and Qiao, Gang
- Subjects
- *DEEP learning, *UNDERWATER acoustic communication, *MACHINE learning, *ALGORITHMS, *TELECOMMUNICATION systems, *FORECASTING
- Abstract
Underwater acoustic (UWA) channel prediction technology, as an important topic in UWA communication, has played an important role in UWA adaptive communication network and underwater target perception. Although many significant advancements have been achieved in underwater acoustic channel prediction over the years, a comprehensive summary and introduction is still lacking. As the first comprehensive overview of UWA channel prediction, this paper introduces past works and algorithm implementation methods of channel prediction from the perspective of linear, kernel-based, and deep learning approaches. Importantly, based on available at-sea experiment datasets, this paper compares the performance of current primary UWA channel prediction algorithms under a unified system framework, providing researchers with a comprehensive and objective understanding of UWA channel prediction. Finally, it discusses the directions and challenges for future research. The survey finds that linear prediction algorithms are the most widely applied, and deep learning, as the most advanced type of algorithm, has moved this field into a new stage. The experimental results show that the linear algorithms have the lowest computational complexity, and when the training samples are sufficient, deep learning algorithms have the best prediction performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
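The linear prediction algorithms the survey above finds most widely used amount to regressing a channel sample on its recent past. A minimal least-squares sketch on a synthetic series (the AR(2) "channel tap" process and predictor order are hypothetical, not from the survey's at-sea data):

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic channel-tap series: a slowly varying AR(2) process (hypothetical)
n = 500
h = np.zeros(n)
for t in range(2, n):
    h[t] = 1.5 * h[t - 1] - 0.7 * h[t - 2] + 0.05 * rng.normal()

# linear prediction: regress h[t] on its previous p samples
p = 4
X = np.column_stack([h[p - k - 1 : n - k - 1] for k in range(p)])
y = h[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
mse = float(np.mean((pred - y) ** 2))
```

For a correlated channel the one-step prediction error is far below the raw signal variance, which is the low-complexity baseline the survey compares kernel and deep learning predictors against.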
11. Anomaly Detection in Blockchain Networks Using Unsupervised Learning: A Survey.
- Author
- Cholevas, Christos, Angeli, Eftychia, Sereti, Zacharoula, Mavrikos, Emmanouil, and Tsekouras, George E.
- Subjects
- *DATA structures, *MACHINE learning, *PRIVATE networks, *BLOCKCHAINS, *ALGORITHMS
- Abstract
In decentralized systems, the quest for heightened security and integrity within blockchain networks becomes an issue. This survey investigates anomaly detection techniques in blockchain ecosystems through the lens of unsupervised learning, delving into the intricacies and going through the complex tapestry of abnormal behaviors by examining avant-garde algorithms to discern deviations from normal patterns. By seamlessly blending technological acumen with a discerning gaze, this survey offers a perspective on the symbiotic relationship between unsupervised learning and anomaly detection by reviewing this problem with a categorization of algorithms that are applied to a variety of problems in this field. We propose that the use of unsupervised algorithms in blockchain anomaly detection should be viewed not only as an implementation procedure but also as an integration procedure, where the merits of these algorithms can effectively be combined in ways determined by the problem at hand. In that sense, the main contribution of this paper is a thorough study of the interplay between various unsupervised learning algorithms and how this can be used in facing malicious activities and behaviors within public and private blockchain networks. The result is the definition of three categories, the characteristics of which are recognized in terms of the way the respective integration takes place. When implementing unsupervised learning, the structure of the data plays a pivotal role. Therefore, this paper also provides an in-depth presentation of the data structures commonly used in unsupervised learning-based blockchain anomaly detection. The above analysis is encircled by a presentation of the typical anomalies that have occurred so far along with a description of the general machine learning frameworks developed to deal with them. Finally, the paper spotlights challenges and directions that can serve as a comprehensive compendium for future research efforts. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
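As a minimal instance of the unsupervised anomaly detection the survey above categorizes, one can flag the point with the largest Mahalanobis distance from the bulk of the data. The two "transaction features" and the injected outlier are hypothetical; the survey covers far richer detectors (clustering, isolation-based, deep models).

```python
import numpy as np

rng = np.random.default_rng(2)
# toy transaction features (value, fee): a normal cluster plus one outlier
normal = rng.normal([1.0, 0.01], [0.2, 0.002], size=(200, 2))
outlier = np.array([[50.0, 0.5]])
X = np.vstack([normal, outlier])

# Mahalanobis distance of every point from the empirical distribution
mu = X.mean(axis=0)
cov = np.cov(X, rowvar=False)
inv = np.linalg.inv(cov)
d = np.sqrt(np.einsum('ij,jk,ik->i', X - mu, inv, X - mu))

flagged = int(d.argmax())      # index of the most anomalous sample
```

No labels are used at any point: the anomaly is defined purely by its distance from the data's own covariance structure, the defining trait of unsupervised detection.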
12. Cognitive decline assessment using semantic linguistic content and transformer deep learning architecture.
- Author
- PL, Rini and KS, Gayathri
- Subjects
- *DIAGNOSIS of dementia, *COGNITION disorders diagnosis, *SPEECH evaluation, *CROSS-sectional method, *PREDICTION models, *TASK performance, *DESCRIPTIVE statistics, *NATURAL language processing, *LINGUISTICS, *EXPERIMENTAL design, *DEEP learning, *COMPUTER-aided diagnosis, *LATENT semantic analysis, *NEUROPSYCHOLOGICAL tests, *RESEARCH, *SEMANTIC memory, *EARLY diagnosis, *COMPARATIVE studies, *MACHINE learning, *FACTOR analysis, *ALGORITHMS, *DEMENTIA patients
- Abstract
Background: Dementia is a cognitive decline that leads to the progressive deterioration of an individual's ability to perform daily activities independently. As a result, a considerable amount of time and resources are spent on caretaking. Early detection of dementia can significantly reduce the effort and resources needed for caretaking. Aims: This research proposes an approach for assessing cognitive decline by analysing speech data, specifically focusing on speech relevance as a crucial indicator for memory recall. Methods & Procedures: This is a cross-sectional, online, self-administered study. The proposed method used a deep learning architecture based on transformers, with BERT (Bidirectional Encoder Representations from Transformers) and Sentence-Transformer to derive encoded representations of speech transcripts. These representations provide contextually descriptive information that is used to analyse the relevance of sentences in their respective contexts. The encoded information is then compared using cosine similarity metrics to measure the relevance of uttered sequences of sentences. The study uses the Pitt Corpus Dementia dataset for experimentation, which consists of speech data from individuals with and without dementia. The accuracy of the proposed multi-QA-MPNet (Multi-Query Maximum Inner Product Search Pretraining) model is compared with other pretrained transformer models of Sentence-Transformer. Outcomes & Results: The results show that the proposed approach outperforms the other models in capturing context-level information, particularly semantic memory. Additionally, the study explores the suitability of different similarity measures to evaluate the relevance of uttered sequences of sentences. The experimentation reveals that cosine similarity is the most appropriate measure for this task. 
Conclusions & Implications: This finding has significant implications for the early warning signs of dementia, as it suggests that cosine similarity metrics can effectively capture the semantic relevance of spoken language. The persistent cognitive decline over time acts as one of the indicators for prevalence of dementia. Additionally early dementia could be recognised by analysis on other modalities like speech and brain images. WHAT THIS PAPER ADDS: What is already known on this subject: It is already known that speech‐ and language‐based detection methods can be useful for dementia diagnosis, as language difficulties are often early signs of the disease. Additionally, deep learning algorithms have shown promise in detecting and diagnosing dementia through analysing large datasets, particularly in speech‐ and language‐based detection methods. However, further research is needed to validate the performance of these algorithms on larger and more diverse datasets and to address potential biases and limitations. What this paper adds to existing knowledge: This study presents a unique and effective approach for cognitive decline assessment through analysing speech data. The study provides valuable insights into the importance of context and semantic memory in accurately detecting the potential in dementia and demonstrates the applicability of deep learning models for this purpose. The findings of this study have important clinical implications and can inform future research and development in the field of dementia detection and care. What are the potential or actual clinical implications of this work?: The proposed approach for cognitive decline assessment using speech data and deep learning models has significant clinical implications. It has the potential to improve the accuracy and efficiency of dementia diagnosis, leading to earlier detection and more effective treatments, which can improve patient outcomes and quality of life. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
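The cosine-similarity comparison of sentence embeddings described above reduces to a normalized dot product. A minimal sketch with hand-made vectors (real Sentence-Transformer embeddings would be higher-dimensional; the values here are hypothetical):

```python
import numpy as np

def cosine(u, v):
    # cosine similarity: dot product of the length-normalized vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# hypothetical sentence embeddings: a and b "say" similar things, c does not
a = np.array([0.2, 0.9, 0.1])
b = np.array([0.25, 0.85, 0.05])
c = np.array([-0.8, 0.1, 0.6])
```

Because the measure depends only on direction, two sentences expressed at different lengths but with the same meaning score close to 1, which is why it suits judging whether an uttered sentence is relevant to its context.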
13. VIS-SLAM: A Real-Time Dynamic SLAM Algorithm Based on the Fusion of Visual, Inertial, and Semantic Information.
- Author
- Wang, Yinglong, Liu, Xiaoxiong, Zhao, Minkun, and Xu, Xinlong
- Subjects
- *MOBILE robots, *MACHINE learning, *MOBILE learning, *DEEP learning, *ALGORITHMS, *INFORMATION measurement, *PROBABILITY theory, *GEOMETRY
- Abstract
Addressing the limited real-time performance of deep learning algorithms and the poor robustness of purely visual geometry algorithms, this paper presents a deep learning-based visual-inertial SLAM technique that ensures accurate autonomous localization of mobile robots in environments with dynamic objects. Firstly, a non-blocking model is designed to extract semantic information from images. Then, a motion probability hierarchy model is proposed to obtain prior motion probabilities of feature points. For image frames without semantic information, a motion probability propagation model is designed to determine the prior motion probabilities of feature points. Furthermore, considering that the output of inertial measurements is unaffected by dynamic objects, this paper integrates inertial measurement information to improve the estimation accuracy of feature point motion probabilities. An adaptive threshold-based motion probability estimation method is proposed, and finally, the positioning accuracy is enhanced by eliminating feature points with excessively high motion probabilities. Experimental results demonstrate that the proposed algorithm achieves accurate localization in dynamic environments while maintaining real-time performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Canadian Association of Radiologists White Paper on Ethical and Legal Issues Related to Artificial Intelligence in Radiology.
- Author
- Jaremko, Jacob L., Azar, Marleine, Bromwich, Rebecca, Lum, Andrea, Alicia Cheong, Li Hsia, Gibert, Martin, Laviolette, François, Gray, Bruce, Reinhold, Caroline, Cicero, Mark, Chong, Jaron, Shaw, James, Rybicki, Frank J., Hurrell, Casey, Lee, Emil, and Tang, An
- Subjects
- *ARTIFICIAL intelligence laws, *ACQUISITION of property, *ALGORITHMS, *ARTIFICIAL intelligence, *AUTONOMY (Psychology), *CONCEPTUAL structures, *MEDICAL ethics, *MEDICAL practice, *MEDICAL specialties & specialists, *PRIVACY, *RADIOLOGISTS, *DATA security
- Abstract
Artificial intelligence (AI) software that analyzes medical images is becoming increasingly prevalent. Unlike earlier generations of AI software, which relied on expert knowledge to identify imaging features, machine learning approaches automatically learn to recognize these features. However, the promise of accurate personalized medicine can only be fulfilled with access to large quantities of medical data from patients. This data could be used for purposes such as predicting disease, diagnosis, treatment optimization, and prognostication. Radiology is positioned to lead development and implementation of AI algorithms and to manage the associated ethical and legal challenges. This white paper from the Canadian Association of Radiologists provides a framework for study of the legal and ethical issues related to AI in medical imaging, related to patient data (privacy, confidentiality, ownership, and sharing); algorithms (levels of autonomy, liability, and jurisprudence); practice (best practices and current legal framework); and finally, opportunities in AI from the perspective of a universal health care system. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
15. Research on health monitoring and damage recognition algorithm of building structures based on image processing.
- Author
- Tang, Sicong and Wang, Hailong
- Subjects
- *IMAGE processing, *MACHINE learning, *PARAMETER identification, *NOISE control, *ALGORITHMS, *IMAGE encryption, *DIGITAL images
- Abstract
With the continuous deepening of urbanization and the progress of science and technology, people transform and develop nature on an ever larger scale, the most iconic transformation being the variety of building structures they construct. With the passage of time, building structures exposed year-round to wind and sun show signs of "illness"; if these are not treated in time, they will have a huge impact on the stability and safety of the structure. Based on this, and according to the characteristics of crack identification on concrete surfaces, this paper selects a background subtraction algorithm for image noise reduction. Through the three steps of digital image noise reduction, crack extraction, and crack parameter identification, quantitative crack recognition is completed and a complete crack parameter identification system is formed. The experimental results show that the machine learning model for the building structure health monitoring and damage recognition algorithm proposed in this paper has excellent statistical performance, and the relative error of recognition can be controlled within 10%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
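The background-subtraction idea behind the crack extraction step above can be sketched on a synthetic image. Everything here is hypothetical for illustration (a flat surface with sensor noise, a global-median background instead of a proper filtered background, and an arbitrary darkness threshold):

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic grayscale "concrete surface" with a dark horizontal crack
img = np.full((64, 64), 200.0)
img += rng.normal(0, 5, img.shape)      # sensor noise
img[30:34, :] = 60.0                    # crack pixels are much darker

# background estimate; a local median filter is typical, a global median
# is enough for this flat toy surface
background = np.median(img)

# crack mask: pixels far darker than the estimated background
mask = (background - img) > 50
crack_ratio = float(mask.mean())        # fraction of pixels flagged as crack
```

From the binary mask, crack parameters such as width and length can then be measured by counting flagged pixels per row and column.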
16. Classification of high-dimensional imbalanced biomedical data based on spectral clustering SMOTE and marine predators algorithm.
- Author
- Qin, Xiwen, Zhang, Siqi, Dong, Xiaogang, Shi, Hongyu, and Yuan, Liping
- Subjects
- *LINEAR operators, *CLASSIFICATION, *ALGORITHMS, *LEARNING strategies, *FEATURE selection, *LOTKA-Volterra equations, *MACHINE learning, *RANDOM forest algorithms
- Abstract
The research of biomedical data is crucial for disease diagnosis, health management, and medicine development. However, biomedical data are usually characterized by high dimensionality and class imbalance, which increase computational cost and degrade the classification performance on the minority class, making accurate classification difficult. In this paper, we propose a biomedical data classification method based on feature selection and data resampling. First, the minimal-redundancy maximal-relevance (mRMR) method is used to select biomedical data features, reducing the feature dimension and computational cost and improving generalization ability; then, a new SMOTE oversampling method (Spectral-SMOTE) is proposed, which solves the noise sensitivity problem of SMOTE through an improved spectral clustering method; finally, the marine predators algorithm is improved using piecewise linear chaotic maps and a random opposition-based learning strategy to improve its optimization-seeking ability and convergence speed, and the key parameters of Spectral-SMOTE are optimized using the improved marine predators algorithm, which effectively improves the performance of the oversampling approach. In this paper, five real biomedical datasets are selected to test and evaluate the proposed method using four classifiers, and three evaluation metrics are used to compare with seven data resampling methods. The experimental results show that the method effectively improves the classification performance on biomedical data. Statistical test results also show that the proposed PRMPA-Spectral-SMOTE method outperforms other data resampling methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
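The core SMOTE interpolation that Spectral-SMOTE above refines can be sketched in plain NumPy: each synthetic minority sample is placed on the segment between a real sample and one of its nearest minority neighbors. The data, neighbor count, and brute-force neighbor search are hypothetical simplifications (the paper's contribution, spectral clustering plus parameter tuning, is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(4)

def smote(X_min, n_new, k=3):
    """Generate n_new synthetic minority samples by k-NN interpolation."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]          # k nearest, skipping the point itself
        j = rng.choice(nn)
        lam = rng.random()                   # random position along the segment
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = rng.normal(0, 1, (10, 2))            # toy minority class
synth = smote(X_min, 20)
```

Because every synthetic point is a convex combination of two real minority samples, the new samples stay inside the minority region, which is also why noisy minority points are harmful: interpolating toward them spreads the noise, the problem Spectral-SMOTE targets.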
17. An Algorithm of Complete Coverage Path Planning for Deep‐Sea Mining Vehicle Clusters Based on Reinforcement Learning.
- Author
- Xing, Bowen, Wang, Xiao, and Liu, Zhenchong
- Subjects
- *DEEP reinforcement learning, *MACHINE learning, *OCEAN mining, *ALGORITHMS
- Abstract
This paper proposes a deep reinforcement learning algorithm to achieve complete coverage path planning for deep‐sea mining vehicle clusters. First, the mining vehicles and the deep‐sea mining environment are modeled. Then, this paper implements a series of algorithm designs and optimizations based on Deep Q Networks (DQN). The map fusion mechanism can integrate the grid matrix data from multiple mining vehicles to get the state matrix of the complete environment. In this paper, a preprocessing method for the state matrix is also designed to provide suitable training data for the neural network. The reward function and action selection mechanism of the algorithm are also optimized according to the requirements of cluster cooperative operation. Furthermore, the algorithm uses distance constraints to prevent the entanglement of underwater hoses. To improve the training efficiency of the neural network, the algorithm filters and extracts training samples for training through the sample quality score. Considering the requirement of cluster complete coverage mission, this paper introduces Long Short‐Term Memory (LSTM) based on the neural network to achieve a better training effect. After completing the above optimization and design, the algorithm proposed in this paper is verified through simulation experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
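One plausible reading of the map fusion mechanism described in the abstract above is an element-wise merge of per-vehicle coverage grids into a single global state matrix. This sketch is an assumption about the mechanism, not the authors' implementation; `fuse_maps` is an illustrative name.

```python
def fuse_maps(grids):
    """Fuse per-vehicle coverage grids into one global state matrix.
    Cells are 0 (unvisited) or 1 (covered); a cell counts as covered
    if any vehicle has covered it, i.e. an element-wise maximum."""
    rows, cols = len(grids[0]), len(grids[0][0])
    return [[max(g[r][c] for g in grids) for c in range(cols)]
            for r in range(rows)]

vehicle_a = [[1, 1, 0],
             [0, 0, 0]]
vehicle_b = [[0, 0, 0],
             [0, 1, 1]]
fused = fuse_maps([vehicle_a, vehicle_b])
coverage = sum(map(sum, fused)) / 6  # fraction of cells covered
```

A fused matrix like this is the kind of complete-environment state that could then be preprocessed into training input for the DQN.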
18. A term extraction algorithm based on machine learning and comprehensive feature strategy.
- Author
-
Gong, Xiuliang, Cheng, Bo, Hu, Xiaomei, and Bo, Wen
- Subjects
- *
MACHINE learning , *NATURAL language processing , *ALGORITHMS , *RANDOM fields , *ONTOLOGIES (Information retrieval) , *DATABASES , *MACHINE translating - Abstract
Manual term extraction works much as the phrase suggests: a translator browses the text, classifies words, and prepares for translation. Terminology, as a concentrated carrier of expertise, dynamically reflects the development and evolution of an industry through the creation, popularization, and disappearance of terms. Automatic term extraction is a key technology for building professional terminology databases and a key topic in natural language processing. The purpose of this paper is to study a term extraction algorithm based on machine learning and a comprehensive feature strategy. Addressing the poor generality and reliance on single statistical features of current term extraction algorithms, this paper proposes an improved domain ontology term extraction algorithm based on a comprehensive feature strategy. Moreover, automatic term extraction experiments are conducted with a word-based maximum entropy model and a word-based conditional random field model; the conditional random field model outperforms the maximum entropy model. The experimental results show that the algorithm based on the comprehensive feature strategy improves accuracy by 8.6% compared with the TF-IDF and C-value term extraction algorithms. The algorithm can effectively extract the terms in a text and has good generality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
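TF-IDF, one of the baselines the abstract above compares against, weights a term by its in-document frequency discounted by how many documents contain it. A minimal sketch over tokenised documents (raw counts as tf, natural-log idf; other tf/idf variants exist):

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF score for each term in each tokenised document:
    tf(t, d) * log(N / df(t)), with raw term counts as tf."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequency
    return [{t: c * math.log(n / df[t]) for t, c in Counter(d).items()}
            for d in docs]

docs = [["neural", "term", "extraction"],
        ["term", "frequency", "term"],
        ["ontology", "extraction"]]
scores = tf_idf(docs)
```

Note how "frequency" (rare across documents) outscores "term" in the second document even though "term" occurs twice there; purely statistical scores like this are exactly the single-feature signals the paper's comprehensive feature strategy aims to go beyond.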
19. Scientific papers and artificial intelligence. Brave new world?
- Author
-
Nexøe, Jørgen
- Subjects
- *
COMPUTERS , *MANUSCRIPTS , *ARTIFICIAL intelligence , *MACHINE learning , *DATA analysis , *MEDICAL literature , *MEDICAL research , *ALGORITHMS - Published
- 2023
- Full Text
- View/download PDF
20. Stabilization of parareal algorithms for long-time computation of a class of highly oscillatory Hamiltonian flows using data.
- Author
-
Fang, Rui and Tsai, Richard
- Subjects
- *
HAMILTON'S principle function , *HAMILTONIAN graph theory , *ALGORITHMS , *EIGENFUNCTIONS , *MULTISCALE modeling , *HAMILTONIAN systems , *PROBLEM solving , *PROOF of concept - Abstract
Applying parallel-in-time algorithms to multiscale Hamiltonian systems to obtain stable long-time simulations is very challenging. In this paper, we present novel data-driven methods aimed at improving the standard parareal algorithm developed by Lions et al. in 2001, for multiscale Hamiltonian systems. The first method involves constructing a correction operator to improve a given inaccurate coarse solver through solving a Procrustes problem using data collected online along parareal trajectories. The second method involves constructing an efficient, high-fidelity solver by a neural network trained with offline generated data. For the second method, we address the issues of effective data generation and proper loss function design based on the Hamiltonian function. We show proof-of-concept by applying the proposed methods to a Fermi-Pasta-Ulam (FPU) problem. The numerical results demonstrate that the Procrustes parareal method is able to produce solutions that are more stable in energy compared to the standard parareal. The neural network solver can achieve comparable or better runtime performance compared to numerical solvers of similar accuracy. When combined with the standard parareal algorithm, the improved neural network solutions are slightly more stable in energy than the improved numerical coarse solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
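The standard parareal iteration that the abstract above improves on alternates a cheap coarse propagator G with an accurate fine propagator F via the correction U_{k+1}[n+1] = G(U_{k+1}[n]) + F(U_k[n]) - G(U_k[n]). A minimal sketch for the scalar test problem dy/dt = -y (not the paper's multiscale Hamiltonian setting; solver choices here are assumptions):

```python
import math

def parareal(y0, t_end, n_slices, n_iters):
    """Basic parareal iteration for dy/dt = -y.
    Coarse solver G: one explicit Euler step per time slice.
    Fine solver F: the exact propagator exp(-dt), standing in for an
    expensive accurate integrator."""
    dt = t_end / n_slices
    G = lambda y: y * (1.0 - dt)        # coarse: one Euler step
    F = lambda y: y * math.exp(-dt)     # fine: exact over the slice

    # iteration 0: pure coarse sweep over all slices
    U = [y0]
    for _ in range(n_slices):
        U.append(G(U[-1]))

    for _ in range(n_iters):
        new = [y0]
        for n in range(n_slices):
            # predictor-corrector update: G(new) + F(old) - G(old)
            new.append(G(new[-1]) + F(U[n]) - G(U[n]))
        U = new
    return U

exact = math.exp(-1.0)
coarse_err = abs(parareal(1.0, 1.0, 10, 0)[-1] - exact)
para_err = abs(parareal(1.0, 1.0, 10, 3)[-1] - exact)
```

The fine solves in the correction term are independent across slices, which is what makes the scheme parallel-in-time; the paper's Procrustes correction and neural-network solver respectively replace the inaccurate G and the expensive F in this loop.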
21. Experimental verification of a data-driven algorithm for drive-by bridge condition monitoring.
- Author
-
Corbally, Robert and Malekjafarian, Abdollah
- Subjects
- *
ARTIFICIAL neural networks , *BRIDGES , *FREQUENCY spectra , *MACHINE learning , *STRUCTURAL health monitoring , *ALGORITHMS - Abstract
As the world's transport infrastructure ages, the importance of bridge condition monitoring is becoming increasingly acknowledged. Large-scale deployment of existing inspection and monitoring techniques is infeasible due to cost and logistical challenges. The concept of using sensors located within vehicles for low-cost 'drive-by' monitoring has become the focus of much attention in recent years. This paper presents a new data-driven approach for drive-by bridge monitoring. Machine learning techniques are leveraged to allow the influence of vehicle speed to be considered, and the Operating Deflection Shape Ratio (ODSR) is presented as an alternative damage-sensitive feature to the commonly used frequency spectrum. Extensive laboratory experiments demonstrate that the method is capable of detecting midspan cracking and seized bearings. A statistical classification approach is adopted to classify damage indicators as either 'damaged' or 'healthy'. Classification accuracy varies between 65% and 96% and is similar whether using the frequency spectrum or the ODSR. Based on the results of the laboratory testing, it is expected that this approach could be implemented on a large scale to act as an early warning tool for infrastructure owners to identify bridges presenting signs of distress or deterioration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Improved minority attack detection in Intrusion Detection System using efficient feature selection algorithms.
- Author
-
Rejimol Robinson, R. R., Anagha Madhav, K. P., and Thomas, Ciza
- Subjects
- *
FEATURE selection , *MACHINE learning , *INTRUSION detection systems (Computer security) , *COMPUTER network traffic , *SUPERVISED learning , *ALGORITHMS - Abstract
Machine learning and data mining algorithms are used extensively to enhance the performance of intrusion detection systems. The number of training instances and the dimensionality of data are crucial factors affecting the performance of the model built during the training of any supervised learning algorithm. A sufficient proportion of instances having relevant features from all classes of attacks and normal traffic is considered most desirable when building the classification model that classifies network traffic into attack and normal. This paper proposes a methodology to improve the accuracy of the model by giving importance to the relevant features that can contribute to model building. Feature selection using correlation-based and information gain-based techniques during training and testing contributes much to the detection of stealthier attacks and minority attacks. The features of the less-detected attacks are then identified in a second filter phase that is used to improve performance. The relevant features of stealthy attacks are identified based on the correlation of corresponding features of the attack and normal data, as attacks are made stealthy mostly by making them resemble normal traffic. Finally, the attacks that are rarely found in the training data are oversampled to improve their detection. The CICIDS 2017 data set is employed as it comprises stealthier attacks generated using modern tools. The NSL-KDD data set is also used for evaluation to compare the proposed work with the existing literature, as it is used in most of the available literature. The results show superior performance with an accuracy of 99.8%, a false positive rate of 0.2%, and a detection rate of 99.8%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
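The information gain-based feature selection mentioned in the abstract above ranks each feature by how much knowing its value reduces uncertainty about the class label. A minimal sketch for discrete features (illustrative only; not the paper's pipeline):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Information gain of a discrete feature with respect to the class
    label: H(labels) minus the weighted entropy of each feature partition."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

labels  = ["attack", "attack", "normal", "normal"]
perfect = ["a", "a", "b", "b"]   # splits the classes exactly
useless = ["a", "b", "a", "b"]   # independent of the class
gain_hi = information_gain(perfect, labels)
gain_lo = information_gain(useless, labels)
```

A feature that perfectly separates attack from normal traffic scores the full class entropy (1 bit here), while a class-independent feature scores zero, which is the basis for filtering out irrelevant dimensions before training.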
23. A Semi-Automatic Magnetic Resonance Imaging Annotation Algorithm Based on Semi-Weakly Supervised Learning.
- Author
-
Chen, Shaolong and Zhang, Zhiyong
- Subjects
- *
MAGNETIC resonance imaging , *SUPERVISED learning , *MACHINE learning , *ITERATIVE learning control , *ALGORITHMS , *ANNOTATIONS , *DEEP learning - Abstract
The annotation of magnetic resonance imaging (MRI) images plays an important role in deep learning-based MRI segmentation tasks. Semi-automatic annotation algorithms are helpful for improving the efficiency and reducing the difficulty of MRI image annotation. However, the existing semi-automatic annotation algorithms based on deep learning have poor pre-annotation performance in the case of insufficient segmentation labels. In this paper, we propose a semi-automatic MRI annotation algorithm based on semi-weakly supervised learning. In order to achieve a better pre-annotation performance in the case of insufficient segmentation labels, semi-supervised and weakly supervised learning were introduced, and a semi-weakly supervised learning segmentation algorithm based on sparse labels was proposed. In addition, in order to improve the contribution rate of a single segmentation label to the performance of the pre-annotation model, an iterative annotation strategy based on active learning was designed. The experimental results on public MRI datasets show that the proposed algorithm achieved an equivalent pre-annotation performance when the number of segmentation labels was much less than that of the fully supervised learning algorithm, which proves the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. A Cross-View Geo-Localization Algorithm Using UAV Image and Satellite Image.
- Author
-
Fan, Jiqi, Zheng, Enhui, He, Yufei, and Yang, Jianxing
- Subjects
- *
REMOTE-sensing images , *TRANSFORMER models , *ALGORITHMS , *TECHNOLOGY transfer , *MACHINE learning - Abstract
Within research on the cross-view geolocation of UAVs, differences in image sources and interference from similar scenes pose huge challenges. Inspired by multimodal machine learning, in this paper, we design a single-stream pyramid transformer network (SSPT). The backbone of the model uses the self-attention mechanism to enrich its own internal features in the early stage and uses the cross-attention mechanism in the later stage to refine and interact with different features to eliminate irrelevant interference. In addition, in the post-processing part of the model, a header module is designed for upsampling to generate heat maps, and a Gaussian weight window is designed to assign label weights to make the model converge better. Together, these methods improve the positioning accuracy of UAV images in satellite images. Finally, we also use style transfer technology to simulate various environmental changes in order to expand the experimental data, further proving the environmental adaptability and robustness of the method. The final experimental results show that our method yields significant performance improvement: The relative distance score (RDS) of the SSPT-384 model on the benchmark UL14 dataset is significantly improved from 76.25% to 84.40%, while the meter-level accuracy (MA) of 3 m, 5 m, and 20 m is increased by 12%, 12%, and 10%, respectively. For the SSPT-256 model, the RDS has been increased to 82.21%, and the meter-level accuracy (MA) of 3 m, 5 m, and 20 m has increased by 5%, 5%, and 7%, respectively. It still shows strong robustness on the extended thermal infrared (TIR), nighttime, and rainy day datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. ULG-SLAM: A Novel Unsupervised Learning and Geometric Feature-Based Visual SLAM Algorithm for Robot Localizability Estimation.
- Author
-
Huang, Yihan, Xie, Fei, Zhao, Jing, Gao, Zhilin, Chen, Jun, Zhao, Fei, and Liu, Xixiang
- Subjects
- *
MACHINE learning , *VISUAL learning , *ALGORITHMS , *ROBOTS , *FEATURE extraction , *WALKING speed - Abstract
Indoor localization has long been a challenging task due to the complexity and dynamism of indoor environments. This paper proposes ULG-SLAM, a novel unsupervised learning and geometric feature-based visual SLAM algorithm for robot localizability estimation to improve the accuracy and robustness of visual SLAM. Firstly, dynamic feature filtering based on unsupervised learning and moving consistency checks is developed to eliminate the features of dynamic objects. Secondly, an improved line feature extraction algorithm based on LSD is proposed to optimize the effect of geometric feature extraction. Thirdly, geometric features are used to optimize localizability estimation, and an adaptive weight model and attention mechanism are built using the method of region delimitation and region growth. Finally, to verify the effectiveness and robustness of localizability estimation, multiple indoor experiments using the EuRoC dataset and TUM RGB-D dataset are conducted. Compared with ORB-SLAM2, the experimental results demonstrate that absolute trajectory accuracy can be improved by 95% for equivalent processing speed in walking sequences. In fr3/walking_xyz and fr3/walking_half, ULG-SLAM tracks more trajectories than DS-SLAM, and the ATE RMSE is improved by 36% and 6%, respectively. Furthermore, the improvement in robot localizability over DynaSLAM is noteworthy, coming in at about 11% and 3%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
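The ATE RMSE metric used in the abstract above is the root mean square of the Euclidean distances between corresponding positions on the estimated and ground-truth trajectories. A minimal sketch (trajectory alignment, which evaluation tools normally perform first, is assumed already done):

```python
import math

def ate_rmse(estimated, ground_truth):
    """Absolute trajectory error (ATE) RMSE: root mean square of the
    Euclidean distances between corresponding trajectory positions."""
    sq = [sum((a - b) ** 2 for a, b in zip(p, q))
          for p, q in zip(estimated, ground_truth)]
    return math.sqrt(sum(sq) / len(sq))

gt  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
est = [(0.0, 0.1), (1.0, -0.1), (2.0, 0.1)]
err = ate_rmse(est, gt)
```

Percentage improvements like the 36% and 6% reported above are relative reductions of this RMSE between two systems on the same sequence.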
26. An Intelligent Decision Algorithm for a Greenhouse System Based on a Rough Set and D-S Evidence Theory.
- Author
-
Lina Wang, Mengjie Xu, and Ying Zhang
- Subjects
- *
GREENHOUSES , *MACHINE learning , *ROUGH sets , *EXPERT evidence , *SUPPORT vector machines , *THEORY of knowledge , *ALGORITHMS , *SOFT sets - Abstract
This paper presents a decision-making approach grounded in rough set theory and evidential reasoning to address the demand for expert decision-making in greenhouse environmental control systems. Furthermore, a decision-making model is developed by integrating the D-S evidence theory with an expert knowledge table for greenhouse environmental control systems. The model's reasoning process encompasses continuous attribute discretization, expert decision table formation, attribute reduction, and evidence combination reasoning. Firstly, the fuzzy C-means clustering algorithm is employed to discretize the original environmental data and cluster it. Subsequently, an attribute reduction algorithm based on information entropy is utilized to optimize the decision table by eliminating unnecessary conditional attributes in expert knowledge. The reduced indicators are then combined using evidential theory. Finally, suitable greenhouse control methods are determined by the confidence decision proposed by the D-S evidence theory. To assess the efficacy of this intelligent decision-making algorithm based on rough set and D-S evidence theory, its performance is compared with traditional SVM algorithms and small-shot learning algorithms. The results indicate that this proposed method significantly enhances the credibility of control decision-making processes, with an average running time of 0.002378s for the fusion decision algorithm and 0.017939s for the support vector machine (SVM) algorithm, respectively. The SVM accuracy rate after testing and training stands at 90.34%. Moreover, retraining based on information entropy attribute reduction leads to a correct decision rate increase of up to 100%. This method notably improves confidence levels in decision-making processes while reducing uncertainty and demonstrates reliability when applied in making decisions regarding greenhouse environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
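The evidence combination step at the heart of the abstract above is Dempster's rule: pointwise products of the two mass functions, with conflicting (empty-intersection) mass discarded and the rest renormalised. A minimal sketch; the sensor names and mass values are invented for illustration, not taken from the paper:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose
    focal elements are frozensets of hypotheses."""
    combined = {}
    conflict = 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q  # mass assigned to incompatible hypotheses
    # normalise by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

heat = frozenset(["heat"])
vent = frozenset(["ventilate"])
m_temp     = {heat: 0.7, vent: 0.3}   # evidence from a temperature reading
m_humidity = {heat: 0.6, vent: 0.4}   # evidence from a humidity reading
fused = dempster_combine(m_temp, m_humidity)
```

Two independent sources that each mildly favour "heat" combine into a fused belief that favours it more strongly, which is how evidence fusion raises the confidence of the final control decision.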
27. Malnutrition risk assessment using a machine learning‐based screening tool: A multicentre retrospective cohort.
- Author
-
Parchure, Prathamesh, Besculides, Melanie, Zhan, Serena, Cheng, Fu‐yuan, Timsina, Prem, Cheertirala, Satya Narayana, Kersch, Ilana, Wilson, Sara, Freeman, Robert, Reich, David, Mazumdar, Madhu, and Kia, Arash
- Subjects
- *
MALNUTRITION diagnosis , *RISK assessment , *DIETETICS , *MALNUTRITION , *MEDICAL quality control , *HUMAN services programs , *HOSPITAL care , *NUTRITIONAL assessment , *ARTIFICIAL intelligence , *RETROSPECTIVE studies , *DESCRIPTIVE statistics , *LONGITUDINAL method , *PRE-tests & post-tests , *RESEARCH , *METROPOLITAN areas , *MACHINE learning , *QUALITY assurance , *LENGTH of stay in hospitals , *ALGORITHMS , *DISEASE risk factors ,ELECTRONIC health record standards - Abstract
Background: Malnutrition is associated with increased morbidity, mortality, and healthcare costs. Early detection is important for timely intervention. This paper assesses the ability of a machine learning screening tool (MUST‐Plus) implemented in registered dietitian (RD) workflow to identify malnourished patients early in the hospital stay and to improve the diagnosis and documentation rate of malnutrition. Methods: This retrospective cohort study was conducted in a large, urban health system in New York City comprising six hospitals serving a diverse patient population. The study included all patients aged ≥ 18 years, who were not admitted for COVID‐19 and had a length of stay of ≤ 30 days. Results: Of the 7736 hospitalisations that met the inclusion criteria, 1947 (25.2%) were identified as being malnourished by MUST‐Plus‐assisted RD evaluations. The lag between admission and diagnosis improved with MUST‐Plus implementation. The usability of the tool output by RDs exceeded 90%, showing good acceptance by users. When compared pre‐/post‐implementation, the rate of both diagnoses and documentation of malnutrition showed improvement. Conclusion: MUST‐Plus, a machine learning‐based screening tool, shows great promise as a malnutrition screening tool for hospitalised patients when used in conjunction with adequate RD staffing and training about the tool. It performed well across multiple measures and settings. Other health systems can use their electronic health record data to develop, test and implement similar machine learning‐based processes to improve malnutrition screening and facilitate timely intervention. Key points/Highlights: Malnutrition is prevalent among hospitalised patients and frequently goes unrecognised, with the potential for severe sequelae. 
Accurate diagnosis, documentation and treatment of malnutrition have the potential of having a positive impact on morbidity rate, mortality rate, length of inpatient stay, readmission rate and hospital revenue. The tool's successful application highlights its potential to optimise malnutrition screening in healthcare systems, offering potential benefits for patient outcomes and hospital finances. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Time-discrete momentum consensus-based optimization algorithm and its application to Lyapunov function approximation.
- Author
-
Ha, Seung-Yeal, Hwang, Gyuyoung, and Kim, Sungyoon
- Subjects
- *
OPTIMIZATION algorithms , *LYAPUNOV functions , *DISTRIBUTED algorithms , *GLOBAL optimization , *APPROXIMATION algorithms , *MATHEMATICS , *ALGORITHMS - Abstract
In this paper, we study a discrete momentum consensus-based optimization (Momentum-CBO) algorithm which corresponds to a second-order generalization of the discrete first-order CBO [S.-Y. Ha, S. Jin and D. Kim, Convergence of a first-order consensus-based global optimization algorithm, Math. Models Methods Appl. Sci. 30 (2020) 2417–2444]. The proposed algorithm can be understood as a modification of ADAM-CBO in which the normalization term is replaced by unity. For the proposed Momentum-CBO, we provide a sufficient framework which guarantees the convergence of the algorithm toward a global minimum of the objective function. Moreover, we present several experimental results showing that Momentum-CBO has an improved success rate of finding the global minimum compared to vanilla CBO, and we show the stability of Momentum-CBO under different initialization schemes. We also show that Momentum-CBO can be used as an alternative to ADAM-CBO, which does not have a proper convergence analysis. Finally, we give an application of Momentum-CBO to Lyapunov function approximation using symbolic regression techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
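In consensus-based optimization, particles drift toward a Gibbs-weighted consensus point (weights exp(-beta f(x)) concentrate on low-cost particles), and a momentum variant adds a velocity term to that drift. The following is a toy one-dimensional sketch under assumed parameter choices, not the paper's Momentum-CBO scheme or its convergence framework:

```python
import math
import random

def momentum_cbo(f, n_particles=30, steps=200, beta=30.0,
                 lam=1.0, gamma=0.8, dt=0.1, sigma=0.1, seed=1):
    """Toy momentum consensus-based optimisation in one dimension:
    particles drift toward a Gibbs-weighted consensus point, with a
    momentum (velocity) term and small exploration noise."""
    rng = random.Random(seed)
    x = [rng.uniform(-5.0, 5.0) for _ in range(n_particles)]
    v = [0.0] * n_particles
    for _ in range(steps):
        w = [math.exp(-beta * f(xi)) for xi in x]
        consensus = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
        for i in range(n_particles):
            # momentum-damped drift toward the consensus point
            v[i] = gamma * v[i] - lam * dt * (x[i] - consensus)
            x[i] += v[i] + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    w = [math.exp(-beta * f(xi)) for xi in x]
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

best = momentum_cbo(lambda x: (x - 2.0) ** 2)
```

Because the method is derivative-free, the same loop works for non-smooth or black-box objectives; the paper's analysis concerns when such dynamics provably reach a global minimum.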
29. Early Breast Cancer Risk Assessment: Integrating Histopathology with Artificial Intelligence.
- Author
-
Ivanova, Mariia, Pescia, Carlo, Trapani, Dario, Venetis, Konstantinos, Frascarelli, Chiara, Mane, Eltjona, Cursano, Giulia, Sajjadi, Elham, Scatena, Cristian, Cerbelli, Bruna, d'Amati, Giulia, Porta, Francesca Maria, Guerini-Rocco, Elena, Criscitiello, Carmen, Curigliano, Giuseppe, and Fusco, Nicola
- Subjects
- *
BREAST tumor risk factors , *RISK assessment , *MEDICAL protocols , *CANCER relapse , *ARTIFICIAL intelligence , *EARLY detection of cancer , *CYTOCHEMISTRY , *TUMOR markers , *DECISION making in clinical medicine , *IMMUNOHISTOCHEMISTRY , *PATIENT-centered care , *DEEP learning , *ARTIFICIAL neural networks , *MACHINE learning , *ONCOLOGISTS , *INDIVIDUALIZED medicine , *MOLECULAR pathology , *HEALTH care teams , *ALGORITHMS , *DISEASE risk factors - Abstract
Simple Summary: Risk assessment in early breast cancer is critical for clinical decisions, but defining risk categories poses a significant challenge. The integration of conventional histopathology and biomarkers with artificial intelligence (AI) techniques, including machine learning and deep learning, has the potential to offer more precise information. AI applications extend beyond detection to histological subtyping, grading, and molecular feature identification. The successful integration of AI into clinical practice requires collaboration between histopathologists, molecular pathologists, computational pathologists, and oncologists to optimize patient outcomes. Effective risk assessment in early breast cancer is essential for informed clinical decision-making, yet consensus on defining risk categories remains challenging. This paper explores evolving approaches in risk stratification, encompassing histopathological, immunohistochemical, and molecular biomarkers alongside cutting-edge artificial intelligence (AI) techniques. Leveraging machine learning, deep learning, and convolutional neural networks, AI is reshaping predictive algorithms for recurrence risk, thereby revolutionizing diagnostic accuracy and treatment planning. Beyond detection, AI applications extend to histological subtyping, grading, lymph node assessment, and molecular feature identification, fostering personalized therapy decisions. With rising cancer rates, it is crucial to implement AI to accelerate breakthroughs in clinical practice, benefiting both patients and healthcare providers. However, it is important to recognize that while AI offers powerful automation and analysis tools, it lacks the nuanced understanding, clinical context, and ethical considerations inherent to human pathologists in patient care. Hence, the successful integration of AI into clinical practice demands collaborative efforts between medical experts and computational pathologists to optimize patient outcomes. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Novel Imaging Approaches for Glioma Classification in the Era of the World Health Organization 2021 Update: A Scoping Review.
- Author
-
Richter, Vivien, Ernemann, Ulrike, and Bender, Benjamin
- Subjects
- *
GLIOMAS , *RADIOMICS , *MAGNETIC resonance imaging , *DESCRIPTIVE statistics , *SYSTEMATIC reviews , *LITERATURE reviews , *DEEP learning , *GENETIC mutation , *NEURORADIOLOGY , *MACHINE learning , *DATA analysis software , *ALGORITHMS - Abstract
Simple Summary: The 2021 WHO classification of central nervous system (CNS) tumors is challenging for neuroradiologists due to the central role of the molecular profile of tumors. We performed a scoping review of recent literature to assess the existing data on the power of novel data analysis tools to predict new tumor classes by imaging. We found room for performance improvement for subgroups with lower incidence (e.g., 1p/19q codeleted or IDH1/2 mutated gliomas) and patients with rare diagnoses (e.g., pediatric gliomas, midline gliomas). More data regarding functional MRI techniques need to be collected. Studies explicitly designed to assess the generalizability of AI-aided tools for predicting molecular tumor subgroups are lacking. The 2021 WHO classification of CNS tumors is a challenge for neuroradiologists due to the central role of the molecular profile of tumors. The potential of novel data analysis tools in neuroimaging must be harnessed to maintain its role in predicting tumor subgroups. We performed a scoping review to determine current evidence and research gaps. A comprehensive literature search was conducted regarding glioma subgroups according to the 2021 WHO classification and the use of MRI, radiomics, machine learning, and deep learning algorithms. Sixty-two original articles were included and analyzed by extracting data on the study design and results. Only 8% of the studies included pediatric patients. Low-grade gliomas and diffuse midline gliomas were represented in one-third of the research papers. Public datasets were utilized in 22% of the studies. Conventional imaging sequences prevailed; data on functional MRI (DWI, PWI, CEST, etc.) are underrepresented. Multiparametric MRI yielded the best prediction results. IDH mutation and 1p/19q codeletion status prediction remain in focus with limited data on other molecular subgroups. Reported AUC values range from 0.6 to 0.98. Studies designed to assess generalizability are scarce. 
Performance is worse for smaller subgroups (e.g., 1p/19q codeleted or IDH1/2 mutated gliomas). More high-quality study designs with diversity in the analyzed population and techniques are needed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Investigating Machine Learning Techniques for Predicting Risk of Asthma Exacerbations: A Systematic Review.
- Author
-
Darsha Jayamini, Widana Kankanamge, Mirza, Farhaan, Asif Naeem, M., and Chan, Amy Hai Yan
- Subjects
- *
ASTHMA risk factors , *ASTHMA prevention , *DISEASE exacerbation , *RISK assessment , *PREDICTION models , *DECISION making , *SYSTEMATIC reviews , *MACHINE learning , *SOCIODEMOGRAPHIC factors , *ALGORITHMS - Abstract
Asthma, a common chronic respiratory disease among children and adults, affects more than 200 million people worldwide and causes about 450,000 deaths each year. Machine learning is increasingly applied in healthcare to assist health practitioners in decision-making. In asthma management, machine learning excels in performing well-defined tasks, such as diagnosis, prediction, medication, and management. However, there remain uncertainties about how machine learning can be applied to predict asthma exacerbation. This study aimed to systematically review recent applications of machine learning techniques in predicting the risk of asthma attacks to assist asthma control and management. A total of 860 studies were initially identified from five databases. After screening and full-text review, 20 studies were selected for inclusion in this review. The review considered recent studies published from January 2010 to February 2023. The 20 studies used machine learning techniques to support future asthma risk prediction using various data sources, such as clinical, medical, biological, and socio-demographic data, as well as environmental and meteorological data. While some studies framed the prediction as a categorical outcome, others predicted the probability of exacerbation; only a subset of the studies applied prediction windows. The paper proposes a conceptual model to summarise how machine learning and available data sources can be leveraged to produce effective models for the early detection of asthma attacks. The review also generated a list of data sources that other researchers may use in similar work. Furthermore, we present opportunities for further research and the limitations of the preceding studies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Enhancing recall in automated record screening: A resampling algorithm.
- Author
-
Zhipeng Hou and Tipton, Elizabeth
- Subjects
- *
AUTOMATIC identification , *ALGORITHMS , *HUMAN error , *PROBABILITY theory , *TRACKING algorithms , *TEXT mining - Abstract
Literature screening is the process of identifying all relevant records from a pool of candidate paper records in systematic review, meta-analysis, and other research synthesis tasks. This process is time-consuming, expensive, and prone to human error. Screening prioritization methods attempt to help reviewers identify the most relevant records while only screening a proportion of candidate records with high priority. In previous studies, screening prioritization is often referred to as automatic literature screening or automatic literature identification. Numerous screening prioritization methods have been proposed in recent years; however, there is a lack of methods with reliable performance. Our objective is to develop a screening prioritization algorithm with reliable performance for practical use, for example, an algorithm that guarantees an 80% chance of identifying at least 80% of the relevant records. Based on a target-based method proposed by Cormack and Grossman, we propose a screening prioritization algorithm using sampling with replacement. The algorithm is a wrapper algorithm that can work with any current screening prioritization algorithm to guarantee this performance. We prove, using probability theory, that the algorithm guarantees the performance, and we run numeric experiments to test its performance when applied in practice. The numeric experiment results show that the algorithm achieves reliable performance under different circumstances. The proposed screening prioritization algorithm can be reliably used in real-world research synthesis tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
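The "80% chance of at least 80% recall" guarantee discussed in the abstract above can be made concrete with a Monte Carlo check: simulate a noisy ranker many times and estimate the probability that screening only the top-ranked fraction reaches the target recall. This sketch illustrates the probabilistic guarantee being evaluated, not the authors' sampling-with-replacement wrapper; all parameter names are assumptions.

```python
import random

def recall_guarantee_prob(n_records, n_relevant, screen_fraction,
                          ranker_noise, target_recall=0.8,
                          n_trials=500, seed=0):
    """Monte Carlo estimate of the probability that screening the top
    `screen_fraction` of a noisily ranked candidate pool recovers at
    least `target_recall` of the relevant records."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        # relevant records (indices < n_relevant) score higher on average
        scores = [(1.0 if i < n_relevant else 0.0) + rng.gauss(0, ranker_noise)
                  for i in range(n_records)]
        order = sorted(range(n_records), key=lambda i: -scores[i])
        screened = order[:int(screen_fraction * n_records)]
        recall = sum(i < n_relevant for i in screened) / n_relevant
        hits += recall >= target_recall
    return hits / n_trials

p_good = recall_guarantee_prob(500, 50, screen_fraction=0.3, ranker_noise=0.5)
```

Sweeping `screen_fraction` in such a simulation shows the trade-off a reliable prioritization method must manage: screening less saves effort but lowers the probability of hitting the recall target.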
33. Anomaly detection in IoT environment using machine learning.
- Author
-
Bilakanti, Harini, Pasam, Sreevani, Palakollu, Varshini, and Utukuru, Sairam
- Subjects
- *
ANOMALY detection (Computer security) , *MACHINE learning , *INTERNET of things , *ALGORITHMS - Abstract
This research paper delves into the security concerns within Internet of Things (IoT) networks, emphasizing the need to safeguard the extensive data generated by interconnected physical devices. The presence of anomalies and faults in the sensors and devices deployed within IoT networks can significantly impact the functionality and outcomes of IoT systems. The primary focus of this study is the identification of anomalies in IoT devices arising from sensor tampering, with an emphasis on the application of machine learning techniques. While supervised methods like one-class SVM, Gaussian Naive Bayes, and XGBoost have proven effective in anomaly detection, there has been a noticeable scarcity of research employing unsupervised methods, mainly attributable to the absence of well-defined ground truths for model training. This research takes an innovative approach by investigating the utility of unsupervised algorithms, including Isolation Forest and Local Outlier Factor, alongside supervised techniques to enhance the precision of anomaly detection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
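The unsupervised detectors named in the abstract above (Isolation Forest, Local Outlier Factor) both score points by how much they deviate from their neighbourhood without needing labels. As a deliberately simpler stand-in for either of those algorithms, the same idea can be sketched with a plain k-nearest-neighbour distance score; the data values below are invented for illustration:

```python
def knn_anomaly_scores(points, k=2):
    """Score each point by its mean distance to its k nearest neighbours:
    a simple distance-based anomaly score (a stand-in for, not an
    implementation of, Isolation Forest or Local Outlier Factor)."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    scores = []
    for p in points:
        nearest = sorted(dist(p, q) for q in points if q is not p)[:k]
        scores.append(sum(nearest) / k)
    return scores

# normal sensor readings cluster together; one tampered reading is far off
readings = [(20.0, 50.0), (20.5, 50.5), (19.8, 49.9), (20.2, 50.1),
            (35.0, 90.0)]
scores = knn_anomaly_scores(readings)
outlier = scores.index(max(scores))
```

No ground-truth labels are used: the tampered reading stands out purely because it is far from every other reading, which is exactly why unsupervised methods suit IoT settings where labelled anomalies are scarce.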
34. A Multi-Agent RL Algorithm for Dynamic Task Offloading in D2D-MEC Network with Energy Harvesting †.
- Author
-
Mi, Xin, He, Huaiwen, and Shen, Hong
- Subjects
- *
ENERGY harvesting , *MACHINE learning , *ALGORITHMS , *INTEGER programming , *DYNAMIC loads , *MOBILE computing , *NONLINEAR programming - Abstract
Delay-sensitive task offloading in a device-to-device assisted mobile edge computing (D2D-MEC) system with energy harvesting devices is a critical challenge due to the dynamic load level at edge nodes and the variability in harvested energy. In this paper, we propose a joint dynamic task offloading and CPU frequency control scheme for delay-sensitive tasks in a D2D-MEC system, taking into account the intricacies of multi-slot tasks, characterized by diverse processing speeds and data transmission rates. Our methodology involves meticulous modeling of task arrival and service processes using queuing systems, coupled with the strategic utilization of D2D communication to alleviate edge server load and prevent network congestion effectively. Central to our solution is the formulation of average task delay optimization as a challenging nonlinear integer programming problem, requiring intelligent decision making regarding task offloading for each generated task at active mobile devices and CPU frequency adjustments at discrete time slots. To navigate the intricate landscape of the extensive discrete action space, we design an efficient multi-agent DRL learning algorithm named MAOC, which is based on MAPPO, to minimize the average task delay by dynamically determining task-offloading decisions and CPU frequencies. MAOC operates within a centralized training with decentralized execution (CTDE) framework, empowering individual mobile devices to make decisions autonomously based on their unique system states. Experimental results demonstrate its swift convergence and operational efficiency, and it outperforms other baseline algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Machine learning techniques for emotion detection and sentiment analysis: current state, challenges, and future directions.
- Author
-
Alslaity, Alaa and Orji, Rita
- Subjects
- *
SENTIMENT analysis , *DEEP learning , *USER interfaces , *MACHINE learning , *TREATMENT effectiveness , *BEHAVIORAL objectives (Education) , *COMPARATIVE studies , *COMMUNICATION , *FACTOR analysis , *RESEARCH funding , *EMOTIONS , *THEMATIC analysis , *BEHAVIOR modification , *ALGORITHMS ,RESEARCH evaluation - Abstract
Emotion detection and sentiment analysis techniques are used to understand the polarity or emotions expressed by people in many settings, especially during interactive system use. Recognizing users' emotions is an important topic for human–computer interaction. Computers that recognize emotions would provide more natural interactions. Also, emotion detection helps design human-centred systems that provide adaptable behaviour change interventions based on users' emotions. The growing capability of machine learning to analyze big data and extract emotions therein has led to a surge in research in this domain. With this increased attention, it becomes essential to investigate this research area and provide a comprehensive review of the current state. In this paper, we conduct a systematic review of 123 papers on machine learning-based emotion detection to investigate research trends along many themes, including machine learning approaches, application domain, data, evaluation, and outcome. The results demonstrate: 1) increasing interest in this domain; 2) supervised machine learning algorithms (namely, SVM and Naïve Bayes) are the most popular; 3) text datasets in the English language are the most common data source; and 4) most studies use accuracy to evaluate performance. Based on the findings, we suggest future directions and recommendations for developing human-centred systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Predicting Money Laundering Using Machine Learning and Artificial Neural Networks Algorithms in Banks.
- Author
-
Lokanan, Mark E.
- Subjects
- *
ARTIFICIAL neural networks , *MONEY laundering , *MACHINE learning , *ALGORITHMS , *RANDOM forest algorithms - Abstract
This paper aims to build a machine learning and a neural network model to detect the probability of money laundering in banks. The paper's data came from a simulation of actual transactions flagged for money laundering in Middle Eastern banks. The main findings highlight that criminal networks mainly use the integration stage to integrate money into the financial system. Fraudsters prefer to launder funds in the early morning hours, followed by the afternoon intervals of the business day. Additionally, the Naïve Bayes and Random Forest classifiers were identified as the two best-performing models for predicting bank money laundering transactions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. The Use of Artificial Intelligence Algorithms in the Prognosis and Detection of Lymph Node Involvement in Head and Neck Cancer and Possible Impact in the Development of Personalized Therapeutic Strategy: A Systematic Review.
- Author
-
Michelutti, Luca, Tel, Alessandro, Zeppieri, Marco, Ius, Tamara, Sembronio, Salvatore, and Robiony, Massimo
- Subjects
- *
ARTIFICIAL intelligence , *LYMPH nodes , *HEAD & neck cancer , *ALGORITHMS , *PROGNOSIS - Abstract
Given the increasingly important role that artificial intelligence algorithms play in the medical field today (especially in oncology), the purpose of this systematic review is to examine the currently available literature on artificial intelligence applied to head and neck oncology, particularly the prognostic evaluation of patients with head and neck malignancies. The paper presents an overview of the applications of artificial intelligence in deriving prognostic information related to the prediction of survival and recurrence, and of how these data may have a potential impact on the choice of therapeutic strategy, making it increasingly personalized. This systematic review was written following the PRISMA 2020 guidelines. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. Deep Learning for Structural Health Monitoring: Data, Algorithms, Applications, Challenges, and Trends.
- Author
-
Jia, Jing and Li, Ying
- Subjects
- *
STRUCTURAL health monitoring , *DEEP learning , *DIGITAL twins , *STRUCTURAL frames , *MACHINE learning , *ALGORITHMS - Abstract
Environmental effects may lead to cracking, stiffness loss, brace damage, and other damages in bridges, frame structures, buildings, etc. Structural Health Monitoring (SHM) technology could prevent catastrophic events by detecting damage early. In recent years, Deep Learning (DL) has developed rapidly and has been applied to SHM to detect, localize, and evaluate diverse damages through efficient feature extraction. This paper analyzes 337 articles through a systematic literature review to investigate the application of DL for SHM in the operation and maintenance phase of facilities from three perspectives: data, DL algorithms, and applications. Firstly, the data types in SHM and the corresponding collection methods are summarized and analyzed. The most common data types are vibration signals and images, accounting for 80% of the literature studied. Secondly, the popular DL algorithm types and application areas are reviewed, of which CNN accounts for 60%. Then, this article carefully analyzes the specific functions of DL application for SHM based on the facility's characteristics. Cracks were the most scrutinized damage type, accounting for 30% of the research papers. Finally, challenges and trends in applying DL for SHM are discussed. Among the trends, the Structural Health Monitoring Digital Twin (SHMDT) model framework is suggested in response to the trend of strong coupling between SHM technology and Digital Twin (DT), which can advance the digitalization, visualization, and intelligent management of SHM. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
39. A Review on Federated Learning and Machine Learning Approaches: Categorization, Application Areas, and Blockchain Technology.
- Author
-
Ogundokun, Roseline Oluwaseun, Misra, Sanjay, Maskeliunas, Rytis, and Damasevicius, Robertas
- Subjects
- *
BLOCKCHAINS , *ARTIFICIAL intelligence , *MACHINE learning , *CONFERENCE papers , *ALGORITHMS , *SCIENCE publishing - Abstract
Federated learning (FL) is a scheme in which several consumers work collectively to solve machine learning (ML) problems, with a dominant collector synchronizing the procedure. This arrangement correspondingly enables the training data to be distributed, guaranteeing that each individual device's data are secluded. The paper systematically reviewed the available literature using the Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) guiding principle. The study presents a systematic review of applicable ML approaches for FL, reviews the categorization of FL, discusses the FL application areas, presents the relationship between FL and Blockchain Technology (BT), and discusses some existing literature that has used FL and ML approaches. The inclusion criteria were papers (i) published between 2017 and 2021, (ii) written in English, (iii) published in a peer-reviewed scientific journal, and (iv) published as preprints. Unpublished studies, theses and dissertations, conference papers, papers not in English, and papers that did not use artificial intelligence models and blockchain technology were all removed from the review. In total, 84 eligible papers were finally examined in this study. In recent years, the amount of research on ML using FL has increased. Accuracy equivalent to standard feature-based techniques has been attained, and ensembles of many algorithms may yield even better results. We discovered that the best results were obtained from the hybrid design of an ML ensemble employing expert features. However, some additional difficulties and issues need to be overcome, such as efficiency, complexity, and smaller datasets. In addition, novel FL applications should be investigated from the standpoint of the datasets and methodologies. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
40. Robust and compact maximum margin clustering for high-dimensional data.
- Author
-
Cevikalp, Hakan and Chome, Edward
- Subjects
- *
CLUSTER sampling , *MACHINE learning , *CONJUGATE gradient methods , *ALGORITHMS , *HYPERPLANES - Abstract
In the field of machine learning, clustering has become an increasingly popular research topic due to its critical importance. Many clustering algorithms have been proposed utilizing a variety of approaches. This study focuses on clustering of high-dimensional data using the maximum margin clustering approach. In this paper, two methods are introduced: The first method employs the classical maximum margin clustering approach, which separates data into two clusters with the greatest margin between them. The second method takes cluster compactness into account and searches for two parallel hyperplanes that best fit the cluster samples while also being as far apart from each other as possible. Additionally, robust variants of these clustering methods are introduced to handle outliers and noise within the data samples. The stochastic gradient algorithm is used to solve the resulting optimization problems, enabling all proposed clustering methods to scale well with large-scale data. Experimental results demonstrate that the proposed methods are more effective than existing maximum margin clustering methods, particularly in high-dimensional clustering problems, highlighting the efficacy of the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
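The abstract above describes finding a hyperplane that splits the data into two clusters with a large margin, trained by stochastic gradient descent. The toy sketch below illustrates that general idea only, not the authors' method: it alternates a balanced median-split label assignment with SGD hinge-loss updates, and the balance heuristic, learning rate, and regularization strength are all illustrative assumptions.

```python
import numpy as np

def max_margin_cluster(X, epochs=200, lr=0.01, lam=0.01, seed=0):
    """Toy maximum margin clustering sketch: alternate between
    (1) splitting points into two balanced groups at the median of
    their projection onto the current hyperplane, and (2) SGD updates
    on a hinge loss, so the hyperplane drifts toward a large-margin
    separation of the two groups."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = rng.normal(size=d), 0.0
    for _ in range(epochs):
        proj = X @ w + b
        y = np.where(proj >= np.median(proj), 1.0, -1.0)  # balanced labels
        for i in rng.permutation(n):                      # hinge-loss SGD step
            if y[i] * (X[i] @ w + b) < 1:
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:
                w -= lr * lam * w
    proj = X @ w + b
    return (proj >= np.median(proj)).astype(int)
```

On two well-separated blobs this converges to the blob partition; the median split is one common way to rule out the trivial all-one-cluster solution.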
41. AC-PLT: An algorithm for computer-assisted coding of semantic property listing data.
- Author
-
Ramos, Diego, Moreno, Sebastián, Canessa, Enrique, Chaigneau, Sergio E., and Marchant, Nicolás
- Subjects
- *
NATURAL language processing , *ALGORITHMS , *MACHINE learning - Abstract
In this paper, we present a novel algorithm that uses machine learning and natural language processing techniques to facilitate the coding of feature listing data. Feature listing is a method in which participants are asked to provide a list of features that are typically true of a given concept or word. This method is commonly used in research studies to gain insights into people's understanding of various concepts. The standard procedure for extracting meaning from feature listings is to manually code the data, which can be time-consuming and prone to errors, leading to reliability concerns. Our algorithm addresses these challenges by automatically assigning human-created codes to feature listing data, achieving quantitatively good agreement with human coders. Our preliminary results suggest that our algorithm has the potential to improve the efficiency and accuracy of content analysis of feature listing data. Additionally, this tool is an important step toward a fully automated coding algorithm, which we are currently devising. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
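The general idea behind computer-assisted coding of feature listings can be sketched as matching each listed feature to its nearest human-created code by text similarity. This is an illustration of the concept, not the authors' AC-PLT implementation; the bag-of-words representation, cosine similarity, and nearest-exemplar rule are all assumptions made for the sketch.

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector of lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign_codes(features, codebook):
    """Assign each listed feature the human-created code whose exemplar
    phrases (previously coded by humans) are most similar to it."""
    out = []
    for f in features:
        fv = bow(f)
        best = max(codebook,
                   key=lambda c: max(cosine(fv, bow(e)) for e in codebook[c]))
        out.append(best)
    return out
```

A hypothetical codebook like `{"has_fur": ["covered in fur"], "barks": ["it barks"]}` would route "barks loudly" to `barks` and "soft fur" to `has_fur`; a real system would use richer embeddings than bags of words.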
42. Personalized Treatment Policies with the Novel Buckley-James Q-Learning Algorithm.
- Author
-
Lee, Jeongjin and Kim, Jong-Min
- Subjects
- *
MACHINE learning , *ALGORITHMS , *SURVIVAL analysis (Biometry) , *TIME management , *PATIENT care , *REINFORCEMENT learning - Abstract
This research paper presents the Buckley-James Q-learning (BJ-Q) algorithm, a cutting-edge method designed to optimize personalized treatment strategies, especially in the presence of right censoring. We critically assess the algorithm's effectiveness in improving patient outcomes and its resilience across various scenarios. Central to our approach is the innovative use of the survival time to impute the reward in Q-learning, employing the Buckley-James method for enhanced accuracy and reliability. Our findings highlight the significant potential of personalized treatment regimens and introduce the BJ-Q learning algorithm as a viable and promising approach. This work marks a substantial advancement in our comprehension of treatment dynamics and offers valuable insights for augmenting patient care in the ever-evolving clinical landscape. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
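The core move in the abstract above is imputing a right-censored survival time so it can serve as a reward in Q-learning. The sketch below is a crude simplification, not the paper's method: the actual Buckley-James estimator derives the conditional expectation from a Kaplan-Meier curve and iterates with a regression model, whereas here E[T | T > C] is approximated by the mean of observed event times that exceed the censoring time.

```python
import numpy as np

def bj_impute(times, events):
    """Crude Buckley-James-style imputation sketch: each censored time C
    (events[i] == 0) is replaced by the mean of observed (uncensored)
    event times exceeding C, approximating E[T | T > C]; uncensored
    times are kept as-is. The imputed times can then serve as rewards
    in a Q-learning update."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=bool)
    imputed = times.copy()
    event_times = times[events]
    for i in np.flatnonzero(~events):
        later = event_times[event_times > times[i]]
        # fall back to the censoring time itself if no later event exists
        imputed[i] = later.mean() if later.size else times[i]
    return imputed
```

For example, with observed times [2, 5, 3, 8] and a censoring indicator at the third subject, the censored value 3 is replaced by the mean of the later events 5 and 8.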
43. Insider employee-led cyber fraud (IECF) in Indian banks: from identification to sustainable mitigation planning.
- Author
-
Roy, Neha Chhabra and Prabhakaran, Sreeleakha
- Subjects
- *
BANKING laws , *FRAUD prevention , *CORRUPTION , *ORGANIZATIONAL behavior , *RISK assessment , *DATA security , *RANDOM forest algorithms , *COMPUTERS , *FOCUS groups , *DATA security failures , *INTERVIEWING , *DEBT , *QUESTIONNAIRES , *ARTIFICIAL intelligence , *LOGISTIC regression analysis , *IDENTITY theft , *SECURITY systems , *FINANCIAL stress , *RESEARCH methodology , *CONCEPTUAL structures , *JOB stress , *ARTIFICIAL neural networks , *MACHINE learning , *ALGORITHMS - Abstract
This paper explores the different insider employee-led cyber frauds (IECF) based on recent large-scale fraud events at prominent Indian banking institutions. Examining the different types of fraud and appropriate control measures will protect the banking industry from fraudsters. In this study, we identify and classify Cyber Fraud (CF), map the severity of the fraud on a scale of priority, test the mitigation effectiveness, and propose optimal mitigation measures. The identification and classification of CF losses were based on a literature review and focus group discussions with risk and vigilance officers and cyber cell experts. The CF was analyzed using secondary data. We predicted and prioritized CF using the Random Forest (RF) machine learning algorithm. An efficient fraud mitigation model was developed based on an offender-victim-centric approach. Mitigation is advised both before and after fraud occurs. Through the findings of this research, banks and fraud investigators can prevent CF by detecting it quickly and controlling it on time. This study proposes a structured, sustainable CF mitigation plan that protects banks, employees, regulators, customers, and the economy, thus saving time, resources, and money. Further, these mitigation measures will improve the reputation of the Indian banking industry and ensure its survival. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Quantum state clustering algorithm based on variational quantum circuit.
- Author
-
Fang, Pengpeng, Zhang, Cai, and Situ, Haozhen
- Subjects
- *
QUANTUM states , *ALGORITHMS , *MACHINE learning , *LEARNING communities - Abstract
Clustering, a well-studied problem in the machine learning community, becomes even more intriguing with the emergence of quantum machine learning. Specifically, exploring clustering techniques for quantum data, such as quantum states, holds great interest. This paper introduces a quantum state clustering algorithm that utilizes variational quantum circuits. Our algorithm transforms the clustering problem into a parameter optimization task involving parametric quantum circuits. Each cluster is represented by a variational quantum circuit (VQC), which learns to extract the distinctive feature of its corresponding cluster during the optimization process. To guide the optimization of circuit parameters, we design an objective function that encourages each cluster's feature extractor to produce features similar to states within its own cluster and dissimilar to states in other clusters. We construct four quantum state datasets for testing the effectiveness of our algorithm. The numerical results demonstrate that our algorithm can achieve satisfying performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
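The alternating scheme the abstract describes — one parametric circuit per cluster, trained so that states in its cluster yield a distinctive output — can be mimicked classically for single-qubit states. Everything in this sketch is a toy assumption rather than the paper's construction: a one-parameter Ry rotation stands in for the variational quantum circuit, overlap with |0> stands in for the learned feature, and a grid search stands in for gradient-based parameter optimization.

```python
import numpy as np

def ry(theta):
    """Single-qubit Ry rotation matrix (a one-parameter 'circuit')."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def score(theta, psi):
    """|<0| Ry(theta) |psi>|^2: how well the circuit maps psi to |0>."""
    return abs(ry(theta) @ psi)[0] ** 2

def vqc_cluster(states, k=2, iters=30):
    """Toy alternating scheme: assign each state to the circuit that
    maps it closest to |0>, then grid-search each circuit's parameter
    to fit its assigned states."""
    thetas = np.linspace(0.5, 2.0, k)
    grid = np.linspace(0, 2 * np.pi, 200)
    for _ in range(iters):
        labels = [int(np.argmax([score(t, p) for t in thetas]))
                  for p in states]
        for j in range(k):
            members = [p for p, l in zip(states, labels) if l == j]
            if members:
                thetas[j] = max(grid,
                                key=lambda t: sum(score(t, p) for p in members))
    return labels
```

Since Ry(t)Ry(theta)|0> = Ry(t + theta)|0>, each circuit learns to "undo" the rotation common to its cluster, which is the intuition behind a per-cluster feature extractor.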
45. Reweighted Extreme Learning Machine-Based Clutter Suppression and Range Compensation Algorithm for Non-Side-Looking Airborne Radar.
- Author
-
Liu, Jing, Liao, Guisheng, Zeng, Cao, Tao, Haihong, Xu, Jingwei, Zhu, Shengqi, and Juwono, Filbert H.
- Subjects
- *
RADAR in aeronautics , *MACHINE learning , *ALGORITHMS , *MATHEMATICAL complexes - Abstract
Non-side-looking airborne radar provides important applications on account of its all-round multi-angle airspace coverage. However, it suffers from clutter range dependence, which makes the samples fail to satisfy the condition of being independent and identically distributed (IID) and severely degrades traditional approaches to clutter suppression and target detection. In this paper, a novel reweighted extreme learning machine (ELM)-based clutter suppression and range compensation algorithm is proposed for non-side-looking airborne radar. The proposed method involves first designing the pre-processing stage, the special reweighted complex-valued activation function containing an unknown range compensation matrix, and two new objective outputs for constructing an initial reweighted ELM-based network with its training. Then, two other objective outputs, a new loss function, and a reverse feedback framework driven by the specifically designed objectives are proposed for the unknown range compensation matrix. Finally, aiming to estimate and reconstruct the unknown compensation matrix, special processes of the complex-valued structures and the theoretical derivations are designed and analyzed in detail. Consequently, with the updated and compensated samples, further processing including space–time adaptive processing (STAP) can be performed for clutter suppression and target detection. Compared with the classic relevant methods, the proposed algorithm achieves significantly superior performance with reasonable computation time. It provides an obviously higher detection probability and better improvement factor (IF). The simulation results verify that the proposed algorithm is effective and has many advantages. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. A hybrid feature selection algorithm combining information gain and grouping particle swarm optimization for cancer diagnosis.
- Author
-
Yang, Fangyuan, Xu, Zhaozhao, Wang, Hong, Sun, Lisha, Zhai, Mengjiao, and Zhang, Juan
- Subjects
- *
FEATURE selection , *PARTICLE swarm optimization , *MACHINE learning , *CANCER diagnosis , *ALGORITHMS , *SUPPORT vector machines - Abstract
Background: Cancer diagnosis based on machine learning has become a popular application direction. Support vector machine (SVM), as a classical machine learning algorithm, has been widely used in cancer diagnosis because of its advantages in high-dimensional and small sample data. However, due to the high-dimensional feature space and high feature redundancy of gene expression data, SVM faces the problem of poor classification effect when dealing with such data. Methods: Based on this, this paper proposes a hybrid feature selection algorithm combining information gain and grouping particle swarm optimization (IG-GPSO). The algorithm firstly calculates the information gain values of the features and ranks them in descending order according to the value. Then, ranked features are grouped according to the information index, so that the features in the group are close, and the features outside the group are sparse. Finally, grouped features are searched using grouping PSO and evaluated according to in-group and out-group. Results: Experimental results show that the average accuracy (ACC) of the SVM on the feature subset selected by the IG-GPSO is 98.50%, which is significantly better than that of traditional feature selection algorithms. Compared with KNN, the classification effect of the feature subset selected by the IG-GPSO is still optimal. In addition, the results of multiple comparison tests show that the feature selection effect of the IG-GPSO is significantly better than that of traditional feature selection algorithms. Conclusion: The feature subset selected by IG-GPSO not only has the best classification effect, but also has the smallest feature scale (FS). More importantly, the IG-GPSO significantly improves the ACC of SVM in cancer diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
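The first stage of the pipeline above, computing each feature's information gain and ranking features in descending order, can be sketched directly; the grouping step and the grouped-PSO search are omitted here, so this is only the IG half of IG-GPSO, assuming discrete feature values.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(y) of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(x) = H(y) - sum_v P(x = v) * H(y | x = v) for a discrete feature."""
    total, n, cond = entropy(labels), len(labels), 0.0
    for v in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == v]
        cond += len(subset) / n * entropy(subset)
    return total - cond

def rank_features(columns, labels):
    """Rank feature columns by descending information gain
    (the ranking stage that precedes grouping and PSO search)."""
    gains = [(i, information_gain(col, labels)) for i, col in enumerate(columns)]
    return sorted(gains, key=lambda t: -t[1])
```

A perfectly predictive binary feature scores IG = 1 bit against balanced binary labels, while an independent feature scores 0, which is what drives the descending ranking.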
47. Artificial Intelligence in Pediatrics: Learning to Walk Together.
- Author
-
Demirbaş, Kaan Can, Yıldız, Mehmet, Saygılı, Seha, Canpolat, Nur, and Kasapçopur, Özgür
- Subjects
- *
GENOME editing , *COMPUTER assisted instruction , *ARTIFICIAL intelligence , *PEDIATRICS , *MACHINE learning , *LEARNING strategies , *ROBOTICS , *RISK assessment , *CHILD health services , *EDUCATIONAL technology , *DECISION making in clinical medicine , *PREDICTION models , *ALGORITHMS , *EVALUATION - Abstract
In this era of rapidly advancing technology, artificial intelligence (AI) has emerged as a transformative force, even being called the Fourth Industrial Revolution, along with gene editing and robotics. While it has undoubtedly become an increasingly important part of our daily lives, it must be recognized that it is not an additional tool, but rather a complex concept that poses a variety of challenges. AI, with considerable potential, has found its place in both medical care and clinical research. Within the vast field of pediatrics, it stands out as a particularly promising advancement. As pediatricians, we are indeed witnessing the impactful integration of AI-based applications into our daily clinical practice and research efforts. These tools are being used for simple to more complex tasks such as diagnosing clinically challenging conditions, predicting disease outcomes, creating treatment plans, educating both patients and healthcare professionals, and generating accurate medical records or scientific papers. In conclusion, the multifaceted applications of AI in pediatrics will increase efficiency and improve the quality of healthcare and research. However, there are certain risks and threats accompanying this advancement, including biases that may contribute to health disparities, and inaccuracies. Therefore, it is crucial to recognize and address the technical, ethical, and legal challenges as well as explore the benefits in both clinical and research fields. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. YOLOv7oSAR: A Lightweight High-Precision Ship Detection Model for SAR Images Based on the YOLOv7 Algorithm.
- Author
-
Liu, Yilin, Ma, Yong, Chen, Fu, Shang, Erping, Yao, Wutao, Zhang, Shuyan, and Yang, Jin
- Subjects
- *
SHIP models , *SYNTHETIC aperture radar , *MACHINE learning , *SOLID state drives , *ALGORITHMS , *DEEP learning - Abstract
Researchers have explored various methods to fully exploit the all-weather characteristics of Synthetic aperture radar (SAR) images to achieve high-precision, real-time, computationally efficient, and easily deployable ship target detection models. These methods include Constant False Alarm Rate (CFAR) algorithms and deep learning approaches such as RCNN, YOLO, and SSD, among others. While these methods outperform traditional algorithms in SAR ship detection, challenges still exist in handling the arbitrary ship distributions and small target features in SAR remote sensing images. Existing models are complex, with a large number of parameters, hindering effective deployment. This paper introduces a YOLOv7 oriented bounding box SAR ship detection model (YOLOv7oSAR). The model employs a rotation box detection mechanism, uses the KLD loss function to enhance accuracy, and introduces a Bi-former attention mechanism to improve small target detection. By redesigning the network's width and depth and incorporating a lightweight P-ELAN structure, the model effectively reduces its size and computational requirements. The proposed model achieves high-precision detection results on the public RSDD dataset (94.8% offshore, 66.6% nearshore), and its generalization ability is validated on a custom dataset (94.2% overall detection accuracy). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. Sea Ice Extraction via Remote Sensing Imagery: Algorithms, Datasets, Applications and Challenges.
- Author
-
Huang, Wenjun, Yu, Anzhu, Xu, Qing, Sun, Qun, Guo, Wenyue, Ji, Song, Wen, Bowei, and Qiu, Chunping
- Subjects
- *
SEA ice , *DEEP learning , *REMOTE sensing , *IMAGE recognition (Computer vision) , *GEOGRAPHIC information systems , *ALGORITHMS - Abstract
Deep learning, which is a dominating technique in artificial intelligence, has completely changed image understanding over the past decade. As a consequence, the sea ice extraction (SIE) problem has reached a new era. We present a comprehensive review of four important aspects of SIE, including algorithms, datasets, applications and future trends. Our review focuses on research published from 2016 to the present, with a specific focus on deep-learning-based approaches in the last five years. We divided all related algorithms into three categories: the conventional image classification approach, the machine learning-based approach, and deep-learning-based methods. We reviewed the accessible ice datasets, including SAR-based datasets, optical-based datasets, and others. The applications are presented in four aspects including climate research, navigation, geographic information systems (GIS) production and others. This paper also provides insightful observations and inspiring future research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Dendritic Growth Optimization: A Novel Nature-Inspired Algorithm for Real-World Optimization Problems.
- Author
-
Priyadarshini, Ishaani
- Subjects
- *
OPTIMIZATION algorithms , *BIOLOGICALLY inspired computing , *DEEP learning , *MACHINE learning , *METAHEURISTIC algorithms , *PROBLEM solving , *ALGORITHMS - Abstract
In numerous scientific disciplines and practical applications, addressing optimization challenges is a common imperative. Nature-inspired optimization algorithms represent a highly valuable and pragmatic approach to tackling these complexities. This paper introduces Dendritic Growth Optimization (DGO), a novel algorithm inspired by natural branching patterns. DGO offers a novel solution for intricate optimization problems and demonstrates its efficiency in exploring diverse solution spaces. The algorithm has been extensively tested with a suite of machine learning algorithms, deep learning algorithms, and metaheuristic algorithms, and the results, both before and after optimization, unequivocally support the proposed algorithm's feasibility, effectiveness, and generalizability. Through empirical validation using established datasets like diabetes and breast cancer, the algorithm consistently enhances model performance across various domains. Beyond its working and experimental analysis, DGO's wide-ranging applications in machine learning, logistics, and engineering for solving real-world problems have been highlighted. The study also considers the challenges and practical implications of implementing DGO in multiple scenarios. As optimization remains crucial in research and industry, DGO emerges as a promising avenue for innovation and problem solving. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library