78,513 results
Search Results
202. Reproducibility of Deep Learning Algorithms Developed for Medical Imaging Analysis: A Systematic Review.
- Author
-
Moassefi, Mana, Rouzrokh, Pouria, Conte, Gian Marco, Vahdati, Sanaz, Fu, Tianyuan, Tahmasebi, Aylin, Younis, Mira, Farahani, Keyvan, Gentili, Amilcare, Kline, Timothy, Kitamura, Felipe C., Huo, Yuankai, Kuanar, Shiba, Younis, Khaled, Erickson, Bradley J., and Faghani, Shahriar
- Subjects
DEEP learning, RESEARCH evaluation, SYSTEMATIC reviews, ARTIFICIAL intelligence, DIAGNOSTIC imaging, DESCRIPTIVE statistics, ALGORITHMS, WORLD Wide Web - Abstract
Since 2000, there have been more than 8000 publications on radiology artificial intelligence (AI). AI breakthroughs allow complex tasks to be automated and even performed beyond human capabilities. However, the lack of details on the methods and algorithm code undercuts its scientific value. Many science subfields have recently faced a reproducibility crisis, eroding trust in processes and results, and influencing the rise in retractions of scientific papers. For the same reasons, conducting research in deep learning (DL) also requires reproducibility. Although several valuable manuscript checklists for AI in medical imaging exist, they are not focused specifically on reproducibility. In this study, we conducted a systematic review of recently published papers in the field of DL to evaluate if the description of their methodology could allow the reproducibility of their findings. We focused on the Journal of Digital Imaging (JDI), a specialized journal that publishes papers on AI and medical imaging. We used the keyword "Deep Learning" and collected the articles published between January 2020 and January 2022. We screened all the articles and included the ones which reported the development of a DL tool in medical imaging. We extracted the reported details about the dataset, data handling steps, data splitting, model details, and performance metrics of each included article. We found 148 articles. Eighty were included after screening for articles that reported developing a DL model for medical image analysis. Five studies have made their code publicly available, and 35 studies have utilized publicly available datasets. We provided figures to show the ratio and absolute count of reported items from included studies. According to our cross-sectional study, in JDI publications on DL in medical imaging, authors infrequently report the key elements of their study to make it reproducible. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
203. Community Discovery Algorithm Based on Multi-Relationship Embedding.
- Author
-
Dongming Chen, Mingshuo Nie, Jie Wang, and Dongqi Wang
- Subjects
EMBEDDED computer systems, ALGORITHMS, MATRICES (Mathematics), CONVOLUTIONAL neural networks, MACHINE learning - Abstract
Complex systems in the real world can often be modeled as network structures, and community discovery algorithms for complex networks enable researchers to understand the internal structure and implicit information of networks. Existing community discovery algorithms are usually designed for single-layer networks or single-interaction relationships and do not consider the attribute information of nodes. However, many real-world networks consist of multiple types of nodes and edges, and there may be rich semantic information on nodes and edges. Methods for single-layer networks cannot effectively handle multi-layer information, multi-relationship information, and attribute information. This paper proposes a community discovery algorithm based on multi-relationship embedding. The proposed algorithm first models the nodes in the network, generating a node embedding matrix for each specific relationship type via a node encoder. These node embedding matrices are then provided as input to a Graph Convolutional Network (GCN), which aggregates the embedding matrix of each specific relationship type to obtain the final node embedding matrix. This strategy allows the capture of rich structural and attribute information in multi-relational networks. Experiments were conducted on different datasets against baselines, and the results show that the proposed algorithm obtains significant performance improvements in community discovery, node clustering, and similarity search tasks; compared to the best-performing baseline, it achieves an average improvement of 3.1% on Macro-F1 and 4.7% on Micro-F1, which demonstrates the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
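The aggregation strategy described in the abstract above (per-relationship-type node embeddings combined through a Graph Convolutional Network) can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the symmetric normalization, the shared weight matrix, the ReLU, the mean-pooling across relations, and the random toy data are all assumptions made here for illustration.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One GCN propagation step: symmetrically normalized adjacency
    (with self-loops) times features times a learnable weight, then ReLU."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # D^-1/2 (A+I) D^-1/2
    return np.maximum(a_norm @ features @ weight, 0.0)

def aggregate_relations(adjs, per_relation_embeddings, weight):
    """Run a GCN layer per relation type, then mean-pool the results
    into one final node embedding matrix."""
    outs = [gcn_layer(a, h, weight)
            for a, h in zip(adjs, per_relation_embeddings)]
    return np.mean(outs, axis=0)

# toy multi-relational network: 5 nodes, 2 relation types
rng = np.random.default_rng(0)
n, d_in, d_out = 5, 4, 3
adjs = [(rng.random((n, n)) < 0.4).astype(float) for _ in range(2)]
adjs = [np.triu(a, 1) + np.triu(a, 1).T for a in adjs]   # undirected
embs = [rng.standard_normal((n, d_in)) for _ in range(2)]
w = rng.standard_normal((d_in, d_out))
z = aggregate_relations(adjs, embs, w)            # final (5, 3) embeddings
```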
204. Design of a Learning Path Recommendation System Based on a Knowledge Graph
- Author
-
Liu, Chunhong, Zhang, Haoyang, Zhang, Jieyu, Zhang, Zhengling, and Yuan, Peiyan
- Abstract
Current learning platforms generally suffer from problems such as fragmented knowledge, redundant information, and chaotic learning routes, which cannot meet learners' autonomous learning requirements. This paper designs a learning path recommendation system based on knowledge graphs, using the ability of knowledge graphs to structurally represent subject knowledge. The system uses node centrality and node weight to expand the knowledge graph, which better expresses the structural relationships among knowledge. It applies a particle swarm algorithm fused with multiple rounds of iterative simulated annealing to recommend learning paths. Furthermore, the system feeds back the students' learning situation to the teachers, who check and fill in the gaps according to the learners' performance in teaching activities. Targeting the weak links in students' knowledge points, the particle swarm intelligence algorithm is used to recommend learning paths and learning resources that fill these gaps in a targeted manner.
- Published
- 2023
- Full Text
- View/download PDF
205. Behavior Recognition of College Students Based on Improved Deep Learning Algorithm
- Author
-
Ning, Xiaoke
- Abstract
With the vigorous development of intelligent campus construction, information technology in colleges and universities has shifted from the earlier digital stage to intelligent development. In the teaching process, the analysis of students' classroom learning has likewise changed from manual observation to intelligent analysis. On this basis, this paper studies the behavior recognition of college students based on an improved deep learning algorithm. After a brief analysis of the research background of behavior recognition, a research framework for college students' behavior recognition is constructed. Finally, the authors designed an experiment to evaluate the accuracy of classroom student behavior recognition. The results show that the improved deep learning-based recognition of college students' behavior improves recognition accuracy.
- Published
- 2023
- Full Text
- View/download PDF
206. Artificial Intelligence in Intelligent Tutoring Systems toward Sustainable Education: A Systematic Review
- Author
-
Lin, Chien-Chang, Huang, Anna Y. Q., and Lu, Owen H. T.
- Abstract
Sustainable education is a crucial aspect of creating a sustainable future, yet it faces several key challenges, including inadequate infrastructure, limited resources, and a lack of awareness and engagement. Artificial intelligence (AI) has the potential to address these challenges and enhance sustainable education by improving access to quality education, creating personalized learning experiences, and supporting data-driven decision-making. One outcome of using AI and Information Technology (IT) systems in sustainable education is the ability to provide students with personalized learning experiences that cater to their unique learning styles and preferences. Additionally, AI systems can provide teachers with data-driven insights into student performance, emotions, and engagement levels, enabling them to tailor their teaching methods and approaches or provide assistance or intervention accordingly. However, the use of AI and IT systems in sustainable education also presents challenges, including issues related to privacy and data security, as well as potential biases in algorithms and machine learning models. Moreover, the deployment of these systems requires significant investments in technology and infrastructure, which can be a challenge for educators. In this review paper, we will provide different perspectives from educators and information technology solution architects to connect education and AI technology. The discussion areas include sustainable education concepts and challenges, technology coverage and outcomes, as well as future research directions. By addressing these challenges and pursuing further research, we can unlock the full potential of these technologies and support a more equitable and sustainable education system.
- Published
- 2023
- Full Text
- View/download PDF
207. Application of Machine Learning Technology in Classical Music Education
- Author
-
Wang, Dongfang
- Abstract
The goal is to promote the healthy and stable development of music education in China. Time-frequency sequence topology in the frequency domain can improve the effect of the convolution operation. Therefore, this paper applies these algorithms to classical music education, including the recognition of classical instruments, feature extraction and recognition of classical music, and quality evaluation of classical music education. The quality of a music evaluation system can be judged by the correlation between its output and subjective evaluation: the higher the correlation, the better the evaluation method. Relevant experiments show that DTW score alignment and end-to-end methods are more successful in extracting the features of classical music and more accurate in identifying classical instruments. The objective evaluation method of pronunciation teaching quality is more objective and accurate than the P.563-based evaluation of music teaching quality.
- Published
- 2023
- Full Text
- View/download PDF
208. An Operations Research-Based Teaching Unit for Grade 10: The ROAR Experience, Part I
- Author
-
Colajanni, Gabriella, Gobbi, Alessandro, Picchi, Marinella, Raffaele, Alice, and Taranto, Eugenia
- Abstract
We introduce "Ricerca Operativa Applicazioni Reali" (ROAR; in English, "Real Applications of Operations Research"), a three-year project for higher secondary schools. Its main aim is to improve students' interest, motivation, and skills related to Science, Technology, Engineering, and Mathematics disciplines by integrating mathematics and computer science through operations research. ROAR offers examples and problems closely connected with students' everyday life or with the industrial reality, balancing mathematical modeling and algorithmics. The project is composed of three teaching units, addressed to grades 10, 11, and 12. The implementation of the first teaching unit took place in Spring 2021 at the scientific high school IIS Antonietti in Iseo (Brescia, Italy). In particular, in this paper, we provide a full description of this first teaching unit in terms of objectives, prerequisites, topics and methods, organization of the lectures, and digital technologies used. Moreover, we analyze the feedback received from students and teachers involved in the experimentation, and we discuss advantages and disadvantages related to distance learning that we had to adopt because of the COVID-19 pandemic.
- Published
- 2023
- Full Text
- View/download PDF
209. The Evaluation Algorithm of English Teaching Ability Based on Big Data Fuzzy K-Means Clustering
- Author
-
Lili Qin, Weixuan Zhong, and Hugh C. Davis
- Abstract
In response to the problem of inaccurate classification of big data information in traditional English teaching ability evaluation algorithms, this paper proposes an English teaching ability estimation algorithm based on big data fuzzy K-means clustering. First, the article establishes a constraint parameter index analysis model. Second, quantitative recursive analysis is used to evaluate the capabilities of big data information models and to achieve entropy feature extraction of capability-constrained feature information. Finally, by integrating big data information fusion with the K-means clustering algorithm, the article achieves clustering and integration of the indicator parameters for English teaching ability, prepares corresponding teaching resource allocation plans, and evaluates English teaching ability. The experimental results show that this method has good information fusion analysis ability and improves both the accuracy of teaching ability evaluation and the efficiency of teaching resource application.
- Published
- 2023
- Full Text
- View/download PDF
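The "fuzzy K-means" the abstract above builds on is the standard fuzzy c-means iteration (alternating membership and centroid updates). A plain numpy sketch of those two updates follows; this is not the paper's full pipeline, and the fuzzifier m=2, iteration count, and toy data are assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: alternate soft-membership and centroid updates."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)             # memberships sum to 1
    for _ in range(iters):
        um = U ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))             # u_ik ∝ d_ik^(-2/(m-1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

# two well-separated toy clusters
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(3, 0.1, (20, 2))])
U, centers = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)                         # hard assignment for inspection
```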
210. Application of a Short Video Caption Generation Algorithm in International Chinese Education and Teaching
- Author
-
Dai, Qianhui
- Abstract
With the continuous development of speech recognition technology, automatic subtitle generation has gradually attracted attention. However, the quality of short videos is uneven, and their cultural teaching is often one-sided, irregular, and unsystematic. In the self-media era, it has become possible to apply short video subtitle generation algorithms to international Chinese education and teaching, though Chinese teachers should pay attention to potential problems in self-media videos and adopt appropriate teaching strategies. This paper discusses the development of international Chinese education and teaching in the new media environment, including its characteristics, advantages, disadvantages, and existing problems. The short video subtitle generation algorithm provides a new approach for international Chinese education and teaching, enhances the vitality of education, and expands educational channels.
- Published
- 2023
- Full Text
- View/download PDF
211. Investigating Immediacy in Multiple Phase-Change Single Case Experimental Designs Using a Variational Bayesian Unknown Change-Points Model
- Author
-
Batley, Prathiba Natesan, Minka, Tom, and Hedges, Larry Vernon
- Abstract
Immediacy is one of the necessary criteria to show strong evidence of a treatment effect in single case experimental designs (SCEDs). With the exception of Natesan and Hedges (2017), no inferential statistical tool has been used to demonstrate or quantify it until now. We investigate and quantify immediacy by treating the change-points between the baseline and treatment phases as unknown. We extend Natesan and Hedges' work to multiple phase-change (e.g., ABAB) designs using a Variational Bayesian (VB) unknown change-points model. VB was used instead of Markov chain Monte Carlo (MCMC) methods because MCMC cannot be used effectively to determine multiple change-points. Combined and individual probabilities of correctly estimating the change-points were used as indicators of the accuracy of the algorithm. Unlike MCMC in the Natesan and Hedges (2017) study, VB was able to recover the change-points with high accuracy even for short time-series, and in only a fraction of the time for all time-series lengths. We illustrate the algorithm with 13 real datasets. Advantages of the unknown change-points approach and of Bayesian and Variational Bayesian estimation for SCEDs are discussed. [This paper was published in "Behavior Research Methods."]
- Published
- 2020
- Full Text
- View/download PDF
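The core idea in the abstract above, treating the phase-change location as an unknown to be inferred, can be illustrated in the simplest case (one change-point, AB design) by exact enumeration rather than variational Bayes. The sketch below is therefore a deliberately simplified stand-in, not the authors' VB model: it uses a flat prior over change locations, a known noise scale, and profile (sample-mean) phase means.

```python
import math

def change_point_posterior(y, sigma=1.0):
    """Posterior over a single unknown change-point in a piecewise-mean
    series, by exact enumeration over candidate locations. Each phase
    mean is set to its sample mean (profile likelihood); flat prior."""
    n = len(y)
    logps = []
    for t in range(1, n):                 # change after observation t
        ll = 0.0
        for seg in (y[:t], y[t:]):
            mu = sum(seg) / len(seg)
            ll += sum(-(v - mu) ** 2 / (2 * sigma ** 2) for v in seg)
        logps.append(ll)
    mx = max(logps)
    ws = [math.exp(l - mx) for l in logps]
    z = sum(ws)
    return [w / z for w in ws]            # P(change after t), t = 1..n-1

# baseline of 4 points, then a clear level shift (treatment phase)
y = [0.1, -0.2, 0.0, 0.1, 2.1, 1.9, 2.0, 2.2]
post = change_point_posterior(y)
best = post.index(max(post)) + 1          # most probable change location
```

For multiple phase changes (e.g. ABAB), the same enumeration becomes combinatorial, which is exactly why the paper turns to variational methods.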
212. Rock-Paper-Scissors Play: Beyond the Win-Stay/Lose-Change Strategy.
- Author
-
Zhang, Hanshu, Moisan, Frederic, and Gonzalez, Cleotilde
- Subjects
COMPUTER algorithms, ALGORITHMS, COMPUTER engineering - Abstract
This research studied the strategies that players use in sequential adversarial games. We took the Rock-Paper-Scissors (RPS) game as an example and ran players in two experiments. The first experiment involved two humans, who played RPS together for 100 rounds. Importantly, our payoff design in the RPS allowed us to differentiate participants who used a random strategy from those who used a Nash strategy. We found that participants did not play in agreement with the Nash strategy; rather, their behavior was closer to random. Moreover, the analyses of the participants' sequential actions indicated heterogeneous cycle-based behaviors: some participants' actions were independent of their past outcomes, some followed the well-known win-stay/lose-change strategy, and others exhibited win-change/lose-stay behavior. To understand the sequential patterns of outcome-dependent actions, we designed probabilistic computer algorithms involving specific change actions (i.e., to downgrade or upgrade according to the immediate past outcome): the Win-Downgrade/Lose-Stay (WDLS) and Win-Stay/Lose-Upgrade (WSLU) strategies. Experiment 2 used these strategies against a human player. Our findings show that participants followed a win-stay strategy against the WDLS algorithm and a lose-change strategy against the WSLU algorithm, while they had difficulty using the upgrade/downgrade direction, suggesting humans' limited ability to detect and counter the actions of the algorithm. Taken together, our two experiments showed a large diversity of sequential strategies, where the win-stay/lose-change strategy did not describe the majority of human players' dynamic behaviors in this adversarial situation. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
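The WDLS and WSLU bot strategies named in the abstract above can be sketched directly. In RPS, "upgrade" means switching to the move that beats your last move and "downgrade" to the move your last move beats; the abstract calls the algorithms probabilistic, so the sketch follows the rule with probability p and otherwise plays at random. The value of p and the tie handling are assumptions, not details from the paper.

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value

def upgrade(move):
    """The move that beats `move`."""
    return next(m for m in MOVES if BEATS[m] == move)

def downgrade(move):
    """The move that `move` beats."""
    return BEATS[move]

def wdls_next(last_move, last_outcome, p=0.9):
    """Win-Downgrade/Lose-Stay bot: follow the rule with probability p,
    otherwise (or after a tie) play at random."""
    if random.random() > p or last_outcome == "tie":
        return random.choice(MOVES)
    return downgrade(last_move) if last_outcome == "win" else last_move

def wslu_next(last_move, last_outcome, p=0.9):
    """Win-Stay/Lose-Upgrade bot."""
    if random.random() > p or last_outcome == "tie":
        return random.choice(MOVES)
    return last_move if last_outcome == "win" else upgrade(last_move)
```

With p=1 the bots are deterministic, which makes the rules easy to verify: WDLS after winning with rock plays scissors; WSLU after losing with scissors plays rock.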
213. Zero-preserving imputation of single-cell RNA-seq data.
- Author
-
Linderman GC, Zhao J, Roulis M, Bielecki P, Flavell RA, Nadler B, and Kluger Y
- Subjects
- Animals, B-Lymphocytes cytology, B-Lymphocytes metabolism, Bronchi cytology, Bronchi metabolism, Datasets as Topic, Epithelial Cells cytology, Epithelial Cells metabolism, Humans, Killer Cells, Natural cytology, Killer Cells, Natural metabolism, Mice, Monocytes cytology, Monocytes metabolism, Primary Cell Culture, RNA metabolism, RNA-Seq, Single-Cell Analysis, T-Lymphocytes cytology, T-Lymphocytes metabolism, Algorithms, RNA genetics, Sequence Analysis, RNA statistics & numerical data
- Abstract
A key challenge in analyzing single cell RNA-sequencing data is the large number of false zeros, where genes actually expressed in a given cell are incorrectly measured as unexpressed. We present a method based on low-rank matrix approximation which imputes these values while preserving biologically non-expressed genes (true biological zeros) at zero expression levels. We provide theoretical justification for this denoising approach and demonstrate its advantages relative to other methods on simulated and biological datasets., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
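The low-rank idea in the abstract above can be illustrated with a heavily simplified sketch: reconstruct the expression matrix at low rank, then reset to zero any entry whose reconstructed value falls below a per-gene threshold, so that genes that are truly unexpressed stay at zero. The threshold rule used here (the magnitude of the most negative reconstructed value in each gene column) is an assumption for illustration, not the authors' exact criterion.

```python
import numpy as np

def zero_preserving_impute(X, rank):
    """Simplified zero-preserving imputation sketch: rank-k SVD
    reconstruction, then per-gene (per-column) thresholding that sends
    near-zero reconstructed values back to exactly zero, on the logic
    that symmetric noise around zero marks true biological zeros."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    thresh = np.abs(np.minimum(low.min(axis=0), 0.0))   # per-column threshold
    return np.where(low > thresh[None, :], low, 0.0)

# rank-1 toy matrix (cells x genes): gene 0 is a true biological zero in
# every cell, while entry [0, 3] is a dropout (true value 3, measured 0)
X = np.outer([1.0, 2.0, 3.0, 4.0], [0.0, 1.0, 2.0, 3.0])
X[0, 3] = 0.0
imp = zero_preserving_impute(X, rank=1)
```

On this toy input the dropout at [0, 3] is restored to a positive value while the all-zero gene column stays at zero.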
214. A new method for roadheader pick arrangement based on meshing pick spatial position and rock cutting verification.
- Author
-
Zhang, Mengqi, Yan, Xianguo, and Qin, Guoqiang
- Subjects
GAUSSIAN distribution, COMPRESSIVE strength, PAPER arts, CONSUMPTION (Economics), ALGORITHMS - Abstract
This paper proposes a cutting head optimization method based on meshing the spatial position of the picks. According to the expanded shape of the spatial mesh composed of four adjacent picks on the plane, a standard mesh shape analysis method can be established with mesh skewness, mesh symmetry, and mesh area ratio as the indicators. The traversal algorithm is used to calculate the theoretical meshing rate, pick rotation coefficient, and the variation of cutting load for longitudinal cutting heads with 2, 3, and 4 helices. The results show that the 3-helix longitudinal cutting head has better performance. Using the traversal result with the maximum theoretical meshing rate as the design parameter, the longitudinal cutting head CH51 with 51 picks was designed and analyzed. A prediction model of pick consumption is established based on cutting speed, direct rock cutting volume of each pick, pick rotation coefficient, uniaxial compressive strength, and the CERCHAR abrasivity index, and rock with normally distributed uniaxial compressive strength is used for the specific energy calculation. The artificial rock wall cutting test results show that the reduction in height loss suppresses the increase in pick equivalent loss caused by the increase in mass loss, and the pick consumption in this test is only 0.037–0.054 picks/m³. In addition, the correlation between the actual pick consumption and the prediction model, and the correlation between the actual cutting specific energy and the theoretical calculation value, are also analyzed. The research results show that the pick arrangement design method based on meshing the pick tip spatial position can effectively reduce pick consumption and improve rock cutting performance. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
215. A Distributed Security SDN Cluster Architecture for Smart Grid Based on Blockchain Technology.
- Author
-
Xiong, Ao, Tian, Hongkang, He, Wenchen, Zhang, Jie, Meng, Huiping, Guo, Shaoyong, Wang, Xinyan, Wu, Xinyi, and Kadoch, Michel
- Subjects
BLOCKCHAINS, SMART power grids, TELECOMMUNICATION systems, DENIAL of service attacks, ALGORITHMS, ELECTRONIC paper, INFORMATION technology, MULTICASTING (Computer networks) - Abstract
This paper proposes a smart grid distributed security architecture based on blockchain technology and an SDN cluster structure, referred to as the ClusterBlock model, which combines the advantages of two emerging technologies: blockchain and SDN. Blockchain technology allows for distributed peer-to-peer networks, where the network can ensure the trusted interaction of untrusted nodes. At the same time, this article adopts a distributed SDN controller cluster design to avoid a single point of failure and to balance the load between equipment and the controller. A cluster head was selected in each SDN cluster and used as a blockchain node to construct an SDN cluster head blockchain. By combining blockchain technology, the security and privacy of the SDN communication network can be enhanced. This paper also designs a distributed control strategy and a network attack detection algorithm based on blockchain consensus, and introduces the Jaccard similarity coefficient to detect network attacks. Finally, this paper evaluates the ClusterBlock model and an existing model based on the OpenFlow protocol through simulation experiments and compares their security performance. The evaluation results show that the ClusterBlock model has more stable bandwidth and stronger security performance in the face of DDoS attacks of the same scale. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
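The Jaccard similarity coefficient mentioned in the abstract above is simply |A∩B| / |A∪B| for two sets. A minimal sketch follows; the flow-fingerprint framing (comparing the sets of source addresses seen in consecutive traffic windows) is an assumption used here for illustration, not the paper's exact detection feature.

```python
def jaccard(a, b):
    """Jaccard similarity coefficient of two sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0                       # two empty sets: identical by convention
    return len(a & b) / len(a | b)

# hypothetical fingerprints: source addresses seen in two traffic windows
window_now  = {"10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.9"}
window_prev = {"10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"}
sim = jaccard(window_now, window_prev)   # 3 shared / 5 total = 0.6
# a sudden drop in similarity across windows can flag a flood of new sources
```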
216. Two-stage algorithms for visually exploring spatio-temporal clustering of avian influenza virus outbreaks in poultry farms.
- Author
-
Wu HI and Chao DY
- Subjects
- Animals, Influenza in Birds diagnosis, Influenza in Birds virology, Poultry Diseases diagnosis, Poultry Diseases virology, Taiwan, Time Factors, Algorithms, Animal Husbandry, Influenza A Virus, H5N2 Subtype pathogenicity, Influenza A Virus, H5N8 Subtype pathogenicity, Influenza in Birds transmission, Poultry virology, Poultry Diseases transmission, Space-Time Clustering
- Abstract
The development of visual tools for the timely identification of spatio-temporal clusters will assist in implementing control measures to prevent further damage. From January 2015 to June 2020, a total of 1463 avian influenza outbreak farms were detected in Taiwan and further confirmed to be affected by highly pathogenic avian influenza subtype H5Nx. In this study, we adopted two common concepts of spatio-temporal clustering methods, the Knox test and scan statistics, with visual tools to explore the dynamic changes of clustering patterns. Since most (68.6%) of the outbreak farms were detected in 2015, only the data from 2015 were used in this study. The first two-stage algorithm performs the Knox test, which established a threshold of 7 days and identified 11 major clusters in the six counties of southwestern Taiwan, followed by the standard deviational ellipse (SDE) method implemented on each cluster to reveal the transmission direction. The second algorithm applies scan likelihood ratio statistics followed by the AGC index to visualize the dynamic changes of the local aggregation pattern of disease clusters at the regional level. Compared to the one-stage aggregation approach, Knox-based and AGC mapping were more sensitive in small-scale spatio-temporal clustering. (© 2021. The Author(s).)
- Published
- 2021
- Full Text
- View/download PDF
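The Knox test used in the study above counts case pairs that are close in both space and time, with significance assessed by permuting the case times. A textbook sketch follows; the thresholds and toy outbreak data are illustrative assumptions, not the study's values (the paper's space-time analysis established a 7-day temporal threshold).

```python
import math
import random

def knox_statistic(points, space_thresh, time_thresh):
    """Knox statistic: number of case pairs close in BOTH space and time.
    `points` is a list of (x, y, t) tuples."""
    stat = 0
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            x1, y1, t1 = points[i]
            x2, y2, t2 = points[j]
            if (math.hypot(x1 - x2, y1 - y2) <= space_thresh
                    and abs(t1 - t2) <= time_thresh):
                stat += 1
    return stat

def knox_pvalue(points, space_thresh, time_thresh, n_perm=999, seed=0):
    """Monte Carlo p-value: permute times to break any space-time link."""
    rng = random.Random(seed)
    obs = knox_statistic(points, space_thresh, time_thresh)
    times = [t for _, _, t in points]
    ge = 0
    for _ in range(n_perm):
        rng.shuffle(times)
        perm = [(x, y, t) for (x, y, _), t in zip(points, times)]
        if knox_statistic(perm, space_thresh, time_thresh) >= obs:
            ge += 1
    return obs, (ge + 1) / (n_perm + 1)

# toy outbreak data: a tight space-time cluster of 5 farms plus 5
# scattered, later cases (coordinates in km, times in days)
points = [(0.0, 0.0, 0), (0.1, 0.0, 1), (0.0, 0.1, 2), (0.1, 0.1, 1),
          (0.05, 0.05, 3),
          (100.0, 0.0, 50), (0.0, 100.0, 60), (200.0, 200.0, 70),
          (50.0, 50.0, 80), (150.0, 0.0, 90)]
obs, p = knox_pvalue(points, space_thresh=1.0, time_thresh=7.0)
```

Here all 10 pairs within the tight cluster are close in both dimensions, and the permutation p-value is small, flagging genuine space-time interaction.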
217. Critical lab values: A 50-year perspective honoring the MLO anniversary of publishing the laboratory panic values paper.
- Author
-
Lundberg, George D.
- Subjects
SERIAL publications, GENERATIVE artificial intelligence, DOCUMENTATION, LABORATORIES, MEDICARE, LEADERSHIP, DECISION making, SPECIAL days, PUBLISHING, ATTITUDES of medical personnel, COLLECTION & preservation of biological specimens, TIME, LABOR supply, ALGORITHMS - Abstract
The article focuses on the significance of critical laboratory values and their role in preventing life-threatening situations, highlighting the historical development of a systematic approach to manage these values. Topics include the implementation of the original critical value system at Los Angeles County/USC Medical Center, the contributions of Dr. Sol Bernstein to laboratory utilization, and the broader sociologic and economic factors influencing this advancement in the 1960s.
- Published
- 2024
218. Determining the Moho topography using an improved inversion algorithm: a case study from the South China Sea.
- Author
-
Zhang, Hui, Yu, Hangtao, Xu, Chuang, Li, Rui, Bie, Lu, He, Qingyin, Liu, Yiqi, Lu, Jinsong, Xiao, Yinan, Lyu, Yang, Eldosouky, Ahmed M., and Loureiro, Afonso
- Subjects
MOHOROVICIC discontinuity, OPTIMIZATION algorithms, TOPOGRAPHY, ALGORITHMS - Abstract
The Parker-Oldenburg method, as a classical frequency-domain algorithm, has been widely used in Moho topographic inversion. The method has two indispensable hyperparameters: the Moho density contrast and the average Moho depth. Accurate hyperparameters are an important prerequisite for the inversion of fine Moho topography. However, limited by the nonlinear terms, the hyperparameters estimated by previous methods have obvious deviations. For this reason, this paper proposes a new method to improve the existing Parker-Oldenburg method by taking advantage of the invasive weed optimization algorithm in estimating the hyperparameters. The synthetic test results show that, compared with the trial-and-error method and the linear regression method, the new method estimates the hyperparameters more accurately and with excellent computational efficiency, which lays the foundation for the inversion of a more accurate Moho topography. In practice, the method is applied to Moho topographic inversion in the South China Sea. With the constraints of available seismic data, the crust-mantle density contrast and the average Moho depth in the South China Sea are determined to be 0.535 g/cm³ and 21.63 km, respectively, and the Moho topography of the South China Sea is inverted on this basis. The results show that the Moho depth in the study area ranges from 5.7 km to 32.3 km, with obvious undulations. The shallowest parts of the Moho topography are mainly located in the southern part of the Southwestern sub-basin and the southern part of the Manila Trench, at a depth of about 6 km. Compared with the CRUST 1.0 model and the model calculated by the improved Bott's method, the RMS difference between the Moho model in this paper and the seismic points is smaller, which proves that the method has some advantages in Moho topographic inversion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
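For context, the two hyperparameters the abstract above refers to (the density contrast Δρ and the mean Moho depth z₀) enter the classical Parker-Oldenburg iteration as follows. Sign conventions vary between papers, so this is one common form of Parker's forward formula and Oldenburg's rearrangement, not necessarily the exact convention the authors use:

```latex
% Parker (1973) forward formula: F = Fourier transform, k = wavenumber,
% G = gravitational constant, h = relief about the mean depth z_0
\mathcal{F}[\Delta g](k) = -2\pi G \,\Delta\rho\, e^{-|k| z_0}
    \sum_{n=1}^{\infty} \frac{|k|^{\,n-1}}{n!}\, \mathcal{F}\!\left[h^{n}\right](k)

% Oldenburg (1974) rearrangement, iterated from an initial h until convergence
\mathcal{F}[h](k) = -\frac{\mathcal{F}[\Delta g](k)\, e^{|k| z_0}}{2\pi G \,\Delta\rho}
    - \sum_{n=2}^{\infty} \frac{|k|^{\,n-1}}{n!}\, \mathcal{F}\!\left[h^{n}\right](k)
```

Because both Δρ and z₀ appear inside the exponential and the nonlinear sum, small errors in them propagate strongly into the inverted relief, which is why the paper devotes a global optimizer to estimating them.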
219. Letter to the Editor Regarding Paper 'Automatic Computation of Left Ventricular Volume Changes over a Cardiac Cycle from Echocardiography Images by Nonlinear Dimensionality Reduction'
- Author
-
Ahmad Shalbaf and Hamid Behnam
- Subjects
Surface (mathematics), Geodesic, Heart Ventricles, Image registration, Motion (geometry), Geometry, Image processing, Article, Pattern Recognition, Automated, Cardiovascular Physiological Phenomena, Motion estimation, Image Interpretation, Computer-Assisted, Image Processing, Computer-Assisted, Humans, Radiology, Nuclear Medicine and imaging, Letter to the Editor, Mathematics, Ultrasonography, Radiological and Ultrasound Technology, Nonlinear dimensionality reduction, Reproducibility of Results, Function (mathematics), Organ Size, Computer Science Applications, Transformation (function), Algebraic function, Isomap, Rotation (mathematics), Algorithm, Algorithms - Abstract
Question 1—Echocardiography images: Images were acquired from end-diastole to end-systole at seven phases within a cardiac cycle by a Vivid 3 GE Health echocardiography machine. Obtaining good images at the denoted phases is operator-dependent (for the best clinical examination). Presumably, images were acquired not only from A4C and A2C but also from short-axis views. Short-axis views have not been used in your papers, so some radial, circumferential, and rotatory data would be missed. Response: It would be straightforward for us to record and analyze images of short-axis views. However, according to the American Society of Echocardiography (ASE) [1, 2], in the modified Simpson's rule, which is a method for LV volume computation from 2-D echocardiography images, it suffices to acquire apical two- and four-chamber views (A2C, A4C) for left ventricular (LV) volume computation. Question 2—Observed data yi: Echocardiography images play the role of observed data and are symbolized by yi; there is a sequence of observed data y1, y2, …, yN, where N is the number of obtained images. In effect, these observed data form a chain of displacement, rotation, and pure strain or nonrigid transformation (deformation) that starts from y1 and ends at yN. This chain has conceptual interpretations in terms of the elasticity of the global left ventricular motion/function and the regional LV fiber arrangement/movement. Response: Yes, that is correct. Question 3—Embedding the observed data in a high-dimensional space: Response: Yes, that is correct. Question 4—Image processing on the yi and geodesic distances between them: The observed data y1, y2, …, yN are embedded in a meshed surface sized to the number of pixels of an image. Translation, rotation, and pure deformation of y1, y2, …, yN occur, and are studied, on this surface using mathematical elasticity theory. A distance metric is defined that computes distances between observed points.
Medical interpretations of the observed points over time should be checked and stated clearly in your manuscripts. These are not generic images but images of the left ventricle. Motion (displacement and velocity), deformation (strain and strain rate), and torsion of each myocardial sample have to be extracted during a cardiac cycle on the mentioned surface. These are used in the creation of the graph G referred to in their papers. Response: In this stage, we have only computed the distance between two images in a cardiac cycle using non-rigid image registration, regardless of the relationship between images over time. By this method, the motion and longitudinal deformation of the LV myocardium between any two images in a cardiac cycle are extracted (for example, between the end-systole image and the end-diastole image); short-axis views would be needed for radial and circumferential deformations. Then, in the other stages of our method (calculation of the matrix of pairwise geodesic distances and application of multidimensional scaling to the resulting matrix to construct the low-dimensional data points xi), the motion and longitudinal deformation of the LV myocardium during a cardiac cycle are extracted. Question 5—An isometric map f: The main tool in this method is the function "f", which is an isometric map [2, 3]. I would be interested to know the structure or formula of this isometric function, which most probably carries a lot of practical information about left ventricular function and structure. How can we obtain a good representation of such a function "f" for the study of heart science? Is "f" known as an algebraic function? What information has been encoded in the fibers (f−1) of this isometric function "f"? Response: The Isomap algorithm is one of the most popular nonlinear dimensionality reduction (NLDR) algorithms.
This algorithm attempts to extract low-dimensional data points from input points in a high-dimensional space in such a way that pairwise geodesic distances (the distance between two points measured over the manifold) are preserved. So that nearby and far points in high-dimensional space map to nearby and far points in low-dimensional space. This method is frequently used for visualization of medial image set. Detail of this method is described in Tenenbaum et al. and Borg et al. [3, 4]. It should be noted that we do not change the structure or formula of isomap algorithm. We only define a new image distance function based on image registration in calculation of geodesic distance between yi′ s in isomap algorithm. With this modification, Isomap algorithm is specified for assessment of the left ventricular function and structure. However, precise mathematical descriptions of f and (f−1) applied to echocardiography images and using the other NLDR algorithms need further researches in the following of this article. Question: 6—A reconstructed curve on a 2D manifold space & hidden data xi′ s; f(yi) = xi Isometric function results in a reconstructive curve crossing from hidden data (xi′ s) in a surface. It’s natural that some information (maybe clinical information) would betransferred to these hidden points like the curve of LV volume changes and so on and so on. A main question is: what are these new points xi′ s exactly (medically point of view)? What the other data has been come out from these hidden points? I think all of these questions back to gain a good understanding of isometric function “f”. Response: These points xi′ s will probably enable physicians to diagnose and follow up many cardiac structures and functions that also open doors to much more research at imaging cardiology in the future. References 1. 
AlizadehSani Z, Shalbaf A, Behnam H, Shalbaf R: Automatic computation of left ventricularvolume changes over a cardiac cycle from echocardiography images by nonlinear dimensionality reduction. J Digit Imaging : July, 2014 2. Tenenbaum JB, de Silva V, Langford JC: global geometric framework for nonlinear dimensionalityreduction. Science 290:2319–2323, 2000. Reprint available online: http://web.mit.edu/cocosci/Papers/sci_reprint.pdf 3. Ledesma-Carbayo MJ, Kybic J, Desco M, et al.: Spatio-temporal nonrigid registration for ultrasoundcardiac motion estimation. IEEE Trans Med Imaging 24:1113–1126, 2005.
- Published
- 2015
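The pipeline described in the correspondence above — pairwise image distances, geodesic distances over a neighborhood graph, then multidimensional scaling — is the standard Isomap recipe. A minimal NumPy/SciPy sketch is below; the `dist_fn` placeholder stands in for the authors' non-rigid registration-based image distance (not reproduced here), and all names and defaults are illustrative.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def isomap_embed(images, n_neighbors=5, n_components=2, dist_fn=None):
    """Sketch of the Isomap pipeline: pairwise image distances ->
    k-NN graph -> geodesic (shortest-path) distances -> classical MDS."""
    n = len(images)
    if dist_fn is None:
        # Placeholder distance; the paper instead uses a non-rigid
        # registration-based image distance here.
        dist_fn = lambda a, b: float(np.linalg.norm(a - b))
    # 1. Pairwise distances between all frames of the cardiac cycle.
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = dist_fn(images[i], images[j])
    # 2. Keep only each frame's k nearest neighbours (local geometry).
    G = np.full((n, n), np.inf)
    for i in range(n):
        nearest = np.argsort(D[i])[: n_neighbors + 1]  # includes self
        G[i, nearest] = D[i, nearest]
    G = np.minimum(G, G.T)  # symmetrise the graph
    # 3. Geodesic distances = shortest paths over the k-NN graph
    #    (np.inf entries are treated as missing edges).
    geo = shortest_path(G, directed=False)
    # 4. Classical MDS on the geodesic distance matrix.
    H = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * H @ (geo ** 2) @ H
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:n_components]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```

Applied to a cardiac cycle, the xi returned per frame trace out the low-dimensional curve the correspondence discusses.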
220. Comparative Dissemination of Aerosol and Splatter Using Suction Device during Ultrasonic Scaling: A Pilot Study.
- Author
-
Engsomboon, Nutthawadee, Pachimsawat, Praewpat, and Thanathornwong, Bhornsawan
- Subjects
AEROSOLS ,ULTRASONIC equipment ,CARDBOARD ,DENTAL equipment ,PILOT projects ,MASS spectrometers ,DENTAL scaling - Abstract
Objective: This study compared the diameter and count of aerosol and splatter produced by a dental mouth prop with a suction holder device versus a saliva ejector during ultrasonic scaling in a clinical setting. Methodology: Fluorescein dye was placed in the irrigation reservoirs of the dental equipment, and an ultrasonic scaler was employed on a mannequin. The procedures were performed three times per device. Upper and bottom board papers were placed on the laboratory platform, and all runs used an ultrasonic scaler to generate aerosol and splatter with either a dental mouth prop with a suction holder or a saliva ejector. The fluorescein samples were examined by photographic analysis, followed by image processing in Python and measurement of particle diameter and count. An independent t-test was used to compare the devices. Result: With the dental mouth prop with a suction holder, the scaler produced aerosol particles with a mean diameter (± SD) of 1080 ± 662 µm on the upper board paper and 1230 ± 1020 µm on the bottom board paper. With the saliva ejector, the mean aerosol diameter was 900 ± 580 µm on the upper board paper and 1000 ± 756 µm on the bottom board paper. Conclusion: There was a significant difference in aerosol and splatter particle diameter and count between the dental mouth prop with a suction holder and the saliva ejector (p < 0.05). Furthermore, the difference between the two groups was statistically significant on both the upper and bottom board papers. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
221. Tools and algorithms for the construction and analysis of systems: a special issue for TACAS 2020.
- Author
-
Biere, Armin and Parker, David
- Subjects
ALGORITHMS ,SOFTWARE verification ,TECHNOLOGY transfer ,SOFTWARE maintenance ,SOFTWARE engineering - Abstract
This special issue of Software Tools for Technology Transfer comprises extended versions of selected papers from the 26th edition of the International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2020). The focus of this conference series is tools and algorithms for the rigorous analysis of software and hardware systems, and the papers in this special issue cover the spectrum of current work in this field. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
222. Moth-flame optimization algorithm based on diversity and mutation strategy.
- Author
-
Ma, Lei, Wang, Chao, Xie, Neng-gang, Shi, Miao, Ye, Ye, and Wang, Lu
- Subjects
MATHEMATICAL optimization ,ALGORITHMS ,CONSTRAINED optimization ,MAXIMA & minima - Abstract
In this work, an improved moth-flame optimization algorithm is proposed to alleviate the problems of premature convergence and convergence to local minima. From the perspective of diversity, an inertia weight with diversity-feedback control is introduced into moth-flame optimization to balance the algorithm's exploitation and global search abilities. Furthermore, a small-probability mutation after the position update stage is added to improve optimization performance. The performance of the proposed algorithm is extensively evaluated on a suite of CEC'2014 benchmark functions and four constrained engineering optimization problems. The results of the proposed algorithm are compared with those of other improved algorithms presented in the literature. It is observed that the proposed method has superior performance in improving the convergence ability of the algorithm. In addition, the proposed algorithm assists in escaping local minima. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
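The two modifications named in the abstract above — a diversity-feedback inertia weight and a small-probability mutation after the position update — can be sketched generically. The exact formulas are not given in the abstract, so the functional forms, parameter names, and defaults below are illustrative assumptions, not the authors' equations.

```python
import numpy as np

def diversity(pop):
    """Mean distance of individuals from the population centroid."""
    return float(np.mean(np.linalg.norm(pop - pop.mean(axis=0), axis=1)))

def inertia_weight(pop, w_min=0.4, w_max=0.9, d_ref=1.0):
    """Hypothetical diversity-feedback inertia weight: when diversity
    collapses, the weight grows toward w_max to push exploration;
    when diversity is high it shrinks toward w_min."""
    return w_min + (w_max - w_min) * np.exp(-diversity(pop) / d_ref)

def mutate(pop, p=0.05, sigma=0.1, rng=None):
    """Small-probability Gaussian mutation applied after the moth
    position update, to help escape local minima."""
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.random(pop.shape) < p
    return pop + mask * rng.normal(0.0, sigma, size=pop.shape)
```

In an MFO loop, `inertia_weight` would scale the spiral position update each generation, and `mutate` would perturb a small fraction of coordinates afterward.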
223. Role of Machine Learning in Resource Allocation Strategy over Vehicular Networks: A Survey.
- Author
-
Nurcahyani I and Lee JW
- Subjects
- Resource Allocation, Algorithms, Machine Learning
- Abstract
The increasing demand for smart vehicles with many sensing capabilities will escalate data traffic in vehicular networks. Meanwhile, available network resources are limited. The emergence of AI implementation in vehicular network resource allocation opens the opportunity to improve resource utilization to provide more reliable services. Accordingly, many resource allocation schemes with various machine learning algorithms have been proposed to dynamically manage and allocate network resources. This survey paper presents how machine learning is leveraged in the vehicular network resource allocation strategy. We focus our study on determining its role in the mechanism. First, we provide an analysis of how authors designed their scenarios to orchestrate the resource allocation strategy. Secondly, we classify the mechanisms based on the parameters they chose when designing the algorithms. Finally, we analyze the challenges in designing a resource allocation strategy in vehicular networks using machine learning. Therefore, a thorough understanding of how machine learning algorithms are utilized to offer a dynamic resource allocation in vehicular networks is provided in this study.
- Published
- 2021
- Full Text
- View/download PDF
224. Fair algorithms for selecting citizens' assemblies.
- Author
-
Flanigan B, Gölz P, Gupta A, Hennig B, and Procaccia AD
- Subjects
- Datasets as Topic, Female, Humans, Male, Random Allocation, Administrative Personnel organization & administration, Algorithms, Democracy, Policy Making, Probability
- Abstract
Globally, there has been a recent surge in 'citizens' assemblies' [1], which are a form of civic participation in which a panel of randomly selected constituents contributes to questions of policy. The random process for selecting this panel should satisfy two properties. First, it must produce a panel that is representative of the population. Second, in the spirit of democratic equality, individuals would ideally be selected to serve on this panel with equal probability [2,3]. However, in practice these desiderata are in tension owing to differential participation rates across subpopulations [4,5]. Here we apply ideas from fair division to develop selection algorithms that satisfy the two desiderata simultaneously to the greatest possible extent: our selection algorithms choose representative panels while selecting individuals with probabilities as close to equal as mathematically possible, for many metrics of 'closeness to equality'. Our implementation of one such algorithm has already been used to select more than 40 citizens' assemblies around the world. As we demonstrate using data from ten citizens' assemblies, adopting our algorithm over a benchmark representing the previous state of the art leads to substantially fairer selection probabilities. By contributing a fairer, more principled and deployable algorithm, our work puts the practice of sortition on firmer foundations. Moreover, our work establishes citizens' assemblies as a domain in which insights from the field of fair division can lead to high-impact applications. (© 2021. The Author(s).)
- Published
- 2021
- Full Text
- View/download PDF
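On a toy instance, the maximin version of the idea in the abstract above can be written as a linear program: over all quota-satisfying panels, choose a lottery that maximises the minimum individual selection probability. The sketch below uses brute-force panel enumeration and `scipy.optimize.linprog`; names are illustrative, and the deployed algorithms in the paper are more sophisticated and scale to realistic pool sizes.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def fair_panel_lottery(pool, panel_size, satisfies_quotas):
    """Maximin sortition sketch: enumerate every quota-satisfying
    panel, then solve an LP for the lottery over panels that
    maximises the minimum per-person selection probability."""
    panels = [c for c in itertools.combinations(range(len(pool)), panel_size)
              if satisfies_quotas([pool[i] for i in c])]
    n, m = len(pool), len(panels)
    member = np.zeros((n, m))        # member[i, k] = 1 if i sits on panel k
    for k, panel in enumerate(panels):
        member[list(panel), k] = 1.0
    # Variables: panel probabilities p_1..p_m plus the floor t.
    # Maximise t (i.e. minimise -t) subject to
    #   t <= sum_k member[i, k] * p_k   for every person i,
    #   sum_k p_k = 1,  0 <= p_k <= 1.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    A_ub = np.hstack([-member, np.ones((n, 1))])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=[[1.0] * m + [0.0]], b_eq=[1.0],
                  bounds=[(0.0, 1.0)] * (m + 1))
    probs = res.x[:m]
    return panels, probs, member @ probs  # per-person selection probabilities
```

For a pool of two women and two men with a "one of each" quota, the optimum gives every person a selection probability of exactly 1/2.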
225. Binding affinity prediction for binary drug-target interactions using semi-supervised transfer learning.
- Author
-
Tanoori B, Zolghadri Jahromi M, and Mansoori EG
- Subjects
- Humans, Algorithms, Drug Interactions, Pharmaceutical Preparations chemistry, Pharmaceutical Preparations metabolism, Supervised Machine Learning
- Abstract
In the field of drug-target interaction prediction, the majority of approaches have formulated the problem as a simple binary classification task, and these methods used binary drug-target interaction datasets to train their models. The prediction of drug-target interactions is, however, inherently a regression problem, and these interactions should be identified according to the binding affinity between drugs and targets. This paper deals with binary drug-target interactions and tries to identify them based on the binding strength of a drug and its target. To this end, we propose a semi-supervised transfer learning approach to predict the binding affinity in a continuous spectrum for binary interactions. Due to the lack of training data with continuous binding affinity in the target domain, the proposed method makes use of the information available in other domains (i.e. the source domain) via a transfer learning approach. The general framework of our algorithm is based on an objective function which considers the performance in both the source and target domains, as well as the unlabeled data in the target domain via a regularization term. To optimize this objective function, we make use of a gradient boosting machine, which constructs the final model. To assess the performance of the proposed method, we have used benchmark datasets with binary interactions for four classes of human proteins. Our algorithm identifies interactions in a more realistic setting, and according to the experimental results, our regression model performs better than state-of-the-art methods in some procedures. (© 2021. The Author(s), under exclusive licence to Springer Nature Switzerland AG.)
- Published
- 2021
- Full Text
- View/download PDF
226. USVs Path Planning for Maritime Search and Rescue Based on POS-DQN: Probability of Success-Deep Q-Network.
- Author
-
Liu, Lu, Shan, Qihe, and Xu, Qi
- Subjects
DEEP reinforcement learning ,RESCUE work ,AUTONOMOUS vehicles ,PROBLEM solving ,ALGORITHMS - Abstract
Efficient maritime search and rescue (SAR) is crucial for responding to maritime emergencies. In traditional SAR, fixed search path planning is inefficient and cannot prioritize high-probability regions, which imposes significant limitations. To solve these problems, this paper proposes path planning for unmanned surface vehicles (USVs) in maritime SAR based on POS-DQN, so that USVs can perform SAR tasks reasonably and efficiently. Firstly, the search region is allocated as a whole using an improved task allocation algorithm, so that each USV's task region has a priority and no duplication. Secondly, this paper considers the probability of success (POS) of the search environment and proposes a POS-DQN algorithm based on deep reinforcement learning, which can adapt to the complex and changing environment of SAR. It designs a probability-weight reward function and trains USV agents to obtain the optimal search path. Finally, the simulation results show that, with complete coverage and obstacle and collision avoidance taken into account, the search path obtained by this algorithm prioritizes high-probability regions and improves the efficiency of SAR. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
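The probability-weight reward idea in the abstract above can be sketched in a few lines: reward a move by the POS of the searched cell, discounted for revisits and movement cost, so an agent learns to cover high-POS regions first. The exact reward shape is not given in the abstract, so the form, names, and constants below are illustrative assumptions.

```python
def pos_reward(pos_map, visited, cell, step_cost=0.05):
    """Hypothetical probability-weight reward: searching an unvisited
    cell yields its probability of success (POS); every move pays a
    small step cost, so high-POS regions are worth reaching first."""
    gain = 0.0 if cell in visited else pos_map.get(cell, 0.0)
    return gain - step_cost

def greedy_step(pos_map, visited, candidates):
    """Pick the candidate cell with the highest immediate reward
    (a DQN would instead learn Q-values over such rewards)."""
    return max(candidates, key=lambda c: pos_reward(pos_map, visited, c))
```

A trained POS-DQN agent would maximize the discounted sum of such rewards rather than acting greedily, but the one-step behaviour already prefers high-probability cells.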
227. Research on a Recognition Algorithm for Traffic Signs in Foggy Environments Based on Image Defogging and Transformer.
- Author
-
Liu, Zhaohui, Yan, Jun, and Zhang, Jinzhao
- Subjects
TRAFFIC signs & signals ,TRAFFIC monitoring ,ALGORITHMS ,AUTONOMOUS vehicles - Abstract
The efficient and accurate identification of traffic signs is crucial to the safety and reliability of active driving assistance and driverless vehicles. However, accurate detection of traffic signs in extreme conditions remains challenging. To address the problems of missed and false detections in traffic sign recognition in foggy traffic scenes, this paper proposes a recognition algorithm for traffic signs based on pix2pixHD+YOLOv5-T. Firstly, a defogging model is generated by training the pix2pixHD network to serve the downstream visual task. Secondly, to better match the defogging algorithm with the target detection algorithm, the YOLOv5-Transformer algorithm is proposed by introducing a transformer module into the backbone of YOLOv5. Finally, the pix2pixHD defogging algorithm is combined with the improved YOLOv5 detection algorithm to recognize traffic signs in foggy environments. Comparative experiments show that the proposed algorithm can effectively reduce the impact of a foggy environment on traffic sign recognition and achieves an overall improvement over the YOLOv5-T and YOLOv5 algorithms in moderate fog. In foggy traffic scenes, the algorithm's precision for traffic sign recognition reached 78.5%, the recall rate was 72.2%, and mAP@0.5 was 82.8%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
228. Online Social Network Information Source Identification Algorithm Based on Multi-Attribute Topological Clustering.
- Author
-
Dong, Ming, Lu, Yujuan, Tan, Zhenhua, and Zhang, Bin
- Subjects
ONLINE social networks ,INFORMATION resources ,INFORMATION networks ,INFORMATION dissemination ,ALGORITHMS ,IDENTIFICATION - Abstract
This paper focuses on the problem of information source identification in online social networks (OSNs). After analyzing the state of research on source identification and its challenges (such as the randomness of the information dissemination process and the complexity of the underlying network topology), this paper studies the problem of multi-source diffusion and proposes a source identification algorithm based on multi-attribute topological clustering (MaTC). The basic idea of the algorithm is to decompose multi-source problems into a series of single-source problems via clustering partitioning, improving accuracy and efficiency. Firstly, it estimates the number of source nodes, which is also the number of network partitions; then it characterizes the combination of multiple attribute structures as an attribute index for topological clustering, analyzes the distribution of the real source nodes in each partition to evaluate the accuracy of the clustering partition, and finally uses Jordan centrality within each partition for single-source identification. Comparative experiments verify that the proposed MaTC algorithm is superior to the comparison algorithms on the evaluation indicators. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
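The last step named in the abstract above — single-source identification via Jordan centrality within each partition — picks the infected node whose greatest shortest-path distance to the other infected nodes is smallest. A pure-Python BFS sketch (illustrative, not the authors' implementation; assumes the infected nodes are mutually reachable):

```python
from collections import deque

def bfs_dists(adj, src):
    """Shortest-path hop counts from src over an adjacency dict."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def jordan_center(adj, infected):
    """Jordan centrality: return the infected node whose eccentricity
    with respect to the other infected nodes is minimal."""
    best, best_ecc = None, float("inf")
    for v in infected:
        dist = bfs_dists(adj, v)
        ecc = max(dist[u] for u in infected)
        if ecc < best_ecc:
            best, best_ecc = v, ecc
    return best
```

On a path graph 0-1-2-3-4 with all nodes infected, the Jordan center is the middle node 2.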
229. A Lightweight Remote Sensing Small Target Image Detection Algorithm Based on Improved YOLOv8.
- Author
-
Nie, Haijiao, Pang, Huanli, Ma, Mingyang, and Zheng, Ruikai
- Subjects
OBJECT recognition (Computer vision) ,ALGORITHMS ,REMOTE-sensing images ,REMOTE sensing - Abstract
In response to the challenges posed by small objects in remote sensing images, such as low resolution, complex backgrounds, and severe occlusions, this paper proposes a lightweight improved model based on YOLOv8n. During the detection of small objects, the feature fusion part of the YOLOv8n algorithm retrieves relatively fewer features of small objects from the backbone network compared to large objects, resulting in low detection accuracy for small objects. To address this issue, firstly, this paper adds a dedicated small object detection layer in the feature fusion network to better integrate the features of small objects into the feature fusion part of the model. Secondly, the SSFF module is introduced to facilitate multi-scale feature fusion, enabling the model to capture more gradient paths and further improve accuracy while reducing model parameters. Finally, the HPANet structure is proposed, replacing the Path Aggregation Network with HPANet. Compared to the original YOLOv8n algorithm, the recognition accuracy of mAP@0.5 on the VisDrone data set and the AI-TOD data set has increased by 14.3% and 17.9%, respectively, while the recognition accuracy of mAP@0.5:0.95 has increased by 17.1% and 19.8%, respectively. The proposed method reduces the parameter count by 33% and the model size by 31.7% compared to the original model. Experimental results demonstrate that the proposed method can quickly and accurately identify small objects in complex backgrounds. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
230. Avoiding the Digital Age is Hurting Research Efforts: A greater shift from paper records and physical assets is achievable.
- Author
-
HOLLAN, MIKE
- Subjects
DIGITAL technology ,ARTIFICIAL intelligence ,LIFE sciences ,AUTOMATIC data collection systems ,ELECTRONIC data interchange ,ELECTRONIC health records ,MACHINE learning ,DRUG development ,ALGORITHMS - Abstract
The article offers information on the importance of data in drug development and the life sciences industry. Topics include the use of new technologies like AI and machine learning for data collection and analysis, the persistence of paper-based processes in the industry, and challenges such as the "first-mile problem" in data collection and management.
- Published
- 2024
231. Exploring the opportunity of using machine learning to support the system dynamics method: Comment on the paper by Edali and Yücel.
- Author
-
Duggan, Jim
- Subjects
ALGORITHMS ,COMPUTER simulation ,DECISION making ,MACHINE learning ,HUMAN services programs - Abstract
The author presents comments on a paper on the use of machine learning to support the system dynamics method. Topics discussed include its interpretation of simulation models and explanation of policy analysis, and the emerging view whereby dynamic problems from endogenous feedback structures can be tackled via wider tools and methodological approaches. Also noted is the resulting potential for greater insights into the modelling process.
- Published
- 2020
- Full Text
- View/download PDF
232. Mapping analysis in ontology-based data access: Algorithms and complexity (Discussion paper)
- Author
-
Lembo, Domenico, Mora, José, Rosati, Riccardo, Domenico Fabio Savo, and Thorstensen, Evgenij
- Subjects
Computational complexity ,Redundancy ,Mapping ,Algorithms ,Computational linguistics ,Settore ING-INF/05 - Sistemi di Elaborazione delle Informazioni
- Published
- 2015
233. A Low‐Frequency Oscillation Identification Method for Power System Based on Adaptive Generalized S‐Transform with Bat Algorithm.
- Author
-
Yu, Miao, Wei, Jingjing, Tian, Shuoshuo, Sun, Jianqun, Wu, Yixiao, Zhang, Shouzhi, Hu, Jingxuan, and Musca, Rossano
- Subjects
ELECTRIC power distribution grids ,GAUSSIAN function ,SYSTEM identification ,ALGORITHMS ,IMMUNITY - Abstract
The complexity of the interconnected grid and the continuous increase of new energy sources have made low‐frequency oscillation (LFO) an acute problem in power systems. Identification and monitoring of LFO in the power grid are prerequisites for effective control of low‐frequency oscillation phenomena. To address the issue that the traditional S‐transform time‐frequency window function has a fixed scale and cannot adapt to the specific local characteristics of different signals, an adaptive generalized S‐transform algorithm based on a bat algorithm is proposed in this paper. It uses adjustment parameters to control the generalized Gaussian window function, and these parameters are automatically tuned by bat-algorithm adaptive optimization to find the best time‐frequency characterization. Secondly, the PMU data waveform containing implicit low‐frequency oscillation information is converted into a two‐dimensional time‐frequency figure including the onset moment, frequency, and amplitude, enabling identification and visual monitoring of low‐frequency oscillations. Finally, simulation experiments on the New England system are conducted. The superiority of the proposed method is verified: it can greatly improve the time‐frequency resolution of the PMU active power data signal and has effective noise immunity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
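The frequency-dependent Gaussian window at the core of the generalized S-transform described above can be sketched as follows. The adjustment parameter λ widens or narrows the window, and it is this kind of parameter that the paper tunes with the bat algorithm; the exact parameterization used by the authors is not given in the abstract, so the form below is the common one from the generalized S-transform literature.

```python
import numpy as np

def gst_window(t, f, lam=1.0):
    """Generalized S-transform window: a Gaussian whose standard
    deviation is lam/|f|, so resolution adapts to frequency.
    lam > 1 widens the window (better frequency resolution);
    lam < 1 narrows it (better time resolution)."""
    sigma = lam / abs(f)
    return np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
```

For any fixed frequency the window integrates to one over time, so the S-transform preserves the amplitude spectrum while λ trades time against frequency resolution.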
234. The application of matching pursuit spectral blueing in post-stack seismic frequency enhancement.
- Author
-
Xuebin, Jin, Bingxi, Li, Zhenguo, Zhang, Maosheng, Lei, Lishuang, An, and Kai, Ding
- Subjects
REFLECTANCE ,CALIBRATION ,ALGORITHMS - Abstract
The calculation of the spectral blueing operator in the traditional spectral blueing method suffers from singularity, which leads to poor performance in post-stack seismic frequency expansion. To this end, a frequency-spreading technique based on matching pursuit (MP) and spectral blueing is proposed. Time–frequency analysis shows that the seismic signal extracted by the matching pursuit method has good stability and higher resolution. The method in this paper first uses matching pursuit to accurately divide the post-stack seismic data into multiple frequency-division seismic bodies; then, when calculating the spectral blueing operators for each frequency band, a weighting scheme based on the energy differences between frequency bands is used to compute the weights of the optimized spectral blueing operators; finally, the optimized spectral blueing operator is convolved with the seismic reflection coefficients to obtain high-resolution seismic data. Tests on actual post-stack seismic data prove that the frequency-enhancement method proposed in this paper is superior to the traditional spectral blueing algorithm, greatly improving the high-frequency component of post-stack seismic data. After frequency extension, there are more seismic events and higher resolution. Finally, the practicability and rationality of the frequency-extended seismic data are verified by a series of operations such as attribute extraction, well-seismic calibration, and inversion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
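Generic spectral blueing (without the paper's matching-pursuit frequency division and per-band weighting) amounts to designing an operator that reshapes the amplitude spectrum toward a "blued" target while preserving phase. A minimal sketch under that assumption, with illustrative names and parameters:

```python
import numpy as np

def spectral_blueing(trace, slope=0.3, eps=1e-8):
    """Reshape a trace's amplitude spectrum toward the blued target
    |f|**slope, keeping the phase spectrum unchanged. This is plain
    spectral blueing; the paper additionally splits the data into
    frequency bands via matching pursuit and weights the operator
    per band by band energy."""
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(trace.size)
    target = (freqs + eps) ** slope           # desired amplitude trend
    operator = target / (np.abs(spec) + eps)  # spectral-shaping operator
    return np.fft.irfft(spec * operator, n=trace.size)
```

The `eps` term is what guards against the singularity the abstract mentions: without it, near-zero spectral amplitudes would blow the operator up.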
235. Deep Learning Algorithms for Traffic Forecasting: A Comprehensive Review and Comparison with Classical Ones.
- Author
-
Afandizadeh, Shahriar, Abdolahi, Saeid, Mirzahossein, Hamid, and Li, Ruimin
- Subjects
MACHINE learning ,TRAFFIC estimation ,TRANSPORTATION management system ,DEEP learning ,INTELLIGENT transportation systems ,ALGORITHMS ,FORECASTING ,TRAFFIC safety - Abstract
Accurate and timely forecasting of critical components is pivotal in intelligent transportation systems and traffic management, crucially mitigating congestion and enhancing safety. This paper aims to comprehensively review deep learning algorithms and classical models employed in traffic forecasting. Spanning diverse traffic datasets, the study encompasses various scenarios, offering a nuanced understanding of traffic forecasting methods. Reviewing 111 seminal research works since the 1980s, encompassing both deep learning and classical models, the paper begins by detailing the data sources utilized in transportation systems. Subsequently, it delves into the theoretical underpinnings of prevalent deep learning algorithms and classical models prevalent in traffic forecasting. Furthermore, it investigates the application of these algorithms and models in forecasting key traffic characteristics, informed by their utility in transport and traffic analyses. Finally, the study elucidates the merits and drawbacks of proposed models through applied research in traffic forecasting. Findings indicate that while deep learning algorithms and classic models serve as valuable tools, their suitability varies across contexts, necessitating careful consideration in future studies. The study underscores research opportunities in road traffic forecasting, providing a comprehensive guide for future endeavors in this domain. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
236. Time Sequence Deep Learning Model for Ubiquitous Tabular Data with Unique 3D Tensors Manipulation.
- Author
-
Gicic, Adaleta, Đonko, Dženana, and Subasi, Abdulhamit
- Subjects
ARTIFICIAL neural networks ,MACHINE learning ,ALGORITHMS ,DATA modeling - Abstract
Although deep learning (DL) algorithms have proved effective in diverse research domains, their application in developing models for tabular data remains limited. Models trained on tabular data often demonstrate higher efficacy with traditional machine learning models than with DL models, which is largely attributed to the size and structure of tabular datasets and the specific application contexts in which they are utilized. Thus, the primary objective of this paper is to propose a method that exploits the strength of Stacked Bidirectional LSTM (Long Short-Term Memory) deep learning algorithms in pattern discovery by feeding neural networks tabular data through customized 3D tensor modeling. Our findings are empirically validated using six diverse, publicly available datasets, each varying in size and learning objectives. This paper shows that the proposed model, based on time-sequence DL algorithms generally described as inadequate for tabular data, yields satisfactory results and competes effectively with algorithms specifically designed for tabular data. An additional benefit of this approach is its ability to preserve simplicity while ensuring fast model training, even with large datasets. Even with extremely small datasets, the models achieve exceptional predictive results and fully utilize their capacity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
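The core data-preparation idea above — feeding tabular rows to a sequence model by restructuring them into a 3-D tensor of shape (samples, timesteps, features) — can be illustrated with a generic sliding-window transform. The paper's "unique 3D tensors manipulation" is not specified in the abstract, so this is only the standard version of the idea:

```python
import numpy as np

def table_to_3d(X, window=3):
    """Turn an (n_rows, n_features) table into an
    (n_rows - window + 1, window, n_features) tensor of overlapping
    row windows, the input shape expected by (Bi)LSTM layers."""
    n_rows, _ = X.shape
    return np.stack([X[i:i + window] for i in range(n_rows - window + 1)])
```

Each slice of the resulting tensor is a short "sequence" of consecutive rows, which is what lets a time-sequence model like a stacked BiLSTM consume otherwise static tabular data.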
237. A Synchronization Algorithm for MBOC Signal Based on Reconstructed Correlation Function.
- Author
-
Wu, Ting, Ji, Yuanfa, and Sun, Xiyan
- Subjects
GLOBAL Positioning System ,STATISTICAL correlation ,SYNCHRONIZATION ,ALGORITHMS - Abstract
In order to address the ambiguous synchronization problem caused by the multi-peak nature of the autocorrelation function of Multiplexed Binary Offset Carrier (MBOC) modulated signals in the Global Navigation Satellite System (GNSS), a new synchronization algorithm for MBOC signals is presented in this paper, which uses a reconstructed correlation function to effectively resolve the synchronization ambiguities associated with multi-peak signals. By analyzing the characteristics of the MBOC signal, the algorithm generates three local auxiliary signals, namely pseudo-random codes (PRN), BOC(1,1) signals, and MBOC signals, which are correlated with the received signal. By combining the three correlation functions according to reconstruction rules, the algorithm produces a reconstructed correlation function, eliminating side peaks and achieving unambiguous synchronization. Simulation results show that the proposed algorithm eliminates all side peaks while maintaining a high detection probability, and its deblurring capability is the best among the compared algorithms. In addition, the discriminator curve shows that the algorithm successfully eliminates all falsely locked points; the slope gain is improved by more than 2.5 dB compared with other algorithms; and its anti-multipath performance is better than that of traditional algorithms such as ASPeCT (Autocorrelation Side-Peak Cancellation Technique). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
238. Multi-Robot Collaborative Mapping with Integrated Point-Line Features for Visual SLAM.
- Author
-
Xia, Yu, Wu, Xiao, Ma, Tao, Zhu, Liucun, Cheng, Jingdi, and Zhu, Junwu
- Subjects
VISUAL odometry ,MOBILE operating systems ,MOBILE robots ,ALGORITHMS ,PHOTOGRAMMETRY ,ROBOTS - Abstract
Simultaneous Localization and Mapping (SLAM) enables mobile robots to autonomously perform localization and mapping tasks in unknown environments. Despite significant progress achieved by visual SLAM systems in ideal conditions, relying solely on a single robot and point features for mapping in large-scale indoor environments with weak-texture structures can affect mapping efficiency and accuracy. Therefore, this paper proposes a multi-robot collaborative mapping method based on point-line fusion to address this issue. This method is designed for indoor environments with weak-texture structures for localization and mapping. The feature-extraction algorithm, which combines point and line features, supplements the existing environment point feature-extraction method by introducing a line feature-extraction step. This integration ensures the accuracy of visual odometry estimation in scenes with pronounced weak-texture structure features. For relatively large indoor scenes, a scene-recognition-based map-fusion method is proposed in this paper to enhance mapping efficiency. This method relies on visual bag of words to determine overlapping areas in the scene, while also proposing a keyframe-extraction method based on photogrammetry to improve the algorithm's robustness. By combining the Perspective-3-Point (P3P) algorithm and Bundle Adjustment (BA) algorithm, the relative pose-transformation relationships of multi-robots in overlapping scenes are resolved, and map fusion is performed based on these relative pose relationships. We evaluated our algorithm on public datasets and a mobile robot platform. The experimental results demonstrate that the proposed algorithm exhibits higher robustness and mapping accuracy. It shows significant effectiveness in handling mapping in scenarios with weak texture and structure, as well as in small-scale map fusion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
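Scene recognition via a visual bag of words, as used in the abstract above to find overlapping areas between robots, ultimately compares visual-word histograms of keyframes. A minimal cosine-similarity sketch (the histograms are made-up illustrations, not the paper's vocabulary):

```python
def cosine(u, v):
    # cosine similarity between two visual-word histograms
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

# Visual-word counts of two keyframes; a high score suggests scene overlap
kf_a = [3, 0, 5, 1]
kf_b = [2, 0, 6, 1]
overlap_score = cosine(kf_a, kf_b)
```

In practice the histograms come from quantized local descriptors and candidate matches above a threshold are verified geometrically (e.g. with P3P) before map fusion.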
239. Deep Learning-Based Intelligent Detection Device for Insulation Pull Rod Defects.
- Author
-
Yu, Hua, Niu, Shu, Li, Shuai, Yang, Gang, Wang, Xuan, Luo, Hanhua, Fan, Xianhao, and Li, Chuanyang
- Subjects
OBJECT recognition (Computer vision) ,INTELLIGENT buildings ,DEEP learning ,ALGORITHMS ,SPEED ,HARDWARE - Abstract
This paper proposes a deep learning-based intelligent detection device for insulation pull rod defects, addressing the issues of low detection accuracy, poor timeliness of intelligent analysis, and the difficulty of preserving detection results. Firstly, the feasibility of deep learning networks for insulation pull rod defect detection is explored by constructing a pull rod defect dataset and training the YOLOv5s network alongside object detection algorithms commonly used in industrial defect detection. Secondly, the trained model is used to build an intelligent detection device for pull rod defects, integrating insulation pull rod image acquisition and defect detection into a unified system. The research results demonstrate that the YOLOv5s network can quickly and accurately detect pull rod defects. On the test set constructed in this paper, the trained model reached an mAP@0.5:0.95 of 54.7% and an mAP@0.5 of 86.9%. The detection speed reached 169.5 FPS, significantly improving the detection efficiency and accuracy compared to traditional object detection algorithms. By establishing an organic connection between the image acquisition hardware and the deep learning network, the existing problems of inefficient detection and difficult storage of detection results in pull rod defect detection methods are effectively addressed. This research provides new insights for detecting insulation pull rod defects. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
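The mAP figures quoted in the abstract above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes; at mAP@0.5, a detection counts as a true positive when IoU ≥ 0.5. A minimal sketch of that underlying computation (the box coordinates are hypothetical):

```python
def iou(box_a, box_b):
    # boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1]
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# These two boxes overlap, but not enough to match at the 0.5 threshold
pred, truth = (10, 10, 50, 50), (20, 20, 60, 60)
match = iou(pred, truth)
```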
240. Reinforcement Machine Learning for Sparse Array Antenna Optimization with PPO.
- Author
-
Mohammad-Ali-Nezhad, Sajad and Kassem, Mohammad H.
- Subjects
ANTENNA arrays ,ANTENNAS (Electronics) ,TELECOMMUNICATION systems ,MACHINE learning ,ALGORITHMS - Abstract
This paper focuses on optimizing the radiation pattern of sparse array antennas using reinforcement learning. It leverages Proximal Policy Optimization's (PPO's) strengths in optimization and its effectiveness in handling stochastic transitions and rewards to reduce the number of elements while maintaining the desired signal performance and minimizing unnecessary side lobe signals. By removing a subset of the antenna elements through reinforcement learning with PPO, results matching those of the complete array have been obtained. The anticipated outcomes of this research hold the promise of significantly enhancing the effectiveness and utility of sparse array antennas in communication systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
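The quantity such a PPO agent ultimately shapes is the array factor of the thinned array. A toy sketch of evaluating it for a uniform linear array with some elements switched off (half-wavelength spacing and the on/off mask are illustrative assumptions, not the paper's setup):

```python
import cmath
import math

def array_factor(mask, theta, spacing=0.5):
    # |AF| of a linear array: active elements (mask[n] == 1) sit at
    # n * spacing wavelengths; theta is the angle from the array axis
    k = 2 * math.pi  # wavenumber for a wavelength normalized to 1
    return abs(sum(m * cmath.exp(1j * k * spacing * n * math.cos(theta))
                   for n, m in enumerate(mask)))

full = [1] * 8
sparse = [1, 1, 0, 1, 1, 0, 1, 1]  # two elements removed by the agent
# at broadside (theta = 90 deg) every active element adds in phase,
# so the main-beam peak equals the number of active elements
peak_full = array_factor(full, math.pi / 2)
peak_sparse = array_factor(sparse, math.pi / 2)
```

A reward for the agent would typically combine the main-beam level with the worst side-lobe level sampled over off-axis angles.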
241. Predictive power control strategy without grid voltage sensors of the Vienna rectifier.
- Author
-
Yang, Tao, Chen, Lan, and Miao, Yiru
- Subjects
SOFT power (Social sciences) ,VOLTAGE ,PROBLEM solving ,DETECTORS ,ELECTRIC current rectifiers ,ALGORITHMS ,PULSE width modulation transformers - Abstract
This paper proposes a predictive power control strategy for the three‐phase, six‐switch Vienna rectifier without grid voltage sensors to reduce the hardware cost and complexity of a high‐power PWM rectifier system. Firstly, an algorithm for calculating the AC‐side voltage in the αβ coordinate system is derived according to the operating principle of the Vienna rectifier, and a voltage observer is constructed by combining a second‐order low‐pass filter to estimate the grid voltage. Secondly, a soft start method is designed to solve the problem that the rectifier is prone to inrush current when it is started. Furthermore, the control method of grid voltage sensorless is combined with predictive power control with good dynamic characteristics and simple parameter settings to form the control strategy proposed in this paper. Finally, simulation analysis and experimental verification are carried out on the proposed control strategy. Simulation and experimental results show that the grid voltage estimation has high accuracy, a good surge current suppression effect, unit power factor operation, low input current harmonic content, and good dynamic and steady‐state performance. Therefore, the correctness and effectiveness of the strategy proposed in this paper are verified. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
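The grid-voltage observer described above hinges on a second-order low-pass filter with unity DC gain. A discrete-time sketch of such a filter (the cutoff frequency, damping ratio, and sample rate are illustrative assumptions, not the paper's design values):

```python
import math

def lowpass2(samples, fc, fs, zeta=0.707):
    # Second-order low-pass filter, semi-implicit Euler integration of
    #   y'' + 2*zeta*wc*y' + wc^2*y = wc^2*u   (unity DC gain)
    wc, dt = 2 * math.pi * fc, 1.0 / fs
    y, dy, out = 0.0, 0.0, []
    for u in samples:
        ddy = wc * wc * (u - y) - 2 * zeta * wc * dy
        dy += ddy * dt
        y += dy * dt
        out.append(y)
    return out

# A constant "estimated grid voltage" should settle to the input level
steady = lowpass2([311.0] * 5000, fc=50.0, fs=10000.0)[-1]
```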
242. A Distorted-Image Quality Assessment Algorithm Based on a Sparse Structure and Subjective Perception.
- Author
-
Yang, Yang, Liu, Chang, Wu, Hui, and Yu, Dingguo
- Subjects
PEARSON correlation (Statistics) ,COMPUTATIONAL complexity ,PERCEIVED quality ,IMAGING systems ,ALGORITHMS - Abstract
Most image quality assessment (IQA) algorithms based on sparse representation primarily focus on amplitude information, often overlooking the structural composition of images. However, structural composition is closely linked to perceived image quality, a connection that existing methods do not adequately address. To fill this gap, this paper proposes a novel distorted-image quality assessment algorithm based on a sparse structure and subjective perception (IQA-SSSP). This algorithm evaluates the quality of distorted images by measuring the sparse structure similarity between the reference and distorted images. The proposed method has several advantages. First, the sparse structure algorithm operates with reduced computational complexity, leading to faster processing speeds, which makes it suitable for practical applications. Additionally, it efficiently handles large-scale data, further enhancing the assessment process. Experimental results validate the effectiveness of the algorithm, showing that it achieves a high correlation with human visual perception, as reflected in both objective and subjective evaluations. Specifically, the algorithm yielded a Pearson correlation coefficient of 0.929 and a mean squared error of 8.003, demonstrating its robustness and efficiency. By addressing the limitations of existing IQA methods and introducing a more holistic approach, this paper offers new perspectives on IQA. The proposed algorithm not only provides reliable quality assessment results but also closely aligns with human visual experience, thereby enhancing both the objectivity and accuracy of image quality evaluations. This research offers significant theoretical support for the advancement of sparse representation in IQA. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
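The Pearson correlation coefficient quoted above (0.929) measures agreement between the algorithm's predicted quality scores and subjective human scores. A minimal sketch of that evaluation metric (the two score lists are made-up illustrations):

```python
def pearson(xs, ys):
    # Pearson linear correlation coefficient between two score lists
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

predicted = [1.0, 2.0, 3.0, 4.0]   # hypothetical algorithm outputs
subjective = [1.1, 1.9, 3.2, 3.8]  # hypothetical mean opinion scores
r = pearson(predicted, subjective)
```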
243. A Fast Algorithm for 3D Focusing Inversion of Magnetic Data and Its Application in Geothermal Exploration.
- Author
-
Dai, Weiming, Jia, Hongfa, Jiang, Niande, Liu, Yanhong, Zhou, Weihui, Zhu, Zhiying, and Zhou, Shuai
- Subjects
CONJUGATE gradient methods ,MATRIX effect ,ALGORITHMS ,GEOTHERMAL resources - Abstract
This paper presents a fast focusing inversion algorithm for magnetic data based on the conjugate gradient method, which can be used to describe the underground target geologic body efficiently and clearly. The proposed method achieves an effect similar to matrix compression by changing the computation order, computing inner products of vectors, and equivalently expanding expressions. Model tests show that this strategy successfully reduces the computation time of a single conjugate gradient iteration, so three-dimensional magnetic data inversion is realized within a certain number of iterations. In this paper, the detailed calculation steps of the proposed inversion method are given, and the effectiveness and high efficiency of the proposed fast focusing inversion method are verified by three theoretical model tests and a set of measured data. Finally, the fast focusing inversion algorithm is applied to the magnetic data of Gonghe Basin, Qinghai Province, to describe the spatial distribution range of deep hot dry rock, which provides a direction for the continued exploration of geothermal resources in this area. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
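The conjugate gradient iteration at the heart of the inversion above can be sketched on a small symmetric positive-definite system (the 2×2 system is a toy example, not the magnetic forward-modeling kernel):

```python
def conjugate_gradient(A, b, iters=50, tol=1e-10):
    # Solve A x = b for symmetric positive-definite A,
    # given as a list of rows; b as a list
    n = len(b)
    x = [0.0] * n
    r = b[:]          # residual b - A x with x = 0
    p = r[:]          # initial search direction
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

The paper's speed-up comes from reordering the matrix-vector products inside exactly this loop, which dominates the cost of each iteration.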
244. Pre-Processing Event Logs by Chaotic Filtering Approaches Based on the Direct Following Relationship.
- Author
-
Lv, Tengzi, Gong, Xiugang, Gong, Na, and Li, Kaiyu
- Subjects
PROCESS mining ,SAWLOGS ,ALGORITHMS - Abstract
Process discovery aims to discover process models from event logs to describe actual business processes. The quality of event logs has an impact on the quality of process models, so preprocessing methods can be used to improve the quality of event logs. Chaotic activities may exist in real business scenarios, and the occurrence of chaotic activities is independent of other activities in the process and can occur at any location in the event log at any frequency. Therefore, chaotic activities seriously affect the model quality of process discovery. Filtering chaotic activities in event logs can effectively improve the quality of event logs and thus improve the quality of process models. The traditional chaotic activity filtering algorithm makes it difficult to balance accuracy and time performance. Therefore, a direct method for filtering chaotic activities is proposed in this paper. By analyzing the relationship between activities, chaotic activities are identified in the log according to the characteristics of chaotic activities and the direct following relationship of activities as the judgment condition, and the filtering of chaotic activities in the event log is realized. In addition, this paper proposes an indirect chaotic activity filtering method, which identifies and filters chaotic activities in the log by analyzing the influence of the existence of different activities on the overall chaos degree of the log. The proposed method is compared with the traditional chaotic activity filtering method on several simulation/real data sets, and the accuracy and running time between the multi-group event logs and the process models generated before and after chaotic activity filtering are analyzed, further verifying the effectiveness and feasibility of the proposed method. 
By summarizing the experimental results, it is found that the accuracy of the proposed chaotic activity filtering methods is greater than that of the frequency-based filtering method and is close to that of the entropy-based chaotic activity filtering methods. Moreover, compared with other filtering methods used in the experiment, the chaotic activity filtering method proposed in this paper can improve the efficiency by 23.4% on average for simulation logs, and by 84.25% on average for real event logs. It is concluded that compared with other filtering methods, the proposed chaotic activity filtering methods have higher accuracy and can effectively improve the time performance of chaotic activity filtering. Therefore, the chaotic activity filtering method proposed in this paper can balance the accuracy and time performance, and can ensure the integrity of the filtered event log to a certain extent. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
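The direct following relationship used above as the judgment condition can be computed from an event log in a few lines. A sketch (the traces are hypothetical; "x" mimics a chaotic activity that appears at arbitrary positions and so scatters its directly-follows counts):

```python
from collections import Counter

def directly_follows(log):
    # Count pairs (a, b) where activity b immediately follows a in a trace
    df = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    return df

log = [["a", "b", "c"],
       ["a", "x", "b", "c"],
       ["a", "b", "x", "c"]]
df = directly_follows(log)
```

A stable activity concentrates its counts on a few predecessor/successor pairs, while a chaotic one spreads them thinly across many pairs, which is the signal a filter can exploit.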
245. UWB-Based Human-Following System with Obstacle and Crevasse Avoidance for Polar-Exploration Robots.
- Author
-
Kwon, Ji-Wook, Lee, Hyoujun, Lee, Jongdeuk, Lee, Na-Hyun, Kim, Jong Chan, Uhm, Taeyoung, and Choi, Young-Ho
- Subjects
EXTREME environments ,ROBOTS ,EXPLORERS ,ALGORITHMS ,SUCCESS - Abstract
This paper introduces a UWB-based human-following system for polar-exploration robots, integrating obstacle and crevasse avoidance functions to enhance the safety and efficiency of explorers in extreme environments. The proposed system determines the relative position of the explorer using UWB anchors and tags. It also utilizes real-time local obstacle mapping and path-planning algorithms to find safe paths that avoid collisions with obstacles. Simulation and real-world experiments confirm that the proposed system operates effectively in polar environments, reducing the operational burden on explorers and increasing mission success rates. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
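Determining the explorer's relative position from UWB anchor ranges, as in the abstract above, is a trilateration problem. A 2D least-squares sketch (the anchor layout and ranges are illustrative, not the paper's hardware configuration):

```python
def trilaterate(anchors, dists):
    # 2D position from >= 3 anchor ranges, linearized against the first anchor:
    # subtracting the first range equation removes the quadratic terms
    (x1, y1), d1 = anchors[0], dists[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        rows.append((2 * (xi - x1), 2 * (yi - y1)))
        rhs.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    # normal equations for the 2-unknown least-squares problem
    a = sum(r[0] * r[0] for r in rows)
    b = sum(r[0] * r[1] for r in rows)
    c = sum(r[1] * r[1] for r in rows)
    bx = sum(r[0] * v for r, v in zip(rows, rhs))
    by = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a * c - b * b
    return ((c * bx - b * by) / det, (a * by - b * bx) / det)

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true = (1.0, 2.0)  # tag position used to synthesize noise-free ranges
dists = [((true[0] - x) ** 2 + (true[1] - y) ** 2) ** 0.5 for x, y in anchors]
pos = trilaterate(anchors, dists)
```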
246. Pointer Meter Reading Method Based on YOLOv8 and Improved LinkNet.
- Author
-
Lu, Xiaohu, Zhu, Shisong, and Lu, Bibo
- Subjects
FEATURE extraction ,ROTATIONAL motion ,ANGLES ,READING ,ALGORITHMS - Abstract
In order to improve the reading efficiency of pointer meters, this paper proposes a reading method based on LinkNet. Firstly, the meter dial area is detected using YOLOv8. Subsequently, the detected images are fed into the improved LinkNet segmentation network. In this network, we replace traditional convolution with partial convolution, which reduces the number of model parameters while ensuring accuracy is not affected. One pair of encoding and decoding modules is removed to further compress the model size. In the feature fusion part of the model, the CBAM (Convolutional Block Attention Module) attention module is added and the direct summing operation is replaced by the AFF (Attention Feature Fusion) module, which enhances the model's ability to extract features of the segmented target. In the subsequent rotation correction section, this paper effectively addresses the issue of inaccurate CNN predictions for axisymmetric images within the 0–360° range by dividing the rotation angle prediction into classification and regression steps. This ensures that the final reading stage receives a correctly oriented image, thereby improving the accuracy of the overall reading algorithm. The final experimental results indicate that our proposed reading method has a mean absolute error of 0.20 and a frame rate of 15. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
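Once the dial is segmented and rotation-corrected, the final reading step reduces to a linear mapping from the needle angle onto the scale range. A sketch of that last step (the sweep angles and scale limits are hypothetical, not taken from the paper):

```python
def angle_to_reading(needle_deg, zero_deg, full_deg, min_val, max_val):
    # Map the needle angle linearly from [zero_deg, full_deg] onto the scale
    frac = (needle_deg - zero_deg) / (full_deg - zero_deg)
    frac = min(max(frac, 0.0), 1.0)  # clamp to the physical scale limits
    return min_val + frac * (max_val - min_val)

# Hypothetical pressure gauge: scale 0..1.6 MPa over a 270-degree sweep
reading = angle_to_reading(needle_deg=135.0, zero_deg=0.0, full_deg=270.0,
                           min_val=0.0, max_val=1.6)
```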
247. A multi-channel spatial information feature based human pose estimation algorithm.
- Author
-
Xie, Yinghong, Hao, Yan, Han, Xiaowei, Gao, Qiang, and Yin, Biao
- Subjects
COMPUTER vision ,FIX-point estimation ,HUMAN body ,ALGORITHMS ,HUMAN beings - Abstract
Human pose estimation is an important task in computer vision, which provides key point detection of the human body and yields skeleton information. At present, human pose estimation is mainly applied to the detection of large targets, and there is no dedicated solution for small targets. This paper proposes a multi-channel spatial information feature based human pose (MCSF-Pose) estimation algorithm to address the inaccurate detection of human key points on medium and small targets in scenarios involving occlusion and multiple poses. The MCSF-Pose network is a bottom-up regression network. Firstly, an UP-Focus module is designed to expand the feature information while reducing parameter computation during the up-sampling process. Then, a channel segmentation strategy is adopted to split the features, and feature information from multiple dimensions is retained through different convolutional groups, which yields a lightweight model with fewer parameters and compensates for the loss of feature information associated with network depth. Finally, a three-layer PANet structure is designed to reduce the complexity of the model; this structure also improves the detection accuracy and anti-interference ability for human key points. The experimental results indicate that the proposed algorithm outperforms YOLO-Pose and other human pose estimation algorithms on the COCO2017 and MPII human pose datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
248. Precise path planning and trajectory tracking based on improved A-star algorithm.
- Author
-
Xu, Boyang
- Subjects
AUTONOMOUS vehicles ,ROBOT control systems ,ALGORITHMS ,ARTIFICIAL satellite tracking ,POTENTIAL field method (Robotics) - Abstract
Path planning and trajectory tracking are very meaningful for the field of autonomous driving, but path planning still suffers from problems such as non-optimal and insufficiently accurate paths. This paper addresses path planning by proposing an improved A-star algorithm combined with a local map-zooming technique to achieve precise path planning. Compared with the conventional method, this method reduces the planning time by 23% and the path length by 21% in the scenarios shown in the paper, providing a reference for related research. Moreover, trajectory tracking was achieved using improved LQR control. Compared with the conventional method, the improved LQR control algorithm reduces the average error by 80% in the scenario shown in the paper. Firstly, the A-star algorithm is enhanced by incorporating an unknown-path cost estimation function, thereby improving its path planning in complex environments. Additionally, the local map-zooming method effectively enhances the accuracy and safety of path planning. Building upon the path planning, further improvements are made to the LQR control algorithm, enabling autonomous deceleration in complex sections, which facilitates better trajectory tracking and enhances the motion control performance of the robot during practical operation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
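The baseline the paper improves on is the classic A-star search with a heuristic cost estimate for the unknown remainder of the path. A minimal grid-world sketch (the grid and the Manhattan heuristic are illustrative; the paper's enhanced cost function is not specified in the abstract):

```python
import heapq

def astar(grid, start, goal):
    # A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    # Returns the optimal path length in steps, or None if unreachable.
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start)]   # (f = g + h, g, position)
    g = {start: 0}
    while open_set:
        _, cost, cur = heapq.heappop(open_set)
        if cur == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and not grid[nxt[0]][nxt[1]]
                    and cost + 1 < g.get(nxt, float("inf"))):
                g[nxt] = cost + 1
                heapq.heappush(open_set, (cost + 1 + h(nxt), cost + 1, nxt))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
length = astar(grid, (0, 0), (2, 0))  # must detour around the wall
```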
249. Improvement and Fusion of D*Lite Algorithm and Dynamic Window Approach for Path Planning in Complex Environments.
- Author
-
Gao, Yang, Han, Qidong, Feng, Shuo, Wang, Zhen, Meng, Teng, and Yang, Jingshuai
- Subjects
MOBILE robots ,AUTONOMOUS robots ,COST functions ,SCHEDULING ,ALGORITHMS ,POTENTIAL field method (Robotics) - Abstract
Effective path planning is crucial for autonomous mobile robots navigating complex environments. The "global–local" coupled path planning algorithm exhibits superior global planning capabilities and local adaptability. However, these algorithms often fail to fully realize their potential due to low efficiency and excessive constraints. To address these issues, this study introduces a simpler and more effective integration strategy. Specifically, this paper proposes using a bi-layer map and a feasible domain strategy to organically combine the D*Lite algorithm with the Dynamic Window Approach (DWA). The bi-layer map effectively reduces the number of nodes in global planning, enhancing the efficiency of the D*Lite algorithm. The feasible domain strategy decreases constraints, allowing the local algorithm DWA to utilize its local planning capabilities fully. Moreover, the cost functions of both the D*Lite algorithm and DWA have been refined, enabling the fused algorithm to cope with more complex environments. This paper conducts simulation experiments across various settings and compares our method with A_DWA, another "global–local" coupled approach, which combines A* and DWA. D_DWA significantly outperforms A_DWA in complex environments, despite a 7.43% increase in path length. It reduces the traversal of risk areas by 71.95%, accumulative risk by 80.34%, global planning time by 26.98%, and time cost by 35.61%. Additionally, D_DWA outperforms the A_Q algorithm, a coupled approach validated in real-world environments, which combines A* and Q-learning, achieving reductions of 1.34% in path length, 67.14% in traversal risk area, 78.70% in cumulative risk, 34.85% in global planning time, and 37.63% in total time cost. The results demonstrate the superiority of our proposed algorithm in complex scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
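On the local side of the fusion above, DWA scores sampled velocity commands with a weighted cost function over heading, clearance, and speed. A skeletal sketch of that scoring step (the weights and the three cost terms are generic DWA components, not the paper's refined cost functions):

```python
def dwa_score(candidates, goal_heading, w_head=1.0, w_clear=1.0, w_vel=0.5):
    # candidate = (v, heading, clearance); return the best-scoring command
    best, best_score = None, float("-inf")
    for v, heading, clearance in candidates:
        score = (w_head * -abs(heading - goal_heading)  # face the goal
                 + w_clear * clearance                  # avoid obstacles
                 + w_vel * v)                           # prefer progress
        if score > best_score:
            best, best_score = (v, heading, clearance), score
    return best

# Three sampled (velocity, heading, clearance) commands from the dynamic window
candidates = [(0.5, 0.0, 2.0), (0.5, 0.8, 3.0), (0.2, 0.1, 0.2)]
chosen = dwa_score(candidates, goal_heading=0.0)
```

The "feasible domain strategy" in the abstract effectively widens the set of candidates the global plan allows this scorer to consider.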
250. A novel similarity algorithm for triangular cloud models based on exponential closeness and cloud drop variance.
- Author
-
Yang, Jianjun, Han, Jiahao, Wan, Qilin, Xing, Shanshan, and Shi, Hongbo
- Subjects
VALUE engineering ,ALGORITHMS ,CLASSIFICATION algorithms ,MODEL theory ,MICROGRIDS ,SECURITY systems - Abstract
Cloud model similarity algorithms are an important part of cloud model theory. Most existing cloud model similarity algorithms suffer from poor discriminability, poor classification, unstable results, and low time efficiency. In this paper, a new similarity algorithm is proposed that considers both the distance and the shape of triangular cloud models. First, according to the D_T distance formula, a new exponential closeness measure is defined, with which the distance similarity of cloud models is characterized. Then, the shape similarity is calculated from the variance of the cloud model's cloud drops. Finally, the two similarities are combined to define a similarity algorithm based on the D_T distance and shape of the triangular cloud model (DD_T STCM). In this paper, discriminability, stability, efficiency, and theoretical interpretability are taken as the evaluation indices. An equipment security system capability evaluation experiment, a cloud model differentiation simulation experiment, and a time series classification accuracy experiment are set up to verify the effectiveness of the algorithm in these four aspects. The experimental results show that DD_T STCM has good differentiation and excellent classification effects. In the classification experiment for the time series, the average classification accuracy of DD_T STCM reaches 91.78%, which is at least 2.78% higher than those of the seven other commonly used algorithms. The CPU running efficiency of DD_T STCM is also extremely high, and the average CPU running time of group training is always on the order of milliseconds, which effectively reduces the time cost. Finally, a case study is conducted to analyse a risk assessment problem for China's island microgrid industry; the evaluation results based on DD_T STCM are in line with human cognition and have good value for engineering applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
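The abstract does not give the exact D_T distance or variance formulas, but the combine-distance-and-shape idea can be illustrated with a heavily simplified toy: an exponential closeness on the clouds' numerical characteristics blended with a spread-ratio shape term (the formulas below are stand-ins, not the paper's):

```python
import math

def similarity(cloud_a, cloud_b, w=0.5):
    # cloud = (Ex, En, He): expectation, entropy, hyper-entropy.
    # Distance similarity: exponential closeness of the numerical features
    # (stand-in for the paper's D_T-based measure).
    d = sum(abs(a - b) for a, b in zip(cloud_a, cloud_b))
    dist_sim = math.exp(-d)
    # Shape similarity: ratio of cloud-drop spreads, entropy as a proxy
    # (stand-in for the paper's variance-based measure).
    shape_sim = min(cloud_a[1], cloud_b[1]) / max(cloud_a[1], cloud_b[1])
    return w * dist_sim + (1 - w) * shape_sim

same = similarity((5.0, 1.0, 0.1), (5.0, 1.0, 0.1))
far = similarity((5.0, 1.0, 0.1), (9.0, 2.0, 0.1))
```

Identical clouds score 1.0, and the score decays as either the location or the spread of the two clouds diverges, which is the qualitative behavior a cloud-model similarity measure needs.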