Search Results (41,619 results)
2. A fully-automated paper ECG digitisation algorithm using deep learning.
- Author
Wu, Huiyi, Patel, Kiran Haresh Kumar, Li, Xinyang, Zhang, Bowen, Galazis, Christoforos, Bajaj, Nikesh, Sau, Arunashis, Shi, Xili, Sun, Lin, Tao, Yanda, Al-Qaysi, Harith, Tarusan, Lawrence, Yasmin, Najira, Grewal, Natasha, Kapoor, Gaurika, Waks, Jonathan W., Kramer, Daniel B., Peters, Nicholas S., and Ng, Fu Siong
- Subjects
DEEP learning, ELECTROCARDIOGRAPHY, ELECTRONIC paper, ATRIAL fibrillation, ALGORITHMS, HEART failure, HEART rate monitors
- Abstract
There is increasing focus on applying deep learning methods to electrocardiograms (ECGs), with recent studies showing that neural networks (NNs) can predict future heart failure or atrial fibrillation from the ECG alone. However, large numbers of ECGs are needed to train NNs, and many ECGs are currently only in paper format, which is not suitable for NN training. We developed a fully-automated online ECG digitisation tool to convert scanned paper ECGs into digital signals. Using automated horizontal and vertical anchor point detection, the algorithm automatically segments the ECG image into separate images for the 12 leads, and a dynamical morphological algorithm is then applied to extract the signal of interest. We then validated the performance of the algorithm on 515 digital ECGs, of which 45 were printed, scanned and redigitised. The automated digitisation tool achieved 99.0% correlation between the digitised signals and the ground truth ECG (n = 515 standard 3-by-4 ECGs) after excluding ECGs with overlap of lead signals. Without exclusion, average correlation ranged from 90 to 97% across the leads on all 3-by-4 ECGs. There was a 97% correlation for 12-by-1 and 3-by-1 ECG formats after excluding ECGs with overlap of lead signals. Without exclusion, the average correlation of some leads in 12-by-1 ECGs was 60–70%, and the average correlation of 3-by-1 ECGs reached 80–90%. For ECGs that were printed, scanned, and redigitised, our tool achieved 96% correlation with the original signals. We have developed and validated a fully-automated, user-friendly, online ECG digitisation tool. Unlike other available tools, it does not require any manual segmentation of ECG signals. Our tool can facilitate the rapid and automated digitisation of large repositories of paper ECGs to allow them to be used for deep learning projects. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
3. Development and Validation of an Algorithm for the Digitization of ECG Paper Images.
- Author
Randazzo, Vincenzo, Puleo, Edoardo, Paviglianiti, Annunziata, Vallan, Alberto, and Pasero, Eros
- Subjects
DIGITIZATION, DIGITAL images, ELECTROCARDIOGRAPHY, HEART rate monitors, PEARSON correlation (Statistics), MEASUREMENT errors, HEART beat, ALGORITHMS
- Abstract
The electrocardiogram (ECG) signal describes the heart's electrical activity, allowing the detection of several health conditions, including cardiac system abnormalities and dysfunctions. Nowadays, most patient medical records are still paper-based, especially those made in past decades. The importance of collecting digitized ECGs is twofold: firstly, all medical applications can be easily implemented with an engineering approach if the ECGs are treated as signals; secondly, paper ECGs can deteriorate over time, so a correct evaluation of the patient's clinical evolution is not always guaranteed. The goal of this paper is the realization of an automatic conversion algorithm from paper-based ECGs (images) to digital ECG signals. The algorithm involves a digitization process tested on an image set of 16 subjects, including some with pathologies. The quantitative analysis of the digitization method is carried out by evaluating the repeatability and reproducibility of the algorithm. The digitization accuracy is evaluated both on the entire signal and on six ECG time parameters (R-R peak distance, QRS complex duration, QT interval, PQ interval, P-wave duration, and heart rate). Results demonstrate the algorithm's efficiency, with an average Pearson correlation coefficient of 0.94 and measurement errors of the ECG time parameters always less than 1 mm. Given the promising experimental results, the algorithm could be embedded into a graphical interface, becoming a measurement and collection tool for cardiologists. [ABSTRACT FROM AUTHOR]
- Published
- 2022
4. AI GODS, JEANS GODS, AND THRIFT GODS: RESPONDING TO RESPONSES TO THE BLESSED BY THE ALGORITHM PAPER (SINGLER 2020).
- Author
Singler, Beth
- Subjects
GODS, ARTIFICIAL intelligence, ALGORITHMS, THRIFT institutions
- Published
- 2023
5. Tools and algorithms for the construction and analysis of systems: a special issue on tool papers for TACAS 2021.
- Author
Jensen, Peter Gjøl and Neele, Thomas
- Subjects
ALGORITHMS, SOFTWARE verification, INTEGRATED circuit verification, SYSTEMS software, CONFERENCES & conventions
- Abstract
This special issue contains six revised and extended versions of tool papers that appeared in the proceedings of TACAS 2021, the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems. The issue is dedicated to the realization of algorithms in tools and to studies of the application of these tools for analysing hardware and software systems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
6. A Machine Learning Model to Predict Citation Counts of Scientific Papers in Otology Field.
- Author
Alohali, Yousef A., Fayed, Mahmoud S., Mesallam, Tamer, Abdelsamad, Yassin, Almuhawas, Fida, and Hagr, Abdulrahman
- Subjects
DECISION trees, SERIAL publications, NATURAL language processing, BIBLIOMETRICS, MACHINE learning, REGRESSION analysis, RANDOM forest algorithms, CITATION analysis, DESCRIPTIVE statistics, PREDICTION models, ARTIFICIAL neural networks, MEDICAL research, MEDICAL specialties & specialists, ALGORITHMS
- Abstract
One of the most widely used measures of scientific impact is the number of citations. However, due to their heavy-tailed distribution, citation counts are fundamentally difficult to predict, although predictions can be improved. This study was aimed at investigating the factors and paper parts influencing the citation count of a scientific paper in the otology field. Therefore, this work proposes a new solution that utilizes machine learning and natural language processing to process English text and output a predicted citation count. Different algorithms are implemented in this solution, such as linear regression, boosted decision tree, decision forest, and neural networks. The application of neural network regression revealed that papers' abstracts have more influence on the citation counts of otological articles. This solution has been developed in visual programming using Microsoft Azure machine learning at the back end and Programming Without Coding Technology at the front end. We recommend using machine learning models to improve the abstracts of research articles to get more citations. [ABSTRACT FROM AUTHOR]
- Published
- 2022
7. A BPNN Model-Based AdaBoost Algorithm for Estimating Inside Moisture of Oil–Paper Insulation of Power Transformer.
- Author
Liu, Jiefeng, Ding, Zheshi, Fan, Xianhao, Geng, Chuhan, Song, Boshu, Wang, Qingyin, and Zhang, Yiyi
- Subjects
POWER transformers, TRANSFORMER insulation, MOISTURE, ALGORITHMS, MACHINE learning, CLASSIFICATION algorithms
- Abstract
The traditional method for transformer moisture diagnosis is to establish empirical equations between feature parameters extracted from frequency domain spectroscopy (FDS) and the transformer's moisture content. However, the established empirical equation may not be applicable to a novel testing environment, resulting in an unreliable evaluation result. In this regard, it is acknowledged that FDS combined with machine learning is more suitable for estimating moisture content in a variety of test environments. Nonetheless, the accuracy of the estimation results obtained using the existing method is limited by the algorithm's inability to generalize. To address this issue, we propose an AdaBoost algorithm-enhanced back-propagation neural network (BP_AdaBoost). This study creates a database by extracting feature parameters from the FDS that characterize the insulation states of the prepared samples. Then, using the BP_AdaBoost algorithm and the newly constructed database, the moisture estimation models are trained. Finally, the estimation results are discussed for laboratory and field transformers. Comparison of the proposed BP_AdaBoost algorithm with other intelligent algorithms demonstrates that it not only generalizes better but also maintains a high level of accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
8. A reviewer-reputation ranking algorithm to identify high-quality papers during the review process.
- Author
Gao, Fujuan, Fenoaltea, Enrico Maria, Zhang, Pan, and Zeng, An
- Subjects
ALGORITHMS, CITATION networks, REPUTATION, RESEARCH personnel, BIPARTITE graphs, BEES algorithm
- Abstract
With the exponential growth in the number of academic researchers, it is crucial for editors of scientific journals to identify the highest-quality papers. While several measures exist to evaluate a paper's impact post-publication, the challenge of determining the potential impact of a manuscript during the review process remains an understudied issue. In this paper, we propose a reviewer-reputation ranking algorithm to identify high-quality papers based on paper citations, where a reviewer's reputation is computed from the correlation between their past ratings and the current number of citations received by the papers they have evaluated. During the review process, reviewers with high reputation scores are given more weight to determine the quality of papers. We test the algorithm on an artificial network with 200 reviewers and 600 papers, as well as on the American Physical Society (APS) data set, including in the analysis 308,243 papers and 274,154 mutual citations. We compare our approach with two existing methods, demonstrating that our algorithm significantly outperforms the others in identifying manuscripts with the highest quality. Our findings can help improve the impact of scientific journals, thereby contributing to academic and scientific progress.
• We propose an algorithm to identify the papers with the highest quality from a large number of submissions.
• We compare our new algorithm with other existing methods of aggregating user ratings in various online services.
• We test our algorithm both with an artificial network and with the empirical data of the APS data set.
• We show that our algorithm outperforms the other methods in identifying the papers with the highest quality.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
9. Utilizing tables, figures, charts and graphs to enhance the readability of a research paper.
- Author
Divecha C. A., Tullu M. S., and Karande S.
- Subjects
GRAPHIC arts, READABILITY (Literary style), SERIAL publications, RESEARCH methodology, COPYRIGHT, MEDICAL research, ALGORITHMS
- Abstract
The authors offer observations on utilizing tables, figures, charts and graphs not only to present research in a simple, understandable manner but also to engage and sustain the reader's interest. Topics discussed include the benefits provided by the use of tables/figures/charts/graphs, the general methodology of design and submission, and copyright issues of using material from government publications/public domain.
- Published
- 2023
10. SDP-Based Bounds for the Quadratic Cycle Cover Problem via Cutting-Plane Augmented Lagrangian Methods and Reinforcement Learning: INFORMS Journal on Computing Meritorious Paper Awardee.
- Author
de Meijer, Frank and Sotirov, Renata
- Subjects
REINFORCEMENT learning, COMBINATORIAL optimization, TRAVELING salesman problem, ALGORITHMS, SEMIDEFINITE programming, MACHINE learning, DIRECTED graphs
- Abstract
We study the quadratic cycle cover problem (QCCP), which aims to find a node-disjoint cycle cover in a directed graph with minimum interaction cost between successive arcs. We derive several semidefinite programming (SDP) relaxations and use facial reduction to make these strictly feasible. We investigate a nontrivial relationship between the transformation matrix used in the reduction and the structure of the graph, which is exploited in an efficient algorithm that constructs this matrix for any instance of the problem. To solve our relaxations, we propose an algorithm that incorporates an augmented Lagrangian method into a cutting-plane framework by utilizing Dykstra's projection algorithm. Our algorithm is suitable for solving SDP relaxations with a large number of cutting-planes. Computational results show that our SDP bounds and efficient cutting-plane algorithm outperform other QCCP bounding approaches from the literature. Finally, we provide several SDP-based upper bounding techniques, among which is a sequential Q-learning method that exploits a solution of our SDP relaxation within a reinforcement learning environment. Summary of Contribution: The quadratic cycle cover problem (QCCP) is the problem of finding a set of node-disjoint cycles covering all the nodes in a graph such that the total interaction cost between successive arcs is minimized. The QCCP has applications in many fields, among which are robotics, transportation, energy distribution networks, and automatic inspection. Besides this, the problem has a high theoretical relevance because of its close connection to the quadratic traveling salesman problem (QTSP). The QTSP has several applications, for example, in bioinformatics, and is considered to be among the most difficult combinatorial optimization problems nowadays. After removing the subtour elimination constraints, the QTSP boils down to the QCCP. 
Hence, an in-depth study of the QCCP also contributes to the construction of strong bounds for the QTSP. In this paper, we study the application of semidefinite programming (SDP) to obtain strong bounds for the QCCP. Our strongest SDP relaxation is very hard to solve by any SDP solver because of the large number of involved cutting-planes. Because of that, we propose a new approach in which an augmented Lagrangian method is incorporated into a cutting-plane framework by utilizing Dykstra's projection algorithm. We emphasize an efficient implementation of the method and perform an extensive computational study. This study shows that our method is able to handle a large number of cuts and that the resulting bounds are currently the best QCCP bounds in the literature. We also introduce several upper bounding techniques, among which is a distributed reinforcement learning algorithm that exploits our SDP relaxations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
11. An FPGA Implementation of the Log-MAP Algorithm for a Dirty Paper Coding CODEC.
- Author
Lopes, Paulo A. C. and Gerald, José A. B.
- Subjects
BIT rate, VIDEO coding, ALGORITHMS, GATE array circuits, CODECS, DECODING algorithms
- Abstract
This work describes the log-MAP (BCJR) algorithm implementation of a close-to-capacity dirty paper coding CODEC. The CODEC consists of eight deep pipeline processors. It decodes blocks of 975 bits in 26.9 ms using less than 9.7% of a low-cost FPGA (and no DSP blocks). Two pipelines, for alpha and beta, calculate the values of gamma (of the BCJR) to reduce the storage requirements. The final log-likelihood ratio (LLR) is calculated together with alpha, reusing intermediate results. The number of bits used by the different signals of the processor is easily configurable. It was set to six bits for the channel measure signals and eight bits for log-probability signals like alpha, beta, and others. The CODEC clock was 100 MHz. The achieved bit rate is 36.2 Kbps per CODEC, but multiple CODECs can be fit into a single chip. The CODEC operates 3.49 dB from the channel capacity. [ABSTRACT FROM AUTHOR]
- Published
- 2021
12. Critical Appraisal of a Machine Learning Paper: A Guide for the Neurologist.
- Author
Vinny, Pulikottil W., Garg, Rahul, Srivastava, M. V. Padma, Lal, Vivek, and Vishnu, Venugoapalan Y.
- Subjects
DEEP learning, NEUROLOGISTS, EVIDENCE-based medicine, MACHINE learning, BENCHMARKING (Management), TERMS & phrases, ARTIFICIAL neural networks, PREDICTION models, ALGORITHMS
- Abstract
Machine learning (ML), a form of artificial intelligence (AI), is being increasingly employed in neurology. Reported performance metrics often match or exceed the efficiency of average clinicians. The neurologist is easily baffled by the underlying concepts and terminologies associated with ML studies. The superlative performance metrics of ML algorithms often hide the opaque nature of their inner workings. Questions regarding an ML model's interpretability and the reproducibility of its results in real-world scenarios need emphasis. Given an abundance of time and information, the expert clinician should be able to deliver predictions comparable to those of ML models, a useful benchmark when evaluating their performance. Predictive performance metrics of ML models should not be confused with causal inference between input and output. ML and clinical gestalt should compete in a randomized controlled trial before they can complement each other for screening, triaging, providing second opinions and modifying treatment. [ABSTRACT FROM AUTHOR]
- Published
- 2021
13. The Folded Paper Size Illusion: Evidence of Inability to Perceptually Integrate More Than One Geometrical Dimension.
- Author
Carbon, Claus-Christian
- Subjects
PAPER sizing, PERCEPTUAL illusions, SENSORIMOTOR integration, COGNITION, ALGORITHMS, PSYCHOPHYSICS
- Abstract
The folded paper-size illusion is as easy to demonstrate as it is powerful in generating insights into perceptual processing: First take two A4 sheets of paper, one original sized, another halved by folding, then compare them in terms of area size by centering the halved sheet on the center of the original one! We perceive the larger sheet as far less than double (i.e., 100%) the size of the small one, typically only being about two thirds larger--this illusion is preserved by rotating the inner sheet and even by aligning it to one or two sides, but is dissolved by aligning both sheets to three sides, here documented by 88 participants' data. A potential explanation might be the general incapability of accurately comparing more than one geometrical dimension at once--in everyday life, we solve this perceptual-cognitive bottleneck by reducing the complexity of such a task via aligning parts with same lengths. [ABSTRACT FROM AUTHOR]
- Published
- 2016
14. Canadian Association of Radiologists White Paper on De-identification of Medical Imaging: Part 2, Practical Considerations.
- Author
Parker, William, Jaremko, Jacob L., Cicero, Mark, Azar, Marleine, El-Emam, Khaled, Gray, Bruce G., Hurrell, Casey, Lavoie-Cardinal, Flavie, Desjardins, Benoit, Lum, Andrea, Sheremeta, Lori, Lee, Emil, Reinhold, Caroline, Tang, An, and Bromwich, Rebecca
- Subjects
ALGORITHMS, ARTIFICIAL intelligence, DATA encryption, DATABASE management, DIAGNOSTIC imaging, HEALTH services accessibility, MACHINE learning, MEDICAL protocols, DICOM (Computer network protocol), COVID-19 pandemic
- Abstract
The application of big data, radiomics, machine learning, and artificial intelligence (AI) algorithms in radiology requires access to large data sets containing personal health information. Because machine learning projects often require collaboration between different sites or data transfer to a third party, precautions are required to safeguard patient privacy. Safety measures are required to prevent inadvertent access to and transfer of identifiable information. The Canadian Association of Radiologists (CAR) is the national voice of radiology committed to promoting the highest standards in patient-centered imaging, lifelong learning, and research. The CAR has created an AI Ethical and Legal standing committee with the mandate to guide the medical imaging community in terms of best practices in data management, access to health care data, de-identification, and accountability practices. Part 2 of this article will inform CAR members on the practical aspects of medical imaging de-identification, strengths and limitations of de-identification approaches, list of de-identification software and tools available, and perspectives on future directions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
15. ITERATIVE ALGORITHMS FOR VARIATIONAL INCLUSIONS IN BANACH SPACES.
- Author
ANSARI, QAMRUL HASAN, BALOOEE, JAVAD, and PETRUŞEL, ADRIAN
- Subjects
BANACH spaces, LIPSCHITZ continuity, PAPER arts, DIFFERENTIAL inclusions, ALGORITHMS
- Abstract
The present paper has two parts. In the first part, we prove the Lipschitz continuity of the proximal mapping associated with a general strongly H-monotone mapping and compute an estimate of its Lipschitz constant under some mild assumptions imposed on the mapping H involved in the proximal mapping. We provide two examples to show that a maximal monotone mapping need not be a general H-monotone for a single-valued mapping H from a Banach space to its dual space. A class of multi-valued nonlinear variational inclusion problems is considered, and by using the notion of proximal mapping and Nadler's technique, an iterative algorithm with mixed errors is suggested to compute its solutions. Under some appropriate hypotheses imposed on the mappings and parameters involved in the multi-valued nonlinear variational inclusion problem, the strong convergence of the sequences generated by the proposed algorithm to a solution of the aforesaid problem is verified. The second part of this paper investigates and analyzes the notion of Cn-monotone mappings defined and studied in [S.Z. Nazemi, A new class of monotone mappings and a new class of variational inclusions in Banach spaces, J. Optim. Theory Appl. 155(3) (2012) 785-795]. Several comments related to the results and algorithm that appeared in the above-mentioned paper are given. [ABSTRACT FROM AUTHOR]
- Published
- 2023
16. Research on image processing algorithm of immune colloidal gold test paper detection.
- Author
Guang Yang, Tiefeng Wang, and Peng Zhang
- Subjects
COLLOIDAL gold, QUALITY control, AUTOMATIC identification, ALGORITHMS
- Abstract
In order to better solve the problem of automatic identification of the quality control line and the detection line in colloidal gold test strip detection, this paper proposes to collect the image information of the test strip after color rendering through a CMOS sensor, preprocess the obtained information, transform the RGB image into a gray image, build a cloud model in the CIELAB/HSV/HSL space, and apply an improved AdaBoost algorithm to determine the positions of the detection line and the quality control line. Compared with the traditional template matching method, this approach improves the accuracy of recognition. [ABSTRACT FROM AUTHOR]
- Published
- 2020
17. Taming Algorithmic Priority Inversion in Mission-Critical Perception Pipelines.
- Author
Liu, Shengzhong, Yao, Shuochao, Fu, Xinzhe, Tabish, Rohan, Yu, Simon, Bansal, Ayoosh, Yun, Heechul, Sha, Lui, and Abdelzaher, Tarek
- Subjects
ALGORITHMS, SYSTEMS design, CYBER physical systems, COMPUTER scheduling, ARTIFICIAL intelligence, ARTIFICIAL neural networks, FIRST in, first out (Queuing theory)
- Abstract
The paper discusses algorithmic priority inversion in mission-critical machine inference pipelines used in modern neural-network-based perception subsystems and describes a solution to mitigate its effect. In general, priority inversion occurs in computing systems when computations that are "less important" are performed together with or ahead of those that are "more important." Significant priority inversion occurs in existing machine inference pipelines when they do not differentiate between critical and less critical data. We describe a framework to resolve this problem and demonstrate that it improves a perception system's ability to react to critical inputs, while at the same time reducing platform cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
18. Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits.
- Author
Limaye, Nutan, Srinivasan, Srikanth, and Tavenas, Sébastien
- Subjects
ALGEBRA, POLYNOMIALS, CIRCUIT complexity, ALGORITHMS, DIRECTED acyclic graphs, LOGIC circuits
- Abstract
An Algebraic Circuit for a multivariate polynomial P is a computational model for constructing the polynomial P using only additions and multiplications. It is a syntactic model of computation, as opposed to the Boolean Circuit model, and hence lower bounds for this model are widely expected to be easier to prove than lower bounds for Boolean circuits. Despite this, we do not have superpolynomial lower bounds against general algebraic circuits of depth 3 (except over constant-sized finite fields) and depth 4 (over any field other than F2), while constant-depth Boolean circuit lower bounds have been known since the early 1980s. In this paper, we prove the first superpolynomial lower bounds against algebraic circuits of all constant depths over all fields of characteristic 0. We also observe that our super-polynomial lower bound for constant-depth circuits implies the first deterministic sub-exponential time algorithm for solving the Polynomial Identity Testing (PIT) problem for all small-depth circuits using the known connection between algebraic hardness and randomness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
19. Physics driven behavioural clustering of free-falling paper shapes.
- Author
Howison, Toby, Hughes, Josie, Giardina, Fabio, and Iida, Fumiya
- Subjects
PHYSICS, SET functions, MACHINE learning, PHENOMENOLOGICAL theory (Physics), CONTINUUM mechanics
- Abstract
Many complex physical systems exhibit a rich variety of discrete behavioural modes. Often, the system complexity limits the applicability of standard modelling tools. Hence, understanding the underlying physics of different behaviours and distinguishing between them is challenging. Although traditional machine learning techniques could predict and classify behaviour well, typically they do not provide any meaningful insight into the underlying physics of the system. In this paper we present a novel method for extracting physically meaningful clusters of discrete behaviour from limited experimental observations. This method obtains a set of physically plausible functions that both facilitate behavioural clustering and aid in system understanding. We demonstrate the approach on the V-shaped falling paper system, a new falling paper type system that exhibits four distinct behavioural modes depending on a few morphological parameters. Using just 49 experimental observations, the method discovered a set of candidate functions that distinguish behaviours with an error of 2.04%, while also aiding insight into the physical phenomena driving each behaviour. [ABSTRACT FROM AUTHOR]
- Published
- 2019
20. Maximum Load Consumption Capacity Maintenance of Distributed Storage Devices Based on Time-Varying Neurodynamic Algorithm.
- Author
Li, Ziqiang, Qu, Youran, Yan, Meng, Pan, Bo, Mao, Qin, Ji, Cheng, Tao, Wanmin, and Zhou, Mingliang
- Subjects
DISTRIBUTED algorithms, ALGORITHMS, STORAGE, ENERGY storage, DATA protection, AUTOMATIC timers
- Abstract
A charge and discharge management scheme is proposed under which the electric energy stored in distributed storage devices converges to a consistent level. This consistency helps to maintain the maximum load capacity and maximum consumption capacity of the distributed storage devices. The charging and discharging process is constructed as a time-varying optimization problem, and the proposed algorithm can respond in real time to the time-varying parameters of the distributed storage devices. The time-varying neurodynamic algorithms can obtain time-varying optimal solution trajectories to give the optimal charging and discharging strategy in real time. In addition, the proposed approach focuses on the privacy protection of device data: each device can calculate its discharging or charging power by communicating with the partially connected nodes. Numerical simulations of the proposed scheme are given to verify its effectiveness. They show that our scheme can make the electric energy stored in each storage device converge and maintain the maximum load capacity or maximum consumption capacity of the whole distributed storage system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
21. Research on Human Posture Estimation Algorithm Based on YOLO-Pose.
- Author
Ding, Jing, Niu, Shanwei, Nie, Zhigang, and Zhu, Wenyu
- Subjects
HUMAN experimentation, POSTURE, DRONE aircraft, ALGORITHMS, ANGLES
- Abstract
In response to the numerous challenges faced by traditional human pose recognition methods in practical applications, such as dense targets, severe edge occlusion, limited application scenarios, complex backgrounds, and poor recognition accuracy when targets are occluded, this paper proposes a YOLO-Pose algorithm for human pose estimation. The specific improvements are divided into four parts. Firstly, in the Backbone section of the YOLO-Pose model, lightweight GhostNet modules are introduced to reduce the model's parameter count and computational requirements, making it suitable for deployment on unmanned aerial vehicles (UAVs). Secondly, the ACmix attention mechanism is integrated into the Neck section to improve detection speed during object judgment and localization. Furthermore, in the Head section, key points are optimized using coordinate attention mechanisms, significantly enhancing key point localization accuracy. Lastly, the paper improves the loss function and confidence function to enhance the model's robustness. Experimental results demonstrate that the improved model achieves a 95.58% improvement in mAP50 and a 69.54% improvement in mAP50-95 compared to the original model, with a reduction of 14.6 M parameters. The model achieves a detection speed of 19.9 ms per image, optimized by 30% and 39.5% compared to the original model. Comparisons with other algorithms such as Faster R-CNN, SSD, YOLOv4, and YOLOv7 demonstrate varying degrees of performance improvement. [ABSTRACT FROM AUTHOR]
- Published
- 2024
22. Construction of secure adaptive frequency hopping sequence sets based on AES algorithm.
- Author
Song, Dongpo, Wei, Peng, Fu, Yongming, and Wang, Shilian
- Subjects
ADVANCED Encryption Standard, BLOCK ciphers, COMMERCIAL trusts, INTERNET of things, ALGORITHMS, MULTICASTING (Computer networks)
- Abstract
Communication security has become particularly crucial with the rapid development of the Internet of Things (IoT). Frequency hopping spread spectrum (FHSS) technology, a prevalent method in wireless communication, has a wide range of applications in the Internet of Things. Enhancing the security of frequency hopping sequences is an essential means to improve the security of frequency hopping communication in the Internet of Things, as the performance of frequency hopping sequences plays a crucial role in frequency hopping systems. This paper proposes constructing secure adaptive frequency hopping sequence sets based on the advanced encryption standard (AES) algorithm. As a block cipher algorithm with superior security, the AES algorithm can provide a fundamental guarantee for the security of the proposed frequency hopping sequences. The mapping methods from ciphertext sequences to frequency hopping sequences proposed in this paper can achieve the construction of frequency hopping sequences of any frequency set size to meet the requirements of adaptive frequency hopping. In addition, we also model and analyse the problem of overlapping spectrum band of the IoT groups in the industrial, scientific, and medical (ISM) band, aiming to achieve better packet transmission performance by adjusting the frequency set size. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
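The ciphertext-to-frequency mapping described in the abstract above can be sketched in a few lines. This is a hedged illustration, not the paper's construction: SHA-256 in counter mode stands in for an AES keystream (any block-cipher ciphertext stream would slot in), and rejection sampling maps ciphertext bytes onto a frequency set of arbitrary size q ≤ 256 without modulo bias.

```python
import hashlib

def fh_sequence(key: bytes, seq_len: int, q: int):
    """Map a keyed ciphertext stream to a frequency-hopping sequence over
    frequency indices 0..q-1 (q <= 256). SHA-256 in counter mode stands in
    for an AES keystream; rejection sampling avoids modulo bias."""
    limit = (256 // q) * q        # largest multiple of q a byte can reach
    hops, counter = [], 0
    while len(hops) < seq_len:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
        for b in block:
            if b < limit and len(hops) < seq_len:
                hops.append(b % q)   # unbiased index into the frequency set
    return hops
```

Because the sequence is fully determined by the shared secret key, two radios holding the same key hop in step, while an observer without the key sees an unpredictable sequence.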
23. Anomaly Detection in Blockchain Networks Using Unsupervised Learning: A Survey.
- Author
-
Cholevas, Christos, Angeli, Eftychia, Sereti, Zacharoula, Mavrikos, Emmanouil, and Tsekouras, George E.
- Subjects
- *
DATA structures , *MACHINE learning , *PRIVATE networks , *BLOCKCHAINS , *ALGORITHMS - Abstract
In decentralized systems, heightened security and integrity within blockchain networks become a pressing issue. This survey investigates anomaly detection techniques in blockchain ecosystems through the lens of unsupervised learning, examining state-of-the-art algorithms that discern deviations from normal patterns of behavior. It offers a perspective on the symbiotic relationship between unsupervised learning and anomaly detection by reviewing the problem through a categorization of the algorithms applied to it. We propose that the use of unsupervised algorithms in blockchain anomaly detection should be viewed not only as an implementation procedure but also as an integration procedure, where the merits of these algorithms can effectively be combined in ways determined by the problem at hand. In that sense, the main contribution of this paper is a thorough study of the interplay between various unsupervised learning algorithms and how this can be used against malicious activities and behaviors within public and private blockchain networks. The result is the definition of three categories, whose characteristics are recognized in terms of the way the respective integration takes place. When implementing unsupervised learning, the structure of the data plays a pivotal role. Therefore, this paper also provides an in-depth presentation of the data structures commonly used in unsupervised learning-based blockchain anomaly detection. This analysis is framed by a presentation of the typical anomalies that have occurred so far, along with a description of the general machine learning frameworks developed to deal with them. Finally, the paper spotlights challenges and directions that can serve as a comprehensive compendium for future research efforts. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. The Algorithm of Gu and Eisenstat and D-Optimal Design of Experiments.
- Author
-
Forbes, Alistair
- Subjects
- *
OPTIMAL designs (Statistics) , *EXPERIMENTAL design , *FACTORIZATION , *ALGORITHMS - Abstract
This paper addresses the following problem: given m potential observations to determine n parameters, with m > n, which n observations are the best choice? The problem can be formulated as finding the n × n submatrix of the complete m × n observation matrix that has maximum determinant. An algorithm by Gu and Eisenstat for determining a strongly rank-revealing QR factorisation of a matrix can be adapted to address this latter formulation. The algorithm starts with an initial selection of n rows of the observation matrix and then performs a sequence of row interchanges, with the determinant of the current submatrix strictly increasing at each step until no further improvement can be made. The algorithm implements rank-one updating strategies, which lead to a compact and efficient implementation. The algorithm does not necessarily determine the global optimum but provides a practical approach to designing an effective measurement strategy. In this paper, we describe how the Gu–Eisenstat algorithm can be adapted to the problem of optimal experimental design and used with the QR algorithm with column pivoting to provide effective designs. We also describe implementations of sequential algorithms to add further measurements that optimise the information gain at each step. We illustrate performance on several metrology examples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
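The row-interchange idea in the abstract above is easy to prototype. The sketch below is a naive version: it recomputes each determinant from scratch rather than using the rank-one updates that make the real algorithm efficient, and, like the original, it only guarantees a local optimum.

```python
def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    n, d = len(A), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        if abs(A[p][i]) < 1e-12:
            return 0.0
        if p != i:
            A[i], A[p] = A[p], A[i]
            d = -d                                 # row swap flips the sign
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
    return d

def d_optimal_rows(X, n):
    """Greedy row-interchange search for n rows of X maximising |det| of the
    selected square submatrix (a local optimum, as in the original scheme)."""
    m = len(X)
    chosen = list(range(n))                        # start from the first n rows
    best = abs(det([X[i] for i in chosen]))
    improved = True
    while improved:
        improved = False
        for pos in range(n):
            for j in range(m):
                if j in chosen:
                    continue
                trial = chosen[:]
                trial[pos] = j                     # swap one selected row out
                val = abs(det([X[i] for i in trial]))
                if val > best + 1e-12:             # keep only strict improvements
                    chosen, best, improved = trial, val, True
    return sorted(chosen), best
```

Starting from the first n rows, the search accepts a swap whenever it strictly increases the absolute determinant, and stops when no swap helps.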
25. A Lightweight Remote Sensing Small Target Image Detection Algorithm Based on Improved YOLOv8.
- Author
-
Nie, Haijiao, Pang, Huanli, Ma, Mingyang, and Zheng, Ruikai
- Subjects
- *
OBJECT recognition (Computer vision) , *ALGORITHMS , *REMOTE-sensing images , *REMOTE sensing - Abstract
In response to the challenges posed by small objects in remote sensing images, such as low resolution, complex backgrounds, and severe occlusions, this paper proposes a lightweight improved model based on YOLOv8n. During the detection of small objects, the feature fusion part of the YOLOv8n algorithm retrieves relatively fewer features of small objects from the backbone network compared to large objects, resulting in low detection accuracy for small objects. To address this issue, firstly, this paper adds a dedicated small object detection layer in the feature fusion network to better integrate the features of small objects into the feature fusion part of the model. Secondly, the SSFF module is introduced to facilitate multi-scale feature fusion, enabling the model to capture more gradient paths and further improve accuracy while reducing model parameters. Finally, the HPANet structure is proposed, replacing the Path Aggregation Network with HPANet. Compared to the original YOLOv8n algorithm, the recognition accuracy of mAP@0.5 on the VisDrone data set and the AI-TOD data set has increased by 14.3% and 17.9%, respectively, while the recognition accuracy of mAP@0.5:0.95 has increased by 17.1% and 19.8%, respectively. The proposed method reduces the parameter count by 33% and the model size by 31.7% compared to the original model. Experimental results demonstrate that the proposed method can quickly and accurately identify small objects in complex backgrounds. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Don't Fear the Artificial Intelligence: A Systematic Review of Machine Learning for Prostate Cancer Detection in Pathology.
- Author
-
Frewing, Aaryn, Gibson, Alexander B., Robertson, Richard, Urie, Paul M., and Della Corte, Dennis
- Subjects
- *
FEAR , *ARTIFICIAL intelligence , *DIGITAL diagnostic imaging , *PROSTATE tumors , *TUMOR grading , *DIAGNOSTIC errors , *LEARNING strategies , *ALGORITHMS ,RESEARCH evaluation - Abstract
* Context.--Automated prostate cancer detection using machine learning technology has led to speculation that pathologists will soon be replaced by algorithms. This review covers the development of machine learning algorithms and their reported effectiveness specific to prostate cancer detection and Gleason grading. Objective.--To examine current algorithms regarding their accuracy and classification abilities. We provide a general explanation of the technology and how it is being used in clinical practice. The challenges to the application of machine learning algorithms in clinical practice are also discussed. Data Sources.--The literature for this review was identified and collected using a systematic search. Criteria were established prior to the sorting process to effectively direct the selection of studies. A 4-point system was implemented to rank the papers according to their relevancy. For papers accepted as relevant to our metrics, all cited and citing studies were also reviewed. Studies were then categorized based on whether they implemented binary or multi-class classification methods. Data were extracted from papers that contained accuracy, area under the curve (AUC), or κ values in the context of prostate cancer detection. The results were visually summarized to present accuracy trends between classification abilities. Conclusions.--It is more difficult to achieve high accuracy metrics for multiclassification tasks than for binary tasks. The clinical implementation of an algorithm that can assign a Gleason grade to clinical whole slide images (WSIs) remains elusive. Machine learning technology is currently not able to replace pathologists but can serve as an important safeguard against misdiagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. P2P Energy Trading of EVs Using Blockchain Technology in Centralized and Decentralized Networks: A Review.
- Author
-
Khan, Sara, Amin, Uzma, and Abu-Siada, Ahmed
- Subjects
- *
BLOCKCHAINS , *SUSTAINABILITY , *ELECTRIC automobiles , *TRANSPORTATION industry , *ELECTRIC vehicles , *ALGORITHMS , *ELECTRICITY - Abstract
Peer-to-peer (P2P) energy trading has attracted a lot of attention, and the number of electric vehicles (EVs) has increased in the past couple of years. Toward sustainable mobility, EVs support the Sustainable Development Goals (SDGs) for attaining a sustainable future in the transport sector. This development and the increasing number of EVs create an opportunity for prosumers to trade electricity. Considering this opportunity, this review article aims to provide an in-depth analysis of P2P energy trading of EVs using blockchain in centralized and decentralized networks, which enables prosumers to exchange energy directly with one another. The paper aims to provide the reader with a state-of-the-art review of P2P energy trading for EVs, considering different blockchain algorithms that are practically implemented or still in the research phase. Moreover, the paper presents blockchain applications, current trends, and future challenges of EVs' energy trading. P2P energy trading for EVs using blockchain algorithms can be successfully implemented in real-time scenarios and can economically benefit smart, sustainable societies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Channel Prediction for Underwater Acoustic Communication: A Review and Performance Evaluation of Algorithms.
- Author
-
Liu, Haotian, Ma, Lu, Wang, Zhaohui, and Qiao, Gang
- Subjects
- *
DEEP learning , *UNDERWATER acoustic communication , *MACHINE learning , *ALGORITHMS , *TELECOMMUNICATION systems , *FORECASTING - Abstract
Underwater acoustic (UWA) channel prediction technology, as an important topic in UWA communication, has played an important role in UWA adaptive communication networks and underwater target perception. Although many significant advancements have been achieved in underwater acoustic channel prediction over the years, a comprehensive summary and introduction are still lacking. As the first comprehensive overview of UWA channel prediction, this paper introduces past works and algorithm implementation methods of channel prediction from the perspective of linear, kernel-based, and deep learning approaches. Importantly, based on available at-sea experiment datasets, this paper compares the performance of current primary UWA channel prediction algorithms under a unified system framework, providing researchers with a comprehensive and objective understanding of UWA channel prediction. Finally, it discusses the directions and challenges for future research. The survey finds that linear prediction algorithms are the most widely applied, and deep learning, as the most advanced type of algorithm, has moved this field into a new stage. The experimental results show that the linear algorithms have the lowest computational complexity, and when the training samples are sufficient, deep learning algorithms have the best prediction performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
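As a minimal example of the linear prediction family surveyed above, the sketch below runs a one-step-ahead LMS-adapted FIR predictor over a scalar channel-tap series. The order p, step size mu, and the sinusoidal test signal are illustrative choices, not the paper's setup.

```python
import math

def lms_predict(samples, p=4, mu=0.05):
    """One-step-ahead linear prediction with an LMS-adapted FIR predictor
    of order p; returns predictions aligned with samples[p:]."""
    w = [0.0] * p
    preds = []
    for t in range(p, len(samples)):
        x = samples[t - p:t][::-1]                 # most recent sample first
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        preds.append(y_hat)
        e = samples[t] - y_hat                     # prediction error
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return preds

# Slowly varying "channel tap" stand-in: a sampled sinusoid.
samples = [math.sin(0.2 * t) for t in range(400)]
preds = lms_predict(samples)
errs = [abs(p_, ) if False else abs(p_ - s) for p_, s in zip(preds, samples[4:])]
```

The prediction error shrinks as the filter adapts, which is the behaviour the surveyed linear algorithms exploit at low computational cost.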
29. A simple weighting method for inverting earthquake source parameters using geodetic multisource data under Bayesian algorithm.
- Author
-
Xi, Can, Wang, Leyang, Zhao, Xiong, Sun, Zhanglin, Zhao, Weifeng, Pang, Ming, and Wu, Fei
- Subjects
- *
EARTHQUAKES , *STANDARD deviations , *CONSTRAINT algorithms , *GEODESICS , *SEARCH algorithms , *ANGLES , *ALGORITHMS - Abstract
More accurate inversion of source fault geometry and slip parameters under the constraint of the Bayesian algorithm has become a research hotspot in the field of geodetic inversion in recent years. In nonlinear inversion, determining the weight ratio for the joint inversion of multisource data is complicated. In this context, this paper proposes a simple and easily generalized weighting method for the joint inversion of source fault parameters from multisource geodetic data under the Bayesian framework. This method determines the relative weight ratio of multisource data by the root mean square error (RMSE) and can be extended to other nonlinear search algorithms. To verify its validity, this paper first sets up four simulated seismic experiment schemes. The inversion results show that the proposed joint inversion weighting method significantly decreases the large residual values compared with equal-weight joint inversion and single-data-source inversion. The east–west deformation RMSE is 0.1458 mm, the north–south deformation RMSE is 0.2119 mm, and the vertical deformation RMSE is 0.2756 mm. The RMSEs in the three directions are lower than those of the other schemes, indicating that the proposed method is suitable for the joint inversion of source parameters under the Bayesian algorithm. To further verify its applicability to complex earthquakes, the source parameters of the Maduo earthquake were inverted using the proposed method. The resulting focal depth is closer to the focal depth released by the GCMT agency. In terms of strike angle and dip angle, the joint inversion is also closer to the GCMT results. The joint inversion results generally conform to the characteristics of left-lateral strike-slip, which shows the adaptability of this method to complex earthquakes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
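One plausible reading of the RMSE-based weighting described above is sketched below: each geodetic dataset's relative weight is taken proportional to the inverse of its residual RMSE and normalised to sum to one. The exact scaling in the paper (e.g. inverse-variance 1/RMSE²) may differ; this is an assumption for illustration.

```python
def rmse(residuals):
    """Root mean square error of a residual vector."""
    return (sum(r * r for r in residuals) / len(residuals)) ** 0.5

def relative_weights(residuals_by_dataset):
    """Weight each dataset by the inverse of its residual RMSE, normalised
    to sum to one (hypothetical scaling; the paper may use e.g. 1/RMSE^2)."""
    inv = {name: 1.0 / rmse(res) for name, res in residuals_by_dataset.items()}
    total = sum(inv.values())
    return {name: w / total for name, w in inv.items()}
```

A dataset with twice the RMSE of another then receives half its relative weight in the joint misfit.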
30. Fast and consistent algorithm for the latent block model.
- Author
-
Brault, Vincent and Channarond, Antoine
- Subjects
- *
ALGORITHMS , *BLOCK codes - Abstract
The latent block model is used to simultaneously cluster the rows and columns of a matrix to reveal a block structure. The algorithms used for estimation are often time consuming. However, recent work shows that the log-likelihood ratios are equivalent under the complete and observed (with unknown labels) models, and that the posterior distribution of the groups converges, as the size of the data increases, to a Dirac mass located at the actual group configuration. Based on these observations, the Largest Gaps algorithm is proposed in this paper to perform clustering using only the marginals of the matrix, when the number of blocks is very small with respect to the size of the whole matrix, in the case of binary data. In addition, a model selection method is incorporated with a proof of its consistency. Thus, this paper shows that studying simplistic configurations (few blocks compared to the size of the matrix, or very contrasting blocks) with complex algorithms is useless, since the marginals already give very good parameter and classification estimates. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
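The Largest Gaps idea, clustering on marginals alone, can be sketched directly: sort the row (or column) marginals and cut at the k−1 largest gaps between consecutive sorted values. This is an illustrative reconstruction from the abstract, not the authors' code.

```python
def largest_gaps(values, k):
    """Cluster 1-D values (e.g. row marginals of a binary matrix) into k
    groups by cutting at the k-1 largest gaps between consecutive sorted
    values. Returns one group label per input value."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    gaps = [(values[order[i + 1]] - values[order[i]], i)
            for i in range(len(order) - 1)]
    cuts = set(i for _, i in sorted(gaps, reverse=True)[:k - 1])
    labels = [0] * len(values)
    group = 0
    for pos, idx in enumerate(order):
        labels[idx] = group
        if pos in cuts:            # a large gap separates two groups
            group += 1
    return labels
```

Because it only sorts the marginals, the method runs in O(n log n), which is what makes complex estimation algorithms unnecessary in the well-separated regime the paper studies.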
31. A Hybrid Swarming Algorithm for Adaptive Enhancement of Low-Illumination Images.
- Author
-
Zhang, Yi, Liu, Xinyu, and Lv, Yang
- Subjects
- *
PARTICLE swarm optimization , *IMAGE intensifiers , *HEURISTIC algorithms , *ALGORITHMS , *VISUAL perception - Abstract
This paper presents an improved swarming algorithm that enhances low-illumination images. The algorithm combines a hybrid Harris hawks algorithm with double-gamma (IHHO-BIGA) and incomplete-beta (IHHO-NBeta) functions. This paper integrates the concept of symmetry into the improvement steps of the image adaptive enhancement algorithm. The enhanced algorithm integrates chaotic mapping for population initialization, a nonlinear formula for prey energy calculation, spiral motion from the black widow algorithm for global search enhancement, a nonlinear inertia weight factor inspired by particle swarm optimization, and a modified Levy flight strategy to prevent premature convergence to local optima. The algorithm's performance is compared against several emerging swarm intelligence algorithms on commonly used test functions, with results demonstrating its superior performance. The improved Harris hawks algorithm is then applied to image adaptive enhancement, and its effectiveness is evaluated on five low-illumination images from the LOL dataset. The proposed method is compared to three common image enhancement techniques alongside the IHHO-BIGA and IHHO-NBeta methods. The experimental results reveal that the proposed approach achieves optimal visual perception and enhanced image evaluation metrics, outperforming the existing techniques. Notably, the standard deviation data of the first image show that the IHHO-NBeta method enhances the image by 8.26%, 120.91%, 126.85%, and 164.02% compared with IHHO-BIGA, the single-scale Retinex enhancement method, the homomorphic filtering method, and the contrast-limited adaptive histogram equalization method, respectively. The processing time of the improved method is also better than that of the previous heuristic algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
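To make the transform side of the abstract above concrete, the sketch below applies a plain gamma map to normalised intensities and grid-searches the exponent against a stand-in fitness (distance of the mean brightness from 0.5). The paper instead tunes double-gamma and incomplete-beta transforms with its improved swarm optimiser and image-quality metrics; everything here is a simplified assumption.

```python
def gamma_enhance(pixels, gamma):
    """Map normalised intensities in [0, 1] through out = in ** gamma;
    gamma < 1 brightens dark (low-illumination) regions."""
    return [p ** gamma for p in pixels]

def mean_brightness(pixels):
    return sum(pixels) / len(pixels)

def best_gamma(pixels, candidates):
    """Stand-in 'fitness': pick the gamma whose output mean brightness is
    closest to 0.5 (a swarm optimiser would search a richer objective)."""
    return min(candidates,
               key=lambda g: abs(mean_brightness(gamma_enhance(pixels, g)) - 0.5))
```

Replacing the grid search with a population-based optimiser and the brightness target with a composite quality metric recovers the overall shape of the paper's adaptive enhancement loop.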
32. VIS-SLAM: A Real-Time Dynamic SLAM Algorithm Based on the Fusion of Visual, Inertial, and Semantic Information.
- Author
-
Wang, Yinglong, Liu, Xiaoxiong, Zhao, Minkun, and Xu, Xinlong
- Subjects
- *
MOBILE robots , *MACHINE learning , *MOBILE learning , *DEEP learning , *ALGORITHMS , *INFORMATION measurement , *PROBABILITY theory , *GEOMETRY - Abstract
Addressing the limitations of real-time performance in deep learning algorithms and the poor robustness of pure visual-geometry algorithms, this paper presents a deep learning-based visual-inertial SLAM technique that ensures accurate autonomous localization of mobile robots in environments with dynamic objects. Firstly, a non-blocking model is designed to extract semantic information from images. Then, a motion probability hierarchy model is proposed to obtain prior motion probabilities of feature points. For image frames without semantic information, a motion probability propagation model is designed to determine the prior motion probabilities of feature points. Furthermore, considering that the output of inertial measurements is unaffected by dynamic objects, this paper integrates inertial measurement information to improve the estimation accuracy of feature point motion probabilities. An adaptive threshold-based motion probability estimation method is proposed, and finally, the positioning accuracy is enhanced by eliminating feature points with excessively high motion probabilities. Experimental results demonstrate that the proposed algorithm achieves accurate localization in dynamic environments while maintaining real-time performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. A novel differential evolution algorithm with multi-population and elites regeneration.
- Author
-
Cao, Yang and Luan, Jingzheng
- Subjects
- *
DIFFERENTIAL evolution , *EVOLUTIONARY algorithms , *DISTRIBUTION (Probability theory) , *ALGORITHMS , *GLOBAL optimization - Abstract
Differential Evolution (DE) is widely recognized as a highly effective evolutionary algorithm for global optimization. It has proven its efficacy in tackling diverse problems across various fields and real-world applications. DE boasts several advantages, such as ease of implementation, reliability, speed, and adaptability. However, DE does have certain limitations, such as suboptimal solution exploitation and challenging parameter tuning. To address these challenges, this research paper introduces a novel algorithm called Enhanced Binary JADE (EBJADE), which combines differential evolution with multi-population and elites regeneration. The primary innovation of this paper lies in the introduction of a mutation strategy with enhanced exploitation capabilities. This strategy sorts three vectors from the current generation and uses them to perturb the target vector, introducing directional differences that guide the search towards improved solutions. Additionally, this study adopts a multi-population method with a rewarding subpopulation to dynamically adjust the allocation of two different mutation strategies. Finally, the paper incorporates the sampling concept of elite individuals from the Estimation of Distribution Algorithm (EDA) to regenerate new solutions through the selection process in DE. Experimental results, using the CEC2014 benchmark tests, demonstrate the strong competitiveness and superior performance of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
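For readers unfamiliar with the baseline, a textbook DE/rand/1/bin loop, the starting point that EBJADE's sorted-difference mutation, multi-population allocation, and elite regeneration extend, looks like the sketch below. All parameter values are illustrative, not the paper's settings.

```python
import random

def de_optimize(f, bounds, pop_size=20, F=0.5, CR=0.9, gens=100, seed=1):
    """Textbook DE/rand/1/bin: difference-vector mutation, binomial
    crossover, greedy one-to-one selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            jrand = rng.randrange(dim)        # guarantee at least one mutant gene
            trial = [mutant[d] if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:                  # greedy selection
                pop[i], fit[i] = trial, ft
    i_best = min(range(pop_size), key=lambda i: fit[i])
    return pop[i_best], fit[i_best]

sphere = lambda x: sum(v * v for v in x)
best_x, best_f = de_optimize(sphere, [(-5.0, 5.0)] * 3)
```

EBJADE's sorted-difference mutation replaces the uniformly random choice of a, b, c with an ordering of the three vectors by fitness, which is what injects the directional information described above.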
34. A novel improved total variation algorithm for the elimination of scratch-type defects in high-voltage cable cross-sections.
- Author
-
Yu, Aihua, Shan, Lina, Zhu, Wen, Jie, Jing, and Hou, Beiping
- Subjects
- *
CABLES , *COMPUTER vision , *CROSS-sectional imaging , *IMAGE intensifiers , *ALGORITHMS , *PARTIAL discharges - Abstract
In the quality inspection process of high-voltage cables, commonly used indicators include cable length, insulation thickness, and the number of conductors within the core. Among these, the count of conductors holds particular significance as a key determinant of cable quality. Machine vision technology has found extensive application in automatically detecting the number of conductors in cross-sectional images of high-voltage cables. However, the presence of scratch-type defects in cut high-voltage cable cross-sections can significantly compromise the precision of conductor count detection. To address this problem, this paper introduces a novel improved total variation (TV) algorithm, marking the first application of the TV algorithm in this domain. Owing to the staircase effect, direct use of the TV algorithm tends to cause serious loss of image edge information. The proposed algorithm therefore first introduces multimodal features to effectively mitigate the staircase effect. While eliminating scratch-type defects, the algorithm endeavors to preserve the original image's edge information, consequently yielding a noteworthy enhancement in detection accuracy. Furthermore, a dataset was curated, comprising images of cross-sections of high-voltage cables of varying sizes, each displaying an assortment of scratch-type defects. Experimental findings conclusively demonstrate the algorithm's efficiency in eradicating diverse scratch-type defects within high-voltage cable cross-sections. The average scratch elimination rate surpasses 90%, with an impressive 96.15% achieved on cable sample 4. A series of ablation experiments conducted in this paper substantiate a significant enhancement in cable image quality. 
Notably, the Edge Preservation Index (EPI) exhibits an improvement of approximately 20%, resulting in a substantial boost to conductor count detection accuracy, thus effectively enhancing the quality of high-voltage cable production. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
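A minimal total-variation core, without the paper's multimodal-feature extension, can be written as gradient descent on a smoothed 1-D ROF objective; the smoothing constant eps tempers the non-differentiability of |·|. All parameter values here are illustrative.

```python
def tv_denoise_1d(signal, lam=0.2, step=0.1, iters=500, eps=1e-2):
    """Gradient descent on a smoothed 1-D ROF objective:
    0.5*sum((u-f)^2) + lam*sum(sqrt((u[i+1]-u[i])^2 + eps)).
    eps smooths the non-differentiable |.| so plain gradient descent works."""
    u = list(signal)
    n = len(u)
    for _ in range(iters):
        grad = [u[i] - signal[i] for i in range(n)]   # data-fidelity term
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            g = lam * d / (d * d + eps) ** 0.5        # d/dd of smoothed lam*|d|
            grad[i] -= g
            grad[i + 1] += g
        u = [u[i] - step * grad[i] for i in range(n)]
    return u

total_variation = lambda u: sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))
noisy = [0.0, 0.1, -0.1, 0.05, 1.1, 0.9, 1.0, 1.05]   # noisy step edge
smooth = tv_denoise_1d(noisy)
```

Small oscillations (the scratch analogue) are flattened while the large step (the edge analogue) survives; the staircase trade-off the paper targets is governed by lam and the smoothing.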
35. Time–Frequency Signal Integrity Monitoring Algorithm Based on Temperature Compensation Frequency Bias Combination Model.
- Author
-
Guo, Yu, Li, Zongnan, Gong, Hang, Peng, Jing, and Ou, Gang
- Subjects
- *
SIGNAL integrity (Electronics) , *TIME-frequency analysis , *ATOMIC clocks , *ARTIFICIAL satellites in navigation , *ALGORITHMS , *TIME measurements , *X chromosome - Abstract
To ensure the long-term stable and uninterrupted service of satellite navigation systems, the robustness and reliability of time–frequency systems are crucial. Integrity monitoring is an effective method to enhance the robustness and reliability of time–frequency systems. Time–frequency signals are fundamental for integrity monitoring, with their time differences and frequency biases serving as essential indicators. These indicators are influenced by the inherent characteristics of the time–frequency signals, as well as the links and equipment they traverse. Meanwhile, existing research focuses primarily on monitoring the integrity of the time–frequency signals output by the atomic clock group, neglecting the time–frequency signals generated and distributed by the generation and distribution subsystem. This paper introduces a time–frequency signal integrity monitoring algorithm based on the temperature compensation frequency bias combination model. By analyzing the characteristics of time difference measurements, constructing the temperature compensation frequency bias combination model, and extracting and monitoring noise and frequency bias features from the time difference measurements, the algorithm achieves comprehensive time–frequency signal integrity monitoring. Experimental results demonstrate that the algorithm can effectively detect, identify, and alert users to time–frequency signal faults. Additionally, the model and the integrity monitoring parameters developed in this paper exhibit high adaptability, making them directly applicable to the integrity monitoring of time–frequency signals across various links. Compared with traditional monitoring algorithms, the algorithm proposed in this paper greatly improves the effectiveness, adaptability, and real-time performance of time–frequency signal integrity monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
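As a toy stand-in for the temperature compensation frequency bias combination model above, the sketch below fits a linear bias-vs-temperature model by least squares and flags measurements whose compensated residual exceeds a threshold; the real model and its integrity-monitoring parameters are richer than this.

```python
def fit_temp_compensation(temps, bias):
    """Least-squares fit of bias ~ a + b*T (a simplified stand-in for the
    paper's temperature compensation frequency bias combination model)."""
    n = len(temps)
    mt, mb = sum(temps) / n, sum(bias) / n
    b = sum((t - mt) * (y - mb) for t, y in zip(temps, bias)) \
        / sum((t - mt) ** 2 for t in temps)
    return mb - b * mt, b          # intercept a, slope b

def residual_alarms(temps, bias, a, b, threshold):
    """Flag measurements whose temperature-compensated residual exceeds
    the integrity threshold."""
    return [abs(y - (a + b * t)) > threshold for t, y in zip(temps, bias)]
```

After compensation, a fault shows up as a residual well outside the noise floor, which is the detect-identify-alert chain the abstract describes.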
36. Swin-cryoEM: Multi-class cryo-electron micrographs single particle mixed detection method.
- Author
-
Fang, Kun, Wang, JinLing, Chen, QingFeng, Feng, Xian, Qu, YouMing, Shi, Jiachi, and Xu, Zhuomin
- Subjects
- *
TRANSFORMER models , *VISUAL fields , *PARTICLE swarm optimization , *INFORMATION sharing , *PROBLEM solving , *ALGORITHMS - Abstract
Cryo-electron micrographs have varying sizes, shapes, and distribution densities of individual particles, severe background noise, high levels of impurities, irregular particle shapes, blurred edges, and particle colors similar to the background. Picking single particles from multiple types of cryo-electron micrographs with good adaptability is therefore a current challenge in the field. This paper uses the MixUp hybrid augmentation algorithm to enhance image feature information in the pre-processing stage; builds a feature perception network based on a channel self-attention mechanism in the feed-forward network of the Swin Transformer, achieving adaptive adjustment of self-attention between different single particles and increasing the network's tolerance to noise; incorporates the PReLU activation function to enhance information exchange between pixel blocks of different single particles; and combines the cross-entropy loss with the softmax function to construct a Swin Transformer-based classification network for single-particle detection in cryo-electron micrographs (Swin-cryoEM), achieving mixed detection of multiple types of single particles. The Swin-cryoEM algorithm improves the adaptability, accuracy, and generalization ability of single-particle picking across many types of cryo-electron micrographs and provides high-quality data support for three-dimensional single-particle reconstruction. Ablation and comparison experiments were designed to evaluate Swin-cryoEM in detail on multiple datasets. 
Average precision (AP) is an important evaluation index for the model: Swin-cryoEM reached an optimal AP of 95.5% in the training stage, and its single-particle picking performance was also superior in the prediction stage. The model inherits the advantages of the Swin Transformer detection model and is superior to mainstream models such as Faster R-CNN and YOLOv5 in single-particle detection on cryo-electron micrographs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Methane Retrieval Algorithms Based on Satellite: A Review.
- Author
-
Jiang, Yuhan, Zhang, Lu, Zhang, Xingying, and Cao, Xifeng
- Subjects
- *
REMOTE sensing , *METHANE , *THEMATIC mapper satellite , *GLOBAL warming , *CARBON dioxide , *ALGORITHMS , *SPATIAL resolution - Abstract
As the second most predominant greenhouse gas, methane-targeted emission mitigation holds the potential to decelerate the pace of global warming. Satellite remote sensing is an important monitoring tool, and we review developments in the satellite detection of methane. This paper provides an overview of the various types of satellites, including the various instrument parameters, and describes the different types of satellite retrieval algorithms. In addition, the currently popular methane point source quantification method is presented. Based on existing research, we delineate the classification of methane remote sensing satellites into two overarching categories: area flux mappers and point source imagers. Area flux mappers primarily concentrate on the assessment of global or large-scale methane concentrations, with a further subclassification into active remote sensing satellites (e.g., MERLIN) and passive remote sensing satellites (e.g., TROPOMI, GOSAT), contingent upon the remote sensing methodology employed. Such satellites are mainly based on physical models and the carbon dioxide proxy method for the retrieval of methane. Point source imagers, in contrast, can detect methane point source plumes using their ultra-high spatial resolution. Subcategories within this classification include multispectral imagers (e.g., Sentinel-2, Landsat-8) and hyperspectral imagers (e.g., PRISMA, GF-5), contingent upon their spectral resolution disparities. Area flux mappers are mostly distinguished by their use of physical algorithms, while point source imagers are dominated by data-driven methods. Furthermore, methane plume emissions can be accurately quantified through the utilization of an integrated mass enhancement model. Finally, a prediction of the future trajectory of methane remote sensing satellites is presented, in consideration of the current landscape. This paper aims to provide basic theoretical support for subsequent scientific research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
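The integrated mass enhancement (IME) quantification mentioned in the abstract above reduces to a one-line estimate, Q = U_eff · IME / L, once the plume mask is known. The sketch below assumes column enhancements already converted to kg/m² and treats the effective wind U_eff and plume length scale L as externally calibrated inputs.

```python
def ime_emission_rate(enhancements_kg_m2, pixel_area_m2, u_eff_m_s, plume_length_m):
    """Integrated mass enhancement estimate Q = U_eff * IME / L (kg/s),
    where IME sums the per-pixel column enhancement times pixel area.
    U_eff and L are the usual empirically calibrated inputs."""
    ime_kg = sum(enhancements_kg_m2) * pixel_area_m2   # total excess mass in the plume
    return u_eff_m_s * ime_kg / plume_length_m
```

The data-driven point-source methods discussed above differ mainly in how the plume mask, U_eff, and L are estimated, not in this final bookkeeping step.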
38. Efficient Algorithm for Proportional Lumpability and Its Application to Selfish Mining in Public Blockchains.
- Author
-
Piazza, Carla, Rossi, Sabina, and Smuseva, Daria
- Subjects
- *
POLYNOMIAL time algorithms , *MARKOV processes , *BLOCKCHAINS , *ALGORITHMS , *STOCHASTIC models , *PETRI nets - Abstract
This paper explores the concept of proportional lumpability as an extension of the original definition of lumpability, addressing the challenges posed by the state space explosion problem in computing performance indices for large stochastic models. Lumpability traditionally relies on state aggregation techniques and is applicable to Markov chains demonstrating structural regularity. Proportional lumpability extends this idea, proposing that the transition rates of a Markov chain can be modified by certain factors, resulting in a lumpable new Markov chain. This concept facilitates the derivation of precise performance indices for the original process. This paper establishes the well-defined nature of the problem of computing the coarsest proportional lumpability that refines a given initial partition, ensuring a unique solution exists. Additionally, a polynomial time algorithm is introduced to solve this problem, offering valuable insights into both the concept of proportional lumpability and the broader realm of partition refinement techniques. The effectiveness of proportional lumpability is demonstrated through a case study that consists of designing a model to investigate selfish mining behaviors on public blockchains. This research contributes to a better understanding of efficient approaches for handling large stochastic models and highlights the practical applicability of proportional lumpability in deriving exact performance indices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
39. A scalable blockchain based framework for efficient IoT data management using lightweight consensus.
- Author
- Haque, Ehtisham Ul, Shah, Adil, Iqbal, Jawaid, Ullah, Syed Sajid, Alroobaea, Roobaea, and Hussain, Saddam
- Subjects
- *DATA management, *INTERNET of things, *NETWORK performance, *BLOCKCHAINS, *SCALABILITY, *ALGORITHMS
- Abstract
Recent research has focused on applying blockchain technology to solve security-related problems in Internet of Things (IoT) networks. However, the inherent scalability issues of blockchain technology become apparent in the presence of a vast number of IoT devices and the substantial data generated by these networks. Therefore, in this paper, we use a lightweight consensus algorithm to address these problems. We propose a scalable blockchain-based framework for managing IoT data that caters to a large number of devices. This framework utilizes the Delegated Proof of Stake (DPoS) consensus algorithm to ensure enhanced performance and efficiency in resource-constrained IoT networks. As a lightweight consensus algorithm, DPoS leverages a small number of elected delegates to validate and confirm transactions, thus mitigating the performance and efficiency degradation in blockchain-based IoT networks. In this paper, we implemented an Interplanetary File System (IPFS) for distributed storage and used Docker to evaluate the network performance in terms of throughput, latency, and resource utilization. We divided our analysis into four parts: latency, throughput, resource utilization, and file upload time and speed in the distributed storage evaluation. Our empirical findings demonstrate that our framework exhibits low latency, measuring less than 0.976 ms. The proposed technique outperforms Proof of Stake (PoS), a state-of-the-art consensus technique. We also demonstrate that the proposed approach is useful in IoT applications where low latency or resource efficiency is required. [ABSTRACT FROM AUTHOR]
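The delegate-based validation that makes DPoS lightweight can be illustrated with a minimal sketch; the vote tallies and slot schedule are hypothetical, and production DPoS chains add stake-weighted voting rounds, rewards, and delegate rotation:

```python
def elect_delegates(stake_votes, n_delegates):
    """Pick the top-N nodes by delegated stake (ties broken by node id)."""
    ranked = sorted(stake_votes.items(), key=lambda kv: (-kv[1], kv[0]))
    return [node for node, _ in ranked[:n_delegates]]

def producer_for_slot(delegates, slot):
    """Round-robin block production schedule across the elected delegates."""
    return delegates[slot % len(delegates)]

# Hypothetical stake votes for four candidate nodes; elect three delegates
votes = {"a": 50, "b": 90, "c": 10, "d": 70}
delegates = elect_delegates(votes, 3)
```

Because only the elected delegates confirm blocks, validation cost stays constant as the number of IoT devices grows, which is the scalability argument the abstract makes.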
- Published
- 2024
40. Image convolution techniques integrated with YOLOv3 algorithm in motion object data filtering and detection.
- Author
- Cheng, Mai and Liu, Mengyuan
- Subjects
- *TRACKING algorithms, *FILTERS & filtration, *VIDEO surveillance, *ALGORITHMS, *IMAGE segmentation, *RESEARCH personnel, *JOGGING
- Abstract
In order to address the challenges of identifying, detecting, and tracking moving objects in video surveillance, this paper emphasizes image-based dynamic entity detection. It delves into the complexities of numerous moving objects, dense targets, and intricate backgrounds. Leveraging the You Only Look Once (YOLOv3) algorithm framework, this paper proposes improvements in image segmentation and data filtering to address these challenges. These enhancements form a novel multi-object detection algorithm based on an improved YOLOv3 framework, specifically designed for video applications. Experimental validation demonstrates the feasibility of this algorithm, with success rates exceeding 60% for videos such as "jogging", "subway", "video 1", and "video 2". Notably, the detection success rates for "jogging" and "video 1" consistently surpass 80%, indicating outstanding detection performance. Although the accuracy slightly decreases for "Bolt" and "Walking2", success rates still hover around 70%. Comparative analysis with other algorithms reveals that this method's tracking accuracy surpasses that of particle filters, Discriminative Scale Space Tracker (DSST), and Scale Adaptive Multiple Features (SAMF) algorithms, with an accuracy of 0.822. This indicates superior overall performance in target tracking. Therefore, the improved YOLOv3-based multi-object detection and tracking algorithm demonstrates robust filtering and detection capabilities in noise-resistant experiments, making it highly suitable for various detection tasks in practical applications. It can address inherent limitations such as missed detections, false positives, and imprecise localization. These improvements significantly enhance the efficiency and accuracy of target detection, providing valuable insights for researchers in the field of object detection, tracking, and recognition in video surveillance. [ABSTRACT FROM AUTHOR]
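Success rates like those quoted above are conventionally computed as the fraction of frames whose predicted box overlaps the ground truth above an IoU threshold; the sketch below shows that metric, not the paper's exact evaluation code:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_rate(predictions, ground_truth, threshold=0.5):
    """Fraction of frames where the predicted box overlaps ground truth
    with IoU at or above the threshold."""
    hits = sum(1 for p, g in zip(predictions, ground_truth)
               if iou(p, g) >= threshold)
    return hits / len(ground_truth)
```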
- Published
- 2024
41. Evaluation of a Condition Monitoring Algorithm for Early Bearing Fault Detection.
- Author
- Gruber, Hannes, Fuchs, Anna, and Bader, Michael
- Subjects
- *ROLLER bearings, *BREAKDOWNS (Machinery), *OUTLIER detection, *TRACKING algorithms, *FAILED states, *ALGORITHMS, *ABSOLUTE value, *FAST Fourier transforms
- Abstract
Roller bearings are critical components in various mechanical systems, and the timely detection of potential failures is essential for preventing costly downtimes and avoiding substantial machinery breakdown. This research focuses on finding and verifying a robust method that can detect failures early, without creating false positive failure states. Therefore, this paper introduces a novel algorithm for the early detection of roller bearing failures, particularly tailored to high-precision bearings and automotive test bed systems. The featured method (AFI—Advanced Failure Indicator) utilizes the Fast Fourier Transform (FFT) of wideband accelerometers to calculate the spectral content of vibration signals emitted by roller bearings. By calculating the frequency bands and tracking the movement of these bands within the spectra, the method provides an indicator of the machinery's health, mainly focusing on the early stages of bearing failure. The calculated channel can be used as a trend indicator, enabling the method to identify subtle deviations associated with impending failures. The AFI algorithm incorporates a non-static limit through moving average calculations and volatility analysis methods to determine critical changes in the signal. This thresholding mechanism ensures the algorithm's responsiveness to variations in operating conditions and environmental factors, contributing to its robustness in diverse industrial settings. Further refinement was achieved through an outlier detection filter, which reduces false positives and enhances the algorithm's accuracy in identifying genuine deviations from the normal operational state. To benchmark the developed algorithm, it was compared with three industry-standard algorithms: VRMS calculations per ISO 10813-3, Mean Absolute Value of Extremums (MAVE), and Envelope Frequency Band (EFB). This comparative analysis aimed to evaluate the efficacy of the novel algorithm against the established methods in the field, providing valuable insights into its potential advantages and limitations. In summary, this paper presents an innovative algorithm for the early detection of roller bearing failures, leveraging FFT-based spectral analysis, trend monitoring, adaptive thresholding, and outlier detection. Its ability to confirm the first failure state underscores the algorithm's effectiveness. [ABSTRACT FROM AUTHOR]
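The non-static limit described above, a moving average plus a volatility term, can be sketched as follows; the window length and the multiplier k are hypothetical tuning values, and the band-energy trend would in practice come from the FFT band tracking step:

```python
import statistics

def adaptive_alarm(band_energies, window=5, k=3.0):
    """Flag indices where the band-energy trend exceeds an adaptive
    threshold: moving mean + k * moving standard deviation computed
    over the preceding window of samples."""
    alarms = []
    for i in range(window, len(band_energies)):
        hist = band_energies[i - window:i]
        mean = statistics.fmean(hist)        # moving average of recent trend
        std = statistics.pstdev(hist)        # volatility of recent trend
        if band_energies[i] > mean + k * std:
            alarms.append(i)
    return alarms
```

Because the threshold is recomputed from the recent window, it tracks slow drifts in operating conditions while still reacting to a sudden energy jump.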
- Published
- 2024
42. A Fast Positioning Method for Docking Vehicles in Mixed Traffic Scenarios.
- Author
- Yi Xu, Jinxin Yu, Lei Wang, Dong Guo, Shaohong Ding, Teng Sun, Juan Ni, Shuyue Shi, Xiangcun Kong, Song Gao, and Yuqiong Wang
- Subjects
- *PIXELS, *TRACKING radar, *DYNAMIC positioning systems, *TOWING, *CAMERAS, *LUGGAGE, *ALGORITHMS
- Abstract
Positioning technology is one of the key steps of automatic docking. At present, vehicle docking technology is widely used for airport baggage cars, towing trailers and other applications, and the docking process is mostly performed manually. To realize automatic docking in mixed traffic scenes, this paper proposes a fast positioning method for docking vehicles. First, using the regression relationship between the template size in the camera's field of view and the actual distance, FFT template matching and threshold judgment are employed to locate, track, and match the template and to verify the threshold results, achieving long-distance template positioning and ROI extraction. Then, based on the regression relationship between AprilTag pixel size in the camera's field of view and actual distance, an experiment is designed to determine the minimum pixel count at which an AprilTag can be recognized. Finally, the real-time pixel size of the AprilTag is calculated from the real-time ranging results and compared with this minimum recognition pixel count to judge whether the recognition conditions are met before carrying out the subsequent AprilTag positioning. The experimental results demonstrate that the method proposed in this paper can achieve rapid and accurate positioning of a target docking vehicle at a long distance, while reducing the mismatch rate. On this basis, by designing the recognition conditions for the AprilTag, rapid identification and positioning of the AprilTag is realized, which reduces the traversal time of the algorithm. The feasibility of this positioning method is verified by an attitude angle error experiment, which provides supporting conditions for subsequent docking path planning and vehicle control. [ABSTRACT FROM AUTHOR]
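The recognition condition, comparing the tag's real-time pixel size predicted from distance against a minimum recognizable pixel count, can be sketched as below; the scale constant and threshold are hypothetical stand-ins for the regression and the minimum-pixel experiment described in the abstract:

```python
def apparent_pixels(distance_m, scale_px_m=950.0):
    """Pinhole-style approximation: the tag's pixel width falls off as
    1/distance. scale_px_m is a hypothetical camera/tag constant that a
    real system would fit by regression."""
    return scale_px_m / distance_m

def can_decode(distance_m, min_pixels=25.0, scale_px_m=950.0):
    """Attempt AprilTag detection only once the tag projects to at least
    the minimum recognizable number of pixels."""
    return apparent_pixels(distance_m, scale_px_m) >= min_pixels
```

Skipping detection below the pixel threshold is what saves the traversal time the abstract mentions: the detector only runs once recognition is geometrically plausible.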
- Published
- 2024
43. Simplified V/f Control Algorithm for Reduction of Current Fluctuations in Variable-Speed Operation of Induction Motors.
- Author
- Son, Dong-Hyeok and Kim, Sung-An
- Subjects
- *CURRENT fluctuations, *INDUCTION motors, *HIGHPASS electric filters, *MOTOR drives (Electric motors), *ALGORITHMS
- Abstract
This paper introduces a straightforward control strategy aimed at reducing current fluctuations in the low-frequency domain of open-loop V/f control in induction motor drives. Traditional control techniques necessitate the addition of a current compensator based on motor parameters and the use of digital filters such as band-pass or high-pass filters. These methods, however, rely on precise motor parameters and involve complex filter design and implementation. The proposed control is capable of suppressing current fluctuations without controlling the slip of the induction motor. The proposed control strategy generates the forced rotation angle and command input voltage using the V/f block and outputs the d-axis voltage using a proportional-integral controller to keep the d-axis current constant at zero. The difference between the command input voltage and the d-axis voltage is applied as the q-axis voltage, and the result is applied through SVPWM. To verify its effectiveness, the proposed control is implemented and analyzed in a power simulation, based on an analysis of the causes of current fluctuations in the induction motor. Finally, the effect of suppressing current fluctuations of the induction motor is verified through experimental results. In the 10~19 Hz range, where the conventional V/f control method resulted in current fluctuation rates exceeding 10% and peaking at 113.3% at 13 Hz, the proposed method suppressed the fluctuation rate to below 8.6% across all frequencies. This paper validates the effectiveness of the proposed control strategy through these results. [ABSTRACT FROM AUTHOR]
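A minimal sketch of the control law described above: the V/f block sets the command voltage magnitude, a PI loop drives the measured d-axis current to zero, and the remainder becomes the q-axis voltage. The gains and V/f ratio are hypothetical, and SVPWM and the forced-angle generation are omitted:

```python
class SimpleVf:
    """Open-loop V/f command with a PI loop that holds the d-axis
    current at zero; the remaining voltage is applied on the q-axis."""

    def __init__(self, v_per_hz, kp, ki, dt):
        self.v_per_hz, self.kp, self.ki, self.dt = v_per_hz, kp, ki, dt
        self.integral = 0.0

    def step(self, freq_hz, i_d_measured):
        v_cmd = self.v_per_hz * freq_hz     # command magnitude from the V/f ratio
        error = 0.0 - i_d_measured          # regulate i_d toward zero
        self.integral += error * self.dt
        v_d = self.kp * error + self.ki * self.integral
        v_q = v_cmd - v_d                   # remainder is applied on the q-axis
        return v_d, v_q

# One control step at 13 Hz with a (hypothetical) measured d-axis current of 0.5 A
ctrl = SimpleVf(v_per_hz=4.0, kp=2.0, ki=0.0, dt=0.001)
v_d, v_q = ctrl.step(13.0, 0.5)
```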
- Published
- 2024
44. Canadian Association of Radiologists White Paper on Ethical and Legal Issues Related to Artificial Intelligence in Radiology.
- Author
- Jaremko, Jacob L., Azar, Marleine, Bromwich, Rebecca, Lum, Andrea, Alicia Cheong, Li Hsia, Gibert, Martin, Laviolette, François, Gray, Bruce, Reinhold, Caroline, Cicero, Mark, Chong, Jaron, Shaw, James, Rybicki, Frank J., Hurrell, Casey, Lee, Emil, and Tang, An
- Subjects
- *ARTIFICIAL intelligence laws, *ACQUISITION of property, *ALGORITHMS, *ARTIFICIAL intelligence, *AUTONOMY (Psychology), *CONCEPTUAL structures, *MEDICAL ethics, *MEDICAL practice, *MEDICAL specialties & specialists, *PRIVACY, *RADIOLOGISTS, *DATA security
- Abstract
Artificial intelligence (AI) software that analyzes medical images is becoming increasingly prevalent. Unlike earlier generations of AI software, which relied on expert knowledge to identify imaging features, machine learning approaches automatically learn to recognize these features. However, the promise of accurate personalized medicine can only be fulfilled with access to large quantities of medical data from patients. This data could be used for purposes such as predicting disease, diagnosis, treatment optimization, and prognostication. Radiology is positioned to lead development and implementation of AI algorithms and to manage the associated ethical and legal challenges. This white paper from the Canadian Association of Radiologists provides a framework for study of the legal and ethical issues related to AI in medical imaging, related to patient data (privacy, confidentiality, ownership, and sharing); algorithms (levels of autonomy, liability, and jurisprudence); practice (best practices and current legal framework); and finally, opportunities in AI from the perspective of a universal health care system. [ABSTRACT FROM AUTHOR]
- Published
- 2019
45. VIBRANT-WALK: An algorithm to detect plagiarism of figures in academic papers.
- Author
- Parmar, Shashank and Jain, Bhavya
- Subjects
- *PLAGIARISM, *COMPUTER algorithms, *ALGORITHMS, *COMPUTER vision, *RANDOM walks
- Abstract
Detecting plagiarism in academic papers is crucial for maintaining academic integrity, preserving the originality of published work, and safeguarding intellectual property. While existing applications excel at text plagiarism detection, they fall short when it comes to image plagiarism. This paper introduces a novel algorithm, named "VIBRANT-WALK," designed to detect image plagiarism in academic manuscripts. The challenge of identifying plagiarized images is formidable, requiring a unique approach. Traditional Computer Vision algorithms, proficient in image similarity tasks, face limitations in determining whether an image has been previously used in an article. To address this, the proposed algorithm leverages a repository of all published article pages, focusing on absolute identicality rather than image similarity. The algorithm comprises two stages. In the first stage, a "Vibrancy Matrix" is created through image preprocessing, aiding in contour determination. The second stage involves pixel-by-pixel comparison with images from published manuscripts. To enhance efficiency, the algorithm initiates comparisons from the pixel with the highest score in the Vibrancy Matrix, followed by pixel comparisons through random walks, significantly reducing complexity. To conduct the study, a custom dataset was compiled from 69 research articles, capturing snapshots of each page and figure. Overall, we present 485 unique test cases where we can test the accuracy and efficiency of the algorithm. The lack of publicly available datasets necessitated this approach. The proposed algorithm outperformed the existing models and algorithms in this field by achieving an overall accuracy of 94.8% on the collated dataset, identifying 460 instances of plagiarism out of the 485 test cases. The algorithm also demonstrated a 100% accuracy rate in avoiding false positives. [ABSTRACT FROM AUTHOR]
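The two-stage idea, starting at the highest-scoring pixel in the Vibrancy Matrix and then comparing pixels along a random walk with an early exit on any mismatch, can be sketched as follows. This is a toy grayscale version with a precomputed vibrancy matrix; the paper's preprocessing and contour steps are omitted:

```python
import random

def identical_by_walk(img_a, img_b, vibrancy, steps=200, seed=0):
    """Random-walk pixel comparison: start at the highest-vibrancy pixel
    and wander, failing fast on the first mismatch (the check is for
    absolute identicality, not image similarity)."""
    if len(img_a) != len(img_b) or len(img_a[0]) != len(img_b[0]):
        return False
    h, w = len(img_a), len(img_a[0])
    # start at the pixel with the highest vibrancy score
    r, c = max(((i, j) for i in range(h) for j in range(w)),
               key=lambda p: vibrancy[p[0]][p[1]])
    rng = random.Random(seed)
    for _ in range(steps):
        if img_a[r][c] != img_b[r][c]:
            return False
        dr, dc = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        r = min(h - 1, max(0, r + dr))
        c = min(w - 1, max(0, c + dc))
    return True

# Toy 2x2 images; the vibrancy matrix makes the walk start bottom-right
vibrancy = [[0, 0], [0, 9]]
same = identical_by_walk([[1, 2], [3, 4]], [[1, 2], [3, 4]], vibrancy)
diff = identical_by_walk([[1, 2], [3, 4]], [[1, 2], [3, 5]], vibrancy)
```

Sampling along a walk instead of scanning every pixel is what reduces the per-comparison complexity, at the cost of only probabilistic coverage of the image.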
- Published
- 2024
46. A Parameterization Approach for the Dielectric Response Model of Oil Paper Insulation Using FDS Measurements.
- Author
- Yang, Feng, Du, Lin, Yang, Lijun, Wei, Chao, Wang, Youyuan, Ran, Liman, and He, Peng
- Subjects
- *DIELECTRICS, *HIGH voltages, *ALGORITHMS, *ELECTRIC capacity, *ELECTRIC potential
- Abstract
To facilitate better interpretation of dielectric response measurements, thereby providing numerical evidence for condition assessments of oil-paper-insulated equipment in high-voltage alternating current (HVAC) transmission systems, a novel approach is presented to estimate the parameters in the extended Debye model (EDM) using wideband frequency domain spectroscopy (FDS). A syncretic algorithm that integrates a genetic algorithm (GA) and the Levenberg-Marquardt (L-M) algorithm is introduced in the present study to parameterize the EDM using FDS measurements of a real-life 126 kV oil-impregnated paper (OIP) bushing under different controlled temperatures. To address the uncertainty of the EDM structure due to the variable number of branches, Akaike's information criterion (AIC) is employed to determine the model order. For verification, a comparative analysis of the FDS reconstruction and of the results of transforming FDS to polarization-depolarization current (PDC)/return voltage measurement (RVM) is presented. The comparison demonstrates good agreement between the measured and reconstructed spectroscopies of complex capacitance and tan δ over the full tested frequency band (10⁻⁴ Hz to 10³ Hz), with goodness of fit over 0.99. Deviations between the tested and modelled PDC/RVM derived from FDS are then discussed. Compared with previous studies that parameterize the model using time-domain dielectric responses, the proposed method solves the problematic matching between the EDM and FDS, especially over a wide frequency band, and therefore provides a basis for quantitative insulation condition assessment of OIP-insulated apparatus in energy systems. [ABSTRACT FROM AUTHOR]
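The forward model being fitted, an extended Debye model whose complex capacitance is the geometric capacitance plus a sum of relaxation branches c_i/(1 + jωτ_i), can be evaluated as below; the GA plus Levenberg-Marquardt fitting loop itself is omitted, and the branch values are hypothetical:

```python
def edm_capacitance(omega, c_geo, branches):
    """Extended Debye model: geometric capacitance plus a set of
    relaxation branches, each contributing c_i / (1 + j*omega*tau_i)."""
    c = complex(c_geo, 0.0)
    for c_i, tau_i in branches:
        c += c_i / complex(1.0, omega * tau_i)
    return c

def tan_delta(omega, c_geo, branches):
    """Dissipation factor tan δ = C''/C' where C*(ω) = C'(ω) - jC''(ω)."""
    c = edm_capacitance(omega, c_geo, branches)
    return -c.imag / c.real
```

A parameterization routine would adjust c_geo, c_i, and tau_i until these model spectra match the measured FDS curves across the frequency band.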
- Published
- 2018
47. Reply to "Describing center of pressure movement in stabilometry by ellipse area approximation" from Agnieszka Gołąb concerning the paper "A Review of Center of Pressure (COP) Variables to Quantify Standing Balance in Elderly People: Algorithms and Open Access Code"
- Author
- Quijoux, Flavien and Nicolaï, Alice
- Subjects
- *OLDER people, *EQUILIBRIUM testing, *ALGORITHMS, *WAVELETS (Mathematics)
- Abstract
Letter to the Editor replying to "Describing center of pressure movement in stabilometry by ellipse area approximation" from Agnieszka Gołąb, concerning the paper "A Review of Center of Pressure (COP) Variables to Quantify Standing Balance in Elderly People: Algorithms and Open Access Code". Our choice was actually to present the formula of the prediction ellipse area in the article, as it indeed does not strongly depend on the sample size as the confidence ellipse area does. [Extracted from the article]
- Published
- 2022
48. The contribution of cause-effect link to representing the core of scientific paper—The role of Semantic Link Network.
- Author
- Cao, Mengyun, Sun, Xiaoping, and Zhuge, Hai
- Subjects
- *COMPLEXITY (Philosophy), *CAUSATION (Philosophy), *SEMANTICS, *RESEARCH, *PHILOSOPHY
- Abstract
The Semantic Link Network is a general semantic model for modeling the structure and the evolution of complex systems. Various semantic links play different roles in rendering the semantics of a complex system. One of the basic semantic links represents the cause-effect relation, which plays an important role in representation and understanding. This paper verifies the role of the Semantic Link Network in representing the core of text by investigating the contribution of cause-effect links to representing the core of scientific papers. The research was carried out in the following steps: (1) Two propositions on the contribution of cause-effect links in rendering the core of a paper are proposed and verified through a statistical survey, which shows that the sentences on cause-effect links cover about 65% of key words within each paper on average. (2) An algorithm based on syntactic patterns is designed for automatically extracting cause-effect links from scientific papers; it recalls about 70% of manually annotated cause-effect links on average, indicating that the result adapts to the scale of data sets. (3) The effects of four schemes of incorporating cause-effect links into existing instances of the Semantic Link Network for enhancing the summarization of scientific papers are investigated. The experiments show that the quality of the summaries is significantly improved, which verifies the role of semantic links. The significance of this research lies in two aspects: (1) it verifies that the Semantic Link Network connects the important concepts to render the core of text; and (2) it provides evidence for realizing content services such as summarization, recommendation and question answering based on the Semantic Link Network, and it can inspire relevant research on content computing. [ABSTRACT FROM AUTHOR]
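Syntactic-pattern extraction of cause-effect links, as in step (2), can be sketched with a few regular expressions; these three patterns are illustrative only and far smaller than the pattern set a real extractor would need:

```python
import re

# A few illustrative syntactic patterns for cause-effect cues
CAUSE_PATTERNS = [
    re.compile(r"(?P<effect>[^.]+?)\s+because\s+(?P<cause>[^.]+)", re.I),
    re.compile(r"(?P<cause>[^.]+?)\s+leads to\s+(?P<effect>[^.]+)", re.I),
    re.compile(r"(?P<cause>[^.]+?)\s+results in\s+(?P<effect>[^.]+)", re.I),
]

def extract_cause_effect(sentence):
    """Return (cause, effect) pairs matched by the syntactic patterns."""
    links = []
    for pat in CAUSE_PATTERNS:
        m = pat.search(sentence)
        if m:
            links.append((m.group("cause").strip(), m.group("effect").strip()))
    return links
```

Each extracted pair would then become a cause-effect link between the corresponding sentence nodes in the Semantic Link Network instance.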
- Published
- 2018
49. Quantifying the impact of scholarly papers based on higher-order weighted citations.
- Author
- Bai, Xiaomei, Zhang, Fuli, Hou, Jie, Lee, Ivan, Kong, Xiangjie, Tolba, Amr, and Xia, Feng
- Subjects
- *CITATION analysis, *SCHOLARLY publishing, *BIBLIOMETRICS, *SIMULATION methods & models, *ALGORITHMS
- Abstract
Quantifying the impact of a scholarly paper is of great significance, yet the effect of the geographical distance between cited papers has not been explored. In this paper, we examine 30,596 papers published in Physical Review C and identify the relationship between citations and the geographical distances between author affiliations. Subsequently, a relative citation weight is applied to assess the impact of a scholarly paper. A higher-order weighted quantum PageRank algorithm is also developed to address the behavior of multiple-step citation flow. Capturing the citation dynamics with higher-order dependencies reveals the actual impact of papers, including necessary self-citations that are sometimes excluded in prior studies. Quantum PageRank is utilized in this paper to help differentiate nodes whose PageRank values are identical. [ABSTRACT FROM AUTHOR]
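A plain weighted PageRank over a citation graph, where each edge weight could encode a relative citation weight derived from geographical distance, can be sketched as follows. This is the first-order classical version only; the paper's higher-order variant conditions on multi-step citation paths, and its quantum PageRank is not shown:

```python
def weighted_pagerank(weights, damping=0.85, iters=100):
    """Power iteration on a weighted citation graph.
    weights[u] maps cited paper v -> citation weight (e.g. a relative
    weight based on affiliation distance)."""
    nodes = sorted(set(weights) | {v for out in weights.values() for v in out})
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u, out in weights.items():
            total = sum(out.values())
            if total == 0:
                continue
            for v, w in out.items():
                # distribute rank proportionally to edge weight
                new[v] += damping * rank[u] * (w / total)
        # papers citing nothing spread their rank uniformly
        dangling = sum(rank[u] for u in nodes if not weights.get(u))
        for u in nodes:
            new[u] += damping * dangling / n
        rank = new
    return rank

# Toy graph: papers "a" and "c" both cite "b"
ranks = weighted_pagerank({"a": {"b": 1.0}, "c": {"b": 1.0}})
```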
- Published
- 2018
50. A collaborative approach for research paper recommender system.
- Author
- Haruna, Khalid, Akmar Ismail, Maizatul, Damiasih, Damiasih, Sutopo, Joko, and Herawan, Tutut
- Subjects
- *CITATION analysis, *SCIENCE & state, *SOCIAL network analysis, *SOCIAL networks, *COMPUTER networks
- Abstract
Research paper recommenders emerged over the last decade to ease finding publications relating to researchers’ area of interest. The challenge was not just to provide researchers with very rich publications at any time, any place and in any form but to also offer the right publication to the right researcher in the right way. Several approaches exist in handling paper recommender systems. However, these approaches assumed the availability of the whole contents of the recommending papers to be freely accessible, which is not always true due to factors such as copyright restrictions. This paper presents a collaborative approach for research paper recommender system. By leveraging the advantages of collaborative filtering approach, we utilize the publicly available contextual metadata to infer the hidden associations that exist between research papers in order to personalize recommendations. The novelty of our proposed approach is that it provides personalized recommendations regardless of the research field and regardless of the user’s expertise. Using a publicly available dataset, our proposed approach has recorded a significant improvement over other baseline methods in measuring both the overall performance and the ability to return relevant and useful publications at the top of the recommendation list. [ABSTRACT FROM AUTHOR]
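Collaborative filtering over publicly available metadata can be sketched as below; the metadata sets and the similarity choice (binary cosine over shared references or keywords) are illustrative, not the paper's exact formulation:

```python
def cosine(a, b):
    """Cosine similarity between two sets of metadata items (e.g. shared
    references or keywords) treated as binary vectors."""
    inter = len(a & b)
    return inter / ((len(a) * len(b)) ** 0.5) if a and b else 0.0

def recommend(target, metadata, k=2):
    """Rank the other papers by metadata similarity to the target paper,
    inferring hidden associations without needing the full text."""
    scores = [(other, cosine(metadata[target], items))
              for other, items in metadata.items() if other != target]
    scores.sort(key=lambda s: (-s[1], s[0]))  # highest similarity first
    return [p for p, _ in scores[:k]]

# Hypothetical contextual metadata (keywords/references) for three papers
metadata = {"p1": {"ml", "cf", "rec"}, "p2": {"ml", "rec"}, "p3": {"biology"}}
```

Because only metadata is compared, the approach works even when copyright restrictions make the papers' full contents unavailable, which is the gap the abstract identifies.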
- Published
- 2017