Search Results (4,245 results)
2. HTPosum: Heterogeneous Tree Structure augmented with Triplet Positions for extractive Summarization of scientific papers
- Author: Zhu, Zhenfang, Gong, Shuai, Qi, Jiangtao, and Tong, Chunling
- Published
- 2024
- Full Text
- View/download PDF
3. Integrity verification for scientific papers: The first exploration of the text
- Author: Shi, Xiang, Liu, Yinpeng, Liu, Jiawei, Cheng, Qikai, and Lu, Wei
- Published
- 2024
- Full Text
- View/download PDF
4. Developing a fuzzy optimized model for selecting a maintenance strategy in the paper industry: An integrated FGP-ANP-FMEA approach
- Author: Behnia, Foroogh, Zare Ahmadabadi, Habib, Schuelke-Leech, Beth-Anne, and Mirhassani, Mitra
- Published
- 2023
- Full Text
- View/download PDF
5. MARec: A multi-attention aware paper recommendation method
- Author: Wang, Jie, Zhou, Jingya, Wu, Zhen, and Sun, Xigang
- Published
- 2023
- Full Text
- View/download PDF
6. OpenMetaRec: Open-metapath heterogeneous dual attention network for paper recommendation
- Author: Xiao, Xia, Huang, Jiaying, Wang, Haobo, Zhang, Chengde, and Chen, Xinzhong
- Published
- 2023
- Full Text
- View/download PDF
7. HetTreeSum: A Heterogeneous Tree Structure-based Extractive Summarization Model for Scientific Papers
- Author: Zhao, Jintao, Yang, Libin, and Cai, Xiaoyan
- Published
- 2022
- Full Text
- View/download PDF
8. Mutually reinforced network embedding: An integrated approach to research paper recommendation
- Author: Mei, Xin, Cai, Xiaoyan, Xu, Sen, Li, Wenjie, Pan, Shirui, and Yang, Libin
- Published
- 2022
- Full Text
- View/download PDF
9. PSRMTE: Paper submission recommendation using mixtures of transformer
- Author: Nguyen, Dac Huu, Huynh, Son Thanh, Dinh, Cuong Viet, Huynh, Phong Tan, and Nguyen, Binh Thanh
- Published
- 2022
- Full Text
- View/download PDF
10. Extraction and evaluation of formulaic expressions used in scholarly papers
- Author: Iwatsuki, Kenichi, Boudin, Florian, and Aizawa, Akiko
- Published
- 2022
- Full Text
- View/download PDF
11. Citation recommendation using semantic representation of cited papers’ relations and content
- Author: Zhang, Jinzhu and Zhu, Lipeng
- Published
- 2022
- Full Text
- View/download PDF
12. HTPosum: Heterogeneous Tree Structure augmented with Triplet Positions for extractive Summarization of scientific papers
- Author: Zhu, Zhenfang, Gong, Shuai, Qi, Jiangtao, and Tong, Chunling
- Published
- 2023
- Full Text
- View/download PDF
13. Integrity Verification for Scientific Papers: The first exploration of the text
- Author: Shi, Xiang, Liu, Yinpeng, Liu, Jiawei, Cheng, Qikai, and Lu, Wei
- Published
- 2023
- Full Text
- View/download PDF
14. HetTreeSum: A Heterogeneous Tree Structure-based Extractive Summarization Model for Scientific Papers
- Author: Jintao Zhao, Libin Yang, and Xiaoyan Cai
- Subjects: Artificial Intelligence, General Engineering, Computer Science Applications
- Published
- 2022
15. Mutually reinforced network embedding: An integrated approach to research paper recommendation
- Author: Xin Mei, Xiaoyan Cai, Sen Xu, Wenjie Li, Shirui Pan, and Libin Yang
- Subjects: Artificial Intelligence, General Engineering, Computer Science Applications
- Published
- 2022
16. PSRMTE: Paper submission recommendation using mixtures of transformer
- Author: Dac Huu Nguyen, Son Thanh Huynh, Cuong Viet Dinh, Phong Tan Huynh, and Binh Thanh Nguyen
- Subjects: Artificial Intelligence, General Engineering, Computer Science Applications
- Published
- 2022
17. Citation recommendation using semantic representation of cited papers’ relations and content
- Author: Lipeng Zhu and Jinzhu Zhang
- Subjects: Information retrieval, Computer science, Language change, General Engineering, Computer Science Applications, Artificial Intelligence, Content (measure theory), Similarity (psychology), Selection (linguistics), Graph (abstract data type), Macro, Representation (mathematics), Citation, General Literature: Reference (e.g., dictionaries, encyclopedias, glossaries)
- Abstract
Citation recommendation can help researchers quickly find supplementary or alternative references in massive academic resources. Current research on citation recommendation mainly focuses on the citing papers, so the enormous body of cited papers is ignored, including the relations among cited papers and the citation contexts in which they are cited by citing papers. Moreover, a cited paper's content is often denoted by its original title and abstract, which is hard to acquire and rarely reflects different citation motivations. Furthermore, the most appropriate method for semantic representation of cited papers' relations and content is uncertain. Therefore, this paper studies citation recommendation from the perspective of the semantic representation of cited papers' relations and content. Firstly, four forms of citation context are designed and extracted as cited papers' content, taking citation motivations into account, and co-citation relationships are extracted as cited papers' relations. Secondly, 132 methods are designed for generating semantic vectors of cited papers, including four network embedding methods, 16 methods that combine four text representation algorithms with four forms of citation content, and 112 fusion methods. Finally, similarity among cited papers is calculated for citation recommendation, and a quantitative evaluation method based on link prediction is designed to find the most appropriate form of citation content and the optimal method. The results show that doc2vecC (Document to Vector through Corruption) with the CS&SS (Current Sentences and Surrounding Sentences) form performs best: its AUC (Area Under Curve) and MAP (Macro Average Precision) reach 0.877 and 0.889, improvements of 0.462 and 0.370 over the worst-performing method. This performance is slightly improved by parameter adjustment, and a case study further demonstrates the effectiveness of the method.
In addition, among the four forms of cited papers' content, CS&SS performs best in almost all methods. Furthermore, the fusion methods do not always perform better than the single methods; doc2vecC (CS&SS) performs better than the best fusion method, GCN (Graph Convolutional Network). These results not only prove the effectiveness of citation recommendation from the perspective of the cited paper, but also provide helpful suggestions for method selection and citation content selection. The data and conclusions can be extended to other text-mining tasks. This is preliminary research that needs to be studied further in other domains using emerging semantic representation methods.
- Published
- 2022
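The core recommendation step described in the abstract above — represent each cited paper as a semantic vector, then rank candidates by similarity — can be sketched as follows. This is a minimal illustration with toy hand-made vectors standing in for doc2vecC output; it is not the paper's actual pipeline or its 132-method comparison:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two semantic vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def recommend(target, vectors, k=2):
    """Rank candidate cited papers by cosine similarity to a target paper."""
    scores = {pid: cosine(vectors[target], vec)
              for pid, vec in vectors.items() if pid != target}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy 4-dimensional "semantic vectors" (hypothetical paper IDs p1..p4).
vectors = {
    "p1": np.array([1.0, 0.9, 0.1, 0.0]),
    "p2": np.array([0.9, 1.0, 0.0, 0.1]),
    "p3": np.array([0.0, 0.1, 1.0, 0.9]),
    "p4": np.array([0.1, 0.0, 0.9, 1.0]),
}
print(recommend("p1", vectors, k=1))  # → ['p2']: most similar to p1
```

In the paper's link-prediction evaluation, such similarity rankings would then be scored against held-out co-citation links (AUC/MAP); here only the ranking step is shown.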
18. Multi-objective closed-loop supply chain network design: A novel robust stochastic, possibilistic, and flexible approach.
- Author: Hosseini Dehshiri, Seyyed Jalaladdin, Amiri, Maghsoud, Olfat, Laya, and Pishvaee, Mir Saman
- Subjects: Supply chains, Paper products, Carbon emissions, Environmental auditing, Sustainable development, Remanufacturing
- Abstract
• Offering a novel fuzzy robust approach for closed-loop supply chain network design.
• Examining the hybrid uncertainties and flexibility of constraints in the problem.
• Considering economic, responsibility, and environmental subjects in modeling.
• Using the interactive fuzzy programming approach to solve the multi-objective model.
• Introducing a new application for stone paper closed-loop supply chain network design.
Stone paper is widely used in various fields; its production requires no water consumption or tree cutting, and stone paper products are easily recyclable and recoverable. Given the importance of developing and using stone paper and of attending to environmental matters, Closed-Loop Supply Chain Network Design (CLSCND) is very important for stone paper products. Moreover, because of epistemic and randomness uncertainties, uncertainties in the Objective Function (OF), and flexible constraints in real-world CLSCND, this paper introduces a novel Mixed Robust Stochastic, Possibilistic, and Flexible Programming (MRSPFP) approach based on credibility theory. The different attitudes of the Decision-Makers (DMs) are addressed by a more flexible measurement of the optimistic and pessimistic parameters using the credibility criterion. A comprehensive procedure is proposed for stone paper CLSCND that minimizes costs, increases responsiveness by minimizing transit time between facilities, and takes environmental concerns into account by minimizing carbon emissions. The model is solved using an interactive fuzzy programming solution procedure and the Best-Worst Method (BWM). The results of sensitivity analysis, the effects of changing the problem parameters, and the performance of the proposed model are investigated and compared. The results show that the MRSPFP model performs better than previous models.
The MRSPFP model performs better in strategic decisions that require high investment costs because it minimizes the absolute deviation of the OF from its mean. The applied results also show that CLSCND has good capability and potential for sustainable development in the field of stone paper. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
19. Toward energy-efficient online Complete Coverage Path Planning of a ship hull maintenance robot based on Glasius Bio-inspired Neural Network.
- Author: Muthugala, M.A. Viraj J., Samarakoon, S.M. Bhagya P., and Elara, Mohan Rajesh
- Subjects: Ship maintenance, Paper arts, Energy consumption, Robots, Naval architecture
- Abstract
Regular ship hull maintenance is essential for sustainability. Ship hull maintenance work that involves human labor suffers from many shortcomings, and maintenance robots have been introduced in drydocks to eliminate them. Energy-efficient Complete Coverage Path Planning (CCPP) is a crucial requirement for a ship hull maintenance robot. This paper proposes a novel energy-efficient CCPP method based on a Glasius Bio-inspired Neural Network (GBNN) for a ship hull inspection robot. The proposed method uses a comprehensive energy model for path planning that reflects the energy a ship hull maintenance robot consumes due to changes in direction, distance, and vertical position. Furthermore, the method is effective in dynamic workspaces since it performs online path planning. These are the major contributions this work makes to the state of the art. The behavior and performance of the proposed method have been compared against the state of the art through simulations with Hornbill, a multipurpose ship hull maintenance robot. The validation confirms the ability of the proposed method to achieve complete coverage of a given dynamic workspace. According to the statistical outcomes of the comparison, the proposed method significantly surpasses state-of-the-art methods in energy usage and therefore contributes to the development of energy-efficient CCPP methods for ship hull maintenance robots.
• A novel coverage method based on a Glasius Bio-inspired Neural Network is proposed.
• The proposed method is intended for a multipurpose ship hull maintenance robot.
• A comprehensive energy model is utilized by the proposed method for path planning.
• The proposed method is significantly more efficient than state-of-the-art methods.
• The paper contributes to the development of a ship hull maintenance robot. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
20. Generating survey draft based on closeness of position distributions of key words.
- Author: Sun, Xiaoping and Zhuge, Hai
- Subjects: Text summarization, Curves
- Abstract
Automatically generating a survey draft challenges text summarization research: the system must select important sentences from important references within a large set of candidate papers to compose sections that are in line with the section titles, and different sections discuss different numbers of the most relevant reference papers. This is beyond the capability of previous text summarization approaches, which assume that all candidate papers should be included in one summary. This paper proposes an approach to generating a survey draft according to a pattern consisting of sections whose titles are given by the user who requests the survey. Generating each section divides into the following sub-problems: (1) rank the input scientific documents (in short, documents) according to the title of a section, (2) determine the number of documents that are most relevant to the title, and (3) rank and select sentences from the selected documents according to the title. A position closeness distance of key words is proposed that ranks a set of documents by measuring how closely two key words from the section title are distributed within each document. The rationale is that the positions of neighboring key words of a section title should be closer in more relevant documents than in others. Since different sections select different numbers of documents, a method is proposed to determine the number of documents to include in the current section based on the slope shape of the sorted rank curve of documents for the section title. Based on the duality property of the closeness, the ranks of sentences within a document are obtained directly when the document is ranked according to the section title, and both the importance and coherence of the selected sentences are reflected without extra calculation for ranking sentences.
Experiments and manual evaluation show that the proposed methods achieve significant improvements over other approaches. The approach is significant in applications because different surveys can be generated according to different patterns given by different users. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
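The position-closeness idea in the abstract above — rank documents by how closely two section-title key words co-occur within each document — could be sketched roughly like this. The symmetrized nearest-occurrence distance used here is an illustrative assumption, not the paper's exact formulation:

```python
def closeness(tokens, kw1, kw2):
    """Average distance from each occurrence of kw1 to the nearest
    occurrence of kw2, symmetrized; smaller means more relevant."""
    p1 = [i for i, t in enumerate(tokens) if t == kw1]
    p2 = [i for i, t in enumerate(tokens) if t == kw2]
    if not p1 or not p2:
        return float("inf")  # a missing key word makes the document irrelevant
    d12 = sum(min(abs(i - j) for j in p2) for i in p1) / len(p1)
    d21 = sum(min(abs(i - j) for j in p1) for i in p2) / len(p2)
    return (d12 + d21) / 2

# Hypothetical documents; the section title's key words are "graph" and "neural".
docs = {
    "docA": "graph neural network for graph classification".split(),
    "docB": "neural models are popular and a graph is a structure".split(),
}
ranked = sorted(docs, key=lambda d: closeness(docs[d], "graph", "neural"))
print(ranked)  # docA first: its key words co-occur more closely
```

Ranking documents in ascending order of this distance then gives the per-section document ranking that the abstract describes.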
21. Fuzzy ontology datatype learning using Datil
- Author: Huitzil, Ignacio and Bobillo, Fernando
- Published
- 2023
- Full Text
- View/download PDF
22. Modeling supply-chain networks with firm-to-firm wire transfers
- Author: Silva, Thiago Christiano, Amancio, Diego Raphael, and Tabak, Benjamin Miranda
- Published
- 2022
- Full Text
- View/download PDF
23. “Taps”: A trading approach based on deterministic sign patterns
- Author: Liu, Xi and Thomakos, Dimitrios D.
- Published
- 2021
- Full Text
- View/download PDF
24. Improved network intrusion classification with attention-assisted bidirectional LSTM and optimized sparse contractive autoencoders
- Author: Bi, Jing, Guan, Ziyue, Yuan, Haitao, and Zhang, Jia
- Published
- 2024
- Full Text
- View/download PDF
25. A novel cross-domain adaptation framework for unsupervised criminal jargon detection via pre-trained contextual embedding of darknet corpus
- Author: Ke, Liang, Xiao, Peng, Chen, Xinyu, Yu, Shui, Chen, Xingshu, and Wang, Haizhou
- Published
- 2024
- Full Text
- View/download PDF
26. Matching cost function analysis and disparity optimization for low-quality binocular images.
- Author: Hongjin, Zhang, Hui, Wei, and Huilan, Luo
- Subjects: Cost functions, Cost analysis, Geometric analysis, Energy function, Image analysis, Matching theory
- Abstract
State-of-the-art dense stereo matching algorithms have achieved excellent performance, attaining precise matching in most areas. However, current methods rarely achieve this when images are captured under poor conditions. To improve accuracy in such cases, this paper introduces a post-optimization algorithm that rectifies matching errors and enhances outcomes. The work covers three aspects: (1) disparities are classified into reliable and unreliable results based on an analysis of geometric matching relationships, local image features, and components within the matching cost function; (2) analysis of horizontal image features identifies local characteristic indices, calculated through integration along the horizontal axis, which establish specific matching criteria and form the foundation for a cost volume encompassing these distinct matches; (3) a redefined matching cost function, an energy function based on this cost volume, is applied to the previously classified unreliable results to rectify matching errors. Experimental results validate the efficacy of the proposed post-optimization algorithm, reducing average matching errors from 8.66% to 5.85%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. A blind signature scheme for IoV based on 2D-SCML image encryption and lattice cipher.
- Author: Gao, Mengli, Li, Jinqing, Di, Xiaoqiang, Li, Xusheng, and Zhang, Mingao
- Subjects: Image encryption, Public key cryptography, Ciphers, Disclosure, Map design, Data transmission systems
- Abstract
Today's Internet of Vehicles (IoV) faces many security risks during data transmission, and image data is especially vulnerable because of its large information content and high visibility. Therefore, to guarantee dependable data transmission in the IoV environment, this paper designs a blind signature scheme for IoV based on two-dimensional sine cosine cross-chaotic mapping (2D-SCML) image encryption and a lattice cipher (BSS-IoV). The innovation of this scheme is that it blindly signs image information: the information is blinded before sending, and a lattice public key encryption algorithm is combined with it to ensure safe, reliable transmission and to reduce the risk of information disclosure. To further strengthen the scheme, an image encryption algorithm based on 2D-SCML and pixel splitting (2PS-IEA) is proposed; it blinds the information, reducing the risk of information leakage, and is also used in the signature process to secure the signed information. The 2D-SCML is derived from the cross-model structure proposed in this paper. Simulation results and experimental analysis show NPCR and UACI values of 99.6094% and 33.4635%, respectively, close to the ideal values, and rough information can still be recovered from an image with 50% of it cut, indicating that the signature scheme is secure against differential, cropping, and noise attacks. Moreover, the security analysis shows that the scheme provides tamper resistance, non-repudiation, and traceability.
• Designed a blind signature scheme for IoV using image encryption and a lattice cipher.
• Proposed a crossover model structure.
• Designed a 2D-SCML mapping with superior performance based on this model.
• Devised an image encryption algorithm based on 2D-SCML and pixel splitting.
• The signature scheme and encryption scheme are analyzed experimentally. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
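The NPCR and UACI figures quoted in the abstract above are standard differential-attack metrics for image ciphers, and they are straightforward to compute. A minimal sketch follows, with random arrays standing in for actual cipher images (a good cipher should score near the theoretical ideals of about 99.61% NPCR and 33.46% UACI for 8-bit images):

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR: percentage of pixel positions that differ between two cipher
    images; UACI: mean absolute intensity difference as a percentage of 255."""
    c1 = np.asarray(c1, dtype=np.int64)
    c2 = np.asarray(c2, dtype=np.int64)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2)) / 255.0
    return npcr, uaci

# Two independent random 8-bit "cipher images" approximate the ideal case.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(256, 256))
b = rng.integers(0, 256, size=(256, 256))
npcr, uaci = npcr_uaci(a, b)
print(f"NPCR={npcr:.2f}%  UACI={uaci:.2f}%")
```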
28. Bandit algorithms: A comprehensive review and their dynamic selection from a portfolio for multicriteria top-k recommendation.
- Author: Letard, Alexandre, Gutowski, Nicolas, Camp, Olivier, and Amghar, Tassadit
- Subjects: Recommender systems, Fuzzy sets, Algorithms, Reinforcement learning
- Abstract
This paper discusses the use of portfolio approaches based on bandit algorithms to optimize multicriteria decision-making in recommender systems (accuracy and diversity). While previous research has primarily focused on single-item recommendations, this study extends the research to consider the recommendation of several items per iteration. Two methods, Multiple-play Gorthaur and Budgeted-Gorthaur, are proposed to solve the algorithm selection problem and their performances on real-world datasets are compared. Both methods provide a generalization of the Gorthaur method, which enables it to operate with any Multi-Armed Bandit (MAB) and Contextual Multi-Armed Bandit (CMAB) algorithm as meta-algorithm in a multi-item recommendation scenario. For Multiple-play Gorthaur, an empirical evaluation shows that the use of Thompson Sampling for algorithm selection (Gorthaur-TS) yields better results than the original EXP3 method (Gorthaur-EXP3) and the exclusive use of the optimal algorithm in the portfolio in contextual recommendation problems. Additionally, the paper includes a theoretical regret analysis based on the TS sketch proof applied for this variant of the method. Concerning Budgeted-Gorthaur, experiments show that it allows more flexibility to achieve a suitable trade-off between criteria and a broader coverage of the Pareto set of solutions, overcoming a natural limit of "a-priori" methods. Finally, this paper provides a detailed review, including pseudocodes and theoretical bounds, for all the fundamental MAB and CMAB algorithms used in this study. • Bandit literature lacks formal algorithm review, hindering clarity and comparability. • There is no silver bullet: no algorithm can be the best performer in every instance. • Recommender systems need to balance accuracy, diversity, multi-item recommendations. • Optimal algorithm balances criteria, matching decision maker's preferred trade-off. 
• Dynamic selection ensures safe performance when optimal algorithm is unknown. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
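The abstract above reports that Thompson Sampling works well for selecting which bandit algorithm in the portfolio to run. The selection mechanism can be illustrated with plain Bernoulli Thompson Sampling over two hypothetical "meta-arms"; this is a generic sketch, not the paper's Gorthaur-TS implementation:

```python
import random

def thompson_select(successes, failures, rng):
    """Pick the arm (here: a bandit algorithm in the portfolio) whose
    sampled Beta posterior value is highest."""
    samples = {arm: rng.betavariate(successes[arm] + 1, failures[arm] + 1)
               for arm in successes}
    return max(samples, key=samples.get)

rng = random.Random(42)
true_reward = {"algoA": 0.7, "algoB": 0.3}   # hidden success rates (assumed)
successes = {a: 0 for a in true_reward}
failures = {a: 0 for a in true_reward}

for _ in range(2000):
    arm = thompson_select(successes, failures, rng)
    if rng.random() < true_reward[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

pulls = {a: successes[a] + failures[a] for a in true_reward}
print(pulls)  # the better meta-arm (algoA) dominates the pulls
```

The posterior concentrates on the better-performing algorithm, which is why dynamic selection gives safe performance when the optimal algorithm is unknown, as the highlights note.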
29. Human activity recognition with smartphone-integrated sensors: A survey.
- Author: Dentamaro, Vincenzo, Gattulli, Vincenzo, Impedovo, Donato, and Manca, Fabio
- Subjects: Human activity recognition, Machine learning, Feature selection, Feature extraction, Detectors
- Abstract
• Newbie study using standard ML techniques with HAR application and discussions.
• Activities found in the literature with the corresponding references.
• Co-occurrences between activities and sensors with the corresponding references.
• Summary and comparison of the different datasets found in the literature.
• Summary of the experimentation settings with performance scores found in the literature.
Human Activity Recognition (HAR) is an essential area of research concerning the ability of smartphones to retrieve information through embedded sensors and recognize the activity a person is performing. Researchers have recognized people's activities by processing data received from the sensors with machine learning models. This work is intended as a hands-on survey with practical tables capable of guiding the reader through the sensors used in modern smartphones and highly cited machine learning models that perform human activity recognition. Several papers in the literature have been studied, paying attention to the preprocessing, feature extraction, feature selection, and classification techniques of HAR systems. In addition, several summary tables illustrating HAR approaches are provided: the most popular human activities in the literature with paper references, the most popular datasets available for download (analyzing their characteristics, such as the number of subjects involved, the activities recorded, and the sensors with online availability), co-occurrences between activities and sensors, and a summary table showing the performance obtained by researchers. The paper's goal is to present, through the discussion phase and the tables, the current state of the art on this topic. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. A new community detection method for simplified networks by combining structure and attribute information.
- Author: Cai, Jianghui, Hao, Jing, Yang, Haifeng, Yang, Yuqing, Zhao, Xujun, Xun, Yaling, and Zhang, Dongchao
- Subjects: Density
- Abstract
Complex networks have large numbers of nodes and edges, which hinders understanding of network structure and discovery of valid information. This paper proposes a new community detection method for simplified networks. First, a similarity measure is defined in which path and attribute information reflect potential relationships between nodes that are not directly connected. Based on the defined similarity, an Importance Score (IS) is constructed to show the importance of each node; it reflects the density around each node. The simplification process can then be applied to complex networks. On the simplified network, this paper proposes a novel community detection method that detects the community structure of the simplified network. Experiments were conducted on real networks and compared with several widely used methods. The experimental results illustrate that the proposed method is more advantageous and can visually and effectively uncover the community structure. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Convolutional neural networks for quality and species sorting of roundwood with image and numerical data.
- Author: Achatz, Julia, Lukovic, Mirko, Hilt, Simon, Lädrach, Thomas, and Schubert, Mark
- Subjects: Convolutional neural networks, Cross-sectional imaging, Recommender systems, Image recognition (computer vision), Feature selection, Human error, Machine learning
- Abstract
Roundwood sorting is still a manual process in many Swiss sawmills, requiring employees to visually inspect and categorize thousands of logs per day. The heavy workload can be both physically and mentally taxing and can lead to increased rates of human error. State-of-the-art automation systems like X-ray log scanners are expensive and difficult to integrate into existing process lines. This paper proposes a novel recommendation system that leverages recent advances in image classification to automate roundwood classification by quality and species. The system integrates a camera to capture cross-sectional images of logs and record numerical data, such as length, taper, and diameter. The analysis of the resulting dataset highlights the challenges of data imbalance and noise, which makes classification difficult and, in some cases, impossible. However, by using selected datasets with reduced noise, state-of-the-art Convolutional Neural Networks (CNNs) can extract quality and species features. Quality models learn from a manually selected and simplified dataset, featuring samples that experts can clearly classify based on the image's information. Species models are trained on a label-noise-reduced dataset, reflecting real-world complexity. The accuracy on the selected dataset for three quality classes is 80%. The species determination is less challenging and reaches 91% accuracy on a synchronized dataset for the main species spruce and fir. Overall, this paper highlights the potential of Machine Learning in augmenting the roundwood sorting processes and presents a novel system that can improve the efficiency and accuracy of the process. [Display omitted] • Automation of roundwood sorting: Replaces manual sorting with image-based AI. • Integrated camera system in roundwood sorting to collect labeled dataset. • Species prediction: 91% accuracy in spruce–fir distinction. • Quality prediction on complexity reduced dataset: 80% accuracy between three main quality levels. 
• Efficient, adaptable & scalable system which is easy to integrate into existing process lines. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. A malware detection model based on imbalanced heterogeneous graph embeddings.
- Author: Li, Tun, Luo, Ya, Wan, Xin, Li, Qian, Liu, Qilie, Wang, Rong, Jia, Chaolong, and Xiao, Yunpeng
- Subjects: Malware, Generative adversarial networks, Computer security, Computer software industry, Classification algorithms, Information networks
- Abstract
The proliferation of malware in recent years has posed a significant threat to the security of computers and mobile devices. Detecting malware, especially on the Android platform, has become a growing concern for researchers and the software industry. This paper proposes a new method for detecting Android malware based on imbalanced heterogeneous graph embedding. First, most malware datasets contain an imbalance of malicious and benign samples, since some types of malware are scarce and difficult to collect; as a result, the classification algorithm cannot learn the minority samples from sufficient data, which degrades downstream classifier performance. Because generative adversarial networks can complete data, an algorithm for generating graph-structured data is presented, in which nodes are generated to simulate the distribution of minority nodes within a network topology. Then, since heterogeneous information networks retain rich node semantic features and can mine implicit relationships, heterogeneous graphs are used to model different types of entities (i.e., apps, APIs, permissions, intents, etc.) and different meta-paths. Finally, a new method is introduced to alleviate the over-smoothing of node information during propagation in deep networks: in the deep GCN, the leader nodes of each layer are sampled first, and then a residual connection and an identity map are added to determine the characteristics of the high-order leaders. A self-attention-based semantic fusion method is also applied to adaptively fuse embedded representations of software nodes under different meta-paths. The test results demonstrate that the proposed IHODroid model effectively detects malicious software.
On the DREBIN dataset, which consists of 123,453 Android applications and 5,560 malicious samples, the IHODroid model achieves an accuracy of 0.9360 and an F1 score of 0.9360, outperforming other state-of-the-art baseline methods.
• A new generative adversarial network model is proposed for balancing data.
• Heterogeneous graphs are used for modeling malware detection.
• A new method is introduced to alleviate the over-smoothing phenomenon. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. MV-Checker: A software tool for multi-valued model checking intelligent applications with trust and commitment.
- Author: Alwhishi, Ghalya, Bentahar, Jamal, Elwhishi, Ahmed, and Pedrycz, Witold
- Subjects: Artificial intelligence, Proposition (logic), Blockchains
- Abstract
Intelligent applications are highly susceptible to uncertainty and inconsistency due to the intense and intricate interactions among their autonomous components (or agents), making their verification theoretically and practically challenging. This paper presents the design and implementation of a new open-source and scalable software tool for modeling and verifying intelligent applications with commitment and trust protocols under both uncertainty and inconsistency settings, using reduction-based multi-valued model checking techniques. The proposed tool is equipped with original and novel algorithms that transform our logics of multi-valued commitment (mv-CTLC) and multi-value trust (mv-TCTL) that we recently introduced to their classical two-valued commitment (CTLC) and trust (TCTL) logic versions as well as to Computational Tree Logic (CTL). Moreover, the tool transforms the mv-CTL to CTL, and it is applicable for the classical model checking by transforming the classical logics of trust and commitment to CTL. To demonstrate the practicality and applicability of the proposed tool in real settings, we present and report experimental results over two blockchain-based applications in the healthcare domain. Finally, we provide discussions and comparisons between the proposed approaches regarding scalability and efficiency. Moreover, we provide packages of more than 11 experiments, including the ones we conduct in this paper and enhanced experiments from previous works. Our findings ensure that the proposed approaches and the software tool that implements them are highly efficient and scalable, giving accurate results under varying conditions. • Design and implementation of an open-source scalable tool for systems model checking. • System modeling with multi-valued logic capturing uncertainty and inconsistency. • Practical demonstration using blockchain-based applications in healthcare domain. 
• Extensive experiments showing scalability, efficiency and reliability of the tool. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
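The reduction-based approach described in this record ultimately discharges verification to classical CTL model checking. As a hedged illustration of what that final step involves (not the mv-CTLC/mv-TCTL tool itself, and over an invented toy Kripke structure), the standard fixpoint labeling for the EX and EU operators can be sketched in a few lines:

```python
# Minimal CTL labeling sketch: compute the states satisfying EX q and E[p U q]
# over a hand-made Kripke structure. Purely illustrative; state names,
# transitions, and labelings are invented for this example.

def ex(states, trans, sat):
    """States with at least one successor in sat (semantics of EX)."""
    return {s for s in states if trans.get(s, set()) & sat}

def eu(states, trans, sat_p, sat_q):
    """Least fixpoint for E[p U q]: q-states, plus p-states that can reach them."""
    result = set(sat_q)
    while True:
        frontier = {s for s in sat_p if trans.get(s, set()) & result} - result
        if not frontier:
            return result
        result |= frontier

states = {"s0", "s1", "s2", "s3"}
trans = {"s0": {"s1"}, "s1": {"s2"}, "s2": {"s2"}, "s3": {"s0"}}
p = {"s0", "s1"}           # states labeled with atomic proposition p
q = {"s2"}                 # states labeled with atomic proposition q

print(sorted(ex(states, trans, q)))     # states with a q-successor
print(sorted(eu(states, trans, p, q)))  # states satisfying E[p U q]
```

The remaining CTL operators (EG, AF, ...) are handled by similar greatest/least fixpoint computations; real tools operate symbolically rather than over explicit state sets.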
34. Conceptual clustering with application on FCA context.
- Author
-
Kovács, László
- Subjects
- *
K-means clustering - Abstract
Conceptual clustering is one of the key approaches for automatic concept generation from input contexts. In this paper, we propose an extension of the dominant k-means method. The proposed method introduces a flexible distance metric that enables the approximation of both Euclidean and meet- (or join-)based similarity calculations. To increase the approximation accuracy, the method combines the k-means method with a Quality Threshold component. The paper shows that the method can also be used to approximate formal concept lattices. Based on the performed tests, the method provides an efficient alternative conceptual clustering approach. • Novel extension of the k-means-based conceptual clustering algorithm. • Novel parameterized distance metric to cover different aggregation approaches. • Tool for flexible concept set reduction in formal concept analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
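The parameterized-distance idea in this record can be pictured with a plain k-means sketch whose metric takes an order parameter m: m = 2 gives the Euclidean case, while large m increasingly emphasizes the maximum component difference. The paper's actual metric and its Quality Threshold component are not reproduced here; this is only an illustrative stand-in under those assumptions:

```python
import random

def minkowski(a, b, m):
    """Parameterized distance: m=2 is Euclidean; large m approaches the
    max (join-like) component difference. Illustrative only, not the
    specific metric proposed in the paper."""
    return sum(abs(x - y) ** m for x, y in zip(a, b)) ** (1.0 / m)

def kmeans(points, k, m=2, iters=50, seed=0):
    """Standard Lloyd iterations with the parameterized distance."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: minkowski(p, centers[j], m))
            clusters[i].append(p)
        centers = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centers, clusters = kmeans(pts, 2)
print(sorted(len(c) for c in clusters))  # two clusters of two points each
```

A Quality Threshold step would additionally reject clusters whose diameter exceeds a preset bound and re-cluster the rejected points.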
35. Nataf-KernelDensity-Spline-based point estimate method for handling wind power correlation in probabilistic load flow.
- Author
-
Shaik, Mahmmadsufiyan, Gaonkar, Dattatraya N., Nuvvula, Ramakrishna S.S., Muyeen, S.M., Shezan, Sk. A., and Shafiullah, G.M.
- Subjects
- *
WIND power , *PROBABILITY density function , *MONTE Carlo method , *SOLAR energy , *RENEWABLE energy sources , *WIND speed - Abstract
Modern power systems integrated with renewable energies (REs) contain many uncertainties. The proposed method introduces a novel approach to address the challenges associated with wind power generation uncertainty in probabilistic load flow (PLF) studies. Unlike conventional methods that use wind speed as an input, the paper advocates utilizing wind generator output power (WGOP) as the input to the point estimate method (PEM) for solving PLF. The uniqueness lies in recognizing the distinct behavior of wind power uncertainty, where not all random samples of wind speed contribute to actual wind power production. The paper suggests a Nataf-KernelDensity-Spline-based PEM, combining the Nataf transformation, kernel density estimation (KDE), and cubic spline interpolation. This integration effectively manages wind power correlation within the analytical framework. By incorporating spline interpolation and kernel density estimation into the traditional PEM, the proposed method significantly enhances accuracy. To validate the effectiveness of the proposed approach, the method is applied to the IEEE-9 and IEEE-57 bus test systems, considering uncertainties related to load, wind power generation (WPG), solar power generation (SPG), and conventional generator (CoG) outages. Comparative analysis with Monte Carlo simulation (MCS) results demonstrates that the proposed method outperforms the conventional PEM in terms of accuracy. Overall, the paper contributes a pioneering solution that not only highlights the importance of using WGOP as an input in PLF but also introduces a sophisticated method that surpasses traditional approaches, improving accuracy in power system studies involving renewable energy integration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
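The Nataf, KDE, and spline ingredients of this record are beyond a short sketch, but the core point estimate idea, replacing Monte Carlo sampling with a few deterministic model evaluations, can be shown with the classical Rosenblueth two-point scheme for one symmetric random input. This is a textbook simplification, not the method from the paper:

```python
def two_point_estimate(f, mu, sigma):
    """Rosenblueth's 2-point estimate for one symmetric random input:
    evaluate f at mu - sigma and mu + sigma with equal weight 1/2,
    then recover the output mean and standard deviation."""
    y = [f(mu - sigma), f(mu + sigma)]
    mean = 0.5 * (y[0] + y[1])
    var = 0.5 * ((y[0] - mean) ** 2 + (y[1] - mean) ** 2)
    return mean, var ** 0.5

# For a linear model the estimates are exact: mean 2*mu + 1, std 2*sigma.
mean, std = two_point_estimate(lambda x: 2 * x + 1, mu=3.0, sigma=0.5)
print(mean, std)  # 7.0 1.0
```

Two evaluations versus thousands of MCS samples is the entire efficiency argument for PEM; the record's contribution lies in which points are chosen and how correlated wind inputs are handled.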
36. Deep semi-supervised learning for medical image segmentation: A review.
- Author
-
Han, Kai, Sheng, Victor S., Song, Yuqing, Liu, Yi, Qiu, Chengjian, Ma, Siqi, and Liu, Zhe
- Subjects
- *
SUPERVISED learning , *DEEP learning , *IMAGE segmentation , *DIAGNOSTIC imaging , *COMPUTER vision , *IMAGE analysis - Abstract
Deep learning has recently demonstrated considerable promise for a variety of computer vision tasks. However, in many practical applications, large-scale labeled datasets are not available, which limits the deployment of deep learning. To address this problem, semi-supervised learning has attracted a lot of attention in the computer vision community, especially in the field of medical image analysis. This paper analyzes existing deep semi-supervised medical image segmentation studies and categorizes them into five main categories (i.e., pseudo-labeling, consistency regularization, GAN-based methods, contrastive learning-based methods, and hybrid methods). Afterward, we empirically analyze several representative methods by conducting experiments on two common datasets. In addition, we point out several promising directions for future research. In summary, this paper provides a comprehensive introduction to deep semi-supervised medical image segmentation, aiming to provide a reference and comparison of methods for researchers in this field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
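Of the five categories listed, pseudo-labeling is the simplest to sketch: train on the labeled set, predict the unlabeled set, and fold high-confidence predictions back in as training labels. A toy stdlib version, using an invented nearest-centroid classifier with a margin-based stand-in for confidence (the threshold, helper names, and data are all illustrative, not from the review):

```python
def centroids(X, y):
    """Mean feature vector per class."""
    by_cls = {}
    for x, lab in zip(X, y):
        by_cls.setdefault(lab, []).append(x)
    return {lab: tuple(sum(c) / len(v) for c in zip(*v)) for lab, v in by_cls.items()}

def predict(cents, x):
    """Return (label, confidence): confidence is the distance margin between
    the two nearest centroids, an illustrative proxy for a probability."""
    d = sorted((sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5, lab)
               for lab, c in cents.items())
    return d[0][1], d[1][0] - d[0][0]

def pseudo_label(X_lab, y_lab, X_unlab, threshold=1.0, rounds=3):
    """Iteratively adopt confident predictions as pseudo-labels."""
    X, y = list(X_lab), list(y_lab)
    pool = list(X_unlab)
    for _ in range(rounds):
        cents = centroids(X, y)
        keep = []
        for x in pool:
            lab, conf = predict(cents, x)
            if conf >= threshold:        # confident: adopt as pseudo-label
                X.append(x); y.append(lab)
            else:                        # ambiguous: leave unlabeled
                keep.append(x)
        pool = keep
    return X, y

X_lab = [(0.0, 0.0), (4.0, 4.0)]
y_lab = [0, 1]
X_unlab = [(0.5, 0.1), (3.8, 4.2), (2.0, 2.0)]   # last point is ambiguous
X, y = pseudo_label(X_lab, y_lab, X_unlab)
print(len(X), y)  # the ambiguous midpoint is never adopted
```

In segmentation practice the "classifier" is a segmentation network and confidence is typically per-pixel softmax probability, but the loop structure is the same.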
37. Fusion of theory and data-driven model in hot plate rolling: A case study of rolling force prediction.
- Author
-
Dong, Zishuo, Li, Xu, Luan, Feng, Meng, Lingming, Ding, Jingguo, and Zhang, Dianhua
- Subjects
- *
HOT rolling , *ARTIFICIAL neural networks , *MODEL theory , *MANUFACTURING processes , *SEARCH algorithms - Abstract
As one of the most critical variables in the hot rolling process, the accuracy of rolling force prediction is directly associated with production stability and product quality. Purely data-driven approaches, however, are severely constrained by the quantity and quality of data, posing challenges for further enhancing the accuracy of rolling force prediction. In this paper, a theory-fusion deep neural network (DNN) modeling approach is proposed and applied to the prediction of rolling force during hot plate rolling. In terms of model establishment, a novel NN structure is designed in consideration of the rolling mechanism, and senior variable inputs are added at shallow locations in the network to reduce the loss of critical information. In terms of model training, rolling theory is used to guide the initialization of the model, enabling it to learn the theoretical features more completely in the pre-training phase. Finally, a method to optimize the overall structure of the model using the sparrow search algorithm (SSA) is proposed to ensure the best prediction performance. The model was tested with data on the developed platform, and the results indicate that the proposed method achieves the best accuracy and stability among the methods compared in this paper, and that the response relationship between model inputs and output is consistent with existing theoretical knowledge. Thus, the model can be trusted and flexibly applied to actual manufacturing processes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Scheduling optimization of underground mine trackless transportation based on improved estimation of distribution algorithm.
- Author
-
Li, Ning, Wu, Yahui, Ye, Haiwang, Wang, Liguan, Wang, Qizhou, and Jia, Mingtao
- Subjects
- *
MINES & mineral resources , *DISTRIBUTION (Probability theory) , *TRANSPORTATION costs , *PARTICLE swarm optimization - Abstract
The trend in underground mine development is trackless transportation, and the scheduling optimization of underground mine trackless transportation is a current research hotspot. This paper proposes a truck scheduling optimization method for underground mine trackless transportation based on an improved estimation of distribution algorithm to address the truck scheduling problem in the underground mine trackless transportation process. The transportation process of transport trucks in underground mines is analyzed. The dispatching model of transport trucks in underground mines is constructed based on the requirements of reducing transportation costs and increasing transportation efficiency, taking into account truck-meeting situations in the ramp section and minimizing the total shift transportation distance and the total waiting time of transport trucks as the objective functions. The improved estimation of distribution algorithm is used to solve the truck scheduling model, resulting in optimal ore blending and scheduling schemes. The comparative analysis employs a genetic algorithm, a particle swarm optimization algorithm, and an immune algorithm. The results demonstrate that, compared to the other algorithms, the improved estimation of distribution algorithm proposed in this paper has superior performance in terms of convergence speed and the search for the optimal solution. The total number of transportation tasks associated with the optimal ore allocation scheme is at least 82, and the waiting time associated with the optimal scheduling scheme is reduced to 7.5 min. The operation time chart of transport trucks calculated from the optimal dispatching scheme clearly depicts the location of each transport truck at any time during a shift, which provides significant practical guidance for actual truck transportation in the mine. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
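The record does not spell out its improved estimation of distribution algorithm, but the generic EDA loop it builds on, sample a probability model, select the best solutions, re-estimate the model, can be sketched with a univariate marginal distribution algorithm (UMDA) on the toy OneMax problem. All parameters and the clamping rule below are illustrative assumptions, not the paper's algorithm:

```python
import random

def umda(n_bits=20, pop=60, elite=20, gens=40, seed=1):
    """Univariate Marginal Distribution Algorithm on OneMax: keep one
    Bernoulli probability per bit, re-estimated each generation from
    the elite fraction of the sampled population."""
    rng = random.Random(seed)
    probs = [0.5] * n_bits
    best = None
    for _ in range(gens):
        population = [[1 if rng.random() < p else 0 for p in probs]
                      for _ in range(pop)]
        population.sort(key=sum, reverse=True)       # fitness = number of ones
        top = population[:elite]
        probs = [min(0.95, max(0.05, sum(ind[i] for ind in top) / elite))
                 for i in range(n_bits)]             # clamp to keep diversity
        if best is None or sum(top[0]) > sum(best):
            best = top[0]
    return best

best = umda()
print(sum(best))  # at, or very near, the optimum of 20 ones
```

A scheduling EDA replaces the bitstring with a permutation or assignment encoding and the Bernoulli marginals with a model over task orderings, but the sample/select/re-estimate loop is unchanged.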
39. An intrusion detection algorithm based on joint symmetric uncertainty and hyperparameter optimized fusion neural network.
- Author
-
Wang, Qian, Jiang, Haiyang, Ren, Jiadong, Liu, Han, Wang, Xuehang, and Zhang, Bing
- Subjects
- *
INTRUSION detection systems (Computer security) , *CONVOLUTIONAL neural networks , *PARTICLE swarm optimization , *FEATURE selection , *ALGORITHMS , *HUMAN fingerprints , *COMPUTER network security - Abstract
Intrusion Detection Systems (IDS) can ensure network security by identifying network intrusions from abnormal traffic data. However, intrusion detection data is high-dimensional and changes with the network and attack environments, which leads to poor performance and poor portability of intrusion detection algorithms. Therefore, this paper proposes an intrusion detection algorithm based on joint symmetric uncertainty and a hyperparameter-optimized fusion neural network. Firstly, a feature selection method based on symmetric uncertainty and approximate Markov blankets is proposed, which fully considers the correlation and redundancy of features, as well as the correlation between combined features and the class label, so as to reduce the data dimensionality. Secondly, a CNN-LSTM classifier fusing a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) is used to extract spatial and temporal features to improve classification performance. Finally, the Particle Swarm Optimization (PSO) algorithm is improved and used to automatically optimize the hyperparameters of the classifier, so that the classifier can be applied to different intrusion detection datasets with better generalization ability and portability. Experiments have verified the effectiveness and superiority of the proposed algorithm on multiple evaluation indicators. • An effective algorithm for intrusion detection is proposed in this paper. • Feature selection is based on symmetric uncertainty and approximate Markov blanket. • A fusion neural network is constructed to extract the spatial and temporal features. • The PSO algorithm is improved to automatically optimize the hyperparameters. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
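The symmetric uncertainty measure underlying the feature selection step has a standard definition, SU(X, Y) = 2·IG(X; Y) / (H(X) + H(Y)), normalized to [0, 1]. A short stdlib sketch makes it concrete (the approximate-Markov-blanket part of the record's method is omitted):

```python
import math

def entropy(xs):
    """Shannon entropy (bits) of a discrete sequence."""
    n = len(xs)
    counts = {}
    for x in xs:
        counts[x] = counts.get(x, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * IG(X; Y) / (H(X) + H(Y))."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))      # joint entropy via paired symbols
    ig = hx + hy - hxy                  # mutual information
    return 2 * ig / (hx + hy) if hx + hy else 0.0

label    = [0, 0, 1, 1]
relevant = [0, 0, 1, 1]     # identical to the label: SU = 1
noise    = [0, 1, 0, 1]     # independent of the label: SU = 0
print(symmetric_uncertainty(relevant, label),
      symmetric_uncertainty(noise, label))  # 1.0 0.0
```

Features with SU to the class below a threshold are dropped as irrelevant; among the survivors, a feature whose SU to another feature exceeds its SU to the class is a candidate for removal as redundant (the approximate Markov blanket test).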
40. EEG sensor driven assistive device for elbow and finger rehabilitation using deep learning.
- Author
-
Mukherjee, Prithwijit and Halder Roy, Anisha
- Subjects
- *
ASSISTIVE technology , *ELECTROENCEPHALOGRAPHY , *DEEP learning , *ELBOW , *REHABILITATION , *MOTOR imagery (Cognition) , *DETECTORS , *DATA recorders & recording - Abstract
In today's world, a large number of people suffer from motor impairment-related challenges. Rehabilitation is the main method used to overcome these difficulties. The goal of the paper is to develop a deep learning-based electroencephalogram (EEG) sensor-controlled assistive device for the rehabilitation of elbow and finger movements. We have introduced an innovative finger and elbow movement rehabilitation method using an EEG sensor; the recorded EEG signals, attention values, and meditation values are used for this purpose. This rehabilitation technique helps a person perform basic finger movement rehabilitation motions, such as finger extension and flexion, as well as basic elbow movement rehabilitation exercises, i.e., elbow extension and elbow flexion. In this research, an EEG sensor records the prefrontal lobe's EEG signals, attention value, and meditation value while the person performs motor imagery. A deep learning-based CNN-TLSTM (Convolutional Neural Network-tanh Long Short-Term Memory) model with an attention mechanism has been designed to decode the recorded data; the trained model decides the course of action of the rehabilitation device. The designed model achieves an accuracy of 99.6%. A working prototype of the rehabilitation device has been developed, and the overall success rate of the model is found to be 98.66%. The novelty of the paper lies in i) designing an attention-based CNN-TLSTM model for motor imagery classification and ii) developing a low-cost EEG sensor-driven rehabilitation device for finger and elbow movement rehabilitation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. A novel evaluation method for renewable energy development based on improved sparrow search algorithm and projection pursuit model.
- Author
-
Leng, Ya-Jun, Zhang, Huan, and Li, Xiao-Shuang
- Subjects
- *
ENERGY development , *RENEWABLE energy sources , *SEARCH algorithms , *EVALUATION methodology , *CARBON emissions - Abstract
With global climate change posing a major threat to human society, a growing number of countries have adopted "carbon-neutral" as a national strategy and proposed a vision of a carbon-free future. As an important supplement to traditional fossil energy, renewable energy is the main force for reducing the use of high-carbon energy and carbon dioxide emissions, and it will become the trend of social development in the future. Finding the optimal renewable energy source is of particular significance for achieving net zero emissions. However, existing evaluation methods for renewable energy sources have obvious shortcomings. In terms of weight calculation, for example, subjective methods are highly arbitrary, and the resulting index weights do not reflect small changes in the evaluation matrix, which affects the reliability and accuracy of the evaluation result. Existing ranking methods can only achieve a complete ranking of the different objects, but cannot classify the renewable energy technical alternatives into different grades. Given this background, this paper proposes a novel evaluation method for renewable energy plans based on an improved sparrow search algorithm and a projection pursuit model. Firstly, this paper improves the traditional sparrow search algorithm in three aspects: population initialization, population update, and population variation. Then, the projection pursuit model is constructed, and the improved sparrow search algorithm is applied to optimize the projection target to find the optimal projection direction, so as to determine the weight values of each evaluation index. Finally, the weighted rank-sum ratio method is used to select the best renewable energy technical plan, which not only realizes a complete ranking of the different plans but also classifies the technical plans into different levels. 
Based on the actual renewable energy development data from a province in China, experiments were carried out to investigate the effectiveness of the proposed method. Experimental results show that the proposed method performs better than some existing evaluation methods of renewable energy technical plans. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Elastic net-based high dimensional data selection for regression.
- Author
-
Chamlal, Hasna, Benzmane, Asmaa, and Ouaderhman, Tayeb
- Subjects
- *
FEATURE selection , *RESEARCH personnel , *VITAMIN B2 , *PREDICTION models - Abstract
High-dimensional feature selection is of particular interest to researchers. In some domains, such as microarray data, it is quite common for a group of highly correlated explanatory variables to be of equal importance for inclusion in the predictive model. This paper proposes a new hybrid feature selection approach that integrates feature screening based on Kendall's tau and Elastic Net regularized regression (K-EN). K-EN, as an approach that embeds the Elastic Net, has the advantage of the grouping effect, which automatically includes all the highly correlated variables in the group. The K-EN approach offers insightful solutions to high-dimensional regression problems and improves Elastic Net performance, since the Elastic Net phase is preceded by a screening step that reduces the number of explanatory variables by removing those that disagree with the target based on Kendall's tau. The use of Kendall's tau further enhances Elastic Net performance, as it is robust enough to handle heavy-tailed distributions, non-parametric models, outliers, and non-normal data with greater ease. K-EN is therefore a time-saving approach. The proposed algorithm is evaluated on four simulation scenarios and four publicly available datasets, including riboflavin, eyedata, Longley, and Boston Housing, achieving Mean Squared Errors (MSE) of 0.2528, 0.0098, 0.1007, and 0.4121, respectively. K-EN's MSEs are the best compared to those achieved by the state-of-the-art approaches reviewed in this paper. In addition, K-EN selects up to 100% of relevant features when run on simulated data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Formulation and heuristic method for urban cold-chain logistics systems with path flexibility – The case of China.
- Author
-
Leng, Longlong, Wang, Zheng, Zhao, Yanwei, and Zuo, Qiang
- Subjects
- *
HEURISTIC , *PERISHABLE goods , *EVOLUTIONARY algorithms , *CUSTOMER satisfaction , *AUTOMOTIVE fuel consumption , *CARBON emissions ,TRUCK fuel consumption - Abstract
The focus of this paper is on achieving a win-win situation regarding the economic, environmental, and social impacts of the cold-chain logistics terminal distribution system. This paper proposes three multi-objective models to investigate the above effects by incorporating soft time windows, heterogeneous fleets, and path flexibility, defining objectives for logistics costs, fuel consumption, carbon emissions, quality damage to perishable commodities, and customer satisfaction using six evaluation functions. To solve the proposed models, an efficient optimization framework is developed by combining domain operators with versatile multi-objective evolutionary algorithms (MOEA) to obtain Pareto solutions. Extensive experiments are conducted to test the validity of the model and algorithms concerned. The results demonstrate that: (1) the proposed algorithm is effective in solving the proposed model; (2) the proposed multi-path strategy can effectively improve the performance of cold-chain logistics systems compared to single-path strategies; (3) evaluation functions that assess customer satisfaction greatly affect the performance of cold-chain logistics systems; and (4) the trade-off relationship between the objectives should be investigated to define the model. The paper also provides valuable managerial insights for improving the efficiency and sustainability of cold-chain logistics operations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
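The Pareto solutions mentioned in this record rest on the standard dominance relation: for minimization, one objective vector dominates another if it is no worse in every objective and strictly better in at least one. A minimal non-dominated filter, with invented cost/emissions numbers rather than data from the paper:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (cost, emissions) pairs for candidate routes -- illustrative numbers only
routes = [(10.0, 5.0), (8.0, 7.0), (12.0, 4.0), (11.0, 6.0)]
print(pareto_front(routes))  # (11, 6) is dominated by (10, 5) and drops out
```

An MOEA maintains and evolves a population of such vectors, using dominance (plus diversity measures) to decide which candidate routes survive each generation.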
44. An effective metaheuristic technology of people duality psychological tendency and feedback mechanism-based Inherited Optimization Algorithm for solving engineering applications.
- Author
-
Wang, Kaiguang, Guo, Min, Dai, Cai, Li, Zhiqiang, Wu, Chengwei, and Li, Jiahang
- Subjects
- *
OPTIMIZATION algorithms , *METAHEURISTIC algorithms , *CONSTRAINED optimization , *ENGINEERING , *INFORMATION resources , *BENCHMARK problems (Computer science) - Abstract
Nature- and society-inspired metaheuristic algorithms have recently become the most promising technological model. To solve more complex optimization problems and complicated engineering applications, this paper proposes a new people duality psychological tendency and feedback mechanism-based Inherited Optimization Algorithm (IOA), which is inspired by people showing a positive-negative duality cognitive tendency and adaptive feedback behavior when selecting information resources with different identity attributes. The IOA algorithm contains two symmetric exploration phases. The exploitation phase adaptively regulates the dualistic psychological balance of people in inheriting the information resources with better existence value, through a feedback regulation mechanism controlled by profitability awareness, to increase population diversity. This paper qualitatively and quantitatively evaluates the optimization performance of IOA on 84 benchmarks, including swarm convergence behavior, effectiveness, convergence, robustness, and significance. The scalability of IOA is investigated using the CEC2017 suites, and its performance in solving constrained optimization is verified on 8 engineering problems. All statistical results of IOA are compared with 12 of the most promising metaheuristics, showing that the absolute computational efficiency of IOA on four types of functions is 95%, 96.67%, 80.95%, and 76.92%, respectively, and that the average rank (rank sum ratio) of IOA is 1.08 (1.19%) among the 13 algorithms, ranking first. The Wilcoxon signed rank test results on the CEC2017 suites show that IOA attains significance on 1437 of 1440 comparisons, a proportion of 99.79%, which suggests the proposed IOA maintains efficient search performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. A comprehensive review of cyberbullying-related content classification in online social media.
- Author
-
Teng, Teoh Hwai, Varathan, Kasturi Dewi, and Crestani, Fabio
- Subjects
- *
FOLKSONOMIES , *SOCIAL media , *SOCIAL networks , *CYBERBULLYING , *WORKFLOW , *MACHINE learning - Abstract
The emergence of online social network (OSN) platforms removes communication barriers that are essential to human life, catalyzing social networking growth. However, this emergence has given rise to a negative impact when someone abuses the platform to commit cyberbullying activities. Hence, it is crucial to work on automated cyberbullying-related classification to mitigate this societal phenomenon in OSN. Research on automated classification models for cyberbullying was pioneered over the last decade, with growing interest among researchers, and it is helpful to track its growth over this period to elucidate the state-of-the-art techniques applied in the field. This paper presents a large body of literature germane to cyberbullying classification, from past to present, to provide a comprehensive review. A total of 126 papers were reviewed. This paper emphasizes text-based cyberbullying and multi-modal cyberbullying. The review is organized around the machine learning workflow, encompassing four core sections: dataset analysis, pre-processing analysis, feature analysis, and technique analysis. Based on the critical analysis, limitations are addressed along with future work that can be conducted to fill the gaps in previous research. Furthermore, the review also examines the ethical implications associated with the implementation of these techniques. This review paper is expected to assist readers in fully comprehending the current trends, architectures, and techniques applied in the field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Supervised discretization of continuous-valued attributes for classification using RACER algorithm.
- Author
-
Toulabinejad, Elaheh, Mirsafaei, Mohammad, and Basiri, Alireza
- Subjects
- *
DECISION trees , *NAIVE Bayes classification , *CLASSIFICATION algorithms , *DISCRETIZATION methods , *ALGORITHMS , *CLASSIFICATION , *LOGISTIC regression analysis - Abstract
In the contemporary world, data pervades every facet of human life, and the information contained in this data plays a pivotal role in shaping decision-making and advancing technology. Among the plethora of techniques available, classification methods are highly effective tools for extracting valuable insights from vast volumes of data. The Rule Aggregation ClassifiER (RACER) is a novel rule-based classification algorithm known for its exceptional performance. A notable limitation of RACER lies in its inability to handle continuous features. In this paper, we address the aforementioned limitation by employing various supervised discretization methods, including CAIM, MDLP, Decision Tree (CART), and ChiMerge. The impact of these methods on RACER's accuracy and understandability is evaluated across nine datasets from the UCI repository. Additionally, the paper conducts a comparative analysis of RACER's accuracy against well-known classifiers such as Naive Bayes, Logistic Regression, SVM, LightGBM, and Decision Tree. The findings indicate that RACER achieves the highest average accuracy when we utilize MDLP as the discretization method, surpassing its initial average accuracy. Moreover, RACER demonstrates superior understandability by generating the lowest number of rules when employing ChiMerge and Decision Tree for discretizing numerical features. Furthermore, RACER outperforms the other five classifiers when employing MDLP. • The discretization unit enables the RACER algorithm to handle continuous data. • The discretization unit improves the classification accuracy of the RACER. • The discretization unit increases the understandability of the RACER rules. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
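Among the discretization methods compared in this record, ChiMerge (Kerber, 1992) is compact enough to sketch: start with one interval per unique value, then repeatedly merge the adjacent pair of intervals whose class distributions have the lowest chi-square statistic. This sketch stops at a fixed interval count rather than a chi-square significance threshold, for brevity:

```python
def chi2(a, b):
    """Chi-square statistic for two adjacent intervals' class-count dicts."""
    classes = set(a) | set(b)
    n_a, n_b = sum(a.values()), sum(b.values())
    total = n_a + n_b
    stat = 0.0
    for c in classes:
        col = a.get(c, 0) + b.get(c, 0)
        for counts, n in ((a, n_a), (b, n_b)):
            expected = n * col / total
            if expected > 0:
                stat += (counts.get(c, 0) - expected) ** 2 / expected
    return stat

def chimerge(values, labels, max_intervals=2):
    """ChiMerge sketch: one interval per unique value, then merge the
    adjacent pair with the lowest chi-square until max_intervals remain."""
    pairs = sorted(zip(values, labels))
    intervals = []                        # list of [lower_bound, class counts]
    for v, lab in pairs:
        if intervals and intervals[-1][0] == v:
            counts = intervals[-1][1]
        else:
            counts = {}
            intervals.append([v, counts])
        counts[lab] = counts.get(lab, 0) + 1
    while len(intervals) > max_intervals:
        i = min(range(len(intervals) - 1),
                key=lambda k: chi2(intervals[k][1], intervals[k + 1][1]))
        merged = intervals.pop(i + 1)[1]  # fold right interval into left
        for c, n in merged.items():
            intervals[i][1][c] = intervals[i][1].get(c, 0) + n
    return [iv[0] for iv in intervals]    # lower bounds = cut points

values = [1, 2, 3, 10, 11, 12]
labels = [0, 0, 0, 1, 1, 1]
print(chimerge(values, labels))  # [1, 10]: one interval per class
```

Intervals with identical class distributions have chi-square 0 and merge first, which is why the two pure runs above collapse into exactly one interval each.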
47. Enhancing local citation recommendation with recurrent highway networks and SciBERT-based embedding.
- Author
-
Dinh, Thi N., Pham, Phu, Nguyen, Giang L., and Vo, Bay
- Subjects
- *
LANGUAGE models , *DEEP learning , *NATURAL language processing , *COMPUTATIONAL linguistics - Abstract
When writing academic papers, referencing statements, claims, and previous studies is always an important activity. However, it is considered challenging for scientists to find relevant and appropriate scientific articles which are closely related to their current works in order to reference them in their research. As a consequence of the rapid growth in scientific papers being published every year, researchers might easily get overwhelmed by a huge number of resources. One way to help them find the desired references more easily is to use context-aware citation recommendation. The citation recommender system can automatically provide a list of suitable papers as references based on specified inputs which reflect the researchers' interests. Among the outstanding achievements of deep learning and natural language processing in recent years, deep neural learning architectures have helped to address the problem of citation recommendation. As a result, the neural citation recommendation area has received much attention from the academic community, with the aim of enhancing the precision and correctness of the results of existing citation recommendation systems. Following this research direction, in our paper we present a novel context-aware citation recommendation model, called RHN-DualLCR (Recurrent Highway Networks – Dual Local Citation Recommendation), which integrates Recurrent Highway Networks (RHN), an improved model of the original Bidirectional Long Short-Term Memory (BiLSTM) architecture, and uses a SciBERT-based (Bidirectional Encoder Representations from Transformers for scientific text) embedding layer to improve on the state-of-the-art local citation recommendation model, which enriches context representation with global information. 
Our research demonstrates its originality and relevance in applying recent advances in deep models (RHN) and natural language processing (SciBERT) to the citation recommendation problem. We have conducted experiments with the RHN-DualLCR model on 3 widely known datasets for the citation recommendation problem: ACL-200 (Association for Computational Linguistics), ACL-600, and RefSeer, and used 2 common evaluation metrics, Mean Reciprocal Rank (MRR) and Recall@K (R@K for short), to evaluate the performance of our model. Experimental results show that our proposed model is 3% to 16% better than the original or state-of-the-art models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. RA-HGNN: Attribute completion of heterogeneous graph neural networks based on residual attention mechanism.
- Author
-
Zhao, Zongxing, Liu, Zhaowei, Wang, Yingjie, Yang, Dong, and Che, Weishuai
- Subjects
- *
COMPLETE graphs , *INFORMATION networks , *ATTENTION , *GRAPH algorithms - Abstract
Heterogeneous graphs, which are also called heterogeneous information networks, analyze the different types of nodes in an information network and the different types of links between them to accurately distinguish between different semantics. In recent years, several GNN-based models have been proposed to process heterogeneous graph data, achieving good performance. Such models face two challenges: first, how to embed the different types of nodes in a heterogeneous graph; second, how to analyze node attribute information, which requires all nodes to carry attributes; this is not easy to achieve, since some nodes and their neighbors carry no attributes. Previous network structures have added attributes to nodes by handcrafted methods, thus neglecting the overall learnability of the model, which in turn leads to poor performance. This paper analyzes the reasons for this phenomenon and aims to design a learnable heterogeneous graph neural network (HGNN) framework. This study embeds the different types of nodes into the same feature space, uses the topological embedding of the heterogeneous graph as a guide to complete the attributes of non-attributed nodes in a learnable way, and uses residual attention mechanisms to handle attributes between nodes. Therefore, this paper proposes a general framework for Attribute Completion of Heterogeneous Graph Neural Networks Based on a Residual Attention Mechanism (RA-HGNN) and combines it with other GNN models to enable end-to-end execution of the entire model. Experimental verification on real-world datasets demonstrates the feasibility of the model, and the experimental results show state-of-the-art performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. Multi-lead-time short-term runoff forecasting based on Ensemble Attention Temporal Convolutional Network.
- Author
-
Zhang, Chunxiao, Sheng, Ziyu, Zhang, Chunlei, and Wen, Shiping
- Subjects
- *
RUNOFF , *LEARNING strategies , *LEAD time (Supply chain management) , *WATERSHEDS - Abstract
In the realm of ecological management and human activities within river basins, short-term runoff forecasting plays a pivotal role. Addressing this need, this paper introduces an innovative framework for short-term runoff forecasting: the Ensemble Attention Temporal Convolutional Network (EA-TCN). The cornerstone of this innovation lies in the effective amalgamation of the Temporal Convolutional Network (TCN), a lightweight attention mechanism, and an ensemble learning strategy. This integration synergistically enhances the model's overall performance in terms of accuracy, efficiency, and robustness. TCN forms the foundation of this framework, where its efficient architecture, characterized by shared parameters and parallel computation, significantly boosts computational efficiency. Its employment of causal and dilated convolutions adeptly captures long-term dependencies within time series inputs. The incorporated lightweight attention mechanism further augments the TCN, enabling EA-TCN to precisely discern complex relationships in temporal data, particularly exhibiting remarkable temporal robustness across various forecasting horizons, a feat challenging for conventional forecasting approaches. Additionally, the integration of the Snapshot ensemble method within the framework allows for simulating the effect of training multiple models through a single training process, thus further elevating the model's accuracy and robustness. Rigorous ablation and comparative experiments conducted on the US Columbia River dataset substantiate our claims. The results not only validate the individual merits of each component within EA-TCN but also illuminate the significant advantages of their collective application. Our comprehensive assessment unequivocally demonstrates the framework's exceptional performance in short-term runoff forecasting, positioning it as a state-of-the-art solution in this field. We will further discuss its impact on vocational education in this industry. 
• This paper applied the modified TCN to short-term runoff forecasting. • EA-TCN adopts a lightweight plug-and-play attention module in the time dimension. • The Snapshot ensemble method is also applied to our proposed model. • EA-TCN can make accurate predictions for multiple different lead times. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
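The causal, dilated convolutions that the EA-TCN abstract credits with capturing long-term dependencies can be illustrated with a toy numerical sketch (this is not the authors' EA-TCN; the kernel, series, and dilation value are illustrative assumptions):

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """Causal dilated 1-D convolution: the output at time t depends only
    on inputs at t, t-d, t-2d, ... (the series is left-padded with zeros,
    so no future value leaks into the forecast)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

# Stacking layers with dilations 1, 2, 4, ... grows the receptive field
# exponentially, which is how a TCN sees far into a runoff series.
series = np.arange(8, dtype=float)   # toy "daily runoff" series
w = np.array([0.5, 0.5])             # 2-tap averaging kernel (illustrative)
out = causal_dilated_conv(series, w, dilation=2)
```

With dilation 2, each output averages the current value with the value two steps earlier, so `out[0]` sees only zero padding while later outputs blend past and present.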
50. An intelligent instantaneous selective method, through compacted ZnO nanoparticle pellets, based on the concept of a virtual electronic nose, for different volatile organic compounds.
- Author
-
Bouricha, Brahim, Souissi, Riadh, and El Mir, Lassaad
- Subjects
- *
ELECTRONIC noses , *NANOPARTICLES , *CHEMICAL detectors , *PRINCIPAL components analysis , *ZINC oxide , *VOLATILE organic compounds , *ETHANOL , *TOLUENE - Abstract
This paper presents a novel virtual e-nose (VEN) procedure using a single compacted ZnO nanoparticle chemical sensor. The pellets are formed from nano-powders synthesized via a simple sol–gel method. We further show the transient differences in the dynamic response curves of the ZnO pellet when exposed to volatile organic compounds (VOCs), namely ethanol, methanol, isopropanol, acetone, and toluene. The VOCs are categorized using the transient response of a single sensor at four different operating temperatures, yielding diverse features that arise from the reaction mechanism of each target molecule. The relevant response attributes were processed with Ascending Hierarchical Classification (AHC) integrated with Principal Component Analysis. Three clusters, corresponding to three specific feature subsets, were distinguished. A new mathematical iteration of this hybrid process was performed and leads to good AHC output stability. The result is delivered automatically as a three-digit sort in a specified order. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
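The PCA step in the AHC-plus-PCA analysis above can be sketched on a synthetic response matrix (a minimal sketch with simulated numbers, not the authors' measured ZnO data; the three "VOC clusters" and four temperature features are illustrative assumptions):

```python
import numpy as np

# Toy feature matrix: rows = VOC exposures, columns = transient-response
# features at four operating temperatures. Three simulated VOC clusters.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=c, scale=0.1, size=(5, 4))  # 5 exposures per VOC
    for c in (0.0, 1.0, 2.0)                   # 3 well-separated clusters
])

# PCA via eigendecomposition of the covariance matrix.
Xc = X - X.mean(axis=0)                  # center each feature
cov = Xc.T @ Xc / (len(X) - 1)
vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
order = np.argsort(vals)[::-1]           # sort descending
scores = Xc @ vecs[:, order[:2]]         # project onto first 2 PCs
explained = vals[order[:2]].sum() / vals.sum()
```

The projected `scores` are what a hierarchical classification would then group; with well-separated clusters, the first two components carry nearly all of the variance.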